What is DevOps? (70+ DevOps Hacks 2019)

DevOps is all about automating the application deployment process. It addresses the drawbacks of manual application deployment. The deployment process involves several steps, from writing code to deploying the created release to a target environment, e.g., the Microsoft Azure cloud.

 

Understanding DevOps Tools


There are several DevOps tools available that can help you develop an effective automated environment. You can also use separate tools for performing specific operations in DevOps.

 

A list of tools, grouped by broad functionality, follows. Note that to demonstrate DevOps principles, we selected a representative set of tools as examples.

 

Build automation tools: These tools automate the process of creating a software build, compiling the source code, and packaging the code. Some build automation tools are:

  • Apache Ant (https://ant.apache.org/bindownload.cgi)
  • Apache Maven (https://maven.apache.org/download.cgi)
  • Boot (http://boot-clj.com/)
  • Gradle (https://gradle.org/)
  • Grunt (https://gruntjs.com/)
  • MSBuild (https://www.microsoft.com/en-in/download/details.aspx?id=48159)
  • Waf (https://waf.io/)

 

Continuous Integration tools: These tools create builds and run tests automatically when the code changes are checked in to the central repository. Some CI tools are:

  • Bamboo (https://www.atlassian.com/software/bamboo/download)
  • Buildbot (https://buildbot.net/)
  • Hudson (http://hudson-ci.org/)
  • TeamCity (https://www.jetbrains.com/teamcity/download/). We focus on this tool in this blog.

 

Testing tools: These tools automate the testing process and help organizations meet their configuration and delivery management needs within a specified time frame. Some commonly used testing tools are:

 

  • Selenium (http://www.seleniumhq.org/)
  • Watir (http://watir.com/)
  • Wapt (https://www.loadtestingtool.com/)
  • Apache JMeter (http://jmeter.apache.org/download_jmeter.cgi)
  • QTest (https://www.qasymphony.com/qtest-trial-qascom/)

 

Version control system: This is a configuration management system that tracks all the changes made to documents, code, files, etc. Some commonly used version control systems are:

  • Subversion (https://subversion.apache.org/)
  • Team Foundation Server (TFS) (https://www.visualstudio.com/tfs/). We focus on this tool in this blog.
  • Git (https://git-scm.com/)
  • Mercurial (https://www.mercurial-scm.org/)
  • Perforce (https://www.perforce.com/)

Code review tools: These tools help organizations improve the quality of their code. Some code review tools are:

  • Crucible (https://www.atlassian.com/software/crucible)
  • Gerrit (https://www.gerritcodereview.com/)
  • GitHub (https://github.com/)
  • Bitbucket Server (https://www.atlassian.com/software/bitbucket/server)

 

Continuous Delivery/release management tools: These tools automate the process of building and testing code changes for release to production. Some of these tools are:

  • XL Release (https://xebialabs.com/products/xl-release/)
  • ElectricFlow (http://electric-cloud.com/products/electricflow/)
  • Serena Release (https://www.microfocus.com/serena/)
  • Octopus Deploy (https://octopus.com/downloads). We focus on this tool in this blog.

All-in-one platforms: These tools combine the functionalities of the previously listed tools. Some all-in-one platforms are:

  • ProductionMap (http://www.productionmap.com/)
  • Jenkins (https://jenkins.io/)
  • Microsoft Visual Studio Team Services (VSTS) (https://www.visualstudio.com/team-services/)
  • AWS CodePipeline (https://aws.amazon.com/codepipeline/getting-started/)

 

With a basic understanding of the fundamentals, you’re ready to move forward and dive deeper into the specifics. We start by discussing stand-alone tools and thereafter discuss an all-in-one integrated platform.

 

Understanding TeamCity



TeamCity is a continuous integration (CI) server for developers, built by JetBrains.

 

It provides several relevant features:

  • Supports different platforms/tools/languages
  • Automates the build and deployment processes
  • Enhances quality and standards across teams
  • Works as an artifact and NuGet repository
  • Provides reporting and statistics features

 

Definition: According to Martin Fowler, “Continuous Integration is a software development practice in which developers commit code changes into a shared repository several times a day. Each commit is followed by an automated build to ensure that new changes integrate well into the existing code base and to detect problems early.”
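To make the practice concrete, here is a minimal sketch, in Python, of the loop a CI server such as TeamCity runs on your behalf: poll the shared repository and, for every new commit, run an automated build and the tests. The repository path, branch name, and the Maven commands are placeholders, not part of any specific tool.

```python
import subprocess
import time

REPO = "/path/to/clone"           # hypothetical local clone of the shared repository
BUILD = ["mvn", "-q", "package"]  # placeholder build command; use your build tool
TESTS = ["mvn", "-q", "test"]     # placeholder test command

def git(*args):
    return subprocess.run(["git", *args], cwd=REPO,
                          capture_output=True, text=True).stdout.strip()

def passes(cmd):
    return subprocess.run(cmd, cwd=REPO).returncode == 0

last_built = git("rev-parse", "HEAD")
while True:
    git("fetch", "origin")
    if git("rev-parse", "origin/main") != last_built:   # a new commit arrived
        git("merge", "origin/main")
        last_built = git("rev-parse", "HEAD")
        ok = passes(BUILD) and passes(TESTS)            # build, then test, per commit
        print("build", "passed" if ok else "BROKEN: fix before committing more")
    time.sleep(60)                                      # poll the repository every minute
```

A real CI server layers build queues, agents, history, and notifications on top of this basic loop.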

 


 

Understanding Magento


Magento is an open source e-commerce platform. It allows developers to easily create a shopping cart for their online stores.

 

It also allows developers to have better control over the content, appearance, and functionality of their online stores. It provides features such as search engine optimization and support for catalog-management tools.

 

Magento is extremely simple to use, even for individuals who are not experienced developers. The availability of numerous themes and plug-ins makes it effective in enhancing the customer experience. Considerable support is available through its large volunteer community.

 

Benefits of Using Magento


There are several benefits of using Magento. Some of them are as follows:

  • Easy installation.
  • Provides several layouts and plug-ins that can be used to add more functionality to the e-commerce solution.
  • Supports many payment gateways.
  • It is an open source technology, which means that it can be modified based on user requirements.

Disadvantages of Magento

The following disadvantages/limitations are associated with Magento:

  • A more complex system compared to Drupal.
  • Requires complex programming to add custom functionality.
  • Requires experienced developers to integrate it with other systems.

 

Prerequisites of Running an Azure Application with Magento

There are a few prerequisites for running an Azure application with Magento; a sketch of a simple prerequisite check follows the list below. A system must have:

  • A virtual machine on Azure running Linux
  • Apache server
  • MySQL
  • PHP

 

DevOps application - business scenarios


The application of DevOps varies across business scenarios, with the accrued benefits listed below:

 

Automation of the development cycle: Business needs are met with minimal manual intervention: a developer can run a build with the open tools of their choice through a code repository, and the QA team can create a QA system as a replica and deploy it to production seamlessly and quickly.

 

A single version of the truth - source code management: There are multiple versions of the code, but it is difficult to ascertain the appropriate code for the purpose; we lack a single version of the truth. Code review feedback is given through emails and not recorded, leading to confusion and rework.

 

Consistent configuration management: We develop, test, and build source code on different systems. Validating the platforms and the compatibility of dependency versions is manual and error-prone.

 

It’s really challenging to ensure all the systems speak the same language and have the same versions of the tools, compilers, and so on. Our code works fine on build systems but not when moved to production systems, causing embarrassment over business deliverables and cost overheads to react.

 

Product readiness for markets: We have a process to develop code, test, and build through defined timelines. There are many manual checks and validations in the process, and the integrations between different groups make our commitments and delivery dates unpredictable. We wish to know periodically how close our product is to delivery and what its quality is, to plan in advance rather than being reactive.

 

Automation of manual processes: We are following manual processes, which are often error-prone, and wish to enhance efficiency by following an automation process wherever applicable.

 

Automating the testing cycle, incremental testing, and integration with the build cycle will improve product quality and expedite the release cycle. The same applies to infrastructure service automation, such as creating, starting, stopping, deleting, terminating, and restarting virtual or bare-metal machines.

 

Containers: Portability of code is the primary challenge. The code works in development and QA environments, but moving to production systems causes multiple challenges such as code not compiling due to dependency issues, build break down, and so on.

 

Building platform-agnostic code is a challenge, and maintaining multiple platform versions of development and QA platforms is a huge overhead. Portable container code would alleviate these kinds of issues.

 

On-premise challenges: We have many on-premise systems. There are multiple challenges, from capacity planning to turnaround time. Capital and operational expenses are unpredictable. Cloud migration offers multiple choices and vendors, so an efficient adoption method is needed to ensure results.

 

Business drivers for DevOps adoption in big data

Factors contributing to the wide-scale popularity and adoption of DevOps among big data systems are listed as follows.

 

Data can be generated by multiple sources: internal, external, and even social media. Internal data is the data generated through internal systems and operations; for example, a bank adding new customers, or customer transactions with the bank through multiple channels such as ATM, online payments, purchases, and so on.

 

External sources could include procuring gold exchange rates and foreign exchange rates from the Reserve Bank of India (RBI). These days, social media data is widely used for marketing and customer feedback on products. Harnessing the data from all avenues and using it intelligently is key to business success.

 

Going a step further, a few companies even monetize data, for example, Healthcare IQ, Owens & Minor, State Street Global Corporation, Ad Juggler, comScore, Verisk Analytics, Nielsen, and LexisNexis.

 

These organizations buy raw data such as web analytics on online product sales, or online search records for each brand, reprocess the data into an organized format, and sell it to research analysts or organizations looking for competitor intelligence data to reposition their products in markets.

 

Let’s analyze the factors fueling the growth of data and business. Fundamental changes in market and customer behavior have had a significant impact on the data explosion. Some of the key drivers of change are:

 

Customer preference: Today, customers have many means of interacting with businesses; for example, a bank provides multiple channels such as ATM withdrawals, online banking, mobile banking, card payments, on-premise banking, and so on.

 

The same is true for purchases; these can be in the shop, online, mobile-based, and so on, which organizations have to maintain for business operations. So, these multiple channels contribute to increased data management.

 

Social media: Data is flooding in from social media such as Facebook, LinkedIn, and Twitter. On the one hand, they are social interaction sites between individuals; on the other hand, companies also rely on social media to socialize their products.

 

The data posted in terabytes/petabytes, in turn, is used by many organizations for data mining too. This is contributing to the huge data explosion.

 

Regulations: Companies are required to maintain data in proper formats for a stipulated time, as required by regulatory bodies. For example, to combat money laundering, each organization dealing with finance is required to have clear customer records and credentials to share with regulatory authorities over extended periods of time, such as 10 to 15 years.

 

Digital world: As we move towards a paperless digital world, we keep adding more digital data, such as e-books, and use ERP applications to automate many tasks and avoid paperwork. These innovations generate much of the growth in digital data as well.

 

The next generation will be more data-intensive, with the Internet of Things and data science at the forefront, driving business and customer priorities.

 

Cloud computing


Acceptance of cloud platforms as the de facto service line has brought many changes to procuring and managing infrastructure.

Provisioning hardware and other commodity work on the cloud also improves efficiency: moving these IT functions to the cloud enhances the efficiency of services and allows IT departments to shift their focus away from patching operating systems.

 

DevOps with cloud adoption is the most widely implemented option. With cloud penetration, adding infrastructure/servers is just a click away. This, along with credible open source tools, has paved the way for DevOps.

 

In a fraction of the time, build, QA, and pre-production machines can be added as exact replicas, with the required configurations, using open source tools.

 

In-memory computing


Traditional software architecture was based on disk as the primary data storage; data moved from disk to main memory for the CPU to perform aggregations for business logic. This caused the I/O overhead of moving large volumes of data back and forth between disk and memory.

 

In-memory technology is based on hardware and software innovations that handle the complete business application data in main memory itself, so computations are very fast. Many underlying hardware and software advancements have contributed to enabling in-memory computing; a toy sketch of the storage ideas follows the lists below.

The software advancements include the following:

  • Partitioning of data
  • No aggregate tables
  • Insert-only delta
  • Data compression
  • Row plus column storage

The hardware advancements include the following:

  • Multi-core architecture allows massive parallel scaling
  • Multifold compression
  • Main memory has scalable capacity
  • Fast prefetch of unlimited size
 

Machine-learning projects


There are a wide variety of open source tools to select for adoption in specific segments of DevOps, such as the following:

 

Docker: A Docker container packages the application and its dependencies up in a box. It runs as an isolated process on the host operating system, sharing the kernel with other containers. It enjoys resource isolation and allocation benefits like a VM but is much more portable and efficient.
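As a small illustration, the official Docker SDK for Python (the docker package) can start such an isolated process; this minimal sketch assumes a running local Docker daemon and that the alpine image can be pulled.

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a throwaway container: an isolated process sharing the host kernel.
output = client.containers.run(
    "alpine:latest", ["echo", "hello from a container"],
    remove=True,  # delete the container once it exits
)
print(output.decode().strip())
```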

 

Kubernetes: Kubernetes is an open source orchestration system for Docker containers. It groups containers into logical units for easy management and discovery, handles scheduling on nodes, and actively manages workloads to ensure their state matches the users’ declared intentions.

 

Jenkins: Jenkins is a web-enabled tool, used through the application or a web server such as Tomcat, for continuous build, deployment, and testing. It integrates with build tools such as Ant/Maven and with the Git source code repository, and supports master and (dumb) slave nodes.

Ansible: Ansible automates software provisioning, configuration management, and application deployment in an agentless, Secure Shell (SSH) mode; Playbooks, Tower, and Yum scripts are the mechanisms.

Chef and Puppet: Chef and Puppet are agent-based pull mechanisms for the deployment automation of work units.

 

GitHub: Git is a popular open source version control system; GitHub is a web-based hosting service for Git repositories. GitHub allows you to host remote Git repositories and has a wealth of community-based services that make it ideal for open source projects.

 

There are also comprehensive frameworks readily available, such as Red Hat OpenShift, Microsoft Azure, and AWS container services, with pre-integrated and configured tools to implement.

 

A few popular open source tools are listed here:

  • Source code management: Git, GitHub, Subversion, and Bitbucket
  • Build management: Maven, Ant, Make, and MSBuild
  • Testing tools: JUnit, Selenium, Cucumber, and QUnit
  • Repository management: Nexus, Artifactory, and Docker Hub
  • Continuous integration: Jenkins, Bamboo, TeamCity, and Visual Studio
  • Configuration provisioning: Chef, Puppet, Ansible, and Salt
  • Release management: Visual Studio, Serena Release, and StackStorm
  • Cloud: AWS, Azure, OpenShift, and Rackspace
  • Deployment management: RapidDeploy, CodeDeploy, and ElasticBox
  • Collaboration: Jira, Team Foundation, and Slack
  • BI/Monitoring: Kibana, Elasticsearch, and Nagios
  • Logging: Splunk, Logentries, and Logstash
  • Container: Linux, Docker, Kubernetes, Swarm, AWS, and Azure

 

Benefits of DevOps


DevOps adoption and maturity for big data systems will benefit organizations in the following ways:

  • DevOps processes can be implemented standalone or in combination with other processes
  • Automation frameworks will improve business efficiency
  • DevOps frameworks will help to build resilience into the application’s code
  • DevOps processes incorporate SLAs for operational requirements
  • The operations manual (runbook) is prepared during development to aid operations
  • In matured DevOps processes, runbook-driven development is integrated into the DevOps process
  • Application-specific monitoring is part of the development process
  • DevOps planning considers high availability and disaster recovery technology
  • Resilience is built into the application code in line with technology features
  • Fully scripted installation facilitates fully automated deployment
  • The operations team and developers are familiar with using logging frameworks
  • The non-functional requirements of operability, maintenance, and monitoring get sufficient attention, along with system development specifications
  • Continuous integration and continuous delivery eliminate human errors, reduce planned downtime for upgrades, and facilitate productivity improvements

 

DevOps Best Practices


 

The DevOps process framework emphasizes the adoption of source code management (SCM) for accruing the following benefits for the organization:

  • Coordination of services between members of a software development team
  • Definition of a single source of truth for any version, minor or major
  • Review of changes before implementing them
  • Tracking of co-authoring, collaboration, and individual contributions
  • Auditing of code changes, with a rollback facility
  • Incremental backup and recovery

SCM tools prevalent in the market are as follows:

  • IBM ClearCase
  • Perforce
  • PVCS
  • Team Foundation Server
  • Visual Studio Team Services
  • Visual SourceSafe

Open source SCM tools are as follows; DevOps’ widespread adoption is partly attributed to their popularity:

  • Subversion (SVN)
  • Concurrent Version System (CVS)
  • Git
  • SCCS
  • Revision control systems
  • Bitbucket

 

Code Review


Code reviews are an important process for improving the quality of software before it is integrated into the mainstream. They help identify and remove common vulnerabilities such as memory leaks, formatting errors, and buffer overflows. Code review or inspection can be either formal or informal.

 

In a formal code review, the code is reviewed line by line through multiple methods, such as formal meetings and interactions. Informal code reviews can be over-the-shoulder, by email, via pair programming (where a few authors co-develop), or tool-assisted; these are also called code walkthroughs.

 

A code review process framework benefits the organization as follows:

  • Collaboration between software development team members
  • Identification and elimination of code defects before integration
  • Improvement of code quality
  • Quick turnaround of the development cycle

Proprietary tools for code review automation include:

  • Crucible
  • Collaborator
  • Codacy
  • Upsource
  • Understand

Open source tools for code review automation include:

  • Review Board
  • Phabricator
  • Gerrit
  • GitLab

 

Configuration Management

Configuration Management (CM) is the broad subject of governing configuration items at the enterprise level, as per the Information Technology Infrastructure Library (ITIL); even the configuration management database (CMDB) is part of the CM strategy.

 

Configuration management includes the identification, verification, and maintenance of configuration items of both software and hardware, such as patches and versions. In simple terms, it’s about managing the configuration of a system and ensuring its fitness for its intended purpose.

 

A configuration management tool will validate the appropriateness of the configurations on the system as per the requirements and its interoperability between systems.

 

A common example is to ensure the code developed on a development system is effectively functional on a QA (test) system and production systems. Any loss of configuration parameters between the systems will be catastrophic for the application’s performance.
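One way to catch such configuration loss early is to fingerprint the configuration trees of two environments and compare them. The sketch below is a minimal illustration using only the Python standard library; the exported config paths are hypothetical.

```python
import hashlib
import os

def fingerprint(root):
    """Map each config file's relative path to a SHA-256 of its content."""
    result = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                result[rel] = hashlib.sha256(f.read()).hexdigest()
    return result

def diff(env_a, env_b):
    a, b = fingerprint(env_a), fingerprint(env_b)
    for rel in sorted(set(a) | set(b)):
        if a.get(rel) != b.get(rel):
            print(f"DRIFT: {rel} differs (or is missing) between environments")

# Hypothetical paths to exported QA and production configuration trees.
diff("/exports/qa/etc", "/exports/prod/etc")
```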

 

As per DevOps, the benefits of incorporating configuration management processes and tools for an organization can be summarized as follows:

  • Facilitates impact analysis of configuration changes
  • Allows automated provisioning on different systems, such as dev, QA, and prod
  • Facilitates auditing, accounting, and verification of the systems
  • Reduces redundant work by ensuring consistency
  • Effectively manages simultaneous updates
  • Avoids configuration-related problems by ensuring a single version of the truth
  • Simplifies coordination between development and operations team members
  • Helps in tracking defects and resolving them in time
  • Helps in predictive and preventive maintenance

 

A few popular configuration management tools for infrastructure are as follows:

  • BMC Software’s Atrium
  • Hewlett Packard Enterprise’s Universal Configuration Management Database (UCMDB)

A few popular software configuration management tools are as follows:

  • Chef
  • Puppet
  • Ansible
  • Salt
  • Juju

 

Continuous Integration


Continuous integration is a DevOps best practice wherein developers regularly integrate their code, in small logical units, into a commonly shared repository (for example, once a day).

 

The advantage of such a process is transparency about the code’s quality and fitness for its intended purpose. Otherwise, bulk code integration after a fixed time period could expose many defects or integration challenges, which could be expensive to resolve.

 

To achieve continuous integration, there are a few prerequisites to be implemented, as follows:

  • Use a version repository for source code
  • Maintain a regular code check-in schedule
  • Automate testing for code changes
  • Automate the build
  • Deploy the build in preproduction

 

The benefits of continuous integration are as follows:

  • Availability of the latest code, as we commit early and often
  • Faster build cycles, as build issues are exposed early with check-ins
  • Transparency in the build process, meaning better ownership and fewer defects
  • Automation of the deployment process, leading to quicker turnaround

 

Some continuous integration tools that are available are as follows:

  • Jenkins
  • TeamCity
  • Travis
  • GoCD
  • Buddy
  • Bitbucket
  • Chef
  • Microsoft Teamcenter
  • CruiseControl
  • Bamboo
  • GitLab CI
  • CircleCI
  • Codeship

 

Continuous Deployment


Continuous deployment is the fully matured, complete process cycle in which a code change passes through every phase of the software life cycle and is deployed to production environments.

 

Continuous deployment requires the entire process to be automated, also termed automated application release, through all stages, such as packaging the application, ensuring that dependencies are integrated, deployment testing, and producing adequate documentation for compliance.
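The sketch below shows the skeleton of such an automated release in Python: package, test, and deploy stages chained so that a failing stage stops the release. It is a minimal sketch; the directories and the pytest command are placeholders for your real packaging step, test suite, and deployment target.

```python
import shutil
import subprocess
import sys

APP_DIR = "app"            # hypothetical application source directory
TARGET = "/srv/releases"   # hypothetical deployment target

def package():
    # Produce app.zip from the application directory.
    return shutil.make_archive("app", "zip", APP_DIR)

def test():
    # Placeholder test command; substitute your real test runner.
    return subprocess.run(["python", "-m", "pytest", "-q"]).returncode == 0

def deploy(artifact):
    # A real release step would also version, verify, and document the artifact.
    shutil.copy(artifact, TARGET)
    print(f"deployed {artifact} to {TARGET}")

if __name__ == "__main__":
    artifact = package()
    if not test():
        sys.exit("tests failed: change does not qualify for production")
    deploy(artifact)
```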

 

The benefits of continuous deployment and automated application release are as follows:

  • Frequent product releases deliver software as fast as possible
  • Product releases are automated and accelerated with each code change
  • Code changes qualify for production from both a technical and a quality viewpoint
  • The most current version of the product is ready in shippable format
  • Deployment modeling reduces errors, resulting in better product quality
  • Consolidated access to all tools, process, and resource data leads to quicker troubleshooting and time to market
  • Effective collaboration between dev, QA, and operations teams leads to higher output and better customer satisfaction
  • Audit efforts are lower owing to a centralized view of all phase activities

 

Infrastructure as Code

Infrastructure as Code (IaC) is a means to perform infrastructure services by defining configuration files. In DevOps’ scope, IaC is the automation of routine tasks through code, typically as configuration definition files, such as shell scripts, Ansible playbooks, Chef recipes, or Puppet manifests.

 

It’s usually a server-and-client setup with push- or pull-based mechanisms, or agentless through Secure Shell (SSH). Many regular system tasks, such as creating, starting, stopping, deleting, terminating, and restarting virtual or bare-metal machines, are performed through software.

 

In traditional on-premise systems, many system administration tasks were manual and person-dependent. However, with the explosion of big data and cloud computing, all the regular system activities and tasks are managed like any software code: they are maintained in code repositories, and the latest build updates are tested for deployment.
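A toy illustration of the declarative idea: a definition file states the desired packages, and a script compares that desired state with the actual system and reports the actions needed. Real tools such as Chef, Puppet, or Ansible apply such plans idempotently; the desired_state.json file here is hypothetical, and checking PATH is a deliberately naive stand-in for a package query.

```python
import json
import shutil

# desired_state.json (hypothetical), e.g.: {"packages": ["nginx", "git"]}
with open("desired_state.json") as f:
    desired = json.load(f)

# Compare the declared state with what is actually present and print a plan;
# an IaC tool would then apply these actions idempotently.
for pkg in desired.get("packages", []):
    # Naively treat the package name as a binary expected on PATH.
    if shutil.which(pkg):
        print(f"ok:      {pkg} already present")
    else:
        print(f"install: {pkg} missing, would be installed")
```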

 

The advantages of IaC are as follows:

  • The use of definition files and code to update system configuration is quick
  • Versioning all the code and changes makes results less error-prone and reproducible
  • Deployments can be tested thoroughly with IaC on test systems
  • Smaller regular changes are easy to manage; bigger infrastructure updates are likely to contain errors that are difficult to detect
  • Audit tracking and compliance are easy with definition files
  • Multiple servers can be updated simultaneously
  • System availability is high, with less downtime

Some tools for IaC are as follows:

  • Ansible Tower
  • CFEngine
  • Chef
  • Puppet
  • SaltStack

 

Routine Automation


Every organization aims to automate routine, repetitive tasks; in fact, the survival of most companies and software products is based on the degree to which they automate. ERP systems, data visualization, domain applications, data analytics, and so on: almost all segments are potential areas for automation.

 

A few areas to automate are infrastructure (deployment, patching, scalability), applications (development, integration, builds, delivery, and deployment), load balancers, feedback, and defect/error management.

 

Design (blueprint) phase:

The design phase is also the architecture phase; it’s about producing a blueprint of the target state to accomplish. It’s an iterative process of weighing alternatives for tools and processes, arriving at an agreement among key stakeholders.

 

The timeline and cost will be baselined, then revisited and revised regularly based on new learnings from the project as we move forward towards the target state.

 

The timeline for this phase depends on how acceptable the processes, tools, and budgets are to the key stakeholders. Design phase deliverables are as follows:

  • Target state agreed upon
  • Baseline of the DevOps process to be adopted
  • Baseline of the most viable tools to be implemented
  • Baselined, agreed-upon timelines and cost

 

Development phase:

Artifacts baselined from the blueprint phase will be inputs for the development phase: the agreed-upon process changes, tools to be implemented, frameworks to be adopted, and so on. A detailed project plan covering deliverables, schedules, dependencies, constraints, resource leveling, and so on will be quite handy.

 

Agile Scrum methodology will be the framework used to implement DevOps, which will be discussed in detail. The timeline for the development phase will follow the project plan baselined initially and revised regularly as milestones are accomplished. Development phase deliverables are as follows:

  • Initial project plan baselined and signed off
  • Regular feedback incorporated through to project completion
  • Allocation of resources for each stage
  • Inclusion of new skills, methods, processes, and tools
  • Workarounds for project risks, constraints, and so on
  • Deliverables as agreed in the project plan

 

Deployment phase:

The DevOps deployment phase follows the best practices outlined in the DevOps process framework detailed above, and depends on whether the deployment is of a process, an application tool, or infrastructure.

 

The timeline will be evaluated as per experience gained in the development phase. Deployment phase deliverables are as follows:

  • Deployment guide—cutover plan to production
  • Deployment checklist
  • Signoff from key stakeholders
  • Rollback plan
  • Capacity planning

 

Monitoring phase:

This phase monitors the key performance factors of development, build, integration, and deployment over a period of time. It is followed by tracking defects, bug fixes, and user tickets, and planning for continuous improvement.

 

Monitoring phase timelines are as per organization need and performance benchmarks. Monitoring phase deliverables are as follows:

  • Operations manual
  • Feedback forms and checklists
  • User guide, support manual
  • Process flow manual
  • Performance benchmark

 

DevOps Maturity Map

DevOps adoption is a value-added journey for an organization. It’s not achieved overnight; it matures step by step over a period of time, with manifested results.

 

As with the Capability Maturity Model Integration (CMMI) or other process maturity models, critical success factors are defined for the program’s performance objectives.

 

The initial maturity state of key evaluation parameters is agreed upon by key stakeholders. Then the target maturity level of the parameter variables to be accomplished will be defined in the project charter, along with detailed procedure, milestones, budgets and constraints as approved by stakeholders.

(Figure: DevOps process maturity framework)

 

DevOps Progression Framework/Readiness Model

As discussed in the previous model, DevOps adoption is a journey for an organization towards higher maturity states. The following table lists the different practice areas and maturity levels of DevOps on a broad scale.

 

DevOps maturity levels may vary across teams as per their standards; similarly, even within the same department or role in an organization, some teams may have significantly more varied and advanced practices than others for the same process flow.

 

Achieving the best possible DevOps process workflow throughout the entire enterprise should be the end goal for all teams and departments.

 

Early and frequent commit of code:

In a distributed development environment with multiple projects, each team or developer intends to integrate their code with the mainline, and changes on feature branches also need to be integrated into the mainline. It’s a best practice to integrate code early and quickly.

 

As the delay between new changes and their merge into the mainline grows, so do the risk of product instability, the time taken, and the complications, since the mainline keeps evolving from the baseline.

 

Hence, each developer working on a feature branch should push their code at least once per day. For projects with an inactive main branch, the high effort of constant rebasing must be evaluated before implementing this practice.

 

Every change to be built: Developer changes are to be incorporated into the mainline; however, they can potentially destabilize the mainline, affecting its integrity for the developers relying on it.

 

Continuous integration addresses this with the best practice of continuous build for any code change committed.

Any broken build requires immediate action, as a broken build blocks the entire evolution of the mainline, and this can be expensive depending on the frequency of commits and of such issues. These issues can be minimized by enforcing branch-level builds.

 

A push for review in Gerrit or a pull request in GitHub is an effective mechanism to propose changes and check their quality, identifying problems before they’re pushed into the mainline and cause rework.

 

Address build errors quickly: The best practice of building at the branch level for each change puts the onus on the respective developers to fix their build issues immediately, rather than propagating them to the main branch. This forms a continuous Change-Commit-Build-Fix cycle at each branch level.
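A lightweight way to enforce this at the branch level is a Git pre-push hook that runs the build and tests before any push is allowed. A minimal sketch follows, with a pytest-based test suite as the placeholder check.

```python
#!/usr/bin/env python3
# Save as .git/hooks/pre-push (and make it executable). Git runs it before
# every push; a non-zero exit blocks the push, keeping broken code off the
# shared branch.
import subprocess
import sys

# Placeholder build/test commands; substitute your project's real ones.
CHECKS = [["python", "-m", "pytest", "-q"]]

for cmd in CHECKS:
    if subprocess.run(cmd).returncode != 0:
        sys.exit("pre-push check failed: fix the build before pushing")
```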

 

Build fast: The quick turnaround of builds, results, and tests by automated processes should be a vital input to the developer workflow; a short wait time benefits the overall cycle efficiency of the continuous integration process.

 

This is a balancing act between integrating new changes securely to the main branch and simultaneously building, validating, and scenario testing.

 

At times, there could be conflicting objectives, so trade-offs need to be made between different levels of acceptance criteria, considering that the quality of the mainline is most important. Criteria include syntactical correctness, unit tests, and fast-running scenario tests for the changes incorporated.

 

Pre-production run: Multiple setups and environments at various stages of the production pipeline cause errors. This would apply to developer environments, branch level build configurations, and central main build environments.

 

Hence the machines where scenario tests are performed should be similar and have a comparable configuration to the main production systems.

 

Manual adherence to an identical configuration is a herculean task; this is where DevOps adds value, with its core value proposition of treating infrastructure setup and configuration like writing code. All the software and configuration for a machine are defined as source files, which enables you to recreate identical systems.

 

The build process is transparent: The build status and records of the last change must be available to everyone to ascertain the quality of the build. Gerrit is a change review tool that can be effectively used to record and track code changes, the build status, and related comments.

 

Jenkins flow plugins offer the build team and developers a complete end-to-end overview of the continuous integration process, spanning source code management tools, the build scheduler, the test landscape, the artifact repository, and others as applicable.
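For example, the current build status can be fetched programmatically from Jenkins’ standard JSON remote-access API. A minimal sketch, with a hypothetical host and job name, assuming anonymous read access is enabled:

```python
import json
import urllib.request

# Hypothetical Jenkins host and job name; the /api/json endpoint is part of
# Jenkins' standard remote-access API.
URL = "http://jenkins.example.com:8080/job/my-app/lastBuild/api/json"

with urllib.request.urlopen(URL) as resp:
    build = json.load(resp)

print(f"build #{build['number']}: "
      f"{'running' if build['building'] else build['result']}")
```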

 

Automate the deployment: Installing the application on a runtime system in an automated way is called deployment, and there are several ways to accomplish it.

 

Automated scenario tests should be part of the acceptance process for changes proposed. These can be triggered by builds to ensure product quality.

 

Multiple runtime systems, such as JEE servers, are set up to avoid the single-instance bottleneck of serialized test requests and to allow test queries to run in parallel.

 

Using a single system also has associated overheads: recreating the environment for every test case causes a degeneration in performance. Docker or container technology can install and start runtime systems on demand in well-defined states, and remove them afterward.
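A minimal sketch of that pattern with the Docker SDK for Python: start a container from a pinned image, run a (placeholder) test command, collect the logs and exit code, and remove the container so the next run starts clean.

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Start a well-defined runtime on demand, run the tests inside it, then
# remove the container so every test run starts from a clean state.
container = client.containers.run(
    "python:3.11-slim",                                 # image pinned to a known state
    ["python", "-c", "print('tests would run here')"],  # placeholder test command
    detach=True,
)
status = container.wait()                               # block until the process exits
print(container.logs().decode().strip())
print("exit code:", status["StatusCode"])               # non-zero means failing tests
container.remove()                                      # discard the environment afterward
```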

 

Automate test cases: Since the frequency and timing of validations of new commits are not predictable in most cases, scheduling daily jobs at a given time is an option to explore: the build is deployed to a test system, and a notification is sent after successful deployment.
