What is DevOps (2019)

DevOps for Azure

DevOps is all about automating the application deployment process. It addresses the drawbacks associated with manual application deployment. The application deployment process contains several steps—from writing code to deploying the created release to the target environment, i.e., Microsoft Azure Cloud. This blog discusses the need for DevOps, the DevOps functions, the application deployment process, and the DevOps tools.

 

DevOps integrates the functionality of both teams (Developers and Operations/Production) in the application development and deployment process. This blog provides information about the basic functions of DevOps. The entire process of application deployment is discussed.

 

The Need for DevOps



Traditionally, the software development lifecycle relied on siloed teams taking on specific tasks, i.e., the development team and the operations team. The developers were responsible for writing code, checking source code into source control, testing code, QA of code, and staging for deployment.

 

The Operations/Production team was responsible for deploying the code to servers and thereafter coordinating with customers and providing feedback to developers. Such siloed efforts were mostly manual, with little automation of the application/software deployment work. This manual process had several drawbacks, some of which are as follows:

  • The communication gap between different teams resulted in resentment and blame, which in turn delayed fixing errors.
  • The entire process took a long time to complete.
  • The final product did not meet all the required criteria.
  • Some tools could not be implemented on the production server for security reasons.
  • The communication barriers slowed down performance and added to inefficiency.

 

To cope with these drawbacks, a push for automation arose, leading to the development of DevOps. DevOps is a combination of two terms and two teams—namely Developers and Operations. As the name indicates, it integrates the functionality of both of these teams (Developers and Operations/Production) in the application development and deployment process.

 

Describing the Functions of DevOps


 

The basic functions of DevOps are as follows:

  • Automates the entire process of application deployment. As a result, the entire process is straightforward and streamlined.
  • Allows multiple developers to check in and check out code simultaneously in/from the Source repository.
  • Provides a Continuous Integration (CI) server that pulls the code from the Source repository and prepares the build by running and passing the unit tests and functional tests automatically.
  • Automates testing, integration, deployment, and monitoring tasks.
  • Automates workflows and infrastructure.
  • Enhances productivity and collaboration through continuous measurement of application performance.
  • Allows for rapid and reliable build, test, and release operations of the entire software development process.

 

DevOps Application Deployment Process



Let’s now review the various steps in the application deployment process:

  • 1. Developers write code.
  • 2. The code is checked in to the source control/Source repository.

 

  • 3. Code check-in triggers the Continuous Integration (CI) server for generating the build. Automated unit testing can be done during the build process. Code coverage and code analysis can also be performed during this step. If there are build errors, unit test failures, or breaches of code coverage and code analysis rules, a report is generated and automatically sent back to the developer for correction.

 

  • 4. The successful build is then sent for release. This is where the release management process comes into the picture, whereby testing, QA, and staging operations are performed. Several types of tests are done, some of which are:
  • Module tests
  • Sub-system tests
  • System tests
  • Acceptance tests

 

  • 5. In the QA phase, the following types of tests are performed:
  • Regression tests
  • Functional tests
  • Performance tests

 

Once the code passes all of the tests, a release version of the software, also called the “golden image,” is prepared. If any of the preceding tests fail, a report about the bug is generated for the team of developers who checked in the code.

 

The development team must first fix the bug and check in the code again. The code goes through the same process of generating the build and release until the code passes all tests.

 

6. The last step in the process is deploying the created release to the target environment—Microsoft Azure Cloud (https://azure.microsoft.com). Once the deployment is complete, all changes in the code are live for users of the target environment in Azure.
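For illustration, this final step can itself be scripted. The following is a minimal sketch, assuming the Az PowerShell module and a zip-packaged release; the resource group and web app names are placeholders, not taken from the text:

    # Minimal sketch: deploy a packaged release to an Azure web app.
    # Assumes the Az PowerShell module; the names below are hypothetical.
    Connect-AzAccount                                 # interactive Azure login
    Publish-AzWebApp -ResourceGroupName "DemoRg" `
                     -Name "demo-webapp" `
                     -ArchivePath ".\release.zip" `
                     -Force                           # replace the running deployment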

 

Understanding DevOps Tools


 
There are several DevOps tools available that can help you develop an effective automated environment. You can also use separate tools for performing specific operations in DevOps. A list of tools, grouped by broad-level functionality, follows. Note that to demonstrate the DevOps principles, we selected a set of tools to use as an example.
 
 
Build automation tools: These tools automate the process of creating a software build, compiling source code, and packaging the code. Some build automation tools are:
 
  • Apache Ant (https://ant.apache.org/bindownload.cgi)
  • Apache Maven (https://maven.apache.org/download.cgi)
  • Boot (http://boot-clj.com/)
  • Gradle (https://gradle.org/)
  • Grunt (https://gruntjs.com/)
  • MSBuild (https://www.microsoft.com/en-in/download/details.aspx?id=48159)
  • Waf (https://waf.io/)
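As a hedged illustration of build automation, the following PowerShell snippet invokes MSBuild on a solution; the MSBuild path and solution file are placeholders, not taken from the text:

    # Minimal sketch: produce a Release build with MSBuild from PowerShell.
    # The MSBuild path and solution name are hypothetical.
    # /p:Configuration=Release builds the Release configuration;
    # /t:Rebuild cleans and rebuilds all targets.
    & "C:\Program Files (x86)\MSBuild\14.0\Bin\MSBuild.exe" `
        ".\MyApp.sln" /p:Configuration=Release /t:Rebuild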
 
 
Continuous Integration tools: These tools create builds and run tests automatically when the code changes are checked in to the central repository. Some CI tools are:
  • Bamboo (https://www.atlassian.com/software/bamboo/download)
  • Buildbot (https://buildbot.net/)
  • Hudson (http://hudson-ci.org/)
  • TeamCity (https://www.jetbrains.com/teamcity/download/). We focus on this tool in this blog.
 
 
Testing tools: These tools automate the testing process. These tools help organizations achieve configuration and delivery management needs in a specified time frame. Some commonly used testing tools are:
 
  • Selenium (http://www.seleniumhq.org/)
  • Watir (http://watir.com/)
  • Wapt (https://www.loadtestingtool.com/)
  • Apache JMeter (http://jmeter.apache.org/download_jmeter.cgi)
  • QTest (https://www.qasymphony.com/qtest-trial-qascom/)
 
 
Version control system: This is a configuration management system that takes care of all the changes made to documents, codes, files, etc. Some commonly used version control systems are:
 
  • Subversion (https://subversion.apache.org/)
  • Team Foundation Server (TFS) (https://www.visualstudio.com/tfs/). We focus on this tool in this blog.
  • GIT (https://git-scm.com/)
  • Mercurial (https://www.mercurial-scm.org/)
  • Perforce (https://www.perforce.com/)

Code review tools: These tools help organizations improve the quality of their code. Some code review tools are:

  • Crucible (https://www.atlassian.com/software/crucible)
  • Gerrit (https://www.gerritcodereview.com/)
  • GitHub (https://github.com/)
  • Bitbucket Server (https://www.atlassian.com/software/bitbucket/server)
 
 

Continuous Delivery/release management tools: These tools automate the process of building and testing code changes for release to production. Some of these tools are:
  • XL Release (https://xebialabs.com/products/xl-release/)
  • ElectricFlow (http://electric-cloud.com/products/electricflow/)
  • Serena Release (https://www.microfocus.com/serena/)
  • Octopus Deploy (https://octopus.com/downloads). We focus on this tool in this blog.

All-in-one platforms: These tools combine the functionalities of the previously listed tools. Some all-in-one platforms are:

  • ProductionMap (http://www.productionmap.com/)
  • Jenkins (https://jenkins.io/)
  • Microsoft Visual Studio Team Services (VSTS) (https://www.visualstudio.com/team-services/)
  • AWS CodePipeline (https://aws.amazon.com/codepipeline/getting-started/)
 
With a basic understanding of the fundamentals, you’re ready to move forward and dive deeper into the specifics. We start by discussing stand-alone tools, and thereafter discuss an all-in-one integrated platform.
 
 

Deployment via TeamCity and Octopus Deploy


As discussed in the previous blog, application deployment in DevOps requires a Continuous Integration (CI) tool and Continuous Delivery (CD) tool/release management software to automate the entire process. Currently, there are several tools available in the market.

 

This blog discusses two best-of-breed tools—TeamCity as the CI tool, and Octopus Deploy as the release management/CD software that deploys the package to the Azure web application. Since different vendors deliver these best-of-breed tools, there is some complexity involved in integrating them into a single solution.

 

Introduction to Microsoft Public Cloud Azure



Before we delve into the DevOps tools, let’s recap the deployment environment. As a reminder, we are focusing on Microsoft Azure. However, be assured that information from this blog can be applied to other public cloud solutions.

 

Azure has the capability to host applications. These applications can be further integrated with other applications and services on the Azure platform rather easily. Azure’s integration features provide customers with enhanced business agility and efficiency. They help users deploy the source code to multiple Azure websites.

 

Understanding TeamCity



TeamCity is a CI server for developers, developed by JetBrains.

 

It provides several relevant features:

  • Supports different platforms/tools/languages
  • Automates the build and deployment processes
  • Enhances quality and standards across teams
  • Works as an artifact and NuGet repository
  • Provides a reporting and statistics feature

 

Definition According to Martin Fowler, “Continuous Integration is a software development practice in which developers commit code changes into a shared repository several times a day. Each commit is followed by an automated build to ensure that new changes integrate well into the existing code base and to detect problems early.”

 

Basic Concepts of TeamCity


Here are the basic concepts of TeamCity:

 

Project: Refers to a set of build configurations.

Build configuration: Refers to a collection of settings (VCS roots, build steps, and build triggers) that define a build procedure.

 

VCS root: Refers to a set of version control settings (source path, username, password, etc.) that allow TeamCity to interact with a version control system for managing the modifications and sources for a build.

 

Build step: Refers to a task to be executed by the server. It is represented by a build runner.

 

Build runner: Integrates different tools, including the build tool (Ant, Gradle, MSBuild, PowerShell, etc.), a testing framework (JUnit, NUnit, etc.), and a code analysis engine. It describes the build workflow.

 

Build agent: Refers to an application that is responsible for executing the build process. It helps developers get faster feedback, as different tests can be run simultaneously on different platforms supported by the build agent.

 

TeamCity server: Refers to the server application, which manages all build agents, manages the sequence of builds to build agents, and conveys the results.

 

Build: Refers to the program/application version.

Build trigger: Refers to a rule that automatically starts a new build when a specified event occurs.

Build queue: Refers to a sequence of builds that are triggered and not yet started. These builds are assigned to the respective agents when they are available.

Build artifact: Refers to the set of files (installers, WAR files, reports, log files, etc.) generated by the build process.

 

Configuring a Build in TeamCity



In this section, we configure arguments for the PowerShell script in TeamCity. This enables TeamCity to execute the PowerShell script. For this scenario, we created a PowerShell script named [string]App.ps1.

 

The build configuration uses a step-oriented approach, which is outlined in the following sections.

Step 1 Creating a Project

To configure a build in TeamCity, first create a project. There are several options available for this task; here, we create one manually.

 

Perform the following steps to create a standard project:

  • 1. Click the Administration link in the top-right corner to open the Administration area.
  • 2. Click the down arrow button beside the Create Project button. A drop-down list appears.
  • 3. Select the Manual option from the drop-down list to create a project manually. After you click the Manual option, the Create New Project page appears.
  • 4. Enter the desired name of the project in the Name text box.
  • 5. Enter the desired ID of the project in the Project ID text box.
  • 6. Enter the desired description of the project in the Description text box.
  • 7. Click the Create button to create the project. Now, the project has been created.

 

Step 2 Creating a Build Configuration

Build configurations describe the process by which a project’s sources are fetched and built. Once the project is created, TeamCity prompts you to create build configurations. Alternatives for creating a build configuration include creating it manually or pointing to a Bitbucket Cloud repository.

 

Perform the following steps to create a build configuration manually:

  • 1. Click the down arrow button beside the Create Build Configuration button. A drop-down list appears.
  • 2. Select the Manual option from the drop-down list to create the build configuration manually.
  • 3. Specify the name of the build configuration in the Name text box.
  • 4. Specify the build configuration ID in the Build Configuration ID text box.
  • 5. Specify the desired description in the Description text box.
  • 6. Click the Save button.

 

Step 3 Configuring the Version Control Settings

In this step, we provide settings related to the VCS root. The VCS root describes a connection to a version control system, and there are several settings associated with it. These settings allow VCS to communicate with TeamCity. They define the way changes are monitored and sources are specified for a build. Perform the following steps to configure the version control settings:

 

  • 1. Select the Version Control Settings tab.
  • 2. Click the Attach VCS Root button. The New VCS Root page appears.
  • 3. Select the desired type of VCS from the Type of VCS drop-down list. We selected Subversion.
  • 4. Specify a unique VCS root name in the VCS Root Name text box.
  • 5. Specify a unique VCS root ID in the VCS Root ID text box.

 

The connection settings appear on the page depending on the type of VCS selected. In our case, the SVN Connection Settings section appears.

6. Specify the repository URL in the URL text box.

 

7. To allow TeamCity to communicate with the Source repository, specify the username and password in the Username and Password text boxes, respectively.

 

8. Click the Test Connection button to test the connection. This validates that TeamCity can communicate with the repository. A Test Connection message box appears with the Connection Successful message. If the connection shows failure, check the specified URL and the credentials.

9. Click the Create button.

 

Step 4 Configuring the Build Steps

Once the VCS root is created, we can configure the build steps. Perform the following steps to add a build step:

1. Select the Build Steps tab.

2. Click the Add Build Step button. The Build Step page appears.

 

3. Select the PowerShell option from the Runner Type drop-down list. Note In this example, we use the PowerShell script file named [string]App.ps1. This file compiles the source code.

 

4. Specify the desired step name in the Step Name text box.

5. Select the desired step execution policy from the Execute Step drop-down list.

6. Select the File option from the Script drop-down list.

 

7. Specify the path to the PowerShell script in the Script File box. This field contains the physical path mapped to the [string]App.ps1 script, which is located on the build agent.

8. Specify the PowerShell script execution mode in the Script Execution Mode option.

 

9. Enter script arguments in the Script Arguments section. We entered five arguments that will be passed to the [string]App.ps1 script during execution by TeamCity.

 

ARGUMENTS PASSED TO THE POWERSHELL SCRIPT

All arguments are specified in terms of their relative paths. Descriptions of all the arguments passed to the PowerShell script follow:

  • Workflow: Allows the PowerShell script to access the contents of the Workflow folder.
  • Central: Allows the PowerShell script to access the contents of the Central folder.
  • Server: Allows the PowerShell script to access the contents of the Server folder.
  • Nuget.exe: Allows the PowerShell script to load the Nuget.exe file, which is located on the build agent.
  • Target folder: Specifies the path of a folder on the build agent where the compiled code is placed.

10. Click the Save button. The build configuration is now complete, and TeamCity can execute the build through the PowerShell script [string]App.ps1.
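To make the argument handling concrete, here is a hedged sketch of how the param block of a script like [string]App.ps1 might declare these five arguments; the parameter names and build commands are illustrative, not taken from the original script:

    # Hypothetical parameter block for a build script such as [string]App.ps1.
    param(
        [string]$WorkflowPath,   # path to the Workflow folder
        [string]$CentralPath,    # path to the Central folder
        [string]$ServerPath,     # path to the Server folder
        [string]$NugetExePath,   # path to Nuget.exe on the build agent
        [string]$TargetFolder    # folder on the build agent for compiled output
    )

    # Compile the sources and package the output (illustrative commands).
    & msbuild "$ServerPath\Server.sln" /p:Configuration=Release /p:OutDir="$TargetFolder"
    & $NugetExePath pack "$TargetFolder\Server.nuspec" -OutputDirectory $TargetFolder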

 

Creating a Package

Once TeamCity creates a successful build, changes may need to be made to the PowerShell script ([string]App.ps1). For example, we may need to make changes to NugetExePath to accept a new argument.

 

The changes made to the PowerShell script create a package in the target folder. Copy this NuGet package from the build agent to where it will be imported into the Octopus server for deployment purposes.
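As a simple sketch, the copy can be scripted; the source and destination paths below are placeholders for the build agent's target folder and the folder Octopus imports from:

    # Minimal sketch: move the generated package to the Octopus import location.
    # Both paths are hypothetical.
    Copy-Item -Path "C:\BuildAgent\Target\*.nupkg" `
              -Destination "\\OctopusServer\PackageSource" -Force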

 

Using Octopus Deploy



Octopus Deploy is a deployment server (or release management software) that automates the deployment of different applications into different environments. It makes this process effortless.

 

Octopus Deploy automates the deployment of:

  • ASP.NET web applications
  • Java applications
  • Database updates
  • NodeJS applications
  • Custom scripts

Octopus Deploy supports the following environments:
  • Development
  • Test
  • Production

 

Octopus Deploy provides a consistent deployment process to support the deployment needs of team members; an Octopus user can define a process for deploying the software. The Octopus user can specify different environments for different applications and can set privileges for different team members to deploy to different environments.

For example, a team member can be authorized to deploy to a test environment while being restricted from deploying to production. Note The latest MSI of Octopus Deploy can be downloaded at https://octopus.com/downloads.

 

Creating a Project

 

Octopus Deploy allows users to create projects. In Octopus Deploy, a project is a set of deliverable components, including websites and database scripts. A project is created within Octopus Deploy to manage multiple software projects across different environments. For instance, if there are six developers working on the same business project, we need to create a single project in Octopus Deploy.

 

Perform the following steps to create a project:

  • 1. Navigate to the Projects area.
  • 2. Click the Add Project button. The Create Project page opens.
  • 3. Specify a relevant name for the project in the Name text box.
  • 4. Specify a relevant description for the project in the Description text area.
  • 5. Select the desired option from the Project Group drop-down list.
  • 6. Select the desired lifecycle from the Lifecycle drop-down list.
  • 7. Click the Save button.

 

Note A lifecycle is used to control how deployments are promoted between environments automatically.

 

Creating an Environment


An environment is a group of machines to which the software is deployed simultaneously. Common environments in Octopus Deploy are Test, Acceptance, Staging, and Production. In other words, an environment can be defined as a group of deployment targets (Windows servers, Linux servers, Microsoft Azure, etc.). For the current scenario, we create two environments so that we can deploy to two websites. Each environment represents a single tenant.

 

Perform the following steps to create an environment:

  • 1. Navigate to the Environments area.
  • 2. Click the Add Environment button to add an environment. The Environment Settings page opens.
  • 3. Enter a relevant name for the environment in the Name text box. In this case, we entered Test1.
  • 4. Enter a relevant description of the environment in the Description text box.
  • 5. Click the Save button.

 

Uploading NuGet Package to Octopus Deploy

We can now upload the NuGet package, which we created earlier using the PowerShell script in TeamCity, to Octopus Deploy.

 

Perform the following steps to upload the NuGet package:

  • 1. Navigate to Library, then Packages, in the Octopus Deploy interface.
  • 2. Click the Upload Package button. The Upload a NuGet Package page appears.
  • 3. Click the Browse button beside the NUPKG File option. The Choose File to Upload dialog box appears.
  • 4. Navigate to the package’s location. As discussed earlier, we copied the package to the Package Source folder.
  • 5. Select the package.
  • 6. Click the Open button. The name of the selected package file with its complete path appears in the NUPKG File box.
  • 7. Click the Upload button. After clicking the Upload button, the package file starts uploading.

 

Creating Steps for the Deployment Process



As discussed earlier, Octopus Deploy allows users to define the deployment process for their project easily. Users can add steps to the deployment process using templates, including built-in step templates, custom step templates, and community contributed step templates.

 

Users can also select the Add Step button to display a list of templates and then select the desired step. The built-in steps can be used to handle common deployment scenarios.

 

In the current scenario, we created the following two steps for the deployment process:

NugetDeploy: This step deploys a NuGet package to one or more machines, which are running the Tentacle deployment agent.

 

Web Deploy-Publish Website (MSDeploy): This step is created to deploy the NuGet package to Azure websites by running a PowerShell script across machines.

 

Perform the following steps to add the NugetDeploy step:

1. Select the Process tab.

2. Click the Add Step button. The Choose Step Type pop-up appears with a list of built-in step templates.

 

3. Select the desired built-in step template. In this case, we selected the Deploy a NuGet Package option. The Step Details page appears.

4. Enter a name for the step in the Step Name text box. In this case, we entered NugetDeploy.

5. Specify the target machines in the Machine Roles text box. In this case, we selected WebRole.

6. Select the desired package feed from the NuGet Feed drop-down list.

7. Click the Add button.

 

After adding the step, we see that the NuGet Package ID field contains the name of the NuGet package that was uploaded earlier.

 

Similarly, we can add a step using the custom step template with the name Web Deploy-Publish Website (MSDeploy). We can look at the created steps by selecting the Process tab of the created project.

 

Using Variables

Variables are required for eliminating the need to hard-code configuration values, so that different environments can be supported easily. They are required while deploying packages to Azure websites. As a NuGet package is shared between two sites, we used the OctopusBypassDeploymentMutex variable to avoid resource locking of the NuGet package.
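For illustration, variables defined in the project are exposed to PowerShell deployment steps through the $OctopusParameters collection; the WebsiteName variable below is hypothetical, not from the text:

    # Hedged sketch: reading a project variable inside an Octopus PowerShell step.
    # "WebsiteName" is an illustrative variable name.
    $siteName = $OctopusParameters["WebsiteName"]
    Write-Host "Deploying package to $siteName"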

 

Creating and Deploying a Release


A release contains all details of the project and package so that it can be deployed to different environments as per requirements. Perform the following steps to create a release:

  • 1. Navigate to the Overview page, which displays all the details of the project.
  • 2. Click the Create Release button. The Create page appears.
  • 3. Enter the desired release version in the Version text box.
  • 4. Select the desired package from the Package column.
  • 5. Enter the desired release notes in the Release Notes text area.
  • 6. Click the Save button.

 

Note In the current scenario, we are creating a release to deploy the NuGet package to multiple Azure websites.

A release is created with the specified version. The Deploy page opens. Here, we can select the desired environment to which we want to deploy the created release. We can also click the Change button to change the environment.

 

7. Click the Deploy Now button to deploy the created release.

 

In this section, we discussed the CI tool called TeamCity and the release management software or CD tool called Octopus Deploy. TeamCity builds the source code using MSBuild. Initially, we configured TeamCity by creating a new project and providing the SVN path to fetch the latest code onto the build agent.

 

We then configured the source code and set parameters for the PowerShell script file. The target path settings were modified to create a NuGet package. This package was copied from the build agent to a location where Octopus Deploy could pick it up.

 

In Octopus Deploy, we created a project and two environments to test multiple deployment scenarios. Then, we uploaded the package. We also created two steps—NugetDeploy and Web Deploy-Publish Website (MSDeploy). The former was created to deploy the uploaded NuGet package onto a Tentacle machine, while the latter was created to deploy the contents of the NuGet package from the Tentacle machine to the Azure websites.

 

We also configured variables and credentials for both environments. Lastly, we created a release for the project, which could be deployed to different environments. The release allowed us to deploy the contents of the NuGet package onto Azure websites in parallel. In the end, we executed the release and found that the content of the NuGet package was deployed successfully.

 

Deployment via Visual Studio Team Services


In the previous section, we discussed the process of deploying applications to Azure using best-of-breed, stand-alone DevOps tools: TeamCity as a CI tool, and Octopus Deploy as a CD tool. The challenge with that example solution is that separate tools are used to deploy applications.

 

In this blog, we review a DevOps platform, an all-encompassing end-to-end solution called Microsoft Visual Studio Team Services (VSTS); see www.visualstudio.com/team-services/.

 

Visual Studio Team Services is a collaborative solution that takes care of the entire software deployment lifecycle, from creating packages to deploying the application. One of its major strengths is its tight integration with Azure. This blog steps through the entire process of application deployment to Azure using Visual Studio Team Services.

 

Understanding Visual Studio Team Services (VSTS)


Visual Studio Team Services is an Application Lifecycle Management (ALM) system that manages the entire process of the software development lifecycle. In earlier versions, it was known as Visual Studio Online (VSO).

 

Features of Visual Studio Team Services

Some of the features of Visual Studio Team Services are as follows:

  • Provides integrated software development.
  • Supports source control systems, including Git and Team Foundation Version Control (TFVC).
  • Supports several features that can be used to track product features, bugs, and other issues.
  • Supports several Agile methods for planning purposes.
  • Automates the build, test, and release processes for rapid release of the software.
  • Supports usage across massively scaled-out teams consisting of thousands of members.
  • Provides a reliable and scalable service that is available 24 hours a day, seven days a week, and is backed by a 99.9% Service Level Agreement (SLA).
  • Allows users to customize elements such as source control, work tracking, build and release, and test according to business requirements.
  • Allows users to add more functionality using Visual Studio Marketplace, service hooks, REST APIs, and Visual Studio SDKs.

 

Advantages of Visual Studio Team Services

Visual Studio Team Services is a Microsoft product introduced as an evolution of Team Foundation Server (TFS); therefore, it is also known as the cloud version of TFS. Some of the advantages of Visual Studio Team Services are as follows:

  • Free for up to five users.
  • Operations and maintenance costs are lower than TFS, as it is a cloud-based solution, while TFS is an on-premise solution.
  • Encourages more stakeholders to get involved as they can log on to the platform from anywhere and at any time.
  • Allows developers to write and commit code from anywhere.
  • Enables effortless inter-team communication, as it supports the Git source control system, which provides the cross-platform facility.
  • Ideal platform for organizations to develop a modern DevOps environment.

 

Creating an Account in Visual Studio Team Services

One of the primary tasks while using Visual Studio Team Services is creating an account to host the project. Perform the following steps to create an account in Visual Studio Team Services:

 

 

  • 6. Click the Sign In button. The Account Creation page appears.
  • 7. Enter the desired name of the account in the text box beside the Host My Projects At label. This enables you to specify a host location (US, India, etc.) for the projects.
  • 8. Select the desired radio button below the Manage Code Using option. This specifies the type of repository (such as Git) used to manage the code.
  • 9. Click the Continue button.

 

After you click the Continue button, the process of creating an account begins. The account is created with the specified name.

 

Azure Application Deployment

 

In the previous sections, we discussed DevOps fundamentals and the use of best-of-breed, stand-alone DevOps software, and we reviewed the integrated DevOps platform. The next logical step is to put it all together and manage the software development lifecycle of an Azure application. Of course, you can further enhance this solution to suit your website or enterprise software. The key here is DevOps.

 

This blog discusses a real Azure application deployment using Visual Studio Team Services. We have a virtual machine on Azure that has e-commerce software (Magento) installed on it. We will use Visual Studio Team Services to deploy changes to the code automatically and view the effects on the Azure application. The solution also includes a GitHub repository to store and version source code and a shell script for installing the Azure virtual machine and Magento application.

 

We make changes in the Visual Studio Team Services Git repository, commit the changes, and deploy the release to view the changes. In this scenario, we change the HTML/CSS files of the source code to change the color of the menus from blue to orange and deploy a release.
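As a hedged sketch, the commit-and-push cycle for such a change might look as follows; the account URL, repository name, and file path are placeholders, not taken from the text:

    # Minimal sketch: commit the CSS change to the VSTS Git repository.
    # The URL, repository, and file path are hypothetical.
    git clone https://myaccount.visualstudio.com/_git/MagentoSite
    cd MagentoSite
    # ...edit the stylesheet to change the menu color from blue to orange...
    git add app/design/frontend/custom/styles.css
    git commit -m "Change menu color from blue to orange"
    git push origin master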

 

Understanding Magento


Magento is an open source e-commerce platform. It allows developers to easily create a shopping cart for their online stores. It also allows developers to have better control over the content, appearance, and functionality of their online stores. It provides features such as search engine optimization and support for catalog-management tools.

 

Magento is extremely simple to use and can be used by individuals who are not experienced developers. The availability of a number of themes and plug-ins makes it effective in enhancing the customers’ experience. Considerable support is available through its large volunteer community.

Benefits of Using Magento


There are several benefits of using Magento. Some of them are as follows:

  • Easy installation.
  • Provides several layouts and plug-ins that can be used to add more functionality to the e-commerce solution.
  • Supports many payment gateways.
  • It is an open source technology, which means that it can be modified based on user requirements.

Disadvantages of Magento

The following disadvantages/limitations are associated with Magento:

  • A more complex system compared to Drupal.
  • Requires complex programming to add custom functionality.
  • Requires experienced developers to enable it to integrate with other systems.

 

Prerequisites of Running an Azure Application with Magento

There are a few prerequisites needed to run an Azure application with Magento. A system must have:

  • A virtual machine on Azure running Linux
  • Apache server
  • MySQL
  • PHP

 

Setting Up Magento


In this scenario, we used an ARM template to set up Magento. This template contains the source code and shell scripts for setting up a virtual machine on Azure and installing all the prerequisites and Magento on the created virtual machine. This template also contains a file that creates a button. Users utilize that button to navigate to Azure in order to deploy the virtual machine and launch the Magento application.
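As an aside, the same ARM template can be deployed without the button. Here is a minimal sketch using the Az PowerShell module; the resource group name and template file names are placeholders:

    # Hedged sketch: deploying an ARM template from PowerShell.
    # Resource group, location, and file names are hypothetical.
    New-AzResourceGroup -Name "MagentoRg" -Location "EastUS"
    New-AzResourceGroupDeployment -ResourceGroupName "MagentoRg" `
        -TemplateFile ".\azuredeploy.json" `
        -TemplateParameterFile ".\azuredeploy.parameters.json"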

 

Note To use the Azure cloud, you need an Azure subscription. Perform the following steps to set up Magento:

1. Click the Deploy to Azure button to deploy a Magento package.

 

After clicking the Deploy to Azure button, you are redirected to the Azure login page, wherein you need to specify an email and password to log in.

Once the authorization is done, the Custom Deployment page appears.
  • 2. Select the subscription details from the Subscription drop-down list.
  • 3. Select the desired radio button beside the Resource Group option to specify whether to create a new resource group or use an existing resource group. In this case, we selected the Create New radio button.

 

  • 4. Specify the name of the resource group in the Create a Resource Group text box.
  • 5. Select the desired location from the Location drop-down list.
  • 6. Specify a domain name in the Domain Name text box.
  • 7. Specify the name of the customer in the Customer ID text box.

 

  • 8. Specify the tier of customer subscription in the Customer Tier text box.
  • 9. Specify the password for MySQL in the My SQL Password text box.
  • 10. Specify the username of the virtual machine server admin in the VM Admin Username text box.
  • 11. Specify the password of the virtual machine server admin in the VM Admin Password text box.

 

The values for fields—including Magento File Backup (backup of Magento files), Magento Media Backup (backup of media files), Magento Init Backup (backup of INIT folder content), Magento Var Backup (backup of VAR folder content), Magento Default HTaccess (default access file), Magento DB Backup (backup of Magento DB), and virtual machine size (size of the required virtual machine)—are automatically completed through the ARM template.

 

12. Click the Purchase button.

 

It takes a few minutes after clicking the Purchase button to get a successful deployment. Once the deployment is successful, the virtual machine starts running. We can view the artifacts by visiting the created resource group.

 

To view the deployment history, click the deployment under the Deployment History section of the created resource group. Here, we get the URL in the INSTALLEDURL text box under the Outputs section. If we run this URL in any web browser, we get the Magento website.

 

DevOps application - business scenarios


The application of DevOps varies across multiple scenarios, with the accrued benefits listed here:

 

Automation of development cycle: Business needs are met with minimal manual intervention, and a developer can run a build with a choice of open tools through a code repository; the QA team can create a QA system as a replica, and deploy it to production seamlessly and quickly.

 

The single version of the truth - source code management: There are multiple versions of the code, but it is difficult to ascertain the appropriate code for the purpose. We lack a single version of the truth. Code review feedback is through emails and not recorded, leading to confusion and rework.

 

Consistent configuration management: We develop, test, and build source code on different systems. Validating platforms and the compatibility of dependency versions is manual and error-prone. It’s really challenging to ensure all the systems speak the same language and have the same versions of the tools, compilers, and so on. Our code works fine on build systems but doesn’t when moved to production systems, causing embarrassment regarding business deliverables, and cost overheads to react.

 

Product readiness for markets: We have a process to develop code, test, and build through defined timelines. There are many manual checks and validations in the process; the integrations between different groups cause our commitments and delivery dates to be unpredictable. We wish to know how close our product is to delivery and its quality periodically, to plan in advance rather than being reactive.

 

Automation of manual processes: We are following manual processes, which are often error-prone, and wish to enhance efficiency by following an automation process wherever applicable. Testing cycle automation, incremental testing, and integrating with the build cycle will expedite product quality, the release cycle, and infrastructure service automation such as creating, starting, stopping, deleting, terminating, and restarting virtual or bare-metal machines.

 

Containers: Portability of code is the primary challenge. The code works in development and QA environments, but moving it to production systems causes multiple challenges, such as code not compiling due to dependency issues, build breakdowns, and so on. Building platform-agnostic code is a challenge, and maintaining multiple platform versions of development and QA platforms is a huge overhead. Portable container code alleviates these kinds of issues.
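To make the portability point concrete, a hedged sketch follows: the same container image, built once, runs unchanged on any Docker host; the image name and port mapping are placeholders:

    # Minimal sketch: build a portable image and run it on any Docker host.
    # The image name and port mapping are hypothetical.
    docker build -t myapp:1.0 .            # package the app and its dependencies
    docker run -d -p 8080:80 myapp:1.0     # run the same artifact in any environment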

 

On-premise challenges: We have many on-premise systems. There are multiple challenges, from capacity planning to turnaround time. The Capex and operational expenses are unpredictable. Cloud migration seems to have multiple choices and vendors, so there needs to be an efficient adoption method to ensure results.

 

Business drivers for DevOps adoption in big data

Factors contributing to wide-scale popularity and adoption of DevOps among big data systems are listed as follows.

 

Data explosion

Data is the new form of currency—yes, you read that right. It’s as valuable an asset as oil and gold. In the past decade, many companies realized the potential of data as an invaluable asset to their growth and performance.

 

Let’s understand how data is valuable. For any organization, data could be in many forms, such as customer data, product data, and employee data. Not having the right data on your employees, customers, or products could be devastating. It’s basic knowledge and common sense that the correct data is key to running a business effectively.

 

There is hardly any business today that doesn’t depend on data-driven decisions; CEOs these days are relying more on data for business decisions than ever before, such as which product is more successful in the market, how much demand exists area-wise, which price is more competitive, and so on.

 

Data can be generated through multiple sources, internal, external, and even social media. Internal data is the data generated through internal systems and operations, such as in a bank, adding new customers or customer transactions with the bank through multiple channels such as ATM, online payments, purchases, and so on.

 

External sources could be procuring gold exchange rates and foreign exchange rates from RBI. These days, social media data is widely used for marketing and customer feedback on products. Harnessing the data from all avenues and using it intelligently is key for business success.

 

Going a step further, a few companies even monetize data, for example, Healthcare IQ, Owens & Minor, State Street Global Corporation, Ad Juggler, comScore, Verisk Analytics, Nielsen, and LexisNexis. These organizations buy raw data such as web analytics on online product sales, or online search records for each brand, reprocess the data into an organized format, and sell it to research analysts or organizations looking for competitor intelligence data to reposition their products in markets.

 

Let’s analyze the factors fueling the growth of data and business. Fundamental changes in market and customer behavior have had a significant impact on the data explosion. Some of the key drivers of change are:

 

Customer preference: Today, customers have many means of interacting with businesses; for example, a bank provides multiple channels such as ATM withdrawals, online banking, mobile banking, card payments, on-premise banking, and so on. The same is true for purchases; these can be in the shop, online, mobile-based, and so on, which organizations have to maintain for business operations. So, these multiple channels contribute to increased data management.

 

Social media: Data is flooding in from social media such as Facebook, LinkedIn, and Twitter. On the one hand, they are social interaction sites between individuals; on the other hand, companies also rely on social media to socialize their products. The data posted in terabytes/petabytes, in turn, is used by many organizations for data mining too. This is contributing to the huge data explosion.

 

Regulations: Companies are required to maintain data in proper formats for a stipulated time, as required by regulatory bodies. For example, to combat money laundering, each organization dealing with finance is required to have clear customer records and credentials to share with regulatory authorities over extended periods of time, such as 10 to 15 years.

 

Digital world: As we move towards the paperless digital world, we keep adding more digital data, such as e-books and ERP applications to automate many tasks and avoid paperwork. These innovations are generating much of the digital data growth as well.

The next generation will be more data-intensive, with the Internet of Things and data science at the forefront, driving business and customer priorities.

 

Cloud computing


Acceptance of cloud platforms as the de facto service line has brought many changes to procuring and managing infrastructure. Moving the provisioning of hardware and other commodity work to the cloud improves the efficiency of services and allows IT departments to shift their focus away from patching operating systems.

 

DevOps with cloud adoption is the most widely implemented popular option. With cloud penetration, the addition of infrastructure/servers is just a click away. This, along with credible open source tools, has paved the way for DevOps.

In a fraction of the time, build, QA, and pre-prod machines can be added as exact replicas with the required configurations, using open source tools.
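For example, a replica build machine can be provisioned with a single command. This is a hedged sketch using the Az PowerShell module; the resource names and image are placeholders:

    # Minimal sketch: spin up a build-agent replica in the cloud on demand.
    # Resource names and image are hypothetical.
    New-AzVM -ResourceGroupName "DevOpsRg" `
             -Name "build-agent-2" `
             -Image "Win2019Datacenter" `
             -Credential (Get-Credential)    # prompts for the admin account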

 

Big data


Big data is the term used to represent multiple dimensions of data—large volume, velocity, and variety—delivering value for the business. Data comes from multiple sources as structured, semi-structured, and unstructured data. The data velocity could be batch mode, real time from a mechanical sensor or online server logs, or streaming data in real time.

 

The volumes of data could be terabytes or petabytes, which are typically stored on Hadoop-based storage and other open source platforms. Big data analytics extends to building social media analytics such as market sentiment analysis based on social media data from Twitter, LinkedIn, Facebook, and so on; this data is useful to understand customer sentiment and support marketing and customer service activities.

 

Data science and machine learning


Data science as a field has many dimensions and applications. We are familiar with science; we understand the features, behavior patterns, and meaningful insights that result in formulating reusable and established formulas. In a similar way, data can also be investigated to understand the behavior patterns and meaningful insights, through engineering and statistical methods. Hence it can be viewed as data + science, or the science of data.

 

Machine learning is a combination of data extraction, extract, transform, load (ETL) or extract, load, transform (ELT) preparation, and prediction algorithms used to derive meaningful patterns from data and generate business value. These projects have a development lifecycle in line with project or product development. Aligning with DevOps methodologies will provide valuable benefits for the program’s evolution.

 

In-memory computing


Traditional software architecture was based on disks as the primary data storage; data moved from disk to main memory and the CPU to perform aggregations for business logic. This caused the I/O overhead of moving large volumes of data back and forth between disk and memory units.

 

In-memory technology is based on hardware and software innovations to handle the complete business application data in the main memory itself, so the computations are very fast. To enable in-memory computing, many underlying hardware and software advancements have contributed.

The software advancements include the following:

  • Partitioning of data
  • No aggregate tables
  • Insert-only delta
  • Data compression
  • Row plus column storage

The hardware advancements include the following:

  • Multi-core architecture allows massive parallel scaling
  • Multifold compression
  • Main memory has scalable capacity
  • Fast prefetch, unlimited size

 

Planning the DevOps strategy


A good DevOps strategy, as discussed in this blog, helps the user gain an in-depth and wider understanding of the subject and its application to multiple technologies and interfaces. For an organization, it provides focus, creates a common (unbiased) view of the current problems, develops the future state, unveils opportunities for growth, and results in better business outputs.

 

A holistic DevOps strategy, at the most basic level, must answer the following questions:

  • What are our business aims and goals?
  • How do we plan the roadmap? Where do we begin?
  • How should we channel our efforts?
  • What are we trying to accomplish?
  • What is the schedule for this?
  • What is the impact to the business?
  • How do our stakeholders see the value?
  • What are the benefits and costs of doing it?

 

A good DevOps strategy for an organization will bring multiple benefits, channel energy to focus on high impact problems, produce clarity to develop the future state, identify growth opportunities, and pave the way for better business outputs.

 

A DevOps platform strategy will be a unique and extensive program, covering every aspect of the software lifecycle, integrating multiple technologies, platforms, and tools, and posing numerous challenges that need to be handled with skill, precision, and experience.

An organization can consider the introduction of DevOps to cater to specific purposes, such as the following:

  • Automating infrastructure and workflow configuration management
  • Automating code repositories, builds, testing, and workflows
  • Continuous integration and deployment
  • Virtualization, containerization, and load balancing
  • Big data and social media projects

 

  • Machine-learning projects

There are a wide variety of open source tools to select for adoption in specific segments of DevOps, such as the following:

 

Docker: A Docker container consists of packaging the application and its dependencies all up in a box. It runs as an isolated process on the host operating system, sharing the kernel with other containers. It enjoys resource isolation and allocation benefits like VMs but is much more portable and efficient.

 

Kubernetes: Kubernetes is an open source orchestration system for Docker containers. It groups containers into logical units for easy management and discovery, handles scheduling on nodes, and actively manages workloads to ensure their state matches the users’ declared intentions.
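As a brief hedged illustration, declaring a replicated workload and letting Kubernetes reconcile it can be as simple as this; the image and names are placeholders:

    # Minimal sketch: group containers into a logical unit (a Deployment)
    # and let Kubernetes keep three replicas running. Names are hypothetical.
    kubectl create deployment myapp --image=myapp:1.0 --replicas=3
    kubectl get pods -l app=myapp    # verify the scheduler placed the replicas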

 

Jenkins: Jenkins is a web-enabled tool, used through an application or a web server such as Tomcat, for continuous build, deployment, and testing. It integrates with build tools such as Ant/Maven and the source code repository Git, and supports a master/slave agent architecture.

 

Ansible: Ansible automates software provisioning, configuration management, and application deployment using an agentless, Secure Shell (SSH) mode; Playbooks, Tower, and Yum scripts are the mechanisms. Chef and Puppet: Chef and Puppet are agent-based pull mechanisms for the deployment automation of work units.

 

GitHub: Git is a popular open source version control system, and GitHub is a web-based hosted service for Git repositories. GitHub allows you to host remote Git repositories and has a wealth of community-based services that make it ideal for open source projects.

 

There are comprehensive frameworks readily available, such as RedHat Openshift, Microsoft Azure, and AWS container services, with pre-integrated and configured tools to implement.

 

A few popular open source tools are listed here:

  • Source code management: Git, GitHub, Subversion, and Bitbucket
  • Build management: Maven, Ant, Make, and MSBuild
  • Testing tools: JUnit, Selenium, Cucumber, and QUnit
  • Repository management: Nexus, Artifactory, and Docker Hub
  • Continuous integration: Jenkins, Bamboo, TeamCity, and Visual Studio
  • Configuration provisioning: Chef, Puppet, Ansible, and Salt
  • Release management: Visual Studio, Serena Release, and StackStorm
  • Cloud: AWS, Azure, OpenShift, and Rackspace
  • Deployment management: Rapid Deploy, Code Deploy, and Elastic box
  • Collaboration: Jira, Team Foundation, and Slack
  • BI/Monitoring: Kibana, Elasticsearch, and Nagios
  • Logging: Splunk, Logentries, and Logstash
  • Container: Linux, Docker, Kubernetes, Swarm, AWS, and Azure

 

Benefits of DevOps


Non-adherence to DevOps practices would be challenging for an organization, for the following reasons:

  • High deployment effort for each of the development, QA, and production systems
  • Complex manual installation procedures are cumbersome and expensive
  • Lack of a comprehensive operations manual makes the system difficult to operate

 

  • Insufficient trace or log file details make troubleshooting incomplete
  • Application-specific issues of performance impact are not assessed for other applications

 

  • SLA adherence, as required by the business application, would be challenging
  • Monitoring servers, filesystems, databases, and applications in isolation will have gaps
  • Business application redundancy for failover is expensive in isolation

 

DevOps adoption and maturity for big data systems will benefit organizations in the following ways:

  • DevOps processes can be implemented standalone or in combination with other processes
  • Automation frameworks will improve business efficiency
  • DevOps frameworks will help to build resilience into the application’s code

 

  • DevOps processes incorporate SLAs for operational requirements
  • The operations manual (runbook) is prepared in development to aid operations

 

  • In matured DevOps processes, runbook-driven development is integrated
  • In DevOps processes, application-specific monitoring is part of the development process

 

  • DevOps planning considers high availability and disaster recovery technology

 

  • Resilience is built into the application code in-line with technology features

 

  • DevOps fully scripted installation facilitates fully automated deployment
  • The DevOps operations team and developers are familiar with using logging frameworks

 

  • The non-functional requirements of operability, maintenance, and monitoring get sufficient attention, along with system development specifications
  • Continuous integration and continuous delivery eliminate human errors, reduce planned downtime for upgrades, and facilitate productivity improvements

 

DevOps Process


The DevOps standard processes prescribed across the industry and adopted by organizations are listed here; we will discuss them in detail:

  • Source code management
  • Source code review
  • Configuration management
  • Build management
  • Repository management
  • Release management
  • Test automation
  • Continuous integration
  • Continuous delivery
  • Continuous deployment
  • Infrastructure as Code
  • Application performance monitoring
  • Routine automation/continuous improvement

 

DevOps frameworks—under DevOps frameworks, we will study the life cycle models, maturity states, progression and best practices frameworks, and also Agile methodology:

  • DevOps project life cycle
  • Maturity states
  • Progression frameworks
  • DevOps practices frameworks
  • Agile methodology

 

DevOps Best Practices


The adoption of DevOps best practices will help to align people and progress towards organizational goals. DevOps offers multiple process frameworks at every stage of software development. Full-scale implementation of DevOps in an organization requires a cultural shift integrating departments, people, and the process of software life cycles. It enables organizations to move higher on the maturity roadmap in terms of compliance and process adherence.

 

DevOps Process

Now we will look at the DevOps standard processes prescribed across the industry and adopted by organizations, and discuss them in detail.

 

Source Code Management

Source code management (SCM) systems have been in use for decades, offering many functions and benefits. However, integrating them with DevOps processes offers robust integration and automation. A source code management system enables multiple developers to develop code concurrently across multiple development centers spread across diverse geographies.

 

SCM helps in the management of code base and version control at the file level, so developers don’t overwrite each other’s code, and they have the ability to work in parallel on files in their respective branches.

 

Developers merge their code changes to the main branch or a sub-branch, where they can be tracked, audited, queried for bug fixes, and rolled back if needed. Branching is an important function of SCM; multiple branches of the software are maintained for different major and minor releases, tracking the features and bug fixes across various release versions. SCM enables managing process adherence across the development, test, and production environments, facilitating entire software lifecycle management from development to support.
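A hedged sketch of this branch-and-merge workflow in Git follows; the branch name and commit message are illustrative:

    # Minimal sketch: parallel work in a branch, merged and auditable.
    git checkout -b feature/bugfix-123     # create an isolated work branch
    git add .
    git commit -m "Fix bug 123"            # tracked, attributable change
    git checkout master
    git merge feature/bugfix-123           # integrate back into the main line
    git log --oneline                      # audit the change history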

 

The DevOps process framework emphasizes the adoption of SCM for accruing the following benefits for the organization:

  • Coordination of services between members of a software development team
  • Define a single source of truth for any version, minor or major
  • Review changes before implementing
  • Track co-authoring, collaboration, and individual contributions
  • Audit code changes and rollback facility
  • Incremental backup and recovery

SCM tools prevalent in the market are as follows:

  • IBM ClearCase
  • Perforce
  • PVCS
  • Team Foundation Server
  • Visual Studio Team Services
  • Visual SourceSafe

Open source SCM tools are as follows—their popularity is also attributed to DevOps’ widespread adoption:

  • Subversion (SVN)
  • Concurrent Version System (CVS)
  • Git
  • SCCS
  • Revision Control System (RCS)
  • Bitbucket

 

Code Review


Code reviews are an important process to improve the quality of software instances before they are integrated into the mainstream. They help identify and remove common vulnerabilities such as memory leaks, formatting errors, and buffer overflows. Code review or inspection can be both formal and informal.

 

In a formal code review, the code is reviewed line by line through multiple methods, such as formal meetings and interactions. Informal code reviews can be over-the-shoulder reviews, emails, pair programming (where a few authors co-develop), or tool-assisted code reviews—these are also called code walkthroughs.

 

A code review process framework benefits the organization as follows:

  • Collaboration between software development team members
  • Identification and elimination of code defects before integration
  • Improvement of code quality
  • Quick turnaround of the development cycle

Proprietary tools for code review automation are as follows:

  • Crucible
  • Collaborator
  • Codacy
  • Upsource
  • Understand

Open source tools for code review automation are as follows:

  • Review Board
  • Phabricator
  • Gerrit
  • GitLab

 

Configuration Management

Configuration Management (CM) is the broad subject of governing configuration items at the enterprise level, as per the Information Technology Infrastructure Library (ITIL); even the configuration management database (CMDB) is part of the CM strategy. Configuration management includes the identification, verification, and maintenance of configuration items of both software and hardware, such as patches and versions. In simple terms, it's about managing the configuration of a system and ensuring its fitness for its intended purpose.

 

A configuration management tool validates the appropriateness of the configurations on a system as per the requirements, and its interoperability between systems. A common example is ensuring that code developed on a development system functions effectively on QA (test) and production systems. Any loss of configuration parameters between systems can be catastrophic for the application's performance.
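
A minimal sketch of such a validation is shown below: it compares configuration parameters across two environments and reports drift. The parameter names, their values, and the set of legitimately different keys are illustrative assumptions:

    # Configurations as captured from two environments (assumed values)
    dev  = {"java_version": "11", "heap_mb": 2048, "tls": "1.2"}
    prod = {"java_version": "11", "heap_mb": 8192, "tls": "1.2"}

    ALLOWED_DRIFT = {"heap_mb"}   # sizing may legitimately differ per environment

    def drift(a, b):
        keys = set(a) | set(b)
        return {k: (a.get(k), b.get(k))
                for k in keys
                if a.get(k) != b.get(k) and k not in ALLOWED_DRIFT}

    problems = drift(dev, prod)
    if problems:
        for key, (d, p) in problems.items():
            print(f"configuration drift on {key}: dev={d} prod={p}")
    else:
        print("environments are consistent")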

 

As per DevOps, the benefits of incorporating configuration management processes and tools for an organization can be summarized as follows:

  • Facilitates impact analysis of configuration changes
  • Allows automated provisioning on different systems such as dev, QA, and prod
  • Facilitates audit, accounting, and verification of the systems
  • Reduces redundant work by ensuring consistency
  • Effectively manages simultaneous updates
  • Avoids configuration-related problems by maintaining a single version of the truth
  • Simplifies coordination between team members of development and operations
  • Helps in tracking defects and resolving them in time
  • Helps in predictive and preventive maintenance

 

A few popular configuration management tools for infrastructure are as follows:

  • BMC Software’s Atrium
  • Hewlett Packard Enterprise's Universal Configuration Management Database (UCMDB)

A few popular software configuration management tools are as follows:

  • Chef
  • Puppet
  • Ansible
  • Salt
  • Juju

 

Build Management

Build management is the process of preparing a build environment to assemble all the components of a software application into a finished, workable product, fit for its intended purpose. The source code, the compilers, dependencies on hardware and software components, and so on are compiled to function as a cohesive unit.

 

Builds can be manual, on demand, or automatic. On-demand automated builds are initiated by a user running a script and are used in a few cases. Scheduled automated builds are typical of continuous integration servers running nightly builds. Triggered automated builds in a continuous integration server are launched as soon as a commit is made to a repository such as Git.
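
The following is a minimal sketch of an automated build script of the kind a CI server might trigger: it runs compile, test, and package steps in order and stops at the first failure. The step commands and the src/tests directory names are illustrative assumptions:

    import subprocess, sys

    # Each step is a name plus a command; the commands are stand-ins
    STEPS = [
        ("compile", ["python", "-m", "compileall", "src"]),
        ("unit tests", ["python", "-m", "unittest", "discover", "-s", "tests"]),
        ("package", ["python", "-m", "zipfile", "-c", "app.zip", "src"]),
    ]

    def build():
        for name, cmd in STEPS:
            print(f"--- {name} ---")
            if subprocess.run(cmd).returncode != 0:
                print(f"build failed at step: {name}")
                return False
        print("build succeeded")
        return True

    if __name__ == "__main__":
        sys.exit(0 if build() else 1)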

 

As per DevOps, the benefits of build management processes and tools for an organization can be summarized as follows:

  • The vital function of ensuring software is usable
  • Ensures reusability and reliability of the software in client environments
  • Increases the efficiency and quality of software

 

It’s also a regulatory requirement. A few build tools that are in use are as follows:

  • Ant
  • Buildr
  • Maven
  • Gradle
  • Grunt
  • MSBuild
  • Visual Build
  • Make (CMake/QMake)

 

Artifacts Repository Management


A build artifact repository manager is a dedicated server for hosting multiple repositories of the binary components (executables) of successful builds. By centralizing the management of diverse binary types, it reduces the complexity of accessing them, along with their dependencies.
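
As a sketch of publishing to such a repository, the following uploads a build artifact over authenticated HTTP PUT, which many repository managers support in some form. The URL, credentials, and exact upload API here are illustrative assumptions, so consult your repository manager's documentation for the real endpoint:

    import requests

    # Hypothetical endpoint and credentials
    REPO_URL = "https://repo.example.com/releases/myapp/1.4.2/myapp-1.4.2.zip"

    with open("app.zip", "rb") as artifact:
        response = requests.put(REPO_URL, data=artifact,
                                auth=("ci-user", "ci-token"))
    response.raise_for_status()
    print("artifact published:", REPO_URL)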

 

The benefits are as follows:

  • Manage artifact life cycles
  • Ensure builds are repeatable and reproducible
  • Organized access to build artifacts
  • Convenient to share builds across teams and vendors
  • Retention policies based on artifacts for audit compliance
  • High availability of artifacts with access controls

A few repository tools that are in use are as follows:

  • Sonatype Nexus
  • JFrog Artifactory
  • Apache Archiva
  • NuGet
  • Docker Hub
  • Pulp
  • npm

 

Release Management


Release management is the software life cycle process that facilitates a release's movement through development, testing, and deployment, and on to support/maintenance. It interfaces with several other DevOps process areas in the SDLC.

 

Release management has been an integral part of the development process for decades. However, its inclusion in the DevOps framework makes a complete cycle for automation.

 

Release management is an iterative cycle initiated by a request for the addition of new features or changes to existing functionality. Once the change is approved, the new version is designed, built, tested, reviewed and, after acceptance, deployed to production. During the support phase, requests for enhancements or performance improvements may initiate a new development cycle.

 

The benefits of adopting release management are as follows:

  • Holistic management of the product life cycle, tracking and integrating every phase
  • Orchestration of all phase activities: development, version control, build, QA, systems provisioning, production deployment, and support
  • Tracking the status of recent deployments in each of the environments
  • An audit history of all activities of work items associated with each release
  • Automation of release management by automating all of its stages
  • Teams can author release definitions and automate deployment in repeatable, reliable ways, while tracking in-flight releases all the way to production
  • Fine-grained access control for authorized access and approval of changes

 

A few release management tools are:

  • Electric Cloud
  • Octopus Deploy
  • Continuum
  • Automic
  • QuickBuild
  • UrbanCode Release
  • CA Service Virtualization (LISA)
  • BMC Release Process Management
  • Plutora Release
  • CA Release Automation
  • Serena Release
  • MS Visual Studio
  • StackStorm
  • Rally

 

Test Automation


Manually testing every possible scenario is tedious, labor-intensive, time-consuming, and expensive. Test automation, or automated testing, means running test cases without manual intervention. Though not all test cases qualify for automation, the majority can be scheduled.

 

Automation is achieved by running the test cases with an automation tool or through the scheduling of automation scripts. Recent test data is used as input and the results are captured for analysis. The goal of test automation is to supplement manual testing by reducing the number of test cases to be run manually—not to replace manual testing altogether.

 

Automation testing suits test cases that are repetitive, monotonous, tedious, and time-consuming, and that have defined input and boundary conditions. It's not suitable for frequently changing, ad hoc, or first-time-execution test cases. Software test automation can be based on a few types of frameworks: data-driven, keyword-driven, modular, and hybrid.
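
Here is a minimal sketch of the data-driven style using Python's built-in unittest module: the test data is kept apart from the test logic, so new cases can be added without changing the code. The function under test and its cases are illustrative assumptions:

    import unittest

    def discount(amount):
        """Function under test: 10% discount on orders of 100 or more."""
        return amount * 0.9 if amount >= 100 else amount

    # Data-driven style: defined inputs and expected outputs
    CASES = [(50, 50), (100, 90.0), (200, 180.0)]

    class DiscountTests(unittest.TestCase):
        def test_cases(self):
            for amount, expected in CASES:
                with self.subTest(amount=amount):
                    self.assertAlmostEqual(discount(amount), expected)

    if __name__ == "__main__":
        unittest.main()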

 

Testing big data systems encompasses multiple technologies, integrations, frameworks and testing modules such as functional, security, usability, performance, integration testing, and so on.

 

The benefits of adopting test automation are as follows:

 

  • Improved software quality and responsiveness
  • Quick turnaround by substituting manual effort with automation
  • Improved effectiveness of the overall testing life cycle
  • Incremental and integration testing for continuous integration and delivery

 

A few test automation tools are as follows:

  • Visual Studio Test Professional
  • QTP (UFT)
  • SoapUI
  • TestDrive
  • FitNesse
  • Telerik Test Studio
  • Selenium
  • TestComplete
  • Watir
  • Robotium

 

Continuous Integration


Continuous integration is a DevOps best practice wherein developers continuously integrate their code in small logical units to a commonly shared repository with regularity (for example, once a day). The advantage of such a process is the transparency of the code's quality and fitness for its intended purpose. Otherwise, bulk code integration after a fixed time period could expose many defects or integration challenges, which could be expensive to resolve.
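
The following is a minimal sketch of the commit-triggered idea behind continuous integration: poll a shared repository and run the build and tests whenever a new commit appears. Real CI servers use webhooks rather than polling; the commands, poll interval, and test runner here are illustrative assumptions (the script assumes it runs inside a clone with a configured remote):

    import subprocess, time

    def head_commit():
        return subprocess.run(["git", "rev-parse", "HEAD"],
                              capture_output=True, text=True,
                              check=True).stdout.strip()

    def build_and_test():
        # Stand-in for the real build and test commands
        return subprocess.run(["python", "-m", "unittest", "discover"]).returncode == 0

    last = None
    while True:
        subprocess.run(["git", "pull", "--ff-only"], check=True)
        current = head_commit()
        if current != last:   # a new check-in triggers the build
            print("new commit detected:", current[:8])
            print("build", "passed" if build_and_test() else "FAILED")
            last = current
        time.sleep(60)        # poll interval; hooks avoid this delay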

 

To achieve continuous integration, there are a few prerequisites to implement, as follows:

  • Use a version control repository for source code
  • Maintain a regular code check-in schedule
  • Automate testing of code changes
  • Automate the build
  • Deploy the build to preproduction

 

The benefits of continuous integration are as follows:

  • Availability of the latest code, as we commit early and often
  • Faster build cycles, as build issues are exposed early with each check-in
  • Transparency in the build process, meaning better ownership and fewer defects
  • Automation of the deployment process, leading to quicker turnaround

 

Some continuous integration tools that are available are as follows:

  • Jenkins
  • TeamCity
  • Travis
  • GoCD
  • Buddy
  • Bitbucket
  • Chef
  • Microsoft Teamcenter
  • CruiseControl
  • Bamboo
  • GitLab CI
  • CircleCI
  • Codeship

 

Continuous Delivery


Continuous delivery is the next step after continuous integration in the software development cycle; it enables the rapid and reliable development of software and delivery of the product with the least amount of manual effort or overhead.

 

In continuous integration, as we have seen, code is developed incorporating reviews, followed by automated building and testing. In continuous delivery, the product is moved to the preproduction (staging) environment in small, frequent units to be thoroughly tested for user acceptance.

 

The focus is on understanding the performance of the features and functionality related issues of the software. This enables issues related to business logic to be found early in the development cycle, ensuring that these issues are addressed before moving ahead to other phases such as deployment to the production environment or the addition of new features.

 

Continuous delivery gives developers greater reliability and predictability regarding the usability of the intended features of the product. With continuous delivery, your software is always ready to release, and the final deployment into production is a manual step whose timing is a business decision.

 

The benefits of the continuous delivery process are as follows:

  • Developed code is continuously delivered
  • Code is constantly and regularly reviewed
  • High-quality software is deployed rapidly, reliably, and repeatedly
  • Maximum automation and minimal manual overhead

The tools that perform continuous integration do the job of continuous delivery as well.

 

Continuous Deployment


Continuous deployment is the fully matured, complete process cycle in which a code change passes through every phase of the software life cycle and is deployed to production environments.

 

Continuous deployment requires the entire process to be automated (also termed automated application release) through all stages, such as packaging the application, ensuring the dependencies are integrated, deployment testing, and producing adequate documentation for compliance.

 

The benefits of continuous deployment and automated application release are as follows:

  • Frequent product releases deliver software as fast as possible
  • Automated and accelerated product releases with each code change
  • Code changes qualify for production from both a technical and a quality viewpoint
  • The most current version of the product is ready in shippable format
  • Deployment modeling reduces errors, resulting in better product quality
  • Consolidated access to all tool, process, and resource data leads to quicker troubleshooting and faster time to market
  • Effective collaboration between dev, QA, and operations teams leads to higher output and better customer satisfaction
  • Lower audit effort, owing to a centralized view of all phase activities

 

Infrastructure as Code

Infrastructure as Code (IaC) is a means of performing infrastructure services by defining configuration files. Within the scope of DevOps, IaC is the automation of routine tasks through code, typically in the form of configuration definition files such as shell scripts, Ansible playbooks, Chef recipes, or Puppet manifests.

 

It's usually a server-and-client setup with push- or pull-based mechanisms, or agentless over secure shell (SSH). Many regular system tasks, such as creating, starting, stopping, deleting, and restarting virtual or bare-metal machines, are performed through software.

 

In traditional on-premise systems, many system administration tasks were manual and person-dependent. However, with the explosion of big data and cloud computing, all the regular system activities and tasks are managed like any software code. They are maintained in code repositories, and the latest build updates are tested for deployment.
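
A minimal sketch of the declarative, idempotent idea behind IaC is shown below: the desired state is expressed as data, and an apply step converges the machine toward it, making no change when the state already matches. The file paths and contents are illustrative assumptions; real tools such as Ansible, Chef, or Puppet layer inventories, modules, and reporting on top of this idea:

    import os

    # Desired state expressed as data (the "definition file")
    DESIRED_FILES = {
        "/tmp/demo/app.conf": "port=8080\nlog_level=info\n",
        "/tmp/demo/motd": "managed by IaC - do not edit by hand\n",
    }

    def apply(desired):
        """Idempotently converge the machine toward the desired state."""
        for path, content in desired.items():
            os.makedirs(os.path.dirname(path), exist_ok=True)
            current = open(path).read() if os.path.exists(path) else None
            if current != content:
                with open(path, "w") as f:
                    f.write(content)
                print("converged:", path)
            else:
                print("already in desired state:", path)

    apply(DESIRED_FILES)   # safe to run repeatedly: same end state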

 

The advantages of IaC are as follows:

  • Using definition files and code to update system configuration is quick
  • Versioning all code and changes is less error-prone and produces reproducible results
  • Deployments can be thoroughly tested with IaC on test systems
  • Smaller, regular changes are easy to manage; bigger infrastructure updates are likely to contain errors that are difficult to detect
  • Audit tracking and compliance are easy with definition files
  • Multiple servers can be updated simultaneously
  • System availability is high, with less downtime

Some tools for IaC are as follows:

  • Ansible Tower
  • CFEngine
  • Chef
  • Puppet
  • SaltStack

 

Routine Automation


Every organization aims to automate its routine, repetitive tasks; in fact, the survival of most companies and software products depends on the degree to which they automate. Almost all segments are potential areas for automation: ERP systems, data visualization, domain applications, data analytics, and so on.

 

A few areas to automate are infrastructure (deployment, patching, scalability), applications (development, integration, builds, delivery, and deployment), load balancers, feedback, and defect/error management.

 

Key Application Performance Monitoring/Indicators


Performance metrics are part of every tool, product, and service. Accordingly, organizations are ever vigilant in monitoring the performance metrics of their applications, products, and services. Achieving a high degree of standardization in process and metrics is a prerequisite for high-quality output from any product.

 

There are many parameters for gauging performance, for example, application or hardware availability (uptime versus downtime), responsiveness, ticket categorization, and acknowledgment and resolution timelines.
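
As a small worked example of one such parameter, availability is commonly computed as uptime divided by total time in the reporting window; the window length and incident durations below are illustrative assumptions:

    # A 30-day reporting window, with three assumed outage incidents
    period_hours = 30 * 24
    downtime_hours = sum([0.5, 1.25, 0.25])

    availability = (period_hours - downtime_hours) / period_hours
    print(f"availability: {availability:.4%}")   # prints 99.7222%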

 

DevOps is all about measuring the metrics and feedback, with continuous improvement processes.

Several tools are available for application monitoring for various needs; we will cover the most appropriate and applicable tools in the context of the DevOps framework in further sections of this lesson.

 

DevOps Frameworks

Under DevOps frameworks, we will study the life cycle models, maturity states, progression, and best practices frameworks, as well as agile methodology.

Accomplishing DevOps maturity is a gradual, well-structured, and planned progression, as described in the following stages:

 

DevOps Maturity Life Cycle

DevOps project phases follow the lines of the software development life cycle, as described here. We will dwell on each phase in detail.

 

Discovery and requirements phase:

The DevOps discovery phase is a highly interactive project phase for gathering inputs and feedback on the current state of processes, frameworks, and tools from key stakeholders.

Templates and checklists are used to capture the inputs. The timeline for the phase depends on the availability of key stakeholders, the existence of requisite documents, and the complexity of the processes to explore. Discovery phase deliverables are as follows:

  • Templates detailing the current state of process, tools, frameworks
  • Signoff from key stakeholders on the details collated
  • Existing best practices and DevOps methods
  • Existing challenges, constraints as applicable
  • Reusable tools, process, artifacts

 

Design blueprint phase:

The design phase is also the architecture phase; it's about producing a blueprint of the target state to accomplish. It's an iterative process of weighing alternatives for tools and processes, arriving at an agreement among key stakeholders. The timeline and cost are baselined, then revisited and revised regularly based on new learnings as the project moves toward the target state.

 

The timeline for this phase depends on how acceptable the processes, tools, and budgets are to the key stakeholders. Design phase deliverables are as follows:

  • The target state is agreed upon
  • A baseline of the DevOps process to be adopted
  • A baseline of the most viable tools to be implemented
  • Agreed baselines for timelines and cost

 

Development phase:

Artifacts baselined from the blueprint phase are the inputs for the development phase: the agreed-upon process changes, the tools to be implemented, the frameworks to be adopted, and so on. A detailed project plan covering deliverables, schedules, dependencies, constraints, resource leveling, and so on will be quite handy.

 

The Agile Scrum methodology, discussed in detail later, will be the framework used to implement DevOps. The timeline for the development phase follows the project plan baselined initially and revised regularly as milestones are accomplished. Development phase deliverables are as follows:

  • Initial project plan baselined and signed off
  • Incorporating regular feedback till project completion
  • Allocation of resources for each stage
  • Including new skills, methods, process, and tools
  • Workarounds for project risks, constraints, and so on
  • Deliverables as agreed in the project plan

 

Deployment phase:

The DevOps deployment phase follows the best practices outlined in the DevOps process framework detailed above; the specifics depend on whether the deployment is of a process, an application tool, or infrastructure. The timeline is evaluated as per the experience gained in the development phase. Deployment phase deliverables are as follows:

  • Deployment guide—cutover plan to production
  • Deployment checklist
  • Signoff from key stakeholders
  • Rollback plan
  • Capacity planning

 

Monitoring phase:

The monitoring phase tracks the key performance factors of development, build, integration, and deployment over time. It is followed by tracking defects, bug fixes, and user tickets, and by planning for continuous improvement. Monitoring phase timelines are as per organizational needs and performance benchmarks. Monitoring phase deliverables are as follows:

  • Operations manual
  • Feedback forms and checklists
  • User guide, support manual
  • Process flow manual
  • Performance benchmark

 

DevOps Maturity Map

DevOps adoption is a value-added journey for an organization. It's not achieved overnight, but matured step by step over a period of time with manifested results. As with the Capability Maturity Model Integration (CMMI) and other process maturity models, critical success factors are defined for the program's performance objectives.

 

The initial maturity state of key evaluation parameters is agreed upon by key stakeholders. Then the target maturity level of the parameter variables to be accomplished is defined in the project charter, along with detailed procedures, milestones, budgets, and constraints, as approved by stakeholders.

DevOps process maturity framework.

 

DevOps Progression Framework/Readiness Model

As discussed in the previous model, DevOps adoption is a journey toward higher maturity states for an organization. The following table lists the different practice areas and maturity levels of DevOps at a broad scale.

 

DevOps maturity levels may vary across teams as per their standards; similarly, a department or division of the same organization may have significantly more advanced practices than others for the same process flow. Achieving the best possible DevOps process workflow throughout the entire enterprise should be the end goal for all teams and departments.

 

DevOps Maturity Checklists

The process maturity framework, as seen in the preceding sections, is assessed with checklists and discussions. For each key focus area, the detailed findings indicate the maturity level. The findings provide a general estimate of the maturity level and the impact it is causing.

 

Agile Framework for DevOps Process Projects

DevOps projects are typically based on the Agile framework, for an effective and quick turnaround of the development and implementation cycle.

 

Agile software development-based projects have become widely accepted and adopted across the industry. The traditional waterfall model is outdated and unable to keep up with the advantages offered by agile methodology.

 

Agile methodology owes its success to its core values, such as the following:

  • Individuals and interactions are valued over process and tools
  • Working software is valued over comprehensive documentation
  • Customer collaboration is valued over contract negotiation
  • Change adoption agility is valued over project plan adherence

 

Agile Ways of Development

Scrum is the agile development methodology focused on feature development, with a team comprising roles such as the following:

  • The scrum master is responsible for team setup, conducting sprint meetings, and removing development obstacles
  • The product owner creates and prioritizes product backlog, and is responsible for the delivery of the functionality at each sprint iteration cycle
  • The scrum team manages and organizes the work to complete in the sprint cycle
  • The product backlog is the list of features and requirements of functionality to be developed

 

The Agile method of development is an incremental and iterative approach for developing user stories, software features or functionality. Customers can see the product features early and make the necessary changes if needed. The development cycle is broken into sprint cycles of two to four weeks, to accomplish units of work. The idea is that smaller cycles can be developed and managed quickly with a team of developers and testers together.

 

Structure and documentation are considered less important than a working feature of the code. The development process is accomplished iteratively in successive sprint cycles. Bugs identified are fixed in the earliest sprint, with successful testing. Regression testing is performed when new functions or logic are developed. User acceptance tests are performed after the sprint cycle to flag the product for release.

 

The benefits of adopting the best practices of agile software development are as follows:

  • Working software keeps the customer satisfied, as they can view the features
  • Customers can add change requests at any phase of development
  • Quick and continuous delivery of software, in weeks
  • Projects are built around motivated individuals, who should be trusted
  • Sprint teams are highly skilled and efficient in delivery
  • Since developers and testers co-develop, bugs are solved within the sprint
  • Communication is effective, so the quality of the product delivered is higher
  • Continuous attention to technical excellence leads to good design
  • Self-organizing teams focus on optimal architectures, requirements, and designs
  • The team is lean and effective, so productivity is maximized

 

DevOps – Continuous Integration and Delivery

Continuous integration and continuous delivery are popular and valuable processes for ensuring high-quality and timely software delivery. Continuous integration is the integrated software development process in which multiple developers adhere to the agile methodology and adopt best practices like the following:

 

  • Ensure all development code is subject to a version control system
  • Incorporate an adequate code review process
  • Integrate, test, and build code changes quickly
  • Integrate the build process to run unit tests, and automate it
  • Attend to build errors immediately, with a quick turnaround
  • Track and measure build results, and manage the repository
  • Keep the build process transparent and user-friendly

Continuous delivery is the process of extending continuous integration so that:

  • The most current and latest version of the software is readily available
  • Changes passing through the testing cycle from the technical and quality standpoint are ready for deployment
  • The shipment and deployment process is automated

 

The continuous integration process is detailed in the following:

 

The developer's environment: Developers create code changes in a local workspace, with an Integrated Development Environment (IDE) and build tools installed locally on a PC, or in a cloud-based Web IDE. They do unit-level testing, data validations, code performance checks, and so on. The code changes made by the developer are then pushed to the source code management system.

 

The typical continuous integration and deployment cycle comprises setting up the CI/CD infrastructure and processes listed here:

  • A source code version and repository management system
  • A process scheduler to initiate the orchestration pipeline
  • A build process to manage code builds and scheduled tests
  • Build nodes for executing the build
  • A testing process on identified test nodes for automated testing
  • An artifact repository to store build results
  • Scenario and acceptance tests on test nodes
  • Application installation with a deploy tool onto runtime systems
  • Acceptance tests for the applications deployed on the runtime systems
  • Approval of the acceptance tests by the quality manager, agreeing to deployment to test systems
  • Approval of the application deployment to production by the delivery manager

 

Best Practices for CI/CD

Let’s take a look at the best practices for CI/CD:

 

Using version control: In collaborative development environments with simultaneous development, there will be multiple challenges.

A source code management system defines a single source of truth for the code once it is placed under version control. The source code is reproducible by effectively adopting a merge process for mainline development and branch lines for bug fixes and so on. Git is a popular source code management system, and GitHub is a cloud variant offered as a Software as a Service (SaaS) model.

 

Automate the build: A standardized, automated build procedure stabilizes the build process so that it produces dependable results. A mature build process must contain the build description and all the dependencies needed to execute the build with a standardized build tool installation. Jenkins is the most versatile tool for build scheduling; it offers a convenient UI and plug-ins integrating most popular continuous integration tools.

 

Tests in the build: A few tests are to be performed to validate the effectiveness and fitness of code beyond just its syntactical correctness, as follows:

  • Unit tests that operate directly on build results
  • Static code checks on source code prior to developer check-in; Git pre-commit triggers or the CI system can be used to set up gating or non-gating checks
  • Scenario tests for newly built applications, to be installed and started
  • Functional performance tests of the code

 

Unit test frameworks are popular across source code technologies, for example, JUnit for Java. The Selenium framework drives graphical user interfaces and browser behavior.

 

Implementing these tests early, on the developer's workstation as part of the build, saves the time and effort of addressing bugs discovered later in the development process.
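
The following is a minimal sketch of such a gating check, written as a Git pre-commit hook in Python: it refuses the commit if any staged Python file fails to compile. The choice of check is an illustrative assumption; real setups typically run linters or formatters here. Save it as .git/hooks/pre-commit and make it executable:

    #!/usr/bin/env python3
    import subprocess, sys

    def staged_python_files():
        out = subprocess.run(
            ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
            capture_output=True, text=True, check=True).stdout.split()
        return [f for f in out if f.endswith(".py")]

    failed = False
    for path in staged_python_files():
        # Gating static check: refuse syntactically broken files
        if subprocess.run([sys.executable, "-m", "py_compile", path]).returncode != 0:
            print(f"pre-commit: {path} fails compilation")
            failed = True

    sys.exit(1 if failed else 0)   # a non-zero exit aborts the commit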

 

Early and frequent commit of code:

In a distributed development environment with multiple projects, each team or developer intends to integrate their code with the mainline, and feature branch changes must also be integrated into the mainline. It's a best practice to integrate code quickly and early: as the delay between new changes and merging with the mainline grows, so do the risk of product instability, the time taken, and the complications, because the mainline keeps evolving from the baseline.

 

Hence, each developer working on a feature branch should push their code at least once per day. For projects whose main branch is largely inactive, the high effort of constant rebasing must be weighed before implementing this practice.

 

Every change to be built: Developer changes are to be incorporated into the mainline; however, they can potentially destabilize it, affecting its integrity for the developers relying on the mainline.

 

Continuous integration addresses this with the best practice of a continuous build for every committed code change. A broken build requires immediate action, as it blocks the evolution of the mainline, and this can be expensive depending on the frequency of commits and of such issues. These issues can be minimized by enforcing branch-level builds.

 

A push for review in Gerrit or a pull request in GitHub is an effective mechanism to propose changes and check their quality, identifying problems before they are pushed into the mainline and cause rework.

 

Address build errors quickly: The best practice of building at the branch level for each change puts the onus on the respective developer to fix build issues immediately, rather than propagating them to the main branch. This forms a continuous Change-Commit-Build-Fix cycle at each branch level.

 

Build fast: The quick turnaround of builds, results, and tests by automated processes is a vital input to the developer workflow; a short wait time benefits the overall cycle efficiency of the continuous integration process.

 

This is a balancing act between securely integrating new changes into the main branch and simultaneously building, validating, and scenario testing. At times there are conflicting objectives, so trade-offs need to be made between different levels of acceptance criteria, keeping in mind that the quality of the mainline is most important. Criteria include syntactical correctness, unit tests, and fast-running scenario tests for the changes incorporated.

 

Pre-production run: Multiple setups and environments at various stages of the production pipeline cause errors. This applies to developer environments, branch-level build configurations, and the central main build environment. Hence, the machines where scenario tests are performed should be similar to, and have a configuration comparable with, the main production systems.

 

Manual adherence to an identical configuration is a herculean task; this is where DevOps adds its core value proposition: treat infrastructure setup and configuration like writing code. All the software and configuration for a machine are defined as source files, which enables you to recreate identical systems.

 

The build process is transparent: The build status and the records of recent changes must be available to everyone to ascertain the quality of the build. Gerrit is a change review tool that can be used to record and track code changes, build status, and related comments. Jenkins flow plug-ins give the build team and developers a complete end-to-end overview of the continuous integration process, covering the source code management tools, the build scheduler, the test landscape, the artifact repository, and others as applicable.

 

Automate the deployment: Installing the application on a runtime system in an automated way is called deployment, and there are several ways to accomplish it.

 

Automated scenario tests should be part of the acceptance process for changes proposed. These can be triggered by builds to ensure product quality.

 

Multiple runtime systems, such as JEE servers, are set up to avoid the single-instance bottleneck of serialized test requests, and to allow parallel test queries. Using a single system also carries the overhead of recreating the environment for every test case, causing a degeneration in performance. Docker or container technology can be used to install and start runtime systems on demand in well-defined states, and to remove them afterward.
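
The following is a minimal sketch of that on-demand pattern, driving the Docker CLI from Python: start a disposable container in a well-defined state, run tests against it, and remove it afterward. The image, port mapping, and naming scheme are illustrative assumptions, and the docker CLI is assumed to be installed:

    import subprocess, uuid

    IMAGE, PORT = "nginx:alpine", "8080:80"   # hypothetical runtime system
    name = f"test-run-{uuid.uuid4().hex[:8]}"

    # Start a disposable runtime system in a well-defined state
    subprocess.run(["docker", "run", "-d", "--rm", "--name", name,
                    "-p", PORT, IMAGE], check=True)
    try:
        # ... run scenario/acceptance tests against localhost:8080 here ...
        print("tests would run against container", name)
    finally:
        # Tear down afterward; --rm removes the container on stop
        subprocess.run(["docker", "stop", name], check=True)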

 

Since the frequency and timing of validations of new commits are not predictable in most cases, scheduling a daily job at a given time is an option to explore: the build is deployed to a test system, and a notification is sent after successful deployment, so that automated test cases can run.

 

Deployment to production is a manual, conscious decision, taken after satisfying all quality standards and ensuring the change is appropriate to be deployed to production. If this step can also be automated with confidence, that is the highest accomplishment: automated continuous deployment.

Continuous delivery means that any change integrated is validated adequately so that it is ready to be deployed to production. It doesn’t require every change to be deployed to production automatically.