50 Key Responsibilities of a DBA in DevOps (2019)


DevOps presents exciting opportunities for DBAs to make the improvements that many of you have wanted for years. As the culture shifts to align with the agile and DevOps movements, DevOps teams are coming to understand the valuable contribution that DBAs bring.

 

Responsibilities of a DBA

 

Database as a Service (DBaaS)

Fortunately, perceived best practices, security requirements, and extended project and purchase approval processes are all realigning to deliver on the promises of DevOps.

 

DBAs need to exercise judicious discipline, mixed with flexibility and what may feel like overcommunicating, to move away from siloed processes of receiving a hand-off from an upstream team, sprinkling on a bit of DBA magic, and then passing the package to a downstream team.

 

To work effectively within the DevOps model, DBAs need to manage databases across a variety of platforms: physical or virtual hosts, and internal or external cloud implementations that are likely using database software that is not relational. For DBAs, ensuring secure access and robust access times may be where traditional responsibilities end.


New responsibilities include assisting with rapid deployment process (continuous integration and continuous deployment) creation, managing code and scripts using software versioning tools, and building infrastructure as code. Although data remains stateful, the schema, the database software, and the host platform are stateless.

 

DBAs need to become agents of change, supporters of the DevOps methodology, and tactical consultants driven to improve all aspects of the SDLC. DBAs also need to become platform and database agnostic. There is more to come on these topics.

 

Relational databases have been the preferred (best understood) environment for the storage and retrieval of data for several decades. As petabytes of unstructured data have been introduced into the mix, relational databases have struggled to manage the data while staying true to traditional relationship precepts.

 

To fill the gap, NoSQL databases such as Cassandra and MongoDB introduced ecosystems built to store and retrieve data outside of the relational model.

 

DevOps involves DBAs creating database build templates that developers, yes developers, use to spawn databases on demand, which is simply one step in the automated server provisioning process.
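As an illustration only, here is a minimal sketch of what such a developer-triggered, template-driven database spawn might look like, assuming Docker is available locally and PostgreSQL is the target engine; the image tag, port, and helper names are invented for the example.

```python
# Minimal sketch: a DBA-authored "database build template" that a developer
# can invoke to spawn a disposable PostgreSQL instance on demand.
# Assumes Docker is installed and usable; names and settings are illustrative.
import subprocess
from dataclasses import dataclass

@dataclass
class DatabaseTemplate:
    image: str = "postgres:15"      # database software version pinned by the DBA
    port: int = 5433                # host port exposed to the developer
    password: str = "devonly"       # never a real credential in a template

    def spawn(self, name: str) -> str:
        """Start a throwaway database container and return its container id."""
        result = subprocess.run(
            [
                "docker", "run", "-d",
                "--name", name,
                "-e", f"POSTGRES_PASSWORD={self.password}",
                "-p", f"{self.port}:5432",
                self.image,
            ],
            check=True, capture_output=True, text=True,
        )
        return result.stdout.strip()

    def destroy(self, name: str) -> None:
        """Tear the database down once the experiment is finished."""
        subprocess.run(["docker", "rm", "-f", name], check=True)

if __name__ == "__main__":
    template = DatabaseTemplate()
    container_id = template.spawn("feature-x-db")
    print(f"database ready in container {container_id[:12]}")
    # ... run schema scripts and tests here ...
    template.destroy("feature-x-db")
```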

 

Test data loads are automatically consumed, tested, and measured without direct DBA action. DBAs instead help define how the test data is selected and staged for consumption. Learning to accelerate work using automation and source code control for all scripts and code further reduces the development cycle time.
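A hedged sketch of how a DBA might express test data selection and masking rules in code rather than performing the load manually; the sampling rate, column names, and CSV-based staging are assumptions for illustration.

```python
# Minimal sketch: DBA-defined rules for selecting and masking production rows
# into a test data set. Table/column names and the sampling rate are illustrative.
import csv
import hashlib
import random

SAMPLE_RATE = 0.05                    # stage roughly 5% of production rows
MASKED_COLUMNS = {"email", "phone"}   # columns that must never reach test as-is

def mask(value: str) -> str:
    """Replace a sensitive value with a stable, irreversible token."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def stage_test_data(source_csv: str, target_csv: str) -> int:
    """Sample and mask rows from a production extract; return rows staged."""
    staged = 0
    with open(source_csv, newline="") as src, open(target_csv, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if random.random() > SAMPLE_RATE:
                continue
            for column in MASKED_COLUMNS & set(row):
                row[column] = mask(row[column])
            writer.writerow(row)
            staged += 1
    return staged
```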

 

DBAs must aggressively and proactively accelerate product delivery to match the velocity of the release cadence and be determined to never be the bottleneck.

 

“Best Database for the Job” Proponents


Particularly for new projects, DBAs need to weigh the impact of force-feeding data into the relational model versus introducing a new database model that is more aligned with the application’s expected data usage pattern.

 

Structured and unstructured data may best live in separate databases, with applications calling multiple services to read, modify, or delete data. The code is evolving to be more dynamic to leverage multiple back-end databases.

 

Legacy databases will not disappear soon because many still act as the databases of record and contain valuable data. Also, audit and governance requirements have to be satisfied, many by just keeping the data in place until the mandated retention window expires. 

 

Organizations may decide to decouple monolithic application functions into services that fit agile development and DevOps more readily. Segments of data thus may need to be copied or moved into a different database, which is work that DBAs perform regularly. Advantage DBAs: new resume entry!

 

Technical Advisors


 

Transforming to align with a business partner’s need for scalable, well-performing, and resilient systems, at a lower cost, is much easier when leveraging an established methodology.

 

This methodology has been proven feasible by Netflix, Facebook, Flickr, and Etsy; and DevOps has matured to the point at which even risk-averse organizations should feel comfortable adopting it.

 

Lean processes, obsessive automation, faster time to market, cost reductions, rapid cycle times, controlled failures and recoveries, and robust tool suites empower this ambitious transformation. DevOps DBAs must adapt to this new way of building software products while driving infrastructure stability, resiliency, and availability eclipsed only by extreme application performance.

 

DBAs are persistently ostracized for being inflexible, slow to deliver, and generally uncooperative. DBA processes, along with many Operations’ processes, remain serialized and burdened by outdated policies and delivery expectations.

 

Shifting to DevOps aligns (absorbs) DBA tasks into combined process flows that began during the agile development transformation. DBAs need to purposefully engage their development peers to communicate a willingness to adopt  DevOps practices, manage the infrastructure as code using source control, and learn the implemented tool suite.

 


DevOps brings many new opportunities for IT teams to deliver superior software products that fulfil business initiatives that lead to excellent customer experiences.

 

On the flip side, challenges arise when integrating processes, increasing release momentum, reducing cycle time, managing infrastructure as code, and implementing change requests.

 

Many DBAs were left behind during the initial DevOps wave; however, current landscape changes include drawing in a variety of IT technicians to further expand capabilities, extend collaboration, reduce waste, and abate SDLC costs.

 

The inclusion of DBAs into DevOps is not without risk because, as with any process, adding another step, variable, or person increases the possibility for errors or other disruptions.

 

Fortunately, DevOps is supported by ever-evolving powerful tools purposed to assist with collaboration, code management, quality assurance testing, and task automation.

 

Converting from technology silo experts to technical advisors instils a new sense of purpose and resets our mindset so that we are willing to partner with teams once deemed “nemeses” for the good of the business and the customer.

 


 

“Metrics for All to See” Facilitators


DBAs (at least good DBAs) constantly assess the production database environment (code base; database; host  operating system [OS]; load; capacity; and, less often, network throughput) and seek opportunities to improve application performance. Whether by identifying poor performing queries, needed indexes, or expanded buffer cache, performance matters to DBAs.

 

The misstep has often been the unintentional isolation of performance metrics: not purposefully, holistically, or frequently sharing them with network and system administrators (SAs) or development team members, although doing so may further improve application performance.

 

More importantly, it provides exceptional value to customers. Sharing performance metrics enables disparate teams to aggregate their combined experiences and skills, producing opportunities for better solutions than are possible individually.

 

DevOps Success Metrics

Extending metrics beyond customer experience performance management, DevOps introduces measures for software delivery efficiency, release cadence, and success rate. Continuous code integration, automated testing, and continuous delivery have to be measured to determine success.

 

Continuous integration checks how well newly introduced code operates with existing code, measured by defects. Automated testing checks whether new or modified code functions as defined in the use case and whether the code passes regression testing. Continuous delivery/deployment checks how often code is released into production (release cadence) and whether the code causes disruption, tracked by incidents.
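For illustration, a minimal sketch of how these measures might be computed from simple release records; the record format and the specific metrics (release cadence in days, change failure rate) are assumptions, not a prescribed toolchain.

```python
# Minimal sketch: computing the DevOps measures described above from release
# and incident records. The data structures and values are illustrative.
from datetime import date

releases = [
    {"date": date(2019, 3, 1), "defects_found_in_ci": 2, "caused_incident": False},
    {"date": date(2019, 3, 8), "defects_found_in_ci": 0, "caused_incident": True},
    {"date": date(2019, 3, 15), "defects_found_in_ci": 1, "caused_incident": False},
]

def release_cadence_days(records) -> float:
    """Average number of days between production releases."""
    dates = sorted(r["date"] for r in records)
    gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
    return sum(gaps) / len(gaps)

def change_failure_rate(records) -> float:
    """Share of releases that caused a production incident."""
    return sum(r["caused_incident"] for r in records) / len(records)

print(f"release cadence: every {release_cadence_days(releases):.1f} days")
print(f"change failure rate: {change_failure_rate(releases):.0%}")
```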

 

Customer Experience Performance Protectors


Holistically understanding the infrastructure and application architecture provides opportunities to decrease cumulative degradation, which improves customer experience. Even for a basic transaction flow, the delivery level drops rapidly.

Table. Cumulative Degradation

Component              Success %
Network                99.9%
Web server             99.7%
App server             98%
Database               97%
App server             98%
Web server             99.7%
Network                99%
Customer experience    91.58%

 

Cumulative degradation reveals why IT's five-9s availability goal falls short when measuring customer experience.
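The same arithmetic as the table, expressed as a small calculation: the per-component success rates are multiplied along the transaction path to produce the customer-experience figure.

```python
# The table above as a quick calculation: multiplying per-component success
# rates along the transaction path yields the customer-experience figure.
path = {
    "network (in)": 0.999,
    "web server (in)": 0.997,
    "app server (in)": 0.98,
    "database": 0.97,
    "app server (out)": 0.98,
    "web server (out)": 0.997,
    "network (out)": 0.99,
}

customer_experience = 1.0
for component, success_rate in path.items():
    customer_experience *= success_rate

print(f"customer experience: {customer_experience:.2%}")   # ~91.58%
```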

 

Application performance management (APM) can provide transactional perspectives of customer experience, transaction times, and frequency, which provide a framework to fully understand application performance across the infrastructure. DBAs with this transparency level can shift to predictive analysis, allowing corrections to be implemented before the customer notices.

 

Even troubleshooting becomes less problematic and faster because baseline variances can be reported if predetermined thresholds are violated. Additionally, preproduction APM application monitoring can identify code or infrastructure performance deficiencies before release, preventing problems from getting into production.
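A minimal sketch of the threshold-based baseline variance reporting described above; the metric names, baselines, and allowed drift values are illustrative.

```python
# Minimal sketch: reporting baseline variances when a predefined threshold is
# violated. Metric names, baselines, and thresholds are illustrative.
baselines = {"avg_query_ms": 42.0, "buffer_cache_hit_pct": 97.5}
thresholds = {"avg_query_ms": 0.20, "buffer_cache_hit_pct": 0.05}  # allowed relative drift

def check_against_baseline(current: dict) -> list:
    """Return the metrics whose variance from baseline exceeds its threshold."""
    violations = []
    for metric, baseline in baselines.items():
        variance = abs(current[metric] - baseline) / baseline
        if variance > thresholds[metric]:
            violations.append(f"{metric}: {current[metric]} vs baseline {baseline} "
                              f"({variance:.0%} drift)")
    return violations

print(check_against_baseline({"avg_query_ms": 61.0, "buffer_cache_hit_pct": 97.0}))
```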

 

Culture

 


Internationally recognized management guru Peter Drucker famously pronounced, “Culture eats strategy for breakfast.” Culture presents a perplexing challenge to DevOps implementation.

 

Wanting to “do DevOps” by investing in DevOps tools, training staff, and hiring expert consultants, all without a shift in mindset, behaviours, and incentives, only suppresses the status quo, which lies quietly below the waves seeking an opportunity to re-emerge.

 

During a recent client call, a team brought forward a build request for two virtual hosts, including software installs for several products from a popular Agile tool suite. The conversation went something like this:

 

Requester: “We need two VMs with tool A and tool B installed for a project starting in 10 days.”

SA: “Once approved, it takes 6 weeks to provision a VM.”

 

Requester: “This project has been approved by SVP whats-her-name and VP so-and-so as a fast-track project.”

SA: “Our process takes 6 weeks. Then someone still needs to install the tools because that’s not what we do.”

By this time, I am “cryaughing”—trying not to cry or laugh, but really wanting to do both. But I digress.

 

Requester: “We are trying to be agile and need to move fast.”

SA: “Our process is fast! It used to take 3 months to get a host provisioned.”

 

And so forth. Sadly, this is not a fictional story for blog demonstration purposes.

 

As this unfortunate yet emblematic example shows, existing processes create cultures ingrained with suppositions of how well teams are performing, what people believe is expected from them, and a “don’t rock the boat” mindset, all of which present tremendous hurdles to be surmounted. DevOps requires processes to be rethought, leaned out, sped up, and extraordinarily stable.

 

Pulling together strong and patient leaders, complemented by charismatic and uber-respected technical subject matter experts (SMEs) such as DBAs or senior DevOps engineers, to challenge the status quo by instigating a new culture focused on DevOps best practices must be the movement’s heart and soul.

 

An organization’s culture must transform into a more collaborative, process-defined, and people-centric entity to successfully drive continuous improvement, automation, and integrated testing.


To change the culture, people must at least accept the new direction, even if reluctantly. The best-case scenario includes people being only too happy to scrap the old model and excited to move on to better methods.

 

Both types of people encountered need to be coached differently to effectively ensure the movement’s success. The reluctantly accepting group drags its feet, in no hurry to reach the new destination.

 

Coaching increases the pace, improves buy-in, and develops needed attitudes. The excited group (probably the ones who have been telling each other for years that management is a cast of morons and constantly bloviating about how everything would be awesome if only they were in charge) can be more dangerous to the cause than those who may be flat-out resisting the change.

 

Failing to control the ascent with planned and well-communicated phases that include needed staff training, concrete process definitions, and general good change practices may result in a catastrophic crash and burn.

 

Change is an interesting beast. A team member once asked his manager why the manager had not pushed for a large change that needed to happen. The manager responded that change done gradually over time usually receives better acceptance.

 

The manager’s example was for the team member to imagine coming to work the next morning to find a full-grown tree in his cube.

 

The manager explained that even if the employee loved trees, the tree would be bothersome because it had invaded his space unexpectedly. But if the team member arrived to find a potted sapling on his desk, he might think it is cool.

 

Over time, as he nurtured the sapling (even though he had to repot the now-small tree and place it on the floor), the team member would remain comfortable with its presence.

 

After a few years passed, when people would ask him about the full-grown tree in his cube, he would proudly share that he was able to transform a weak sapling into a mighty tree. The employee would accept the change because he was involved (nurturing), and the change came about slowly but consistently.

 

Driving the new DevOps culture requires introducing a “sapling” and nurturing its growth until the “tree” is well rooted. The more people involved in the nurturing process, the better the odds of a positive outcome.

 

Leaving the tree-nurturing responsibility in the hands of only the core team likely leads to a brown and wilted sapling.

 

Automation


It is odd to think that one of the primary benefits unleashed at the dawn of the computer era was the ability to reduce costs and processing time by automating routine tasks.

 

Yet today, when CIOs and their teams are under pressure to drive strategic growth initiatives needed to increase revenue or introduce new products for customers, much of the behind-the-scenes effort is still completed manually.

IT professionals (I, too, have been guilty of this) love working with shiny new toys—often at the expense of reducing effort or costs through automation.

 

DevOps is about speed, flexibility, resiliency, and continuous improvement. People need to understand the processes, build and test the software, implement the automation, and then step back and let the computers do the work.

 

For DBAs, this means relinquishing control of the scripts and surrendering them to source code control. The scripts now become included in the release package instead of being manual executions listed as a task in a planning spreadsheet.

 

Automation applies to server builds, database software installs and configurations, network settings, storage allocations, schema builds, code compiles, job scheduling, and more. Anything and everything should be automated.

 

Security programs should automatically scan hosts for vulnerabilities. Automation is the way resiliency can be gained, reducing human error risks. The automation itself needs to be monitored and measured to ensure it is delivering expected benefits.

 

Measurement


People live by measurements. Our day (a measure) is an accumulation of events segmented by the measure of time. The value of our contribution to the organization comes periodically (a rhythm of measures): the amount of our paycheck. Hence, measurements must be important. Yet too many IT shops still focus on binary checks, such as whether a server is up or down, instead of business measures such as end-user experience and transaction capability; or, for DevOps: cycle time, failure and resolution rates, release momentum, feature time to market, and reduced SDLC costs.

 

For DevOps to succeed, a consistent whittling away at inefficiencies, avoidable steps, and pointless multilevel approvals must occur. The burden of CYA and sometimes ego boosting for less-mature executives (e.g., requiring ten people to approve a change) has been one of the most time-consuming yet valueless requirements in the SDLC.

 

After all, the business made the request and IT agreed to complete the work, which already sounds like an approval. Yes, some oversight is needed, but surely a few of the extra approvals would not be missed.

 

Applying lean and Kanban techniques to trim inefficiencies should return value through reduced waste and improved speed. Process mapping, or value stream mapping, should be done to capture the delivery process, see how long each step takes, and evaluate the need for each step.

 

Decisions can then be made to remove impediments, smooth out the workflow, and drop unneeded steps and approvals to produce a streamlined SDLC process.
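As a sketch, a value stream map can be captured as plain data and summed to show where cycle time actually goes; the steps, working hours, and waiting hours below are invented for illustration.

```python
# Minimal sketch: a value stream map as data, used to see where delivery time
# actually goes. Step names and durations are illustrative.
value_stream = [
    # (step, active work in hours, waiting in hours)
    ("requirements hand-off", 2, 40),
    ("database change ticket approval", 1, 72),
    ("schema change implementation", 4, 8),
    ("QA regression test", 6, 24),
    ("CAB approval", 1, 96),
    ("production release", 2, 4),
]

total_work = sum(work for _, work, _ in value_stream)
total_wait = sum(wait for _, _, wait in value_stream)
bottleneck = max(value_stream, key=lambda step: step[2])

print(f"cycle time: {total_work + total_wait} hours "
      f"({total_work} working, {total_wait} waiting)")
print(f"largest wait: {bottleneck[0]} ({bottleneck[2]} hours)")
```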

 

Sharing


“Knowledge is power.” That saying has been around for years, but has been distorted; many people hoard information to be used only for personal gain versus benefiting others.

 

Someone who knows how to cure cancer does not have power by selfishly retaining the solution; instead, the power comes from releasing the information and then watching how the knowledge, when applied, impacts people around the world.

 

DevOps breathes by sharing information. Business, development, and operations (including DBAs) must communicate in full-duplex. Messages need to be sent and received simultaneously upstream and downstream. Each team member must understand why the business needs the function and how the business plans to use the function.

 

Addressing operational challenges earlier in the process leads to better-performing, more resilient production systems. Because DevOps espouses continuous testing across the SDLC, all environments must match the planned end-point state.

 

Operational knowledge from team members’ vast experience, aggregated into manageable bundles driven upstream to improve the infrastructure, creates consistent and stable platforms.

 

Do you remember the grade school exercise in which the teacher would share a sentence with one student that was then passed from student to student until the last student relayed the sentence back to the teacher?

 

Whether it was changed intentionally for malice or fun or changed because students couldn’t remember the exact statement, the final message was usually so dissimilar to the original sentence that it was humorous.

 

Unfortunately, this is the exact process IT has used for decades: it receives requirements from the business and then passes the details, which are distorted incrementally, along the supply chain so that (as witnessed far too many times) the business cannot reconcile the final product to the requested functional requirements.

 

DevOps must have a continuous feedback mechanism that constantly relays which code and infrastructure decisions apply seamlessly to production and which ones disrupt or degrade the customer experience through poorer application performance or availability.

 

Thinking Differently


Earlier involvement in the SDLC introduces challenges, maybe even opposition, to traditional responsibilities. Customary DBA tasks seem often to be outliers concerning the SDLC.

 

Although analysis, development, QA testing, releases, and initial operations support efforts stream as a continuous flow, DBA tasks have a tendency to abruptly change the flow, disrupting progress.

 

As DevOps database frameworks mature, DBA task inclusion becomes seamless to the process, supporting continuous integration and automation.

 

Core DBA work shifts from being a significant Gantt chart bar to barely a blip on the timeline. Imagine not being constantly asked, “When will the database be ready?”; instead, imagine not even needing to be part of the build and release cycle.

 

How? Infrastructure as code, which involves predefining database configurations that can be built on virtual resources, initiated by developers on demand.

 

Also shifting to DevOps, SAs can purchase, rack and stack, power up, and network attach computing resources like an internal cloud, ready for consumption. Optionally, provisioning platforms may simply mean consuming external cloud resources (an example is DBaaS).

 

Either way, SAs can create templates for standard server builds: database, app server, web server, and so on. DBAs can then extend the database server build template to include database software installation and database creation. Loading test data into preproduction databases for testing can also be automated.

 

All scripts used in the build process must be managed as code, including versioning and source code control. DBAs need to manage their code just as their development partners do.
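One possible minimal sketch of treating DBA scripts as versioned code: scripts live in source control, are applied in order, and the applied versions are recorded. The directory layout, ledger file, and use of psql are assumptions; established migration tools provide the same idea more robustly.

```python
# Minimal sketch: applying versioned DBA scripts from source control in order
# and recording what has already run, so database changes ship with the release
# package instead of as manual spreadsheet tasks. Paths and the ledger file are
# illustrative; a real pipeline would record applied versions in the database itself.
import json
import pathlib
import subprocess

SCRIPT_DIR = pathlib.Path("db/migrations")   # e.g. V001__create_orders.sql, V002__add_index.sql
LEDGER = pathlib.Path("db/applied_versions.json")

def applied_versions() -> set:
    return set(json.loads(LEDGER.read_text())) if LEDGER.exists() else set()

def apply_pending(database_url: str) -> None:
    """Apply any not-yet-applied SQL scripts in version order via psql."""
    done = applied_versions()
    for script in sorted(SCRIPT_DIR.glob("V*.sql")):
        if script.name in done:
            continue
        subprocess.run(
            ["psql", database_url, "-v", "ON_ERROR_STOP=1", "-f", str(script)],
            check=True,
        )
        done.add(script.name)
        LEDGER.write_text(json.dumps(sorted(done), indent=2))
```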

 

Security, everyone’s concern, has at least three tasks: 1) scan and approve template-driven server builds; 2) dictate access methods and privileges for the operating system, database, web services, and more; and 3) periodically scan active servers for vulnerabilities. DBAs must provide continual feedback to the security team to ensure risk mitigation.

 

With this automation, the SDLC pipeline no longer includes long duration bars for purchasing and building servers and databases; instead, developers can provision on demand. Yes, the hairs on my neck are standing up, too.

 

Remember that although you still control the installation, build, and configuration of the database, you can turn your focus to performance and customer experience improvements once you have automated provisioning.

 

Now that servers are provisioned from predefined templates with or without using a DevOps tool, platform consistency begins to evolve. As code progresses toward production, needed environments are spun up using the same template as the initial development server.

 

In some cases, even the production ecosystem is built exactly like every server involved in the code release process.

 

Appreciating that building production web and app servers from templates can be a successful model is one thing, but accepting that idea for production database servers needs more consideration.

 

Agreeing that only data is stateful allows the inference that the data could be loaded and unloaded, even transformed, to meet business requirements.

 

Realistically, however, it is unlikely that a multiterabyte relational database would undergo that much manipulation. In these cases, DBAs may choose to derive the preproduction database configuration from the production setup, maintaining platform consistency.

 

Mike Fal writes in the Simple Talk blog post “DevOps and the DBA”: “The reality is that chaos, instability, and downtime are not the result of speed, but the result of the variance.”

 

Inconsistencies between nonproduction and production environments have always undermined production releases (“it worked in development”), extended outages (“change ABC implemented 2 months ago was never removed from production, which caused this release to fail”), and degraded performance (“it was fast in QA”) because the solution could not scale to the production load.

 

Marching forward, DBAs have the opportunity to improve platform stability, remove build bottlenecks, and increase production resiliency by collaborating toward on-demand provisioning capabilities, reducing failures caused by inconsistency, and most importantly, being cultural change warriors. Many of you are already doing DevOps work—now there is a name to help facilitate conversations.

 

DBAs for DevOps


DBA “Undersight”

DBA work has been a “black box” for too long. Earlier, I mentioned “magic” as a DBA tool. I was joking, of course, but the reality is that DBA scripts, database performance configuration changes, login triggers, and other DBA outputs are neither scrutinized enough nor managed properly.

 

The change advisory board (CAB) team may ask a question or two about why the change is needed, but many CAB members probably do not have the required knowledge to question the change enough to understand the potential harm. I hear what you are thinking, “The CAB does not have the technical experience to interrogate most changes.”

 

I agree, but I also maintain that CAB members see fewer database changes (compared with application changes) and fail to realize that database change mistakes tend toward the catastrophic. Ultimately, I believe the CAB should not be the body evaluating these changes at all.

 

The product owner and DevOps team members should know when to deploy because they intimately know the readiness of the code, understand the consequences of failure, and are working the backlog based on value.

 

DevOps protects the teams from consequences if the teams abide by the mandates to test the code exhaustively and never allow a defect to be deployed into production.

 

DBAs and DevOps team members surely agree to this value proposition, not needing oversight for releases. You’ll have to persistently engage the DBAs to shift expectations in order to incorporate their work into the release cycle.

 

 

 

DBA Value Proposition


DBA participation in DevOps draws in a critical application availability and performance contributor: the database. Involving DBAs means that application code is evaluated from a different perspective, especially calls to the database. Database changes become integrated code, subject to continuous integration and exhaustive testing.

 

DBAs can identify poorly executing queries and transactions and baseline production performance. They can get ahead of forthcoming code changes or new functionality by understanding the impact on the prepared environments, which gives DBAs time to analyze and implement performance-tuning enhancements before the additional load is present.
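As an example of baselining query performance, here is a hedged sketch that reads PostgreSQL’s pg_stat_statements view via psycopg2; it assumes the extension is enabled and uses PostgreSQL 13+ column names (older releases expose total_time/mean_time instead).

```python
# Minimal sketch: pulling the worst-performing statements from PostgreSQL's
# pg_stat_statements view to baseline production behaviour.
# Assumes psycopg2 and the pg_stat_statements extension; column names vary by version.
import psycopg2

def slowest_queries(dsn: str, limit: int = 10):
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT query, calls, mean_exec_time, total_exec_time
            FROM pg_stat_statements
            ORDER BY total_exec_time DESC
            LIMIT %s
            """,
            (limit,),
        )
        return cur.fetchall()

for query, calls, mean_ms, total_ms in slowest_queries("dbname=appdb"):
    print(f"{total_ms:10.1f} ms total | {calls:6} calls | {mean_ms:8.2f} ms avg | {query[:60]}")
```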

 

Problems become challenges for a larger team, compiling more experiences and skills into the pool of contributors to determine root cause and deploy mitigation.

 

DBAs’ experiences in other infrastructure areas add another layer of value by being able to assess the application and database by looking under the covers of the operating systems, storage, and network. Further discussion is ahead.

 

Closer and constant DBA and DevOps team collaboration improves product outcomes, stability, security, and performance, which lead to happier customers and improved business results. As DBAs better understand how the business team uses the product, disaster recovery and backup-and-restore strategies can be customized accordingly.

 

Giving developers the freedom to fire up virtual hosts with different database options enables consideration of risk early in the process.

 

A developer wanting to test a new data access service can test, retry, and destroy the virtual host to start over with a fresh host if necessary. By scripting different template options for different data platforms, DBAs shift experimentation from production to early in the pipeline.

 

Governing the Portfolio and Checkpoints

Your application portfolio is always evolving, and the only way to be successful in such a moving environment is to have the right governance in place. Governance was hard in the past; in the new world, it has become even more difficult.

 

There are more things to govern, the overall speed of the delivery of changes has increased, and without a change in governance, governance will either slow down delivery or become overly expensive.

 

There are four main points of governance for any change:

Checkpoint 1 (CP1): this answers the question of whether or not the idea we have for the change is good enough to deserve some funding to explore the idea further and come up with possible solutions.

 

Checkpoint 2 (CP2): this answers the question of whether we have found a possible solution that is good enough to attempt as a first experiment or first release to validate our idea.

 

Checkpoint 3 (CP3): this answers the question of whether or not the implemented solution has reached the right quality to be released to at least a small subset of the audience in production.

 

Checkpoint 4 (CP4): this answers the question of whether or not the released solution achieved the outcome we expected and validated our original idea.

Checkpoint 1 (CP1)

 

At CP1, we are mostly talking about our business stakeholders. Somewhere in the organization, a good idea has come up or a problem has been found that requires fixing.

 

Before we start spending money, our first checkpoint is to validate that we are exploring the right problems and opportunities that have a business impact, are of strategic importance or are our “exploratory ideas” to find new areas of business.

 

This checkpoint is a gatekeeper to make sure we are not starting too many new things at the same time and to focus our energy on the most promising ideas.

 

Between CP1 and CP2, the organization explores the idea, and both business and IT come together to run a discovery workshop that can take a couple of hours or multiple weeks depending on the scale of the problem. You can run this for a whole business transformation or for a small change.

 

The goal of discovery really falls into three important areas: (1) everyone understands the problem and idea, (2) we explore what can be done with the support of IT, and (3) we explore what the implementation could look like in regard to schedule and teams. This discovery session is crucial to enable your people to achieve the best outcome.

 

Checkpoint 2 (CP2)

After discovery, the next checkpoint is validation that we now have discovered something that is worth implementing. At this stage, we should check that we have the capacity to support the implementation with all parties: IT, business stakeholders, the operations team, security, and anyone else impacted.

 

This is a crucial checkpoint at which to embed architectural requirements, as it becomes more difficult to add them later on. Too often, business initiatives are implemented without due consideration of architectural aspects, which leads to increased technical debt over time.

 

It is my view that every initiative that is being supported by the organization with scarce resources such as money and people should leave the organization in a better place in two ways: it better supports the business, and it leaves the IT landscape better than it was before. This is the only reasonable way to reduce technical debt over time and deal with legacy.

 

CP2 is the perfect time to make sure that the improvement of the IT landscape/down payment of technical debt is part of the project before it continues on to implementation. This has to be something that is not optional; otherwise, the slippery slope will lead back to the original state.

 

It is quite easy to let the necessary rigour be lost when “just this once” we only need to quickly put this one temporary solution in place. I learned over the years that there is nothing more permanent than a temporary solution.

 

Between CP2 and CP3 is the bulk of the usual software delivery that includes design, development, and testing work being done in an Agile fashion.

 

I am confident that Agile is the only methodology we will need going forward but that we will have different levels of rigour and speed as part of our day-to-day Agile delivery. Once the solution has matured over several iterations to being a release candidate, we will have CP3.

 

Checkpoint 3 (CP3)

At CP3, we will confirm that the release candidate has reached the right quality for us to release it to production. We will validate that the architecture considerations have been adhered to and technical debt has been paid down as agreed, and we will not introduce new technical debt unknowingly.

 

(Sometimes we might consciously choose to accrue a little more debt to test something early but commit to fixing it in the next release. This should be a rare occasion, though.)

 

This checkpoint is often associated with the change control board, which has to review and approve any changes to production. Of course, we are looking for the minimum viable governance here, and you can refer to the previous blog section for more details on general governance principles to follow at CP3.

 

Between CP3 and CP4 the product is in production and is being used. If we follow a proper Agile process, the team will already be working on the implementation of the next release in tandem with supporting the version that has just gone live.

 

Internal or external stakeholders are using the product, and we gather feedback directly from the systems (through monitoring, analytics, and other means) or directly from the stakeholders by leveraging surveys, feedback forms, or any other communication channel.

 

Checkpoint 4 (CP4)

Checkpoint 4 is the checkpoint that is extremely underutilized in my experience. It’s one of those processes that everyone agrees is important, yet very few have the rigour and discipline to really leverage it to meet its full potential.

 

This checkpoint serves to validate that our idea and the solution approach are valid. Because projects are temporary by definition, the project team has often stood down already and team members have been allocated to other projects.

 

CP4 then becomes a pro forma exercise that people don’t appreciate fully. If we have persistent, long-lasting product teams, the idea of learning from the previous release and understanding the reaction of stakeholders is a lot more important.

 

Those product teams are the real audience of CP4, though, of course, the organizational stakeholders are the other audience that needs to understand whether the money was well invested and whether further investment should be made.

 

CP4 should be an event for learning and a possibility for celebrating success; it should never be a negative experience. If the idea did not work out, we learned something useful about the product that we have to do differently next time.

 

You can combine CP4 with a post-implementation review to look at the way the release was delivered and to improve the process as well as the product. It is my personal preference to run the post-implementation review separately to keep improving the product and the delivery process as two distinct activities.

 

With this governance model and the four checkpoints in place, you can manage delivery at several speeds and deal with the faster pace. Each checkpoint allows you to assess the progress and viability of the initiative, and where required, you can move an initiative into a different delivery model with a different (slower or faster) speed.

 

First Steps for Your Organization

I will provide two exercises for you to run in your organization. This time, both of them are highly related: the first is an analysis of your application portfolio, and the second is the identification of a minimum viable cluster of applications for which a capability uplift will provide real value.

 

If you are like most of my clients, you will have hundreds or thousands of applications in your IT portfolio. If you spread your change energy across all of those, you will likely see very little progress, and you might ask yourself whether the money is actually spent well for some of those applications.

 

So, while we spoke about the IT delivery process in the blog section 1 exercise as one dimension, the application dimension is the second important dimension. Let’s look at how to categorize your applications in a meaningful way.

 

Each organization will have different information available about its applications, but in general, an analysis across the following four dimensions can be done:

 

Criticality of the application: How important is the application for running our business? How impactful would an issue be on the user experience for our customers or employees? How much does this application contribute to regulatory compliance?

 

Level of investment in the application: How much money will we spend in this application over the next 12–36 months? How much have we spent on this application in the past? How many priority projects will this application be involved in over the next few years?

 

The preferred frequency of change: If the business could choose a frequency of change for this application, how often would that be (hourly, weekly, monthly, annually)? How often have we deployed a change to this application in the last 12 months?

 

Technology stack: The technology stack is important, as some technologies are easier to uplift than others. Additionally, once you have a capability to deliver, for example, Siebel-based applications more quickly, any other Siebel-based application will be much easier to uplift too, as tools, practices, and methods can be reused. Consider all aspects of the application in this technology stack: database, data itself, program code, application servers, and middleware.

 

For each of the first three dimensions, you can either use absolute values (if you have them) or relative numbers representing a nominal scale to rank applications. For the technology stack, you can group them into priority order based on your technical experience with DevOps practices in those technologies.

 

On the basis of this information, you can create a ranking of importance by either formally creating a heuristic across the dimensions or by doing a manual sorting. It is not important for this to be precise; we are aiming only for accuracy here.
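A minimal sketch of one such heuristic, scoring each application 1-5 on the four dimensions and weighting them; the applications, scores, and weights are purely illustrative.

```python
# Minimal sketch: one possible ranking heuristic across the four dimensions
# above, using relative 1-5 scores. Applications, scores, and weights are illustrative.
applications = {
    #                  criticality, investment, change frequency, stack uplift ease
    "order-portal":        (5, 5, 5, 4),
    "billing-engine":      (5, 4, 3, 2),
    "hr-leave-tracker":    (2, 1, 1, 3),
    "legacy-reporting":    (3, 1, 1, 1),
}
weights = (0.35, 0.30, 0.25, 0.10)

def score(dimensions) -> float:
    """Weighted sum of the dimension scores for one application."""
    return sum(value * weight for value, weight in zip(dimensions, weights))

ranking = sorted(applications.items(), key=lambda item: score(item[1]), reverse=True)
for name, dims in ranking:
    print(f"{score(dims):4.2f}  {name}")
```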

 

It’s clear that we wouldn’t spend much time, energy, and money on applications that are infrequently changed—applications that are not critical for our business and on which we don’t intend to spend much money in the future.

 

Unfortunately, just creating a ranking of applications is usually not sufficient, as the IT landscape of organizations is very complex and requires an additional level of analysis to resolve dependencies in the application architecture.

 

Identifying a Minimum Viable Cluster

As discussed above, the minimum viable cluster is the subset of applications that you should focus on, as an uplift to these will speed up the delivery of the whole cluster. Follow the steps below to identify a minimum viable cluster:

 

1. Pick one of the highest-priority applications (ideally based on the portfolio analysis from the previous exercise) as your initial application set (consisting of just one application).

2. Understand which other applications need to be changed in order to make a change to the chosen application set.

3. Determine a reasonable cutoff for those applications (e.g., only those covering 80% of the usual or planned changes of the chosen application).

4. You now have a new, larger set of applications and can continue with steps 2 and 3 until the application set stabilizes to a minimum viable cluster.

5. If the cluster has become too large, pick a different starting application or be more aggressive in step 3.
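A small sketch of the iteration above, expanding an initial application through a co-change dependency map until the set stabilizes; the dependency data is invented, and the step 3 cutoff is omitted for brevity.

```python
# Minimal sketch of the iteration above: expand an initial application with the
# applications that must change alongside it until the set stabilizes.
# The dependency map is illustrative; the cutoff rule (step 3) is omitted.
dependencies = {
    # application -> applications that usually have to change with it
    "order-portal": ["customer-api", "billing-engine"],
    "customer-api": ["identity-service"],
    "billing-engine": [],
    "identity-service": [],
}

def minimum_viable_cluster(start: str) -> set:
    cluster = {start}
    while True:
        expanded = set(cluster)
        for app in cluster:
            expanded.update(dependencies.get(app, []))   # step 2: co-changing apps
        if expanded == cluster:                          # step 4: set has stabilized
            return cluster
        cluster = expanded

print(sorted(minimum_viable_cluster("order-portal")))
```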

 

Once you have successfully identified your minimum viable cluster, you are ready to begin the uplift process by implementing DevOps practices such as test automation and the adoption of cloud-based environments, or by moving to an Agile team delivering changes for this cluster.

 

Dealing with Software Packages and Software Vendors

The original purpose of software packages was to support commodity processes in your organization. These processes are very similar to those of other organizations and do not differentiate you from your competitors.

 

And even though many of these software packages are now delivered as software as a service (SaaS), your organization has legacy package solutions that you have to maintain.

 

The problem is that many organizations that adopted software packages ended up customizing the product so much that the upgrade path has become expensive.

 

For example, I have seen multiple Siebel upgrades that cost many millions of dollars. When the upgrade path is expensive, it means that newer, better, safer functionality that comes with newer versions of the package is often not available to the organization for years.

 

Besides this downside, heavy customization over time to make the software support all the requirements from business also means that each further change becomes more expensive and that technical debt increases over time.

 

Leverage a System Integrator

Having spent my entire career either working for a software vendor or system integrator, I find it surprising that so much is left on the table when it comes to using the relationship effectively beyond the immediate task at hand.

 

If you work with a large system integrator (SI) to maintain and develop an application, it is likely that the SI is working with the same application in many other places. While you have some leverage with the software vendor, the SI can use the leverage he has across several clients to influence the application vendor.

 

The better and more aligned your organization is with your SI on how applications should be developed, the easier it will be to successfully influence the software vendor. Better leverage with software application vendors to change their architecture is only one of many benefits that you can derive from changing your relationship with your SI.

 

Leverage User Groups

For most popular applications, there are user groups, which can be another powerful channel to provide feedback to the vendor. Sometimes these are organized by the vendor itself; sometimes they are independent.

 

In either case, it is worthwhile to find allies who also want to improve the application architecture in line with modern practices. Having a group of clients approach the vendor with the same request can be very powerful.

 

A few years back, I was working with an Agile tool that was unable to provide reporting based on story points, relying instead on hour tracking. The product vendor always told my client, my colleagues, and me that our request was unique and hence not a high priority for them.

 

We could only get traction once we had reached out to some other organizations that, unsurprisingly, had issued the same request and had received the same response. The vendor was clearly not transparent with us. Once we had found an alliance of customers, the vendor took our feedback more seriously and fixed the problem.

 

I encourage you to look for these user groups as a way to find potential allies as well as workarounds and “hacks” that you can leverage in the meantime. By now, people worldwide have solved how to leverage DevOps practices for applications that on paper are not very suitable for DevOps. And the good news is that DevOps enthusiasts are usually very happy to share that information.

Fence In Those Applications and Reinvest

 

As discussed in the previous blog section, when the application is not changing and you have to divest from it, you can use an analogy to the strangler pattern in software development to slowly move away from the application. Reduce the investment in the application and reinvest to build new functionality somewhere else that is more aligned with the architecture you have in mind.

 

Be transparent that you are doing this because the software vendor is not providing the capabilities that you are looking for, but you would reconsider if and when those capabilities were available.

 

This will incentivize the software vendor to look into investing in a better architecture (perhaps the reason that the capabilities don’t exist is simply that no one ever asked for them before).

 

Make sure to explain why locked-down architecture and tools are not appropriate going forward and that your requirements for modern architecture require changes in the application architecture.

 

If the software vendor decides that those capabilities are just not the right ones for their application, then not investing any further into the application and spending your money somewhere else is the right thing to do anyway to enable the next evolution of your architecture.

 

Incentivize the Vendor

I always prefer the carrot over the stick; you, too, should look for win-win situations. Improvements in the architecture and engineering of an application will lead to benefits on your side, which you can use to incentivize the vendor.

 

What is usually even more effective is to show how changes will make the application more attractive for your organization and how more and more licenses will be required over time.

 

This is the ultimate incentive for vendors to improve their architecture. And of course, you can present publicly how great the application is and hence create new customers—a win for both parties.

 

As I said at the beginning of this blog section, the opinion of software packages in many organizations is not great. It is therefore surprising how little effort organizations put into actively managing the software they use and into engaging their software vendors to improve the situation.

 

I truly believe that vendors would be happy to come to the party more effectively if more organizations would ask the right questions.

 

After all, why would a software vendor invest in DevOps and Agile–aligned architectures when all every customer is asking for is more functionality and no one is paying for architecture improvements?

 

If companies engaged vendors to discuss the way they want to manage the software package and how important DevOps practices are for them in that process, vendors would invest more to improve those capabilities.

 

Be Creative

If all else fails and you feel courageous and curious, then you can ignore the guidance from your vendors and attempt to adopt DevOps techniques yourself, even to software products that don’t easily lend themselves to those techniques. This is how you start:

 

Find and manage the source code, which sounds easier than it often is. You might have to extract the configuration out of a database or from the file system to get to a text-based version of the code.

 

Find ways to manage this code with common configuration and code-merge tools rather than the custom-made systems the vendor might recommend.

 

You should also investigate the syntax of the code to see whether parts of it are non-relevant metadata that you can ignore during the merge process; in Siebel, for example, this has saved my team hundreds of hours.
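For illustration, a hedged sketch of normalizing exported configuration before it is committed or merged, so diffs ignore attributes that change on every export; the attribute names are assumptions, not a documented Siebel format.

```python
# Minimal sketch: stripping volatile, non-relevant metadata from exported
# configuration before it is committed or merged, so diffs show only real changes.
# The attribute names (LAST_UPD, UPDATED_BY, EXPORT_TIMESTAMP) are illustrative.
import re
import pathlib

VOLATILE_ATTRIBUTES = ("LAST_UPD", "UPDATED_BY", "EXPORT_TIMESTAMP")

def normalize(text: str) -> str:
    """Remove attributes that change on every export but carry no meaning."""
    for attribute in VOLATILE_ATTRIBUTES:
        text = re.sub(rf'\s+{attribute}="[^"]*"', "", text)
    return text

def normalize_file(path: str) -> None:
    p = pathlib.Path(path)
    p.write_text(normalize(p.read_text()))

# Run over every exported object before committing so merges compare only
# the parts of the configuration that actually matter.
```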

 

Try to find APIs or programming hooks in the applications that you can leverage to automate process steps that otherwise would require manual intervention, even if those were meant for other purposes.

 

In my team, we have used these techniques for applications like Siebel, SAP, Salesforce, and Pega.

The above techniques will, I hope, help you to better drive your own destiny and be part of a thriving ecosystem where IT is a real enabler. 

 

First Steps for Your Organization

Strengthen Your Architecture by Creating an Empowering Ecosystem

So, you already have software packages in your organization like so many others. In the previous blog section, we did an analysis of your application portfolio, which you can leverage now to determine which software packages are strategic for your organization.

 

1. Based on the previous application portfolio analysis (or another means), determine a small subset of strategic applications (such as the first minimum viable cluster) to devise a strategy for creating an empowered ecosystem around them.

 

2. Now pick these strategic packages and run the scorecard from this blog section. You can largely ignore the functional aspects, as they are used more for the choice between package and custom software.

 

You could, however, use the full scorecard in case you are willing to reconsider whether your current choice is the right one. Given that you are doing this after the fact, you will already know how suitable the package was by the number of customizations that your organization has already made.

 

3. Where you identify weaknesses in your software package, determine your strategy for addressing them. How will you work with the software vendor to improve the capabilities? Will you work with them directly? Will you leverage a system integrator or engage with a user group?

 

4. Results take time. Determine a realistic review frequency to see whether or not your empowered ecosystem is helping you improve the applications you are working with.

 

You can leverage the principles for measuring technical debt from the previous blog section as a starting point if you don’t have any other means to measure the improvements in your packaged applications.

 

Summary

DBAs are a good match for DevOps. Driven to improve performance, reliability, and system stability, and equipped with the skills to adapt, analyze, and execute process improvements, DBAs can expand the DevOps team’s capabilities; reduce cycle time by pulling database changes into the continuous integration process; contribute new test cases for improved bug detection; and get ahead of performance, load, and other operational challenges before they impact production.

 

By investing in DBAs joining DevOps teams, DevOps leaders and engineers increase their influence and impact on the business. Applying proven DevOps processes to database changes, build templates, database selections, and broader platform considerations presents new opportunities that may previously have been resisted by those same DBAs.

 

DBAs get excited when their contribution can grow, they can grow, and the business can grow.
