Key Responsibilities of a Database Administrator (DBA) in DevOps 2018
DevOps presents exciting opportunities for DBAs to make the improvements that many of you have wanted for years. The culture is shifting to align with the agile and DevOps movements, and DevOps teams are coming to understand the valuable contribution that DBAs bring.
DBAs can more directly influence application performance and infrastructure stability while providing better-fitting database solutions through the incorporation of NoSQL environments and DBaaS offerings.
DBAs need to become automation experts to create and maintain database build templates, integrate with server build templates, and let others do the actual builds.
They check in database change code for absorption into the continuous integration pipeline and build numerous tests to expose all possible defects before they progress toward product delivery, with the goal of never allowing a defect to be deployed into production.
Organizational demand for agility (adapting the business quickly to meet customer demands) and for expedient delivery with an earlier return on investment (ROI) continually drives the expanding and maturing cultural paradigm of DevOps.
These business-mandated edicts have forced information technology teams, including database administrators (DBAs), to incorporate rapid development, continuous integration, automated testing, and release management. Combined with immediate feedback loops, the result is a shift from monolithic applications to object- or services-defined applications.
This blog demonstrates how DBA responsibilities are shifting: from infrastructure builders to infrastructure enablers; from vendor-specific database managers to “best database for the job” proponents; from technology silo experts to technical advisors; from unintentional database-metrics isolationists to “metrics for all to see” facilitators; and from “the database is green” responders to customer experience performance protectors. This disruptive movement stands to absorb more DBAs now that DevOps teams are seeking to strengthen themselves by including them.
To date, DevOps primarily incorporates development (aka programming or software engineering), quality assurance (QA), release management, production operations support, and business team members united in streamlining the software development life cycle (SDLC). Involving DBAs seems to be an afterthought; as Pete Pickerill wrote on http://devops.com, “This oversight is unfortunate.
DBAs have a lot to offer when it comes to correlating the development of technology with the management of the environment in which it’s hosted. In a sense, DBAs have been DevOps all along.” It is a costly oversight.
A viable SDLC model no longer consists of sequential, isolated hand-offs from a business analyst to a programmer to a QA tester to a change coordinator, and finally to the last toss over the wall to operations.
Instead, each team member performs a shift left, which describes an earlier involvement in the process, being pulled upstream to learn about business drivers and other reasons why the software being requested is needed, and (perhaps more importantly) learning how the business uses the software.
QA shifts left to begin building test cases to be used in development and integration; the application DBA shifts left to learn directly from the business what functionality is needed, making the application DBA a more valuable contributor to the solution.
The operations DBA, instead of being ill-informed about changes heading toward production, now learns exactly what is in the pipeline, can recommend performance and other operational advice for inclusion in the solution, and can adjust database server templates early in the SDLC.
Teams build better products when each team member understands the purpose and intended use of the application. When developers hear directly from business team members the features and functionality needed instead of receiving a requirements document with second-hand information translated by a business analyst (BA)—even a very competent BA—the likelihood increases that the software will actually look and perform as requested.
In many organizations, BAs are a myth because it is difficult to bridge business language, processes, and perspectives to their IT counterparts. Much can be lost in translation, resulting in less efficient use of technology.
Whether an application DBA works side by side with the programming team to improve data access code or determine index requirements, or whether an operations DBA toils with release management to ensure the software or service gets moved into production without disrupting the business or degrading application performance, the value-add is clear.
Much of this alignment happens by using increased and improved communications, both in person and through specialized tools. DBAs bring tremendous value to the DevOps proposition by contributing deep technical skills and varied experiences that are ready to be leveraged by existing DevOps teams.
Database as a Service (DBaaS) empowers anyone—everyone—who needs a database to quickly provision one, without concern for the underlying infrastructure or software installation. Realizing the ease and immediate gratification that DBaaS provides, business and development team members expect DBAs to deliver a near-equal service.
Although these teams understand that corporate database provisioning requires proper governance, their delivery expectations are still much sooner than pre-DevOps capabilities. Fortunately, perceived best practices, security requirements, and extended project and purchase approval processes are all realigning to deliver on the promises of DevOps.
DBAs need to exercise judicious discipline, mixed with flexibility and what may feel like overcommunicating, to adapt away from siloed processes: receiving a hand-off from an upstream team, sprinkling on a bit of DBA magic, and passing the package to a downstream team.
To work effectively within the DevOps model, DBAs need to manage databases across a variety of platforms: physical or virtual hosts, and internal or external cloud implementations that are likely using database software that is not relational. For DBAs, ensuring secure access and robust access times may be where traditional responsibilities end.
New responsibilities include assisting with rapid deployment process (continuous integration and continuous deployment) creation, managing code and scripts using software versioning tools, and building infrastructure as code. Although data remains stateful, the schema, the database software, and the host platform are stateless.
DBAs need to become agents of change, supporters of the DevOps methodology, and tactical consultants driven to improve all aspects of the SDLC. DBAs need to become platform and database agnostic. There is more to come on these topics.
Relational databases have been the preferred (best understood) environment for the storage and retrieval of data for several decades. As petabytes of unstructured data have been introduced into the mix, relational databases have struggled to manage the data while staying true to traditional relationship precepts.
To fill the gap, NoSQL databases such as Cassandra and MongoDB introduced ecosystems built to store and retrieve data outside of the relational model.
DevOps involves DBAs creating database build templates that developers, yes developers, use to spawn databases on demand, which is simply one step in the automated server provisioning process.
Test data loads are automatically consumed, tested, and measured without direct DBA action. DBAs instead help define how the test data is selected and staged for consumption. Learning to accelerate work using automation and source code control for all scripts and code further reduces the development cycle time.
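The template idea above can be sketched in a few lines. Everything here (the template fields, the `provision_database` helper) is hypothetical; in a real pipeline the resolved specification would be handed to the provisioning tooling rather than simply returned:

```python
# A DBA-approved database build template that developers can self-serve
# from. Field names and values are illustrative, not tied to any tool.
DB_TEMPLATE = {
    "engine": "postgres",
    "version": "10.4",
    "memory_gb": 8,
    "storage_gb": 100,
    "backup_schedule": "daily",
}

def provision_database(name, template=DB_TEMPLATE, **overrides):
    """Merge developer overrides onto the DBA-approved defaults.

    A real pipeline would pass this spec to the automated server build;
    here we just return the resolved specification.
    """
    spec = dict(template)      # start from the approved baseline
    spec.update(overrides)     # apply per-project tweaks
    spec["name"] = name
    return spec

# A developer spins up a CI test database on demand:
dev_db = provision_database("orders_ci", memory_gb=4)
print(dev_db["engine"], dev_db["memory_gb"])
```

The point is that the DBA curates the baseline once, under source control, and the build itself no longer requires DBA hands.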
DBAs must aggressively and proactively accelerate product delivery to match the velocity of the release cadence and be determined to never be the bottleneck.
“Best Database for the Job” Proponents
Particularly for new projects, DBAs need to weigh the impact of force-feeding data into the relational model versus introducing a new database model that is more aligned with the application’s expected data usage pattern.
Structured and unstructured data may best live in separate databases, with applications calling multiple services to read, modify, or delete data. The code is evolving to be more dynamic to leverage multiple back-end databases.
Legacy databases will not disappear soon because many still act as the databases of record and contain valuable data. Also, audit and governance requirements have to be satisfied, many by just keeping the data in place until the mandated retention window expires.
Organizations may decide to decouple monolithic application functions into services that fit agile development and DevOps more readily. Segments of data thus may need to be copied or moved into a different database, which is work that DBAs perform regularly. Advantage DBAs: new resume entry!
Transforming to align with a business partner’s need for scalable, well-performing, and resilient systems, at a lower cost, is much easier when leveraging an established methodology.
This methodology has been proven feasible by Netflix, Facebook, Flickr, and Etsy; and DevOps has matured to the point at which even risk-averse organizations should feel comfortable adopting it.
Lean processes, obsessive automation, faster time to market, cost reductions, rapid cycle times, controlled failures and recoveries, and robust tool suites empower this ambitious transformation. DevOps DBAs must adapt to this new way of building software products while driving infrastructure stability, resiliency, and availability eclipsed only by extreme application performance.
DBAs are persistently criticized for being inflexible, slow to deliver, and generally uncooperative. DBA processes, along with many operations processes, remain serialized and burdened by outdated policies and delivery expectations.
Shifting to DevOps aligns (absorbs) DBA tasks into combined process flows that began during the agile development transformation. DBAs need to purposefully engage their development peers to communicate a willingness to adopt DevOps practices, manage the infrastructure as code using source control, and learn the implemented tool suite.
DevOps brings many new opportunities for IT teams to deliver superior software products that fulfill business initiatives that lead to excellent customer experiences.
On the flip side, challenges arise when integrating processes, increasing release momentum, reducing cycle time, managing infrastructure as code, and implementing change requests.
Many DBAs were left behind during the initial DevOps wave; however, current landscape changes include drawing in a variety of IT technicians to further expand capabilities, extend collaboration, reduce waste, and abate SDLC costs.
The inclusion of DBAs into DevOps is not without risk because, as with any process, adding another step, variable, or person increases the possibility for errors or other disruptions.
Fortunately, DevOps is supported by ever-evolving powerful tools purposed to assist with collaboration, code management, quality assurance testing, and task automation.
Converting from technology silo experts to technical advisors instills a new sense of purpose and resets our mindset so that we are willing to partner with teams once deemed “nemeses” for the good of the business and the customer.
“Metrics for All to See” Facilitators
DBAs (at least good DBAs) constantly assess the production database environment (code base; database; host operating system [OS]; load; capacity; and, less often, network throughput) and seek opportunities to improve application performance. Whether by identifying poor performing queries, needed indexes, or expanded buffer cache, performance matters to DBAs.
The misstep has often been the unintentional isolation of performance metrics: not purposefully, holistically, or frequently sharing them with network and system administrators (SAs) or development team members, although doing so may further improve application performance.
More importantly, it provides exceptional value to customers. Sharing performance metrics enables disparate teams to aggregate their combined experience and skills, producing better solutions than are possible individually.
DevOps Success Metrics
Extending metrics beyond customer experience performance management, DevOps introduces measures for software delivery efficiency, release cadence, and success rate. Continuous code integration, automated testing, and continuous delivery have to be measured to determine success.
Continuous integration checks how well newly introduced code operates with existing code, measured by defects. Automated testing checks whether new or modified code functions as defined in the use case and whether the code passes regression testing. Continuous delivery/deployment checks how often code is released into production (release cadence) and whether the code causes disruption, tracked by incidents.
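Two of these measures reduce to simple arithmetic. A minimal sketch, with invented numbers:

```python
# Release cadence: how often code reaches production.
def release_cadence(release_count, weeks):
    """Average releases per week over the measurement window."""
    return release_count / weeks

# Change failure rate: what fraction of releases caused an incident.
def change_failure_rate(incident_releases, total_releases):
    return incident_releases / total_releases

print(release_cadence(12, 4))        # 12 releases over 4 weeks
print(change_failure_rate(2, 12))    # 2 of 12 releases caused incidents
```

Tracking both together matters: cadence without a failure rate hides disruption, and a failure rate without cadence hides stagnation.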
Customer Experience Performance Protectors
Holistically understanding the infrastructure and application architecture provides opportunities to decrease cumulative degradation, which improves customer experience. Even for a basic transaction flow, the delivery level drops rapidly.
Table: Cumulative Degradation
Component Success %
Web server 99.7%
App server 98%
App server 98%
Web server 99.7%
Customer Experience: 91.58%
Cumulative degradation reveals why the IT five 9’s availability goal falls short when measuring customer experience.
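The mechanism is that per-component success rates multiply, because a transaction succeeds only if every component in its path does. A minimal sketch (the helper name is mine; multiplying just the four listed rates already pulls the figure below 95.5%, and each additional tier in the real path drags it lower still):

```python
from functools import reduce

def cumulative_success(rates):
    """Probability a transaction survives every component in the chain."""
    return reduce(lambda acc, r: acc * r, rates, 1.0)

# Web server -> app server -> app server -> web server, per the table
chain = [0.997, 0.98, 0.98, 0.997]
print(f"{cumulative_success(chain):.2%}")   # → 95.46%
```

This is why a stack of individually impressive availability numbers can still produce a mediocre customer experience.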
Application performance management (APM) can provide transactional perspectives of customer experience, transaction times, and frequency, which provide a framework to fully understand application performance across the infrastructure. DBAs with this transparency level can shift to predictive analysis, allowing corrections to be implemented before the customer notices.
Even troubleshooting becomes less problematic and faster because baseline variances can be reported if predetermined thresholds are violated. Additionally, preproduction APM application monitoring can identify code or infrastructure performance deficiencies before release, preventing problems from getting into production.
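The baseline-variance reporting just described can be sketched simply. The metric names and the 20% threshold below are invented for illustration:

```python
def variance_alerts(baseline, current, threshold=0.20):
    """Return metrics whose current value drifts from baseline by more
    than `threshold`, expressed as a fraction of the baseline."""
    alerts = []
    for metric, base in baseline.items():
        drift = abs(current[metric] - base) / base
        if drift > threshold:
            alerts.append((metric, round(drift, 2)))
    return alerts

baseline = {"avg_query_ms": 40, "txn_per_sec": 500}
current  = {"avg_query_ms": 55, "txn_per_sec": 480}
print(variance_alerts(baseline, current))
```

Query time has drifted well past the threshold while throughput has not, so only the former is flagged: a correction can be scheduled before customers notice.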
Internationally recognized management guru Peter Drucker famously pronounced, “Culture eats strategy for breakfast.” Culture presents a perplexing challenge to DevOps implementation.
Wanting to do DevOps by investing in DevOps tools, training staff, and hiring expert consultants, all without a transferal of mindset, behaviors, and incentives, only suppresses the status quo, which lies quietly below the waves seeking an opportunity to re-emerge.
During a recent client call, a team brought forward a build request for two virtual hosts, including software installs for several products from a popular Agile tool suite. The conversation went something like this:
Requester: “We need two VMs with tool A and tool B installed for a project starting in 10 days.”
SA: “Once approved, it takes 6 weeks to provision a VM.”
Requester: “This project has been approved by SVP whats-her-name and VP so-and-so as a fast-track project.”
SA: “Our process takes 6 weeks. Then someone still needs to install the tools because that’s not what we do.”
By this time, I am “cryaughing”—trying not to cry or laugh, but really wanting to do both. But I digress.
Requester: “We are trying to be agile and need to move fast.”
SA: “Our process is fast! It used to take 3 months to get a host provisioned.”
And so forth. Sadly, this is not a fictional story for blog demonstration purposes.
As this unfortunate yet emblematic example shows, existing processes create cultures ingrained with suppositions of how well teams are performing, what people believe is expected from them, and a “don’t rock the boat” mindset, all of which present tremendous hurdles to be surmounted. DevOps requires processes to be rethought, leaned out, sped up, and extraordinarily stable.
The movement's heart and soul must be strong, patient leaders, complemented by charismatic and highly respected technical subject matter experts (SMEs) such as DBAs or senior DevOps engineers, who challenge the status quo by instigating a new culture focused on DevOps best practices.
An organization’s culture must transform into a more collaborative, process-defined, and people-centric entity to successfully drive continuous improvement, automation, and integrated testing.
To change the culture, people must at least accept the new direction, even if reluctantly. The best-case scenario includes people being only too happy to scrap the old model and excited to move on to better methods.
Both types of people encountered need to be coached differently to effectively ensure the movement’s success. The reluctantly accepting group drags its feet, in no hurry to reach the new destination.
Coaching increases the pace, improves buy-in, and develops needed attitudes. The excited group (probably the ones who have been telling each other for years that management is a cast of morons and constantly bloviating about how everything would be awesome if only they were in charge) can be more dangerous to the cause than those who may be flat-out resisting the change.
Failing to control the ascent with planned and well-communicated phases that include needed staff training, concrete process definitions, and general good change practices may result in a catastrophic crash and burn.
Change is an interesting beast. A team member once asked his manager why the manager had not pushed for a large change that needed to happen. The manager responded that change done gradually over time usually receives better acceptance.
The manager’s example was for the team member to imagine coming to work the next morning to find a full-grown tree in his cube.
The manager explained that even if the employee loved trees, the tree would be bothersome because it had invaded his space unexpectedly. But if the team member arrived to find a potted sapling on his desk, he might think it is cool.
Over time, as he nurtured the sapling (even though he had to repot the now small tree and place it on the floor), the team member would remain comfortable with its presence.
After a few years passed, when people would ask him about the full-grown tree in his cube, he would proudly share that he was able to transform a weak sapling into a mighty tree. The employee would accept the change because he was involved (nurturing), and the change came about slowly but consistently.
Driving the new DevOps culture requires introducing a “sapling” and nurturing its growth until the “tree” is well rooted. The more people involved in the nurturing process, the better the odds of a positive outcome.
Leaving the tree-nurturing responsibility in the hands of only the core team likely leads to a brown and wilted sapling.
It is odd to think that one of the primary benefits unleashed at the dawn of the computer era was the ability to reduce costs and processing time by automating routine tasks.
Yet today, when CIOs and their teams are under pressure to drive strategic growth initiatives needed to increase revenue or introduce new products for customers, much of the behind-the-scenes effort is still completed manually.
IT professionals (I, too, have been guilty of this) love working with shiny new toys—often at the expense of reducing effort or costs through automation.
DevOps is about speed, flexibility, resiliency, and continuous improvement. People need to understand the processes, build and test the software, implement the automation, and then step back and let the computers do the work.
For DBAs, this means relinquishing control of the scripts and surrendering them to source code control. The scripts now become included in the release package instead of being manual executions listed as a task in a planning spreadsheet.
Automation applies to server builds, database software installs and configurations, network settings, storage allocations, schema builds, code compiles, job scheduling, and more. Anything and everything should be automated.
Security programs should automatically scan hosts for vulnerabilities. Automation is the way resiliency can be gained, reducing human error risks. The automation itself needs to be monitored and measured to ensure it is delivering expected benefits.
People live by measurements. Our day (a measure) is an accumulation of events segmented by the measure of time. The value of our contribution to the organization comes periodically (a rhythm of measures):
the amount of our paycheck. Hence, measurements must be important. Yet too many IT shops still focus on binary checks, such as a server being up or down instead of business measures such as end-user experience and transaction capability; or for DevOps: cycle time, failure and resolution rates, release momentum, feature time to market, and reduced SDLC costs.
For DevOps to succeed, a consistent whittling away at inefficiencies, avoidable steps, and pointless multilevel approvals must occur. The burden of CYA and sometimes ego boosting for less-mature executives (e.g., requiring ten people to approve a change) has been known to be one of the most consuming yet valueless requirements related to the SDLC.
After all, the business made the request and IT agreed to complete the work; that already sounds like approval. Yes, some oversight is needed, but surely a few of those approvals would not be missed.
Applying lean and Kanban techniques to trim inefficiencies should return value through reduced waste and improved speed. Process mapping, or value stream mapping, should be done to capture the delivery process, see how long each step takes, and evaluate the need for each step.
Decisions can then be made to remove impediments, smooth out the workflow, and drop unneeded steps and approvals to produce a streamlined SDLC process.
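Value stream mapping lends itself to a toy calculation: record each step's duration and whether it adds value, then compute process efficiency. The steps and durations below are invented:

```python
# (step name, elapsed hours, adds value?) for one delivery pipeline
steps = [
    ("write change script",   4,  True),
    ("wait for CAB approval", 72, False),
    ("run automated tests",   1,  True),
    ("deploy to production",  1,  True),
]

total = sum(hours for _, hours, _ in steps)
value_add = sum(hours for _, hours, adds in steps if adds)
print(f"efficiency: {value_add / total:.1%}")   # → efficiency: 7.7%
```

A map like this makes the argument for the team: nearly all of the elapsed time sits in a single non-value-adding wait, which is exactly the step to lean out.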
“Knowledge is power.” That saying has been around for years, but it has been distorted: many people hoard information for personal gain rather than to benefit others.
Someone who knows how to cure cancer does not have power by selfishly retaining the solution; instead, the power comes from releasing the information and then watching how the knowledge, when applied, impacts people around the world.
DevOps breathes by sharing information. Business, development, and operations (including DBAs) must communicate in full-duplex. Messages need to be sent and received simultaneously upstream and downstream. Each team member must understand why the business needs the function and how the business plans to use the function.
Addressing operational challenges earlier in the process leads to better-performing, more resilient production systems. As DevOps extends continuous testing across the SDLC, all environments must match the planned end state.
Operational knowledge from team members’ vast experience, aggregated into manageable bundles driven upstream to improve the infrastructure, creates consistent and stable platforms.
Do you remember the grade school exercise in which the teacher would share a sentence with one student that was then passed from student to student until the last student relayed the sentence back to the teacher?
Whether it was changed intentionally for malice or fun or changed because students couldn’t remember the exact statement, the final message was usually so dissimilar to the original sentence that it was humorous.
Unfortunately, this is the exact process IT has used for decades: it receives requirements from the business and then passes the details, which are distorted incrementally, along the supply chain, so that (as witnessed far too many times) the business cannot reconcile the final product to the requested functional requirements.
DevOps must have a continuous feedback mechanism that constantly relays which code and infrastructure decisions apply seamlessly to production and which disrupt or degrade the customer experience through reduced application performance or availability.
Earlier involvement in the SDLC introduces challenges, maybe even opposition, to traditional responsibilities. Customary DBA tasks seem often to be outliers concerning the SDLC.
Although analysis, development, QA testing, releases, and initial operations support efforts stream as a continuous flow, DBA tasks have a tendency to abruptly change the flow, disrupting progress.
As DevOps database frameworks mature, DBA task inclusion becomes seamless to the process, supporting continuous integration and automation.
Core DBA work shifts from a significant Gantt chart bar to barely a blip on the timeline. Imagine not being constantly asked, “When will the database be ready?” Imagine, instead, not even being on the critical path of the build and release cycle.
How? Infrastructure as code, which involves predefining database configurations that can be built on virtual resources, initiated by developers on demand.
Also shifting to DevOps, SAs can purchase, rack and stack, power up, and network attach computing resources as an internal cloud, ready for consumption. Optionally, provisioning platforms may simply mean consuming external cloud resources (an example is DBaaS).
Either way, SAs can create templates for standard server builds: database, app server, web server, and so on. DBAs can then extend the database server build templates to include database software installation and database creation. Test data loads for preproduction databases can also be automated.
All scripts used in the build process must be managed as code, including versioning and source code control. DBAs need to manage their code just as their development partners do.
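One way to picture the layering: the SA's base template plus the DBA's database extension, with the combined step list being what gets checked into source control. All step names here are illustrative:

```python
# The SA-owned base server build, common to every host type.
BASE_SERVER_TEMPLATE = [
    "allocate virtual host",
    "apply OS hardening profile",
    "attach monitoring agent",
]

# The DBA-owned extension layered on top for database servers.
DB_EXTENSION = [
    "install database software",
    "apply database config baseline",
    "create database and schemas",
    "load masked test data",        # preproduction builds only
]

def build_steps(base, extension):
    """Concatenate the base build with the DBA's extension."""
    return list(base) + list(extension)

db_server_build = build_steps(BASE_SERVER_TEMPLATE, DB_EXTENSION)
print(len(db_server_build), "steps in the database server build")
```

Because each layer is owned, versioned, and reviewed separately, the SA and the DBA can evolve their pieces independently while every build stays reproducible.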
Security, everyone’s concern, has at least three tasks: 1) scan and approve template-driven server builds; 2) dictate access methods and privileges for the operating system, database, web services, and more; and 3) periodically scan active servers for vulnerabilities. DBAs must provide continual feedback to the security team to ensure risk mitigation.
With this automation, the SDLC pipeline no longer includes long duration bars for purchasing and building servers and databases; instead, developers can provision on demand. Yes, the hairs on my neck are standing up, too.
Remember that although you still control the installation, build, and configuration of the database, you can turn your focus to performance and customer experience improvements once you have automated provisioning.
Now that servers are provisioned from predefined templates with or without using a DevOps tool, platform consistency begins to evolve. As code progresses toward production, needed environments are spun up using the same template as the initial development server.
In some cases, even the production ecosystem is built exactly like every server involved in the code release process.
Appreciating that production web and app server build from templates can be a successful model is one thing, but accepting that idea for production database servers needs more consideration.
Agreeing that only data is stateful allows the inference that the data could be loaded and unloaded, even transformed to meet business requirements.
Realistically, though, it is unlikely that a multiterabyte relational database would undergo that much manipulation. In such cases, DBAs may choose to derive the preproduction database configuration from the production setup, maintaining platform consistency.
Mike Fal writes in the Simple Talk blog post “DevOps and the DBA”: “The reality is that chaos, instability, and downtime are not the result of speed, but the result of variance.”
Inconsistencies between nonproduction and production environments have always undermined production releases (it worked in development), extended outages (change ABC implemented 2 months ago was never removed from production, which caused this release to fail), and degraded performance (it was fast in QA) because the solution could not scale to the production load.
Marching forward, DBAs have the opportunity to improve platform stability, remove build bottlenecks, and increase production resiliency by collaborating toward on-demand provisioning capabilities, reducing failures caused by inconsistency, and most importantly, being cultural change warriors. Many of you are already doing DevOps work—now there is a name to help facilitate conversations.
DBAs for DevOps
Experienced DevOps professionals have the responsibility to assimilate new people into the movement. DBAs coming on board need to understand (and possibly be convinced) that DevOps is about improving and quickening a continuous flow of software or web service improvements designed to provide a richer customer experience, abounding with excellent performance and extreme availability.
DBAs need to change many habits to blend traditional work into the DevOps model.
DBA work has been a “black box” for too long. Earlier, I mentioned “magic” as a DBA tool. I was joking, of course, but in reality DBA scripts, database performance configuration changes, login triggers, and other DBA outputs are neither scrutinized enough nor managed properly.
The change advisory board (CAB) team may ask a question or two about why the change is needed, but many CAB members probably do not have the required knowledge to question the change enough to understand the potential harm. I hear what you are thinking, “The CAB does not have the technical experience to interrogate most changes.”
I agree, but I also maintain that CAB members see fewer database changes (compared with application changes) and fail to realize that database change mistakes tend toward the catastrophic. More fundamentally, I believe the CAB should not be evaluating changes at all.
The product owner and DevOps team members should know when to deploy because they intimately know the readiness of the code, understand the consequences of failure, and are working the backlog based on value.
DevOps protects the teams from consequences if they abide by the mandates to test the code exhaustively and never allow a defect to be deployed into production.
DBAs and DevOps team members surely agree to this value proposition, not needing oversight for releases. You’ll have to persistently engage the DBAs to shift expectations in order to incorporate their work into the release cycle.
Although DBAs, fortunately, have the rare ability to bridge the gap between development and operations, they have been detrimentally overlooked in many companies that deploy DevOps practices.
A DBA’s ability to interrogate code and construct a resilient, well–performing database environment uniquely defines the capabilities needed for DevOps.
DevOps requires transformation from organizational silos defined by a technology skill set to process-driven, continuous flowing work streams that are empowered by collaboration and automation. DevOps is about speed, delivery time, continuous integration and deployment, release cadence, and superior customer experience.
Although metrics are critical for measuring customer experiences such as application responsiveness, they are also needed to measure release success rate, software defects, test data problems, work, and more.
DBAs tend to be strong technical leaders who provide insight into coding best practices, host platform configurations, database performance improvements, data security, and protection. To be successful, DBAs have to communicate, collaborate, teach, and learn while continuously improving database performance and availability.
The job often includes meeting with developers to discuss poorly performing code, index requirements, or execution plans and to recommend code remediation.
These “normal” interactions are imperative to the success of DevOps, leaving me perplexed about why DBAs were not one of the first operations team members asked to join the DevOps movement.
Understanding that DBAs are “built” in significantly different ways should help with the approach. Many DBAs were once developers, others came from various infrastructure roles, and still others have always been DBAs. Determining which DBA type is easier to bring into the fold is a fool’s game. DBAs are people, and people are surprisingly unpredictable.
One ex-developer DBA may be excited to finally be able to use both skill sets to help advance DevOps, whereas another may be perturbed by having to dig up old skills she had hoped were long dead and buried. Individually interviewing and evaluating each DBA may be necessary.
Much like interviewing potential employees, discernment is needed to assess fit, training needs, and potentially disruptive factors that may impact the existing DevOps team members. The right leaders and SMEs need to be involved and dedicated to the time and effort needed to integrate DBAs.
Rest easy; the good news is that even if some DBAs may resist, they all want to provide value by improving the environment.
Besides, as you start to expand participation in DevOps, you already have a handful of people in mind to make the voyage smoother. You know who I’m talking about.
Yes, the ones you see talking to the development teams on a regular basis, checking in to see how things are going, seeing what changes are coming down the pipe, asking what the application users are saying about performance, and even offering to assist as needed. These people should be your initial picks to join the DevOps team.
Specifically, you should find DBAs who are already engaged, bring them on board, and then let them help you select and onboard other DBAs when needed.
Having a trusted and respected DBA doing the team’s bidding for additional DBA talent is likely to result in volunteers. People want to work with people with whom they have an established relationship. Leverage previous successful working relationships to resourcefully construct the DevOps team.
Whether through formal methods such as a classroom or virtual training, job shadowing, and mentoring; or through informal methods such as team discussions or presentations, teaching needs to be a frequent element of team integration.
It is a given that IT and business teams have difficulty understanding each other without a common taxonomy.
Even teams within IT often fail to understand each other. A developer discussing encapsulation or inheritance may totally perplex a DBA unfamiliar with object-oriented programming terminology.
Never mind if you start talking about Agile, which is very new to many IT professionals. Likewise, a DBA ranting about developers “thrashing” the buffer cache is likely to see the “deer in the headlights” stare.
While investigating a performance issue specific to a screen, a developer shared with a DBA that the drop-down window would display ten data elements from which the application user could select. As they looked at the code and then tested the code in a nonprod environment, they learned that the result set was millions of records.
Millions of records would move from the database to the middle tier, and then only the ten rows needed would be pushed to the client application screen. When asked why millions of rows were being returned, the developer said that was a standard practice.
After looking into other queries, the DBA soon found herself ranting to several development managers about the developers thrashing the buffer cache and the performance impact.
After realizing that these managers did not understand DBA “technical” jargon, she determined that there was a better way to communicate the message.
A few days later, she held a meeting for which she had put together a presentation deck outlining basic buffer cache concepts, with visuals demonstrating how large result sets can negatively impact not only the query requesting the data but also every other aspect of database performance.
After the DBA spent an hour walking the developers through the presentation and answering questions, these developers understood the impact of less-selective queries.
As days and weeks passed, and often when the DBA was visiting the developer realm, developers would jokingly remind each other to not thrash that buffer cache unless they wanted the DBA to get after them.
Although the training was succinct and simplified, it closed the language gap, resulting in improved query selection criteria, smaller result sets, and less buffer cache “thrashing.”
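The remedy in this story (returning only the rows the screen needs instead of millions) can be sketched with keyset pagination. This is a minimal, hypothetical illustration: it uses an in-memory SQLite table named `customers` as a stand-in for the real database, and the table, column names, and page size are all assumptions, not details from the original incident.

```python
import sqlite3

def fetch_page(conn, last_seen_id, page_size=10):
    """Keyset pagination: return at most page_size rows after last_seen_id,
    instead of pulling the full result set into the application tier."""
    cur = conn.execute(
        "SELECT id, name FROM customers WHERE id > ? ORDER BY id LIMIT ?",
        (last_seen_id, page_size),
    )
    return cur.fetchall()

# Demo against an in-memory SQLite table standing in for the real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO customers (name) VALUES (?)",
    [(f"customer-{i}",) for i in range(1_000)],
)

# Only ten rows cross the wire, however many rows the table holds.
page = fetch_page(conn, last_seen_id=0)
```

The database scans and buffers only the small window of rows it must return, which is precisely what keeps the buffer cache from being thrashed.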
The point is that even people in the same industry do not necessarily speak the same language. DevOps introduces another language gap that requires purposeful definition to keep all members of the team aligned.
This blog presumes that readers are technically savvy and already familiar with DevOps and its core terminology, but the same may not be true of the DBAs they begin working with. Accelerating DBA engagement requires DBAs to understand the DevOps principles and foundational constructs.
Experienced DevOps team members need to educate DBAs on processes, continuous integration and delivery, and the implemented toolset. Demonstrating how the code is built, tested, integrated, and released helps DBAs determine where best to interject changes supporting the code cycle.
DBAs also need early notification when system changes are necessary, allowing time for the reconfiguration to be completed, tested, security approved, and automated for pipeline consumption.
Differentiating which DBA inputs to put forth for absorption into existing agile and DevOps processes demands a collaborative effort between existing team members and newly assigned DBAs.
Cohesive integration to capture additional value at decreased cost lengthens the backbone of the movement (the code generation process definitions from start to finish), triggering existing processes to be rehashed or repurposed and then reacclimated within the SDLC.
Together, DBAs and DevOps team members make old things new again as processes throughout the development, testing, release, and operations support pathway are refined to incorporate DBA tools, change methods, and metrics. The critical goal is to not disrupt the code delivery schedule while reaffirming the automation and process sequence preciseness.
Sanctioning a parallel environment that initially mirrors the primary build-to-release architecture onto which the DBA components get added enables a side-by-side comparison to ensure that updated processes work correctly. Of course, automation oversees the execution, examination, and effects reporting.
Quick to Value, Delight the Customer
Excitement for DevOps, besides the “it’s the cool thing now” factor, stems from years of frustration as IT professionals have been viewed as money-wasting, unresponsive, slow to deliver, and second-rate business citizens.
One of my pet peeves has been the “IT alignment to the business” language. Viewing IT as an “outside” entity having to blend in plans to support or conform to the rest of the business accounts for much disillusion and poor esprit de corps.
When agile development (and DevOps in close pursuit) exploded in popularity, IT folks finally envisioned a promising future in which product delivery proficiencies incessantly eliminate time, process, approval, and implementation waste, and then rocket delivery to the customer. One Lean principle is to establish pull.
Customers establish pull inherently when reporting problems or requesting new product functionality. IT’s capability to deliver has never been this radically empowered, in which demand (pull) can be satisfied within a customer’s time expectations.
As consolidated teams, call them agile or DevOps, build new or decouple established services from monolithic applications, change footprints become much smaller (think microservices), making it possible to deploy code quickly with minimal risk. With speed united with smaller code chunks, a failed release becomes no more than a temporary blip on the radar.
Fail Forward, Fail Fast
Application programming interfaces (APIs), microservices, web services, and objects have all been “invented” to eliminate complexity, unreadability, tremendous testing requirements, and massive release risk associated with applications containing thousands, hundreds of thousands, or more lines of code.
Even “package” applications can require multiple objects (packages) to be modified for a single functional change. Each touch point increases risk. Dissecting large code segments into services, for example, decreases the time needed to find the code to be modified, which reduces testing time.
With DevOps, automated testing decreases that duration further and minimizes the potential release impact on the production environment.
Compiling, packaging, and deploying large applications into production all at once is one of the major reasons for disgruntlement between development and operations. Such releases cause huge problems for the business and customers, with operations under the gun to find and rectify the failure, often with no development assistance.
That division ends with DevOps. Now that development and operations work together during the coding, testing, release, and production support phases, true partnerships develop that provide significant business value and team harmony.
Services mimic real-life situations, increasing focus. Here’s a bank analogy: when you step up to the teller to make a deposit, you expect a quick and problem-free transaction to occur.
Really, you care about little else. The teller does not need to know how you got the money, where you came from, or how you got to the bank (whether you drove or had someone drive you). This information doesn’t matter for the transaction to be completed.
For you, knowing how the bank verifies that you are a customer with an active account, how the money flows from the teller to the safe, how the transaction is audited internally, or which banking industry best practices for deposit transactions are being applied means little.
You simply want to hand the teller your cash and/or checks and a deposit slip and receive a receipt verifying the deposit into your account. Managing code as services or APIs, for example, supports real-life conditions by reducing the code to the smallest number of lines or functions needed to carry out its purpose.
Code that expects and accepts only a few “requests,” which then performs one or two discrete actions and finally returns the “response,” makes it possible to accept the “fail fast, fail forward” model.
Being able to deploy distinct code elements quickly, matched with the ability to deploy the next release version or the previous version, facilitates moving forward, even on failure.
The small program unit minimizes the production impact upon failure—maybe only a few people experience the problem instead of a large set of application users when large code deployments go wrong.
Instead of backing out a massive change because it would take too long to find the root cause for the failure, the small footprint can be overlaid quickly, rectifying the problem while potentially advancing the code.
This model makes sense, although years of “backing out” have incorrectly indoctrinated our perception.
Think about it; have you ever fallen backward when you trip while walking or running? No, most likely you recover without falling, or momentum keeps you moving forward even if you do fall. DevOps leverages momentum to maintain forward progression.
Remember, though, that failing forward cuts across the grain for DBAs who are used to protecting operational stability at all costs, making not rolling back failures seem an unnatural act. Experiencing frequent, successful fail-forwards is what brings DBAs fully on board.
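The fail-forward decision itself is simple enough to sketch. The following is a hypothetical illustration (the version numbers, health-check flags, and `release` helper are all invented for this example): rather than backing out a failed release, the pipeline rolls forward to a small corrected build.

```python
# Hypothetical illustration of "fail forward": on a failed release, deploy a
# small corrected build on top rather than backing out the whole change.
deploy_history = []

def deploy(version, healthy):
    """Record the deploy, then report whether its health check passed."""
    deploy_history.append(version)
    return healthy

def release(candidate, hotfix):
    """Try the candidate build; on failure, roll forward to the hotfix build
    instead of backing out to the previous version."""
    if deploy(candidate["version"], candidate["healthy"]):
        return candidate["version"]
    if deploy(hotfix["version"], hotfix["healthy"]):
        return hotfix["version"]
    return None

current = release(
    {"version": "1.4.0", "healthy": False},  # small release fails its check
    {"version": "1.4.1", "healthy": True},   # corrected build rolls forward
)
```

Because the change footprint is small, the overlay deploy is as fast as the original one, and the environment ends up ahead of where it started rather than back where it began.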
Continuous Integration, Continuous Testing
Besides implementing small code segments, there are two additional reasons why fail forward has proven successful: continuous integration and testing. For DBAs whom you mentor, that means shifting direction from isolated islands of specific tasks to inclusion directly into the code-producing effort.
Code, schema changes, and even job scheduling tasks have to assimilate into the software code process, including the way DBA code is built, tested, version controlled, and packaged for release.
Earlier in this blog, you learned that server clones, each built from the same script, eliminate platform variability, making application systems more resilient. For this reason, all software has to be managed without variability from start to finish. The only exceptions are new or modified code requested by the business or customers.
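One common way to assimilate schema changes into the same version-controlled, tested flow as application code is an ordered migration list backed by a version table. This is a minimal sketch under stated assumptions: SQLite stands in for the real database, and the table and migration names are invented; teams typically reach for a dedicated tool such as Flyway or Liquibase for this job.

```python
import sqlite3

# Hypothetical sketch: database changes checked in as ordered migrations that
# the CI pipeline applies and tests exactly like application code.
MIGRATIONS = [
    ("001_create_orders", "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)"),
    ("002_add_status", "ALTER TABLE orders ADD COLUMN status TEXT DEFAULT 'new'"),
]

def migrate(conn):
    """Apply any migration not yet recorded; re-running is a no-op."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_version")}
    for name, ddl in MIGRATIONS:
        if name not in applied:
            conn.execute(ddl)
            conn.execute("INSERT INTO schema_version (name) VALUES (?)", (name,))
    return len(MIGRATIONS) - len(applied)  # count applied this run

conn = sqlite3.connect(":memory:")
first_run = migrate(conn)   # both migrations apply
second_run = migrate(conn)  # idempotent: nothing left to apply
```

Because every environment replays the same ordered list, the schema itself stops being a source of variability between nonprod and production.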
The continuous flow of code into production may initially disorient DBAs because the release and post-release support model has been a brutalizing cultural norm for decades.
It is patterned like this: on deployment night, pull an all-nighter, and then get a little sleep before being called back into the office because the business is about to implode on itself (a total distortion of reality) if the problem is not fixed promptly.
After hours of troubleshooting, someone discovers that the C++ library was not updated on the production system, causing updated code to run incorrectly with the older library files.
In this case, the production system obviously was a huge variable, requiring separate work to upgrade the compiler that was missed as the release progressed. Variability burns you nearly every time.
When the production system has to remain in place, the best move is to clone the nonprod environments from the production server. Once the first nonprod server is built, the process can be automated to manage additional server builds.
When something like an upgrade to the C++ libraries is needed, test for backward compatibility; if successful, upgrade production, clone production, and start the nonprod builds.
When older code fails (perhaps due to deprecated commands or libraries) and forces the upgrade to be included in a larger release in which all affected code must be modified for the new libraries, very stringent change management processes must be followed.
This scenario is becoming rarer because agile development and database management tools have been built to overcome these legacy challenges.
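The upgrade flow just described (test backward compatibility, upgrade production, clone it, rebuild nonprod from the clone) can be sketched as a simple gate. Every function name, image name, and host below is a hypothetical placeholder for real provisioning steps.

```python
# Hypothetical sketch of the upgrade flow: verify backward compatibility
# first, then upgrade production and rebuild nonprod as clones of it.
def upgrade_flow(compat_test, upgrade_prod, clone_prod, nonprod_hosts):
    """Run the upgrade only when the compatibility test passes, then rebuild
    every nonprod host from the freshly upgraded production image."""
    if not compat_test():
        return []  # stop: older code would break on the new libraries
    image = upgrade_prod() and clone_prod()
    return [f"{host}:{image}" for host in nonprod_hosts]

builds = upgrade_flow(
    compat_test=lambda: True,            # backward-compatibility suite passed
    upgrade_prod=lambda: True,           # e.g., install the new C++ runtime
    clone_prod=lambda: "prod-image-v2",  # golden image taken from production
    nonprod_hosts=["dev01", "test01", "qa01"],
)
```

The point of the gate is ordering: nonprod is never rebuilt from anything but the image production is actually running, so the C++ library mismatch from the earlier story cannot recur.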
Tools of the Trade
Agile development and DevOps have not only changed how code is built, tested, released, and supported and how teams collaborate to be successful; new suites of tools have also been built specifically to transform the SDLC.
There is a movement away from waterfall project management—serialized code progression starting with development and then proceeding to test, integration, quality assurance, and production.
New opportunities to create applications in weeks or even days have led to products being produced and then held for release until the company can be officially formed and readied for business operations. That reality did not seem possible a short 10 years ago.
Powerful tools have enabled businesses to move from “scrape together a little money, spend most of it forming the company, start coding, go hungry, sleep in the car, release version 1 in desperation, and hope to generate enough revenue to fix the numerous bugs for release as version 2” to an early-capture revenue model in which the application is built and readied to release and generate revenue, possibly even while the paperwork to form the company is underway.
Imagine releasing an application on the day the company comes into existence, possibly even recognizing revenue on day 1. Today, if the product is conservatively successful, the continuously growing revenue stream allows focus toward new products instead of figuring out where the next meal comes from. Tools empower possibilities.
Best time ever for software startups!
Years of experience looking at performance metrics, CPU, memory, and disk space utilization, hit ratios, and SQL execution times translate easily into other tool sets. Even process building, test automation, regression, and release automation toolsets fail to challenge any but the most junior DBAs. Working with tools comes easily for DBAs.
Logically developing process flows to incorporate database administrative tasks accelerates the SDLC. The biggest challenge may be selecting which tools are needed from among the plethora of popular DevOps tools.
As DBAs progress through the stages necessary to transition (becoming educated and sharing knowledge, learning that small failures are part of the plan, morphing their tasks into the mainstream workflow, and becoming tool experts), DevOps teams become stronger through shared experiences, technical skills, improved collaboration, and (most importantly) trust.
Adding DBAs to DevOps teams gives the DevOps team members the opportunity to “mold” the DBAs. The previous challenge of getting a DBA to even consider a nonrelational database solution becomes an opportunity for the DBA to learn new database technologies. Just climbing over the fence gives new perspective.
Once DBAs buy into DevOps, learn the processes, and fully understand how database work can benefit the business rather than just the development team (the previous customer), the pipeline expands at the point of database change introduction, growing the code base as DBAs check in database changes along with infrastructure-as-code templates and scripts.
Cycle time shrinks because database changes are no longer an outlier to the process. Deployments smooth out and complete faster as DBA work is automated.
DBA Value Proposition
DBA participation in DevOps draws in a critical application availability and performance contributor: the database. Involving DBAs means that application code is evaluated from a different perspective, especially calls to the database. Database changes become integrated code for continuous integration and exhaustive testing.
DBAs can identify poorly executing queries and transactions and baseline production performance. They can get ahead of forthcoming code changes or new functionality by understanding the impact on the prepared environments, which gives DBAs time to analyze and implement performance-tuning enhancements before the additional load is present.
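Baselining production performance can be as simple as recording per-query timings and flagging anything that regresses against them before the new load arrives. The query names, baseline values, and tolerance below are invented for illustration.

```python
# Hypothetical sketch: compare observed query timings against a recorded
# baseline and flag anything that has regressed beyond a tolerance factor.
BASELINE_MS = {"get_order": 12.0, "list_customers": 40.0, "monthly_report": 900.0}

def regressions(observed_ms, tolerance=1.5):
    """Return queries running slower than tolerance x their baseline."""
    return sorted(
        name
        for name, ms in observed_ms.items()
        if name in BASELINE_MS and ms > BASELINE_MS[name] * tolerance
    )

# After a candidate release runs in a prepared environment, only the query
# that slowed down materially is flagged for tuning before go-live.
slow = regressions({"get_order": 13.0, "list_customers": 95.0, "monthly_report": 880.0})
```

Because the check is just code, it drops straight into the continuous testing stage, turning the DBA's tuning instinct into an automated gate.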
Problems become challenges for a larger team, compiling more experiences and skills into the pool of contributors to determine root cause and deploy mitigation.
DBAs’ experiences in other infrastructure areas add another layer of value by being able to assess the application and database by looking under the covers of the operating systems, storage, and network. Further discussion is ahead.
Closer and constant DBA and DevOps team collaboration improves product outcomes, stability, security, and performance, which leads to happier customers and improved business results. As DBAs better understand the business team’s use of the product, a disaster recovery solution or recovery-from-backup strategy can be customized to fit.
Giving developers the freedom to fire up virtual hosts with different database options enables consideration of risk early in the process.
A developer wanting to test a new data access service can test, retry, and destroy the virtual host to start over with a fresh host if necessary. DBAs scripting different template options applicable to different data platforms shifts experimentation away from production to early in the pipeline.
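The template options a DBA might script can be pictured as a small catalog that developers pick from when spinning up a disposable host. The template names, sizes, and `provision` helper here are assumptions for illustration; a real pipeline would hand the resulting request to a provisioning tool such as Terraform or Ansible.

```python
# Hypothetical sketch: a catalog of DBA-maintained build templates that
# developers use to spin up disposable database hosts for experimentation.
TEMPLATES = {
    "postgres-small": {"engine": "postgresql", "cpus": 2, "memory_gb": 4},
    "mongo-small": {"engine": "mongodb", "cpus": 2, "memory_gb": 8},
}

def provision(template_name):
    """Return a build request for a throwaway virtual host; a real pipeline
    would pass this spec to the provisioning tool for execution."""
    spec = TEMPLATES[template_name]
    return {"host": f"dev-{spec['engine']}-sandbox", **spec, "disposable": True}

# A developer experimenting with a document store gets a marked-disposable
# host without ever touching production.
request = provision("mongo-small")
```

Keeping the catalog in version control means the DBA's platform judgment ships with the templates, so experimentation stays both safe and self-service.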
DBAs are a good match for DevOps. Driven to improve performance, reliability, and system stability; and matched with the skills to adapt, analyze, and execute process improvements, DBAs can expand the DevOps team’s capabilities; reduce cycle time by pulling database changes into the continuous integration process;
contribute new test cases for improved bug detection; and get ahead of performance, load, and other operational challenges before production impact.
By investing in DBAs joining DevOps teams, DevOps leaders and engineers increase their influence and impact on the business. Applying proven DevOps processes to database changes, build templates, database selections, and broader platform considerations presents new opportunities that may previously have been resisted by those same DBAs.
DBAs get excited when their contribution can grow, they can grow, and the business can grow.