DevOps Automation (70+ Best DevOps Hacks 2019)


How to Achieve DevOps using Automation 2019

DevOps is drawing a line in the sand: it is taking a stand for software product delivery excellence. As agile teams produce less-defective code more quickly, DevOps teams need to solidify the infrastructure foundation supporting the application.

 

Both pieces are required to deliver software superbly and (more importantly) gain customer confidence in an organization’s capability to operate well. This blog explains the 70+ Best DevOps Automation Hacks used in 2019. 

 

Customers and internal business teams demand and deserve applications that are available, reliable, fast, secure, and functionally precise, but also delivered nearly on demand.

For decades, too many opportunities were missed or delayed because IT delivered software more slowly than the business needed. DevOps, along with Agile, refactors the software delivery process to deliver faster, more accurately, and with greater resilience.

 

Company leaders need to see that technology investments benefit the company. Revenue growth, a new sales channel, and reduced “lights-on” data center costs represent just the tip of the iceberg by which the technology organization can gauge success. Automation accelerates product delivery, and code control reduces change coordination chaos while providing a recovery path.

 

Craftsmanship

IT teams, whether organized as DevOps; plan, build, run; or technology-siloed soldiers, understand that the mission of delivering software requires more than brute-force implementations; instead, it requires what one CIO calls “craftsmanship.”

 

The product quality difference between a metal works craftsman, or “Meister,” and me wielding a 4-pound hammer and a few wrought iron bars does not require a rocket scientist to see.

 

Yet “acceptable” has been the software building bar height for too long (decades) across the industry. Setting the bar much higher to rate software as “excellent” challenges project, development, testing, validation, and implementation methods, and also people’s tried-and-true abilities.

 

As the culture and organization shift gears and accelerate, the goal is to complete software delivery without losing control and crashing head-on into a wall.

 

Two mechanisms contributing directly to software delivery excellence are automation and code control. Ironically, they have existed for decades, yet are now being leveraged to push a change within companies and across the IT industry.

 

Human versus Computer


The human element introduces much of the imbalance (risk) that wreaks havoc on software delivery, especially when code release frequency slams into operational control. Two human perspectives—developers delivering code and operations teams protecting the ecosystem—tend to ignite all-night firefighting.

 

As mentioned in an earlier blog, computers were first used to automate repetitive and mundane tasks, yet today many steps in the code build, test, release, deploy, operate, and support value stream are completed manually. 

 

DBAs may script a set of commands to keep from having to be command-line commandos during the release, but the overall process is very choppy and inefficient. And that DBA script is not likely to be in the source code repository.

 

Computers can repeat tasks consistently, without errors, and far faster than humans, so it is surprising how often humans choose to repeat tasks manually. Changes moving from development to production pass through intermediate environments, sometimes requiring the change to be executed a handful of times or more.

 

Each manual change introduces the risk of a mistake, especially when the time elapsed since the last change is long enough to make recall fuzzy.

Whether a step is overlooked or a few steps are completed in a different order, the environment may end up different, and the release process has certainly changed. Automating and validating the process removes that risk and improves execution time.

 

In contrast, DevOps shrinks non-value-add work to the bare bones: keeping capacity in place to spin up virtual servers quickly, accepting the agile idea of frequent small releases that require far less release and operational readiness, and choosing to produce working software over documentation.

 

As agile development practices continue to expand globally throughout the industry, Operations potentially becomes the bottleneck, which is never a good reputation characteristic. 

 

Rising DevOps acceptance and maturation affords a systemic approach to operational efficiency and process leaning, making Operations a facilitator of fantastic software delivery. Iterative improvements of operational processes bring the same value model as iterative software releases.

 

Conflicting Interests


Developers want to get new products, enhancements, and bug fixes through the pipeline quickly because they are incentivized to complete projects on time and on budget.

 

Operations, including DBAs, want to preserve control, if for no other reason than not wanting to be on the phone overnight or all weekend, explaining why the company’s most critical application is metaphorically a smoking heap of elephant dung.

 

Control—change control or change management—provides the perception of stabilization or risk reduction while deceiving us all because most of us have experienced the opposite effect. Even with extra prerelease diligence performed, large releases still rattle business operations when the code “rubber” meets the production “road.”

 

DBAs embracing DevOps have the occasion and obligation to offer better and faster ways to make and process database changes. When the SDLC road trip has to divert onto an old country road to pick up schema changes, only to backtrack down that same old country road, inefficiency (read that as time lost) breeds.

 

DevOps tools facilitate schema change automation that can be inducted into the automated change management process. Being confident in the automated build process allows DBAs to deliver consistent deployments while staying aligned with the progressing code necessitating the database change.

 

Automation Benefits


Benefits stack up quickly with automation incorporation: shortening the software release cycle, reducing defects, integration evidence, exhaustive testing, repeatability, auditability, performance scrutiny, and documented results. DevOps DBAs produce value and reduce effort through automation.

 

I remember one Oracle shop in which a DBA would manually check all the production databases every day. The process took 4 to 5 hours every workday. A new DBA joined the team and, as the newbie, was given the honor of taking over the daily checks.

 

Day 1: the newbie watched the DBA currently responsible for the checks complete them.

Day 2: the newbie manually completed the checks with oversight from the other DBA.

Day 3: the newbie manually performed the checks and built scripts to automate them.

Day 4: the newbie ran the scripts and double-checked the results manually. Once verified, the newbie scheduled the checks to execute the next morning, with the outcomes e-mailed to the DBA team and director.

 

Day 5: the DBA read the e-mail and was recognized by the director for being “innovative.” That sounds like a great first week!
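That week-one story translates directly into code. Below is a minimal sketch of such daily-check automation in Python, assuming a hypothetical inventory of databases and check queries; the names (DATABASES, CHECKS), the SQLite driver, and the local mail relay are illustrative stand-ins, not details from the original shop.

import smtplib
import sqlite3  # stand-in driver; a real shop would use cx_Oracle, psycopg2, and so on
from datetime import date
from email.message import EmailMessage

# Hypothetical inventory of databases and the checks to run against each one.
DATABASES = ["/data/sales.db", "/data/inventory.db"]
CHECKS = {
    "orphan_rows": "SELECT COUNT(*) FROM orders WHERE customer_id IS NULL",
    "total_rows": "SELECT COUNT(*) FROM orders",
}

def run_checks(db_path):
    """Run every check query against one database and collect the results."""
    results = {}
    with sqlite3.connect(db_path) as conn:
        for name, query in CHECKS.items():
            try:
                results[name] = conn.execute(query).fetchone()[0]
            except Exception as exc:  # record the failure instead of stopping the run
                results[name] = f"ERROR: {exc}"
    return results

def build_report():
    lines = [f"Daily database checks for {date.today()}"]
    for db in DATABASES:
        lines.append(f"\n{db}")
        for check, value in run_checks(db).items():
            lines.append(f"  {check}: {value}")
    return "\n".join(lines)

def email_report(body, recipients=("dba-team@example.com",)):
    msg = EmailMessage()
    msg["Subject"] = "Daily database checks"
    msg["From"] = "dba-automation@example.com"
    msg["To"] = ", ".join(recipients)
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay is available
        smtp.send_message(msg)

if __name__ == "__main__":
    email_report(build_report())  # schedule via cron, for example "0 6 * * 1-5"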

 

Release Cycle Shrink

Retail shrink usually means that employees and/or customers are stealing from a store. That kind of shrink is bad. Here, shrink means reducing the time needed to code, promote, test, and validate the infrastructure and application changes required to ready the next release by “stealing” back time—time that was available but not leveraged—which shortens the duration needed for each release.

 

There are at least three “times” that need to be refactored.

 

Testing


Do not do it . . . manually. Developers produce abundant amounts of usable (bug-free) code if they are allowed to focus. Unfortunately, developer efficiency degrades when testing, documentation, project scheduling, and meetings divert time and energy. DBAs face these same distractions, plus they spend time doing operational support.

 

The easiest place to win back time for punching code is minimizing manual testing. DBAs have the same opportunity to maximize output over time. DBAs who assert themselves as DevOps team members need to start thinking like developers about database changes.

 

Agile development practices include guidelines to reduce meeting duration while increasing communication. Agile frameworks such as Scrum, Kanban, and XP (Extreme Programming) focus developer effort on producing code.

 

The Agile Manifesto (Manifesto for Agile Software Development) includes “Working software over comprehensive documentation” as a value statement. Further, the manifesto states one of its 12 principles as “Working software is the primary measure of progress.”

 

Testing beyond compile and simple function execution belongs in the automation realm. Leveraging virtualization automation enables DBAs to repeat testing until an acceptable product emerges.

 

Spinning up a database server from an infrastructure as code template, replicating the current production footprint, applying proposed database schema changes, performing tuning tweaks, or applying and testing access privilege grants—that is doing DevOps.
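As a rough illustration of that kind of disposable, template-driven test environment, the sketch below shells out to Docker to stand up a throwaway PostgreSQL server, applies a proposed schema change, and runs a smoke test before tearing everything down. The image, container name, and SQL statements are hypothetical placeholders; a real pipeline would pull them from the infrastructure as code template and the source code repository.

import subprocess
import time

IMAGE = "postgres:15"
CONTAINER = "schema-change-test"
SCHEMA_CHANGE = "ALTER TABLE orders ADD COLUMN promo_code VARCHAR(20);"
SMOKE_TEST = "INSERT INTO orders (id, promo_code) VALUES (1, 'TEST10');"

def sh(*args):
    """Run a command and fail loudly, so a broken step never progresses."""
    return subprocess.run(args, check=True, capture_output=True, text=True)

def provision():
    # Spin up a throwaway database server from the same image the template specifies.
    sh("docker", "run", "-d", "--rm", "--name", CONTAINER,
       "-e", "POSTGRES_PASSWORD=secret", IMAGE)
    time.sleep(10)  # crude readiness wait; real automation would poll for readiness

def psql(sql):
    return sh("docker", "exec", CONTAINER, "psql", "-U", "postgres", "-c", sql)

def teardown():
    sh("docker", "stop", CONTAINER)

if __name__ == "__main__":
    provision()
    try:
        psql("CREATE TABLE orders (id INT PRIMARY KEY);")  # stand-in for the production footprint
        psql(SCHEMA_CHANGE)  # the proposed change under test
        psql(SMOKE_TEST)     # verify the change actually works
        print("schema change validated")
    finally:
        teardown()           # the environment is disposable by design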

 

First-time execution may reveal object invalidations triggered by the schema change because of dependency references. Changing a table column referenced by an index, for example, may invalidate that index.

 

The index fix has to be incorporated into the schema change code. Testing the database changes until perfecting implementation eases staff deployment tensions and mitigates change-related errors.

 

Although this cycle seems tedious, tools make the process easier. Anything and everything that can be automated to exhaustively test builds, changes, code, configurations, or anything else testable is a win for product delivery.

 

As changes are checked into the repository, the CI server is ready to pounce. The CI server may react on every change commit or be configured to run at a scheduled time (the industry suggests at least once daily).

 

Check-in occurs after developers build code and run simple functional testing to prove that the expected results did happen. The CI server has responsibility for ensuring that the objects representing the delta between the previous run and the new run are tested and proved to work as designed in the context of the whole application in the planned infrastructure environment.
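A bare-bones sketch of that delta-testing responsibility might look like the following, assuming a hypothetical repository layout in which database change scripts live under db/ and tests under tests/; the file-to-suite mapping is illustrative only.

import subprocess

def changed_files(previous_commit, new_commit="HEAD"):
    """Ask git for the delta this CI run is responsible for testing."""
    out = subprocess.run(
        ["git", "diff", "--name-only", previous_commit, new_commit],
        check=True, capture_output=True, text=True)
    return [f for f in out.stdout.splitlines() if f]

def select_suites(files):
    """Naive mapping: database change scripts get database tests, everything else gets unit tests."""
    suites = set()
    for f in files:
        suites.add("tests/database" if f.startswith("db/") else "tests/unit")
    return sorted(suites)

def run(previous_commit):
    for suite in select_suites(changed_files(previous_commit)):
        # pytest returns non-zero on failure, so check=True stops the pipeline.
        subprocess.run(["pytest", suite], check=True)

if __name__ == "__main__":
    run("origin/main")  # compare the new commit against the last integrated trunk state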

 

Once the code branches pass all testing, a new product version is created as the code trunk. User acceptance testing, or product owner assessment, still plays a valuable role in the SDLC. Here is one aspect in which iterative development helps to ensure product success. The user interface (UI) significantly defines the user experience (UX).

 

You may make the best curry chicken on the planet, but your now-unhappy customer will not try it if it is served in a plastic bucket. Poor presentation has long plagued software development: an external customer may decide not to use the application, or efficiency suffers when an employee must use the company’s application to fulfill job requirements.

 

Overcoming this typically late-stage mess means gathering immediate feedback from the customer or product owner early in the development process.

 

Iterative development includes producing an early model or prototype of the UI. Although there might not be a single line of code behind the storefront, the product owner still sees the interface and has the opportunity to recommend moving forward, make a few adjustments, or give the team the opportunity for a “do over.” No matter the outcome, timely feedback at this point does not result in expensive rework.

 

Scrum uses sprints, predetermined time allocations, to manage work demand. Daily sprints are designed for developers to select a unit of work that can be completed that day, making iteration a good fit.

 

The team’s day 2 UI presentation has the benefit of the product owner’s feedback from the previous day, which improves the UI to better imitate the requested look and feel. Henceforth, daily refinements and cosmetic enhancements further improve the UI.

 

Scrum also uses sprints to define the duration for when production-ready software can be deployed (2 weeks is a common interval). As the 2-week sprint ends, the UI is fully developed and fully vetted.

 

Parallelism


Getting work done without actually doing the work is awesome. Granted, you had to do something smart upfront, with repeatability as the goal. Testing was a step traditionally performed by a QA team after the software development completed. Two distinct phases, coding and testing, extended the release cycle.

 

Agile and DevOps are adamant that automated testing should occur early in the coding process. Many recommend that unit tests be created before the code unit is written, making the programming goal meeting the expected test results.

 

Once the unit is written and checked in to the source code repository, the CI process begins testing the code: testing the code integrated with the rest of the programs, including checking for performance and security gaps.

 

This process allows people to do other work in parallel. No, QA testers are not out of a job; instead, they focus on ensuring that defects are discovered by creating extensive, deep-diving test code to hunt down software flaws. The act of testing is automated, and the test creation can be supplemented by redirected QA team members.

 

As DBA changes are injected into the workflow as just more code, testing needs to broaden in scope to include the database code. DBAs and QA team members should follow the same advice as often as feasible: build the test and then the code. Interrogating database schema changes may include verifying the metadata definition.

 

Inserting or updating data in that modified column seems to be a reasonable test. Again, test for invalidations caused by the change so that the corrective action can be appended to the deployment script. This could be the process flow (a rough code sketch follows the list):

  • 1. Capture current settings
  • 2. Make changes
  • 3. Verify changes
  • 4. Check for invalidations
  • 5. Fix invalidations
  • 6. Record changes for audit
  • 7. Recheck for invalidations
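Here is a minimal sketch of that flow, using SQLite purely for illustration; a real Oracle or SQL Server implementation would query its own dictionary views for invalid objects, and the table and column names are hypothetical.

import json
import sqlite3
from datetime import datetime

def snapshot(conn):
    """Step 1: capture current settings (here, the schema definitions)."""
    return {row[0]: row[1] for row in
            conn.execute("SELECT name, sql FROM sqlite_master WHERE sql IS NOT NULL")}

def apply_change(conn, ddl):
    """Step 2: make the change."""
    conn.executescript(ddl)

def verify(conn, table, column):
    """Step 3: verify the change landed by reading the table metadata."""
    columns = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    assert column in columns, f"{column} missing from {table}"

def invalidations(conn):
    """Steps 4 and 7: report anything the integrity check flags as broken."""
    return [row[0] for row in conn.execute("PRAGMA integrity_check") if row[0] != "ok"]

def record(before, after, problems, path="change_audit.json"):
    """Step 6: append an audit record of what changed and what was found."""
    with open(path, "a") as fh:
        json.dump({"ts": datetime.utcnow().isoformat(), "before": before,
                   "after": after, "invalidations": problems}, fh)
        fh.write("\n")

if __name__ == "__main__":
    with sqlite3.connect(":memory:") as conn:
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
        before = snapshot(conn)                                               # step 1
        apply_change(conn, "ALTER TABLE orders ADD COLUMN promo_code TEXT;")  # step 2
        verify(conn, "orders", "promo_code")                                  # step 3
        problems = invalidations(conn)                                        # step 4
        # Step 5 (fix invalidations) would rebuild or recompile any broken objects here.
        record(before, snapshot(conn), problems)                              # step 6
        assert not invalidations(conn), "broken objects must never progress"  # step 7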

 

Whether the fix is a call to another script or a dynamically built SQL statement matters not. The critical element is not leaving a broken object in place. As people work in parallel while testing, the testing itself should also execute in parallel. Crunching the CI cycle into the tiniest possible time slot positions testing in several ways:

 

If a problem occurs, retesting can be completed without interfering with post-testing efforts or running into the business day; testing can be repeated to validate results; and additional testing can be performed without extending the test window, so extra test cycles can occur.

Moving from once-daily to twice-daily testing is not possible if testing takes 15 hours running serially. Many threads running in parallel may complete the testing in 4 hours, so testing can occur twice or even four times per day.
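As a sketch of how the serial-to-parallel shift works in practice, the snippet below fans hypothetical test suites out to worker threads so the wall-clock time approaches that of the slowest suite rather than the sum of all suites; the suite names are illustrative.

import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical test suites that are independent enough to run side by side.
SUITES = ["tests/schema", "tests/functional", "tests/performance", "tests/security"]

def run_suite(suite):
    """Each suite runs in its own worker; wall-clock time approaches the slowest suite."""
    result = subprocess.run(["pytest", suite], capture_output=True, text=True)
    return suite, result.returncode

def run_all(max_workers=4):
    failures = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(run_suite, s) for s in SUITES]
        for future in as_completed(futures):
            suite, rc = future.result()
            print(f"{suite}: {'PASS' if rc == 0 else 'FAIL'}")
            if rc != 0:
                failures.append(suite)
    return failures

if __name__ == "__main__":
    raise SystemExit(1 if run_all() else 0)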

 

Not Working Does Not Mean Not Working


Just because you have finished working for the day—after checking in your code to let the CI servers start chugging through automated testing—does not mean the work has finished. Making something happen during your “down” window is a smart way to prepare for the next day while you complete other tasks.

 

Time and money are the finite resources that prevent IT shops from maintaining the perfect infrastructure and delivering every value-add business request. Managing to the best outcome, effective IT leaders set priorities by giving staff the flexibility and tools to produce the best product.

 

That practice has become easier as DevOps and its associated tool products have matured. DevOps team members know that product delivery cycles must contract; DBAs have to be able to meet the same timelines. Being able to get work done while not actually working benefits all.

 

Identifying distinct units of work that can be completed automatically and without human overwatch is not new. IT pros have for years executed backups, index builds or rebuilds, ETL processes, data scrubs, and other work during off-hours. DevOps expands the type of work that can be included.

 

For instance, a DBA may spend the day perfecting (or at least attempting to perfect) a database-provisioning process. To validate, the DBA may construct a series of test builds scheduled to run overnight.

 

Although the DBA is relaxing, maybe writing a book, testing hacks, or building a gaming computer, the work gets done, builds an audit log, and awaits review the next morning.
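An overnight run like that can be as simple as a loop with an audit log. The sketch below assumes a hypothetical provisioning script and configuration files; the point is the unattended execution and the morning-ready log, not the specific tooling.

import logging
import subprocess
from datetime import datetime

# Hypothetical provisioning script and configuration files the DBA wants exercised overnight.
BUILD_SCRIPT = "./provision_database.sh"
CONFIGS = ["small-oltp.yml", "large-oltp.yml", "reporting.yml"]

logging.basicConfig(filename="overnight_builds.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def test_build(config):
    """Run one unattended build and record the outcome for the morning review."""
    started = datetime.now()
    result = subprocess.run([BUILD_SCRIPT, config], capture_output=True, text=True)
    status = "SUCCESS" if result.returncode == 0 else "FAILED"
    logging.info("config=%s status=%s duration=%s", config, status, datetime.now() - started)
    if result.returncode != 0:
        logging.error("config=%s stderr=%s", config, result.stderr.strip())
    return result.returncode == 0

if __name__ == "__main__":
    # Schedule this script itself for the overnight window (cron, Task Scheduler, or the CI server).
    results = [test_build(c) for c in CONFIGS]
    logging.info("overnight run complete: %d/%d builds succeeded", sum(results), len(results))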

 

Repeating this cycle for a few days should be enough time to consider the build process viable. Not leveraging nonwork hours would be wrong, just plain wrong. DBAs tend to never be wrong; just ask one of us! (Just a joke, my developer friends.)

 

Automating Out Errors



DBAs who repeatedly execute command sequences or scripts by hand introduce errors, bottlenecks, and inefficiency. Agile development accelerates the software build process, leading to rapid code deployment readiness with proven error reduction.

 

If the monthly production rollout involves 15 minutes of errorless code deployment and 6 hours of error-prone database work, DBAs surely realize that the database deployment process exemplifies what “being out of kilter” means.

 

Inconsistency results in problems; automation instills consistency. Even if the automation produces the wrong outcome, it returns a consistently incorrect outcome, which can be easily isolated and rectified.

 

DevOps mandates extensive testing for each element within the value proposition of better software faster. Thoroughly testing each foundational application component (program function, procedure, API, microservice, container, a database call, or audit service) represents the starting point for error elimination because an error is never allowed to progress.

 

DevOps teams should continuously assert the mantra “never progress errors.” As the code advances from singular functional purpose to an integrated form, intense testing must examine every attainable execution derivative to ensure code compatibility and expected outcomes, without error.

 

As code travels from the branch to the trunk, automated regression testing provides the final vetting of the software before it is considered release- or deployment-ready—again, 100% error free.

 

Taking advantage of every minute of the day to perform work without your participation increases your overall effectiveness and contribution to the organization.

 

Zero Defects!

 
DevOps shifts the “defects can be worked around” cultural acceptance to “no bug lives past today” diligence. “Today” sounds aggressive, and it may be improbable in fledgling DevOps shops; however, team members begin to value quality out of respect for the customer, which leads to the latter mantra.

 

As DevOps matures in the organization, the improbable becomes routine. By not having to plan according to a monthly release schedule, DBAs have the flexibility and empowerment to correct defects quickly if the need arises. Remember that excessive testing spawns flawless releases.

 

Maintaining zero defects across the IT supply chain challenges existing infrastructure and deployment paradigms, making DevOps the release hero. The effort spent on prerelease testing provides a better return than the effort spent troubleshooting and determining root cause and plausible remediation of the production system.

 

Broadening the concept, infrastructure as code is not limited to server or database provisioning. The right devices combined with the right tools mean that load balancers can be automatically updated during rolling releases.

 

Moving away from sequential team efforts (team A completes task 1, team B completes task 2, team A completes task 3, and so forth) decimates error opportunity through excessive automation that is not used until it runs correctly and efficiently, leaving behind a comprehensive audit trail.

Not introducing errors from manual sequential change steps is a great step toward maintaining a zero-defect posture.

 

Death by Workaround


Workarounds have a nasty habit of sticking around longer than expected, and for sure until there is a chance to cause further disruption or grief. Imagine “softening” application access to deliver a large-revenue customer’s demanded application enhancement. Great, the customer is happy, so that gets registered as a win for the team.

 

Six months later, your company is being sued by the same customer who blames your squishy access security for a major data breach. Internal workarounds create similar problems, mainly because the production workaround never gets applied to the preproduction or development environments.

 

New features rolling toward production then pass all testing but fail when released. After hours of multiteam investigations, the forgotten workaround is rediscovered as the culprit.

 

Platform consistency must include workarounds introduced into production. DBAs implementing workarounds must incorporate the workaround into the build automation to ensure consistency in the nonprod environments to eliminate production surprises. This obligation includes positive workarounds, such as adding an index to improve query performance.

 

Adding an index seems innocuous until another query that performed well in development and testing runs ten times more slowly in production. The availability of the workaround index caused the optimizer to select an execution plan that was not previously obtainable.

 

Two equally important mandates—to drive out defects before the production release and, when a production defect is discovered, to redirect energy toward its elimination—amp up software quality so that customers notice, providing you with a competitive edge through customer loyalty.

 

Orchestration

Evolving from automation to orchestration means threading together individual automations to build an efficient workflow. After meticulously building, testing, and validating the various automation packages used to construct web, app, and database servers; load test data; and finally run verification scripts, stepping through a logical process makes sense.

 

Although each automation can be executed in isolation, the true power of automation comes from purposefully designed workflows that meet expected build demand.

 

A developer needing a new application environment would appreciate and benefit from a chained set of automation. An easy click or two starts a cascading progression of web, app, and database server builds; test data loading; and validation.

 

This list is simplistic, but the positive results are nearly unlimited.

Orchestration is illustrated by security patching and scanning; code compilation; firewall and load balancer configurations; monitoring agent installs; synthetic transactions; audit log transmission; backup configuration; encryption; certificates; metadata loads; and everything else that needs to be built, configured, or reported—all combining to provide the needed infrastructure and application, which enables further code development or production deployment.

 

Orchestration offers amiable outcomes. DevOps engineers produce automation designed to allow the consumer—DBA or developer—to customize changes, whether allowing the selection of different database products or digesting a spreadsheet of access requests. Pieces from here, there, and over there align to produce an expected product.

 

KISS (keep it simple, stupid) brutally challenges us not to do what we tend to do too often: overengineer. To paraphrase Einstein, “If you can’t explain it simply, you do not understand it.”

The general challenge for IT team members is to not overcomplicate things from a lack of understanding. Simple, elegant, and effective solutions take much more brilliance to implement.

 

Making it possible for DevOps team members to hit the “easy” button to produce needed environments requires significant behind-the-scenes orchestration, during which design and performance are critical, complexity is unwelcomed, and the results are astonishing.

 

Automated testing provides a means to complete 99% of the testing needed before product launch, leaving the final most critical 1% to be done manually: user acceptance testing or (for Agile) product owner acceptance.

 

Sitting down with the customer, walking through the product to determine usability, cosmetics, workflow, understandability, and general awesomeness can be a disillusioning experience. Many application products have died on release after millions of dollars were invested because the product was unusable for the customer.

 

The iterative cycle of building, revealing, gathering product owner feedback, and realigning using agile practices is intended to stop product DOA scenarios. Although frequent face time with the product owner to demonstrate the product and accept feedback favors final product acceptance, surprises can happen, so user acceptance remains a critical-path approval.

 

Because of the recurrent product display and owner feedback, user acceptance may take 15 minutes, whereas a waterfall user acceptance test could take days or even weeks (remember that waterfall projects are relatively larger in scope per release).

 

Great testing requires an aggressive posture: DevOps team members openly challenge each other, including those who say, “My tests will crush your code!” Keeping score can add to the friendly competitiveness.

 


 

Performance


The modified code should execute at least as fast as the previous version, given that no functional elements have been added. Based on how comparable code performs, extrapolating baseline performance expectations is possible.

 

For instance, if most production searches, regardless of the search criteria, complete in .1 second on average, you can state that all search code must execute and return the result set in .1 second or less.

As you mature the performance standards, searches may get divided into groupings: simple versus complex, or structured data versus unstructured data.
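Turning such a standard into an automated gate is straightforward. The sketch below times hypothetical search queries against illustrative thresholds and fails the run when a query exceeds its budget; SQLite, the table, and the threshold values are stand-ins, not a recommendation.

import sqlite3
import time

# Hypothetical thresholds derived from the production baseline described above (seconds).
THRESHOLDS = {"simple_search": 0.1, "complex_search": 0.5}

QUERIES = {
    "simple_search": "SELECT * FROM products WHERE id = 42",
    "complex_search": "SELECT category, COUNT(*) FROM products GROUP BY category",
}

def timed(conn, sql):
    start = time.perf_counter()
    conn.execute(sql).fetchall()
    return time.perf_counter() - start

def check_performance(conn):
    over_budget = []
    for name, sql in QUERIES.items():
        elapsed = timed(conn, sql)
        limit = THRESHOLDS[name]
        print(f"{name}: {elapsed * 1000:.1f} ms (limit {limit * 1000:.0f} ms)")
        if elapsed > limit:
            over_budget.append(name)
    return over_budget

if __name__ == "__main__":
    with sqlite3.connect(":memory:") as conn:
        conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, category TEXT)")
        conn.executemany("INSERT INTO products (category) VALUES (?)", [("books",)] * 1000)
        failing = check_performance(conn)
        assert not failing, f"queries over budget: {failing}"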

 

As experience grows among the team and analytics provide indisputable evidence for performance, the metrics selection should expand so that data inserts, updates, and deletes also have defined performance expectations.

 

It astonishes me how many times I have heard, “The update took only half a second.” I like to remind that person that half a second is “like forever” in computer time.

 

DevOps has also brought about the mindset that not only does the code (application or database) have to produce the correct outcome; it also has to produce it without negatively impacting performance.

 

Code Control


Gaining control over DBA code first requires the code to be managed like application code, using a source code repository. Versioning, check-in, and check-out provide an audit trail, making it much easier to determine when an error was introduced, or when a change preceded or did not get applied with the matching code change.

 

The CI test executions should catch these problems, allowing DBAs to adjust the code and resubmit. This additional accountability is good for DBAs. Whether implementing infrastructure as code for a database install or schema build, a data load for testing, or a simple create-index script, all the code needs to be pulled by version from the source code repository for execution.

 

Check-in Snowballs

CI servers anxiously await source code check-ins because that is when CI servers get to shine. Remember that everything is considered code. As DBAs, developers, and other team members commit changes to the source code repository, the CI server goes to work by testing each autonomous code element.

 

If hundreds of people commit changes before the daily trunk integration, the CI server work snowballs quickly into a significant workload. Fortunately, the automated testing progresses proficiently through the test sequence leading to a deploy-ready state.

 

Continuous Delivery (CD)

 

DBAs participate in CD by uniting with DevOps team members to deliver the promise of always-ready database server deployment and database application code. Unlike developers, who focus primarily on application delivery, DBAs contribute to both infrastructure database builds and application elements. DBA infrastructure as code assimilates with the OS build package to facilitate automated database server provisioning. Additionally, DBA schema, stored procedure, trigger, auditing, and data loading code aids application delivery.

 

Easy to Roll Back or Roll Over

By design, extensive CI testing should discover and report bugs. DBAs whose code has defects need to resolve them quickly because it is imperative that the application always be in a deployable state.

 

DBAs can refer to the source code repository to determine whether the defective code should be rolled back to a known working version or to roll over the defective code with a newer version. Perhaps a needed schema change was overlooked, causing a mismatch between the database and the application code.

 

A DBA could quickly build a new version containing the schema change, commit it to the source code repository, and allow the CI server to execute the test sequence again—successfully this time. DevOps truly leans toward advancing code rather than rolling back to code that obviously needed to be replaced.

 

Auditable

 
Version control provides a very beneficial change: an audit trail. Auditing serves many purposes; whether providing responses to external governing bodies or to internal security or audit team inquiries, the electronic paper trail tells the necessary story. For DevOps continuous delivery, the versioning supports the bug-free, application-readiness premise.

 

Code failing the “shock-and-awe” quantity of functional and regression testing demands immediate remediation. Interrogating the code versions makes it easy to determine where the defect was introduced.

 

Managing Chaos

IT shops continue to struggle to manage large and complex infrastructures. DBAs struggle with the database subset of infrastructure, not to mention the application demand on the databases. Scale introduces variability risk.

 

A DBA team that manages a handful of databases may be able to manually maintain the databases, keeping deployments consistent, and maintaining performance and security while assuring recoverability. Amping up the number of databases to 1,000 with few staff additions makes the scale and scope unmanageable.

 

Drift management software helps keep basic infrastructure configuration inline, leaving change management to the DBAs. Determining the lowest common denominator pertaining to database configuration becomes the base build template to be applied for every build. Being confident that every deployed database has the same attributes allows DBAs to focus elsewhere.

 

The next level may be scale, where a development deployment configuration includes less compute and memory consumption. For builds, having a size option prevents resource waste while still helping with overall environment control. Exceptions, aka special features, become the last frontier in which significant deployment variation can be introduced.
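One way to picture this base-plus-size approach is a build template merged with a small set of approved tiers and explicit exceptions, as in the following sketch; every name and value here is illustrative rather than taken from a real shop.

# One base template that every database build inherits, plus approved size tiers.
BASE_TEMPLATE = {
    "version": "19c",
    "character_set": "AL32UTF8",
    "audit_enabled": True,
    "backup_policy": "daily-incremental",
}

SIZE_TIERS = {
    "dev":  {"cpus": 2, "memory_gb": 8,  "storage_gb": 100},
    "test": {"cpus": 4, "memory_gb": 16, "storage_gb": 250},
    "prod": {"cpus": 8, "memory_gb": 64, "storage_gb": 1000},
}

def build_spec(name, tier, exceptions=None):
    """Merge base plus tier; exceptions stay explicit so drift remains visible."""
    if tier not in SIZE_TIERS:
        raise ValueError(f"unknown tier: {tier}")
    spec = {"name": name, **BASE_TEMPLATE, **SIZE_TIERS[tier]}
    spec["exceptions"] = exceptions or {}
    return spec

if __name__ == "__main__":
    print(build_spec("orders-db", "dev"))
    print(build_spec("orders-db", "prod", exceptions={"partitioning": True}))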

 

Minimizing database variation while empowering application and business capabilities can be balanced. Forward-thinking and well-planned builds shrink the scale risk by limiting the number of possible database permutations deployed. Variable consistency seems oxymoronic, yet many companies have successfully leveraged it.

 

Car makers have worked diligently to maintain parts consistency—using the same parts in as many models as possible while still offering a variety of models to customers. Printer manufacturers separated the power and language components from the core printer product, making printers consistent while offering variation by customer geography.

 

A base printer, to which the needed power module and cord and language set are added, can be shipped anywhere in the world, making the printer functional for the local environment. The cost savings and product maintenance savings have validated the variable consistency idea.

 

Bare Bones Disaster Recovery


Having an application data backup and the source code repository means that you can recover the environment, period. Infrastructure as code can be used to provision host servers—anywhere you can find the resources, deployment automation can configure the application, and the data recovery completes readiness for business operations.

 

That is the power of Agile and DevOps pertaining to disaster recovery. Sure, that is an unlikely recovery strategy for many organizations, but having the capability reveals program maturity.

 

After all, fully automated server provisioning, application code deployment, and data recovery are foundational DevOps goals. This capability can also be leveraged to disperse the application for geographical separation and multivendor cloud redundancy.

 

Note: For context, disaster recovery pertains to the catastrophic loss of a data center, requiring production processing to be recovered at an alternate, geographically separated data center.

 

Disaster recovery can be done in many ways. Personally, I like the 5 Rs:

  • 1. Replicate: disk replication for the most critical systems
  • 2. Recover: secondary systems can be recovered from virtual tape backups
  • 3. Redirect: DRaaS makes it possible to redirect connections to the secondary system
  • 4. Rebuild: use infrastructure as code templates to spin up virtual hosts
  • 5. Retire: it may make more sense to not invest in noncritical and/or antiquated systems recoveries

 

Tribal Knowledge Retained

Staff turnover happens, but when the source code repository contains all the database-related code, you are much less likely to hear the just-hired DBA say, “I have no idea how the previous DBA implemented that change.”

 

Being able to check the source code repository, change management logs, and deployment logs provides the details needed to understand how a change was implemented. The automation also allows code to continue progressing without having to wait for the new DBA to “get up to speed.”

 

Sure, the new DBA needs to quickly understand the database automation and be ready to modify the code as needed, but as DevOps preaches, no one person should be the only resource capable of doing a specific task.

 

Other team members should be able to handle code modifications and even simple database changes if a DBA is not available. Speed and control are benefits driven by excellent code-progressing automation, not by more and more people talking about risk.

 

DBaaS, IaaS, and PaaS


Proper orientation, or level-setting ourselves, allows us to leverage the knowledge foundation we already have to gain additional knowledge. Software as a Service (SaaS) is an acknowledged winner in the “as a Service” product realm, so let’s start there before engaging with the offerings named in this section’s title.

SaaS



Software as a Service (SaaS) offerings have successfully penetrated organizations across industries, continually growing market share while embedding the term “as a Service” into our language:

 

“…worldwide SaaS and cloud software performance by the vendor in 2014 and anticipated performance through 2019. The cloud software market reached $48.8 billion in revenue in 2014, representing a 24.4% YoY growth rate. IDC expects cloud software will grow to surpass $112.8 billion by 2019 at a compound annual growth rate (CAGR) of 18.3%.

 

SaaS delivery will significantly outpace traditional software product delivery, growing nearly five times faster than the traditional software market and becoming a significant growth driver for all functional software markets. By 2019, the cloud software model will account for $1 of every $4.59 spent on software.”

 

SaaS is simply the delivery of an application that is supported by infrastructure (and includes a database if needed) that is offered to individuals or companies that need the functionality of the application but do not want to develop, host or support the environment. I very recently watched a television advertisement for Namely, a provider of human resources management software.

 

An e-mail was in my inbox this week from FreshBooks, a small business accounting software provider. The big dogs such as Microsoft, Oracle, CA, Salesforce, and more are still positioning for market dominance.

 

SaaS is the pinnacle of the “as a Service” offerings because the provider does the care and feeding of the solution. You need only to connect to the provider’s site, log in, and then start doing business (after you make the agreed-upon payments, of course). SaaS offerings can scale to support large organizations while allowing small businesses to use the same application because costs are driven by consumption.

 

SaaS Ecosystem


SaaS combines the full infrastructure stack (physical hardware with computing and memory resources, network connectivity, attached storage, OSs, database, and a well-designed application) hosted in an industry best-in-class data center with exceptional redundancies for power, network ingress and egress, and environmental controls.

 

The SaaS delivery model simplifies business and IT operations for companies. To leverage SaaS, organizations need to connect to the product portal, usually with a web browser, to begin using the application.

 

In some cases, the company may load an initial data set, possibly customer or product information. Smartphones, tablets, or workstations connected through an Internet service provider are easily attainable and manageable.

 

Businesses of all sizes can easily evolve from desktop, in-house, or commercial off-the-shelf (COTS) applications—in which software installation and occasional upgrades are troublesome and backups are not done (or not done properly or frequently enough)—to SaaS. Relieved from the IT administrative burden, organizations execute on strategic drivers to grow revenue and market share.

 

SaaS adds value such that the customer’s only concern becomes application availability, while remaining confident that the provider has the capability to keep the application available.

 

SaaS offerings range from simple to complex applications, at least from the customer’s functional capabilities perspective. E-mail provides limited capabilities compared with a customer relationship management (CRM) application.

 

For the developers creating the products, the complexity may be similar; the customer, however, perceives the products differently. People approach e-mail expectantly, but they may approach CRM apprehensively.

 

“as a Service”

Service offerings come in many flavors, each with an affinity toward specific customers who need particular capabilities. These offerings relieve IT departments from the drudgery and costs surrounding asset procurement, installation, maintenance, and ongoing support while delivering very functional, highly available, and well-performing environments. “as a Service” produces a strategic opportunity for organizations.

 

For example, organizations are unlikely to invest in developing or operating a human resources, accounts payable, or procurement application if they are currently searching for a solution. Software development and hosting companies provide applications and infrastructure, allowing organizations to pay for these applications as a service.

 

Executable functions common to organizations are not where companies look for competitive advantages. Therefore, “as a Service” products are expeditious and frugal selections recognized for ease of use, competitive pricing, and corporate financial prudence.

 

Predictable cash flow (knowing how much is being paid monthly as opposed to large, periodic capital outlays) combined with hands-off administration redirects staff focus and purpose to pursue strategic opportunities.

 

IT can energize business growth, eliminate waste, manage the value stream, and deliver on customer expectations to create competitive advantages.

 

Because “as a Service” offerings are consumption based, pricing scales in correlation to business growth. For instance, if your company is growing and hiring more people, expect the human resources SaaS provider to increase fees based on an agreed-upon pricing scale.

 

“as a Service” opportunities break down complexity to simplify investment and operating decisions: which layers of the technology stack should or can others manage better than us?

 

Let them do it. Or strategically, which layers of the stack must we manage for competitive advantage or data security? We should turn everything else over to a provider.

 

IaaS

 
 
 

Infrastructure as a Service (IaaS), also known as utility computing, includes the physical infrastructure: for example, CPU, memory, storage, network, and power. It also has an IaaS virtualization layer, also called a data center OS. Here, the customer consumes resources to execute and manage the rest of the stack.

 

Virtual host migrations between IaaS environments—on-premise to cloud, cloud to cloud, and cloud to on-premise—are the same tasks across a variety of host configurations and locations.

 

Physical server migrations are tougher but doable because you have to convert the physical server into a virtual server; if possible, I recommend virtualizing locally before migrating.

 

Technically, IaaS migration work is not much different from the effort needed to move from an existing server to a new server; the primary difference is the distance between the source and target computers when using a provider’s offering.

 

IaaS offerings provide the “shortest” stack, leaving customers with design flexibility. Google Cloud Platform, AWS, and Azure are a few IaaS offerings.

 

PaaS

Platform as a Service (PaaS) really appeals to the software development crowd. Focusing on building great software products without the distractions of building and operating infrastructure increases developer productivity and probably morale.

 

By offering a variety of OSs, programming environments, containers, and middleware technologies, PaaS enables the software development company to provision environments and quickly start developing and testing products.

 

Whether the company plans to make the software available for organizations to run internally or in the cloud, the software should be able to run on multiple OSs without specific configuration changes. Not wanting to build guests with OSs or middleware software, the company simply provisions the environment needed.

 

Businesses can also take advantage of the PaaS solution, whether for development or production, because the environment provides flexible configurations.

 

One business I am familiar with has been porting web services to an on-premise PaaS environment, including containerization. The company is basically consolidating web services on disparate platforms written in several programming languages to the PaaS environment using a single programming language.

 

Off-premise PaaS means that the provider’s automation performs the builds and makes the environment available to the customer. Delegating to customers the power to select the best platform based on their development strategy delivers value while allowing the consumer to serve a broader customer base without having to own every potential customer configuration on which the software may be deployed.

 

“Born on the web” companies (those that build applications in the cloud) can leverage PaaS. Imagine going to a PaaS provider site and provisioning a Windows Server environment with IIS, a SQL Server database guest, and another Windows server.

 

You then add your preferred development tool onto it and let the developers have at it. A new app could be available for sale in days, if not sooner. Within a week, the company can generate revenue. Next app!

 

SaaS, IaaS, and PaaS each fit a unique base customer requirement: SaaS meets application needs; PaaS meets development, web and app tier, services, and container deployment requirements; and IaaS allows a full data center server complement without having to invest in building a data center.

 

Although there are other interesting “as a Service” offerings such as Communications as a Service (CaaS), DBaaS needs to be our principal focus for the rest of this section.

 

DBaaS


The crucial differences between them are that you pay for everything needed to use an application with SaaS (you configure nothing); PaaS solutions may be web or application servers, containers with microservices, or a combination; and DBaaS allows you to select which database you need and is more likely to be implemented as the only technology in the environment.

 

You won’t be doing much beyond selecting the database product and a few sizing options because the provider’s automation builds the environment and database based on a few selections you made concerning usage.

 

As with SaaS, the DBaaS provider manages the technology; the customer provides the data. CRM SaaS offerings obviously include a database for data storage, but realize that the back end could be a DBaaS environment. It makes sense that a SaaS provider would use a DBaaS solution for the data.

 

For a DBA, fulfilling the demand for a data store needs to be a flexible, business-driven, and cost-conscious act. For example, using DBaaS to provide development and testing environments that can be spun up and then destroyed provides the necessary flexibility to keep a project moving without having to purchase additional servers or database licenses.

 

Along that same line, the company may have an internal homogeneous development infrastructure used to produce application software. To then certify the application against other databases, DBaaS fills the need at a reasonable price with technical simplicity.

 

DBaaS also allows DBAs to provide the best-fitting database for the job. The type of data to be stored, how the data is retrieved, and how often the data is modified or replaced drive the database selection.

 

DBaaS fits especially well when the company’s data repositories are separated from the application, creating the flexibility to use different databases as business requirements change.

 

If your business is data, you want to be able to manage it while making it available to customers. Customers paying to look at or retrieve your data need simple methods (APIs or services) for the access.

 

Google provides data. Google stores and indexes data for people to access by using the search engine. DBAs understand that the volume and types of data make using a single vendor’s product challenging, so having the flexibility to select different databases to fit specific needs leads to better solutions and (very likely) significant performance improvements.

 

Leveraging DBaaS for DevOps



DevOps requires speed and agility based on a foundation of lean practices and simplicity. New projects, whether for new functionality or improving existing code, no longer require DBAs to figure out how to fit new data requirements into existing relational databases.

Although relational databases have served us well and continue to be excellent transaction recorders and systems of record, not every data requirement fits into the relational model.

 

Unfortunately, DBAs have had to figure out how to make various-shaped data fit into the relational model, which is not necessarily the best performing or manageable situation.

 

Companies that have made sizeable investments in database technologies may be reluctant to spend additional money on DBaaS; but not doing so may, unfortunately, limit competitiveness, the flexibility needed to meet customers’ demands, and application functionality and performance.

 

Architecture

The design processes for a database in the cloud (DBaaS) and for an on-premise cloud are very similar. Two aspects demand additional scrutiny: latency and configuration flexibility. Otherwise, architecture decisions for DBaaS are typical of what you have been doing for years, which makes the learning curve short.

 

Latency

Network packet travel time becomes a design challenge when the database is hosted in a location that is geographically distant from other components supporting the application. Even 20 milliseconds of ping time between an application server and a database server results in 40 milliseconds of latency for each network send and acknowledge pairing.

 

Transactions involve many packets, so (doing basic math) we know that 100 packets would result in 4,000 milliseconds, or 4 seconds, of latency, not including the database processing time.

 

Four seconds, in most instances, is already an unacceptable response time; customers may abandon your application to find an alternative. The 4-second example is a bit extreme, but it does demonstrate how quickly latency can impact business operations and customer experience.

 

Some protocols address latency better than others, such as acknowledging only every tenth packet. However, for design purposes, using the worst-case scenario (40 milliseconds per send and receive in this example) is recommended, primarily because it can be applied to every database product talking to every middleware product.
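The worst-case arithmetic is simple enough to keep as a small design-time helper. The sketch below reproduces the 20-millisecond, 100-packet example from the text and shows how a protocol acknowledging only every tenth packet changes the budget; the function name and parameters are illustrative.

def transaction_latency_ms(ping_ms, packets, acks_per_packet=1.0):
    """Worst-case network latency budget: one send/acknowledge pairing per packet."""
    round_trip_ms = ping_ms * 2  # send plus acknowledge
    return packets * round_trip_ms * acks_per_packet

if __name__ == "__main__":
    # The example from the text: 20 ms ping, 100 packets -> 4,000 ms before any database work.
    print(transaction_latency_ms(ping_ms=20, packets=100))                        # -> 4000.0
    # A protocol that acknowledges only every tenth packet cuts the budget sharply.
    print(transaction_latency_ms(ping_ms=20, packets=100, acks_per_packet=0.1))  # -> 400.0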

 

If the implemented solution keeps latency under the design value, the better-than-expected application response time is a win for the design team.

 

Configuration Inflexibility

 

Experienced DBAs—not necessarily old DBAs—are used to being able to finely tune the database.

They then work with the OS administrator to tune the kernel, optimizing I/O and storage throughput, and possibly even working with the network and client support teams to boost network and client performance by rightsizing the packet MTU and client buffer, which can also be a latency factor. With DBaaS, DBAs generally have limited ability to tune for performance.

 

Available selections, some as simple as small, medium, and large, can be very limiting. If the environment is based on database product and size, careful planning is needed to prevent capacity problems too soon in the lifecycle.

 

Fortunately, virtualized environments include the advantage of adding capacity with a few mouse clicks, but DBAs would still rather not have to explain why the business has to absorb unplanned costs so early in the product lifespan.

 

When considered in this light, simplification is the saving grace of configuration inflexibility. Although being able to fine-tune a database may be needed, the effort becomes non-value-adding when it is really not necessary.

 

If the application can run well using a very simple database implementation, consider that a good thing. For products that rely on extreme performance or synchronous writes, the inability to properly engineer the database implementation may be problematic; fortunately, that applies to a small percentage of databases running in our world.

 

Scalability


A database does not often shrink, and it could be stated that customers have never provided feedback saying the application is too fast, or even that it would be acceptable for response times to be 1.5 seconds instead of 1 second. So based on experience, DBAs know that performance must be maintained or improved to keep both the business and customers happy.

 

As mentioned earlier, DBaaS, or a virtualized database solution, includes the benefits of being able to scale solutions by adding CPU and memory resources on the fly. The preferred method is an automatic resource increase based on triggers.

 

For instance, memory can be increased by N% if free memory falls below 5%, or 0.25 virtual CPUs can be added when CPU usage exceeds 98% for more than 5 minutes.
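Expressed as code, trigger-based rules like those might look like the following sketch; the thresholds, growth factors, and resource names are illustrative rather than any specific provider’s API.

from dataclasses import dataclass

@dataclass
class Resources:
    vcpus: float
    memory_gb: float

def scale(current, free_memory_pct, cpu_pct, cpu_high_minutes):
    """Apply simple trigger-based growth rules and return the new resource allocation."""
    vcpus, memory = current.vcpus, current.memory_gb
    # Rule 1: grow memory by 10% when free memory falls below 5%.
    if free_memory_pct < 5.0:
        memory *= 1.10
    # Rule 2: add a quarter vCPU when CPU stays above 98% for more than 5 minutes.
    if cpu_pct > 98.0 and cpu_high_minutes > 5:
        vcpus += 0.25
    return Resources(round(vcpus, 2), round(memory, 1))

if __name__ == "__main__":
    print(scale(Resources(vcpus=2, memory_gb=16),
                free_memory_pct=3.2, cpu_pct=99.1, cpu_high_minutes=7))
    # -> Resources(vcpus=2.25, memory_gb=17.6)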

 

Consider also the decision to be on a shared environment, in which growth applies to each entity hosted, compared with selecting a dedicated implementation. The latter comes at premium costs but provides segregation.

 

Recoverability

 


Three primary risks need to have mitigation plans: data corruption, failures that cause the database to be unavailable, and data center disasters that destroy the complete computing environment. Each risk must be described and the mitigation detailed in the agreement, along with any expenses or other expectations negotiated.

 

Data Corruption

Data corruption happens very rarely these days because of the available advanced storage technology protections and database level checks, but no one can provide a 100% guarantee against corruption occurring.

 

Corruption is a nasty bugger, probably the worst event for DBAs. Coordinating with the DBaaS vendor for recovery must be defined ahead of time, including each party’s responsibilities.

 

Disk-level database replication for disaster recovery protection is great; for corruption, it quickly propagates the problem to the recovery site. Backups may include the corruption if detection was delayed. Point-in-time recovery means data loss, but hopefully not more than the SLA-agreed loss.

 

Although the DBaaS provider may offer solutions, you are ultimately responsible for the recovery. Consider periodic disk snapshots or using a product such as Oracle’s Data Guard to have a standby database in place that is protected by block-level integrity checks before the log is applied to the database. Once the corruption is discovered, a cutover to the standby database restores service for your customers.

 

Failure: DB Down

When a DBaaS database crashes, the primary consideration after recovery is determining the root cause. Without access to the OS and the lower stack, the provider has to perform the root cause analysis. That requirement needs to be included in the arrangement, along with an availability and/or return-to-service SLA.

 

Catastrophic Data Center Event

Even the most well-built data centers are susceptible to disaster, whether natural, accidental, or intentional. Companies spend a lot of money to have a secondary data center that is geographically separated from the primary data center and populated with a large or equal percentage of equipment ready to become the primary data center.

 

If you are using DBaaS to keep from managing a data center and maintaining infrastructure, you will certainly want to also use DBaaS for disaster recovery. Therefore, ensure that the provider’s disaster recovery plan clearly defines the recovery process and that the provider is required to exercise and report on a disaster recovery test at least annually.

 

Encryption


Data protection, whether required by PCI-DSS, HIPAA, or any of the many other governance controls, becomes more necessary when a DBaaS solution is deployed.

 

DBaaS solutions need to include encryption, which can be a hardware or software solution covering database data and data at rest. Storing and transmitting data outside of the corporate walls increases risk, so protecting all data with encryption is smart.

 

The DBaaS vendor may offer an encrypted storage solution option that makes implementing encryption much easier. Otherwise, encryption may need to be implemented within the database if it is an option, or a third-party package may work but have performance implications.

 

Access Control and Auditing

Criminal or accidental access to a database continues to be a huge organizational risk. Likewise, many (too many) governing bodies require audits for various reasons.

Auditing may require DBAs to not choose the best database for their needs; instead, they select a good database match that includes auditing, although custom auditing can be built without too much pain.

 

Work with the DBaaS provider to understand auditing controls—remember that it is in the vendor’s best interest to make sure that your data is protected, including the way violations or intrusions are reported. The provider wants to protect you as much as you want to protect your company.

 

Leveraging single sign-on provides internal clients access to local and external applications, relieving them from having to remember another password or dealing with two-factor authentication protocols. Build in security with simplicity for your customers.

 

Data Archiving


Multitiered storage and/or data archiving products help manage aging data, preventing performance and space management problems introduced as the data volume increases.

 

Be sure to raise archiving when working with the DBaaS provider. If the provider offers archiving tools that make the job easier, they are probably worth investigating. Either way, define how and where archiving will be done.
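As a rough illustration of what “how and where” can mean, the following sketch moves rows older than a retention window from the hot table to an archive table. sqlite3 again stands in for the hosted engine, and the table names and retention period are assumptions.

```python
# Minimal sketch of age-based archiving: rows older than the retention
# window move from the hot table to an archive table in one transaction.
import sqlite3

RETENTION_DAYS = 365   # illustrative retention window

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, created_at TEXT, payload TEXT);
    CREATE TABLE orders_archive (id INTEGER, created_at TEXT, payload TEXT);
    INSERT INTO orders VALUES
        (1, date('now', '-400 days'), 'old order'),
        (2, date('now', '-10 days'),  'recent order');
""")

def archive_old_orders() -> int:
    """Move rows past the retention window and return how many moved."""
    cutoff = f"-{RETENTION_DAYS} days"
    with conn:  # copy then delete inside a single transaction
        conn.execute(
            "INSERT INTO orders_archive "
            "SELECT * FROM orders WHERE created_at < date('now', ?)", (cutoff,))
        moved = conn.execute(
            "DELETE FROM orders WHERE created_at < date('now', ?)", (cutoff,)).rowcount
    return moved

print(archive_old_orders())                                    # -> 1
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone())  # hot table shrinks
```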

 

Other Customers’ Problem Impact


Experience has taught many of us that a problem with one system within a data center can (and probably will) impact other systems. That same consequence must be addressed for DBaaS environments.

 

If the DBaaS solution you plan to use is offered by a provider that would be considered a colocation provider (having many customers within the same data center), understanding the data center infrastructure is essential.

 

Determine whether another customer's problem, perhaps a rogue batch job sending terabytes of data out of the data center, could harm your applications; that exercise may well reveal that the provider does not have enough isolation between customer systems.

 

It matters not to the CEO or the Board that your customers could not spend money on your products, or that the business teams closed the monthly financials 4 hours late, because company XYZ’s batch job consumed all the network bandwidth. Instead, the feedback may be quite pointed: “You should have known that was a risk and managed it.”

 

Fortunately, data center providers have learned much over the past decades, prompting highly modular data center designs. Today, customers may still be co-located in the sense of geographic nearness, but each is supported by isolated power, network, telecommunications, and so on.

 

Imagine rows and columns of stacked shipping containers, each with direct and independent power, environmental controls, data and voice networks, and compute and storage, surrounded by a building shell in which hallways on each floor lead to the container doors for physical access. This data center model is not imaginary; it actually exists in several flavors.

 

Modular separation greatly diminishes the risk of another customer’s issue becoming your own. Translating that data center isolation into DBaaS-level separation is the challenge for DBAs.

 

Here, colocation is not a building-level consideration, but instead a computer and storage concern. Providers are leveraging virtual capabilities, which means running many virtual guests on the underlying physical servers.

 

Therefore, many customers could have guest hosts sharing the same physical server on which your database is hosted. Be sure that the database resources assigned to your implementation cannot be “borrowed” by other guests.

 

Virtual systems can be overprovisioned, meaning that guests can “borrow” unused compute and memory resources from other guests. Databases do not like having their resources borrowed.

 

Nothing delivers performance degradation faster than having the database cache suddenly forced into swapping because another guest “borrowed” what was believed to be unused memory. Verify with the vendor that the guest resources can be “locked” to prevent other guests from stealing them.
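If the provider allows an agent (or exposes guest-level metrics), a check as simple as the sketch below can catch the symptom early: swap usage creeping up on the database host. It assumes a Linux guest and reads /proc/meminfo; the alert threshold is illustrative.

```python
# Minimal sketch: detect the "borrowed memory" symptom by watching swap
# usage on the database host. Assumes a Linux guest you are permitted to
# monitor; the threshold is an illustrative assumption.
def swap_used_mb() -> float:
    info = {}
    with open("/proc/meminfo") as fh:
        for line in fh:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])   # values are reported in kB
    return (info["SwapTotal"] - info["SwapFree"]) / 1024.0

ALERT_THRESHOLD_MB = 256   # illustrative: any real swapping hurts a database cache

if __name__ == "__main__":
    used = swap_used_mb()
    if used > ALERT_THRESHOLD_MB:
        print(f"WARNING: {used:.0f} MB of swap in use; database cache may be paging")
    else:
        print(f"Swap in use: {used:.0f} MB")
```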

 

Monitoring and Synthetic Transactions


When searching for a DBaaS solution, monitoring and synthetic transactions may be a critical add-on service offering from the provider and should be leveraged.

 

Too often, companies invest tens of millions of dollars building or acquiring, implementing, and supporting applications and infrastructure, only to chintz out by not investing in the right support tools. Fighter jet designers and engineers include navigation and threat-warning systems to help pilots “see” where they are going and to avoid risks.

 

It’s perplexing how many IT “pilots” are “flying” blind in the application cockpit, unable to detect business-disrupting threats. Implement great systems and implement the tools needed to keep the systems great!

 

When DBaaS is the best solution for your organization, keeping vigilant becomes imperative. Work with the provider to determine how tools can be implemented, what monitoring the vendor provides, and how you are notified of failures or looming performance degradations.

 

I mention this based on outsourcing experiences in which the outsourcing company did not include monitoring in the bid, and the customer assumed that monitoring was foundational.

 

The miscommunication then came to light in the middle of the night when a major failure occurred. Small details matter, hence the deliberate inclusion of this point in this blog.

 

As just mentioned, tossing the database into the cloud does not relieve DBAs from oversight responsibility. Does the provider offer a monitoring solution? Will the solution integrate with an existing tool suite?

 

Does the solution include the ability to create and monitor synthetic transactions, baselining critical transactions and alerting on threshold variances? DBAs must have performance data visibility.
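A synthetic transaction does not need to be elaborate. The sketch below times one representative query on a schedule and alerts when it exceeds a multiple of its baseline; sqlite3 stands in for the DBaaS connection, and the query, baseline, and multiplier are assumptions.

```python
# Minimal sketch of a synthetic transaction: run the same representative
# query on a schedule, compare the response time to a baseline, and alert
# on threshold variance.
import sqlite3
import time

BASELINE_SECONDS = 0.050   # established from historical runs (illustrative)
ALERT_MULTIPLIER = 3.0     # alert when 3x slower than baseline (illustrative)

def run_synthetic_transaction(conn: sqlite3.Connection) -> float:
    start = time.perf_counter()
    conn.execute(
        "SELECT COUNT(*) FROM orders WHERE created_at >= date('now','-1 day')"
    ).fetchone()
    return time.perf_counter() - start

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, created_at TEXT)")
    elapsed = run_synthetic_transaction(conn)
    if elapsed > BASELINE_SECONDS * ALERT_MULTIPLIER:
        print(f"ALERT: synthetic transaction took {elapsed:.3f}s "
              f"(baseline {BASELINE_SECONDS:.3f}s)")
    else:
        print(f"OK: {elapsed:.3f}s")
```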

 

Performance between the database and the client, whether the client is an application server or a person’s workstation, also has to be monitored. If it’s reported that the database is causing application slowness, DBAs must be able to identify where the slowness is being introduced.

 

Even if you can prove that the database is responding well, you remain on the hook until the root cause is identified. Many DBAs, myself included, worked to become infrastructure “experts” from necessity because it became a requisite to absolutely prove that the database was not at fault.

 

You have experienced the calls that ask, “Is the database down/slow?” even when logic disagrees: 20 people report the database being down (the other 10,000 users are silent), and someone at the help desk escalates to the DBA team, carbon-copying the entire IT leadership team, warning that the world is about to implode due to this perceived catastrophe.

 

Even before you get a chance to access the database to determine why 20 people could be having a problem (DBAs must always check their own backyard before complaining about someone else’s), your boss’s boss is already texting you, asking when the database will be back up.

 

OK, that is a bit extreme because most of my bosses have never been so quick to panic, but you get the idea. DBAs must be able to prove that the database is not the culprit and then work with others to determine the root cause.

 

Having the right monitoring tools and specific synthetic transactions in play minimizes the time needed to find and correct the problem. Otherwise, consider yourself much like a fighter jet pilot without a navigation system, trying to locate an aircraft carrier in the open ocean.

 

Network Configuration Matters

Network Configuration Matters

Whether they are a cloud provider’s or your company’s, shared environments require us to systematically assess all components to ensure that even at peak demand—every customer using every system at full capacity—the business and its customers are not negatively impacted by degraded performance.

 

Total network bandwidth (and, more importantly, the way the total is actually amassed) needs to be understood and then matched to predicted traffic patterns. DBaaS via a cloud provider means that data read or written to the database must travel some distance (refer to the earlier latency discussion).

 

Forecasting data usage and architecting the infrastructure and application wisely to allow DBaaS to be leveraged without harming application performance or customer expectations becomes a critical task.

 

Bandwidth and its configuration need to be considered for peak load and for unexpected load caused by failures or irregular traffic. Because the connection to the DBaaS provider runs over some form of WAN rather than a LAN, there may be less spare bandwidth available to absorb capacity lost to a failure or a traffic surge.

 

If your location happens to be geographically close to the provider, a metropolitan area network (MAN) or another form of “wired city” network may provide plenty of bandwidth with little distance-caused latency.

 

When I was asked to investigate repeated reports of slowness at a small site (about 16 people), I discovered that the site had two 1.54 Mbps frame relay connections.

 

One of the connections became saturated almost every day during the lunch window. A quick traffic capture revealed significant streaming video traffic, which turned out to be company-mandated training.

 

The root problem was not that the streaming video caused slowness for the applications; team members were obligated to watch the training. The root problem was a failure to communicate between the training and IT departments.

 

Had the infrastructure been considered, it would have been immediately apparent that the company’s smaller sites did not have enough bandwidth to conduct normal operations and stream the mandatory training video at the same time. Other arrangements could have been made, keeping the business from experiencing a disruption.
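A back-of-the-envelope calculation, using assumed per-stream and baseline rates, shows how quickly a link that size saturates:

```python
# Rough check, with assumed per-stream rates, of why a 1.54 Mbps link
# saturates when a small site streams training video on top of normal
# application traffic. The stream and baseline figures are assumptions.
LINK_MBPS = 1.54          # one frame relay connection at the site
VIDEO_STREAM_MBPS = 0.7   # assumed bitrate of one training video stream
BASELINE_APP_MBPS = 0.5   # assumed normal application traffic

def concurrent_streams_before_saturation() -> int:
    headroom = LINK_MBPS - BASELINE_APP_MBPS
    return int(headroom // VIDEO_STREAM_MBPS)

if __name__ == "__main__":
    # -> 1: a second simultaneous viewer pushes the link past capacity
    print(concurrent_streams_before_saturation())
```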

 

Scale that problem into a DBaaS provider’s infrastructure with many customers, each reading and writing varied data types. Sufficient bandwidth construction becomes a key performance protector.

 

A gigabit of bandwidth can be designed as a single connection or as a combination of several smaller connections. They might seem equal, but it is much more complicated than that. Set aside the single-point-of-failure concern (vendors know better) and focus instead on the delivery capability of the two designs.

 

DBAs, although probably not network gurus, can easily translate network configuration into database terms, knowing that multiple read/write connections to a storage array distribute the load, which results in better overall response times.

 

They can apply the same principles, understanding that thousands of customers who reach out to the database from many locations, doing a mixture of work, can benefit from many I/O (network) paths.

 

DBaaS and Continuous Integration

Fortunately, DBaaS and CI do not look much different from an on-premises database and CI when it comes to database changes. DBAs still need to automate database changes so they integrate with application changes, maintain all code in the source code repository, and provide tests to thoroughly vet the changes.
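A minimal sketch of that automation might look like the following: versioned .sql files kept in the repository are applied in order, and each applied file is recorded so the pipeline can rerun safely. The directory layout and tracking table are assumptions, and sqlite3 stands in for the target engine.

```python
# Minimal sketch of repo-driven database changes for a CI pipeline:
# apply versioned .sql migration files in order and record each one so
# reruns skip what is already applied. Paths and table are illustrative.
import pathlib
import sqlite3

MIGRATIONS_DIR = pathlib.Path("db/migrations")   # kept in the source code repository

def apply_pending_migrations(conn: sqlite3.Connection) -> None:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (filename TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT filename FROM schema_migrations")}
    for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
        if script.name in applied:
            continue                              # already applied on a previous run
        conn.executescript(script.read_text())    # run the migration
        conn.execute("INSERT INTO schema_migrations VALUES (?)", (script.name,))
        conn.commit()
        print(f"applied {script.name}")

if __name__ == "__main__":
    apply_pending_migrations(sqlite3.connect("app.db"))
```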

 

Database template builds and execution may no longer be something DBAs need to manage because the provider probably controls and provides the DBaaS platform.

 

Summary

Considering DBaaS leads down an exciting and challenging path of leveraging “old school” DBA skills: design, access control, recoverability, scalability, performance, and more, combined with ensuring that the shiny new cloud model does not introduce unacceptable latency, shared database or data center risks, or problems that arise when you are not fully in control of the build configuration.

 

Costs can be leveled out, changing only as capacity increments or decrements; performance and scalability are easily managed given the virtualization model; and database selection varies so that the best database can be chosen, instead of forcing square data into a round database.

 

Each “as a Service” offering provides IT shops, including DBAs, the prerogative to select the right database for the job. Matching team capabilities to specific technology stack layers encourages smart decisions in which the provider’s expertise and the company’s expertise meld together for an optimal business operations solution.

 

Final thought: if DBaaS offerings are too restrictive, going with a PaaS solution provides the opportunity to build databases per your specifications.
