
Shift LEFT

The Database DevOps magazine from Redgate Software

Bringing database DevOps to Visual Studio 2017

Why 2017 is the year of database DevOps

Introducing DevOps into the real world

So what’s the real state of database DevOps?

DevOps – a DBA’s perspective

Spring/Summer 2017


Visual Studio Enterprise 2017 now includes Redgate Data Tools – rolling your SQL Server and Azure SQL databases into DevOps for higher productivity, agility and cross-team performance.

Accelerate release cycles and increase collaboration by making your database part of the DevOps workflow.

Learn more: https://aka.ms/vsenterprise

Equip your teams for better collaboration


Welcome to Shift LEFT

These are interesting times. The recent State of Database DevOps Report revealed that 80% of companies will have adopted DevOps within two years, and that 75% of developers are responsible for both database and application development.

Visual Studio 2017 launched on March 7, and the Enterprise Edition integrates Redgate Data Tools in the installer, giving users the opportunity to introduce DevOps practices for the database alongside the application.

DevOps is going mainstream, and the database is being seen as a natural part of DevOps rather than a blocker to the process.

That’s quite a change, and it’s where Shift LEFT comes in. Redgate and many of our partners are actively researching, writing about – and doing – database DevOps. This is the place where you can find out what’s been happening and catch up on all of the latest news, in one publication, in one place.

Welcome to database DevOps. If you’re not doing it now, chances are you will be doing it soon.

Matt Hilbert
Editor

Contents

Where’s the database in DevOps?

Help me, help you, deliver DevOps

Introducing DevOps into the real world

So what’s the real state of database DevOps?

Bringing database DevOps to Visual Studio 2017

The why (and more importantly the how) of automated database deployment

DevOps – a DBA’s perspective

Continuous Delivery – how do application and database development really compare?

Why 2017 is the year of database DevOps and why database professionals should lead the charge

DevOps and Shift Left for databases

The Phoenix Project

EDITOR

Matt Hilbert

COVER DESIGN

Anastasiya Rudaya

PHOTOGRAPHY

James Billings

http://jamesbillings.photography

DESIGN AND ARTWORK

Pete Woodhouse

CONTRIBUTORS

Kate Duggan

Grant Fritchey

Bob Walker

Elizabeth Ayer

Paul Ibison

Simon Galbraith

Stephanie Herr

Claire Brooking

www.red-gate.com

www.simple-talk.com


Where’s the database in DevOps?

Interview

{ kate duggan }

James Betteley, Head of Education at IT consultancy DevOpsGuys, is a firm believer that DevOps is about people and processes, and that while automation tools are necessary, they are not sufficient for a team to be a DevOps team. But where does the database come into the picture? Kate Duggan interviewed James to find out.

“So, what is DevOps?”

There’s a lack of clarity about what it is and a misconception that it’s just about automation. We hear people using the term DevOps almost interchangeably with automation or even tooling to some degree, but it’s actually more about the culture. DevOps is about creating the right environment and the right kind of team to become collaborative and work towards a shared goal of delivering software effectively. Automation will help you achieve that but you need a DevOps mindset first.

“How do you get the buy-in to create that culture of change if it’s not just about plugging in a tool?”

Tools can be accelerators and they can also be a way in. But ultimately it’s about breaking down walls and barriers and bridging the gap. That could mean realigning your teams or changing your organizational structure, so you need high-level buy-in and the right kind of culture to be accepting of this movement. It can be quite a challenge.

“We hear about companies like Netflix and Etsy using a DevOps approach, but how well is it being adopted in the mainstream?”

There are some large companies out there who have famously adopted this highly automated approach to delivering software. But we’re not all Netflix and we haven’t all got automation baked into our DNA. There is no ‘one size fits all’ solution for DevOps for every organization, so you do need to tailor the solution to your own business. The way one company does it might be a roaring failure in another organization because there are so many different ways of adopting DevOps.


“Should DevOps goals be the same as company goals? How important is this alignment?”

Your company goals will have a huge impact on what your DevOps goals are. DevOps goals need to be tightly coupled with your IT strategy. Your IT strategy and a lot of your DevOps techniques need to be working towards your business goals; otherwise, you’ll be fighting the whole way.

“So how does the database fit into DevOps?”

The database is as important a part of the ecosystem as any other part of the application. It is as important as your infrastructure or your website. So asking ‘Where’s the database in DevOps?’ is a bit like asking ‘Where’s the website in DevOps?’. At the end of the day, it’s a culture, a way of working, and your database fits in there alongside everything else. The main thing is to get the right people in the room at the right time from the beginning, and design solutions sympathetically, with consideration for the database and the architecture of choice. We need to be thinking about how we deploy, how we maintain, how we roll back, and all of those things. Traditionally the database has been difficult to fit into this, and has always been the sticking point in the past. But there are now good tools out there to support the database, so we don’t have to just hit and hope any more.

“Why do you think the database hasn’t been included before in practices like continuous integration and automated testing?”

It was too hard to do. We’d been used to developing code that you could just tear down and replace, but Database Administrators would say ‘You can’t do that with your data and your database’. So instead of working together to come up with best practices and elegant solutions, developers ended up saying ‘Okay, that’s your problem, so I’ll leave it with you’. There were all these automation tools available in development, but in operations they were still doing a lot of tasks manually. Thankfully, now there are solutions so we can finally work together. I’ve seen the change happen firsthand – it’s been a cultural change as much as anything else, and new tools are sweeping in that also help us be more collaborative.

“Once an organization has decided to adopt a DevOps model, would you typically see a ‘big bang’ change to the new way of working, or a more gradual approach, starting with specific projects or databases and adopting more as time goes on?”

It will all depend on your organization – its current size, existing culture, structure, and many other factors as well. However, I think it’s fair to say that a more gradual approach is far more likely to be successful than a big bang. At DevOpsGuys, we usually recommend that organizations start on a smaller, isolated product, and concentrate on making a successful transformation within that product team. Once the fire of success has been lit, you then need to fan the flames and spread that success to other teams and across the organization.

“So the smaller an IT team, the easier DevOps is to implement?”

Possibly, because you’ll have fewer people to convince! But generally speaking, I don’t think it’s down to the size of your IT department. You see large organizations with hundreds of IT people broken down into smaller product teams; that’s the favored framework for being successful with agile and DevOps. Collaboration is the main issue, in just the same way as it’s easier for smaller teams and organizations to become agile. But just because it might take a bit longer to adopt DevOps in a larger organization, it doesn’t make it any less valuable.

“What would be the first step or steps to applying a DevOps approach to an already deployed application and database?”

If you mean DevOps best practices, such as automation, then the answer would be to start with version control. Version control, or source control, is a fundamental part of getting your database to work more like your application. The ability to deploy to a known state is only really possible by using the mechanisms within a version control system. Reports like The State of DevOps from Puppet Labs have told us that organizations using version control and automated deployments are higher performing – so we know it’s a good thing to do! It’s only a small part of the bigger DevOps picture but it’s still a very important part, and a key step along the DevOps journey. By version controlling your database, you make it easier to share and collaborate, find and fix issues earlier, and be able to deploy a ‘known state’. Plug in some continuous integration and deployment automation and you’re well on your way.
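To make that concrete – an editorial sketch rather than anything James prescribed – a team on the Microsoft stack could script out a database’s objects with PowerShell and SQL Server Management Objects (SMO), then commit the results to a git repository. The server, database, and repository names below are illustrative assumptions:

# Minimal sketch: script each table's definition to a file and commit the
# result, so the schema lives in version control as a 'known state'.
# Assumes the SqlServer PowerShell module and an existing git repo at $repoPath.
Import-Module SqlServer

$server   = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost'
$database = $server.Databases['AdventureWorks']   # illustrative database name
$repoPath = 'C:\src\database-schema'

foreach ($table in $database.Tables | Where-Object { -not $_.IsSystemObject }) {
    # One file per table keeps diffs small and reviewable.
    $file = Join-Path $repoPath "Tables\$($table.Schema).$($table.Name).sql"
    New-Item -ItemType Directory -Path (Split-Path $file) -Force | Out-Null
    Set-Content -Path $file -Value ($table.Script() -join "`r`n")
}

git -C $repoPath add .
git -C $repoPath commit -m 'Snapshot current table definitions'

From there, a continuous integration trigger on the repository can pick up each commit – the ‘plug in some continuous integration’ step mentioned above.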

“Any final thoughts?”

DevOps is primarily about people and processes. While automation and tools are necessary, they’re not sufficient for a team to be a DevOps team. You also need to ensure you’ve got support from the team and organization around you if you’re hoping to start adopting DevOps-like practices.

Help me, help you, deliver DevOps

Opinion

{ grant fritchey }

I believe in DevOps. Actually, that’s a pretty horrible way to put it. It’s not about belief, like keeping Tinkerbell alive.

I have successfully worked within an environment that implemented a DevOps approach to development, deployment and maintenance. I also provide classes and consulting on how to approach DevOps from the Ops perspective as well as writing books on the topic.

Because I’ve seen the DevOps approach work, and work well, despite the fact that my principal job description is in the Ops side of DevOps, I am a very strong and passionate advocate for DevOps.

But!

Despite the fact that I absolutely support the concepts of DevOps, moving development and deployment into the production space, and moving operations into better support of the development space, I frequently find myself, with my face, in my palm.

Why?

Look at the word, DevOps. Note: it doesn’t say Dev. It doesn’t say DEVops. It doesn’t say Dev ops. The core concept behind DevOps is that development and operations are coming together. It’s not that development is taking over operations, as I so frequently find myself having to argue, with both development and operations.

This is an important and key concept because there’s a reason why you see some specialization within IT. Frankly, IT is difficult. So people spend time learning smaller aspects of it in order to be good at those aspects. Yes, acknowledged, there is the very rare individual who truly is a superstar and can do it all, but they are, by definition, extremely thin on the ground. Most of us in IT are just ordinary humans.


Operations

I get it. To developers, the operations team just seems like this slow moving behemoth, a dinosaur on its way to the tar pits (yeah, I know, mainly mammals in the tar pits from a different era – work with me). Especially the DBA team (No!) and their ability (No!) to slow things down (No!) with their constant (No!) use of the word, “No” (No!).

However, operations exists for a reason, and they’ve developed a lot of special skills and knowledge that seems to be missing from development. Let me outline what I mean.

Failure

There have been quite a few spectacular failures recently. Let’s talk about a fundamental lack of understanding of why the ACID properties of a database are kind of important, especially when you’re setting up something that resembles a bank.

No? How about the fact that backup testing, a pretty fundamental aspect of operations, is so easily overlooked. Not to mention evidently being able to use the same login in dev/test and production, so that you can corrupt and then accidentally drop the production database.

Or maybe that you need a secondary location for your Disaster Recovery? That seems like a pretty fundamental part of operations.

Or even my own website. I didn’t maintain my updates appropriately (and I know better), which allowed me to be hacked. Happily, I have two sets of backups: one I maintain myself and one maintained by my wonderful service provider, Dreamhost. Recovery was possible in under an hour.

Speaking of backups, maybe taking them at all would be a good idea.

Kumbaya

Not all those losses and outages are even remotely the fault of DevOps, but you do get the sense in our ever more development-focused age that, maybe, a little more attention ought to be paid to the Ops side of things and then we can all hang out, singing kumbaya instead of running around putting out fires.

The key concept of DevOps is that we meet in the middle and cross over. It’s not that one side takes over the other. The old days where things were Ops-centric were silly, and failed. Let’s not repeat those mistakes by going completely Dev-centric.

Let’s just do DevOps.


Introducing DevOps into the real world

Case Study

{ bob walker }

About two years ago, my team deployed a production change to the system we were responsible for maintaining and developing. It wasn’t a significant new version; just one minor new feature and a couple of bug fixes. Shortly after our operations team pushed the changes live, our logs started filling up with deadlock errors, and the performance of this and other applications degraded rapidly. Within 30 minutes, the whole system was offline. In short, one change to one line of code brought down a production-level SQL Server and stopped 90% of our users from working for two hours.

Whoops.

During the aftermath, my team of developers worked closely with operations, rolling back to a stable state, diagnosing the cause of the problem, working on a solution, and discussing ways the whole situation could have been avoided. As painful as it was, it occurred to me at some point that this was the first time I’d had a chance to collaborate at length with our operations team, and that I was enjoying it.

I didn’t know it then, but it was the start of my journey into DevOps.

Deployment time bombs

This event and its aftermath forced me to reflect on how the various teams hadn’t really been working together as well as they could up to that point. Like most development shops I’ve worked for, there was not a lot of collaboration between my team of application developers and the operations team (Database Administrators or DBAs, Web Administrators, and System Administrators).

Developers would write the code and “throw it over the wall” to the operations team, whose responsibility was to deploy it and keep things running smoothly. By the time the operations team had a chance to look at the changes, it was far too late in the development lifecycle; those changes had to go live.

Unfortunately, on this fateful day, our code contained a time bomb – some new code that just didn’t work on the live system. To stop the hemorrhaging, we rolled back to a working state, and then embarked on a ‘triage’ process with the operations team. We learned a lot during the process. Most of us knew how to run traces on a SQL Server, but the DBAs showed us how to refine them to a point where we could focus on the important queries and ignore the rest.

We also learned about the tooling available to the operations team, which can help find problems while they are happening. On the flip side, our operations staff learned more about the fast search functionality our users needed.

Sadly, after the crisis was over, we all dispersed and went back to our normal jobs. This type of knowledge sharing was very helpful to both sides, but what we needed was a way to extend the collaboration so that it became a standard feature of the development cycle, instead of happening only in response to a crisis.

Defining what DevOps actually means to you

DevOps is about adopting the tools and practices that will enable collaboration between developers and operations, with the goal of improving the overall quality of the applications. This can include working on tools to improve communication and the automation of manual steps.

I knew we needed to adopt some DevOps practices. We needed to find ways to collaborate more, but that phrase is very abstract. It’s akin to ‘I need to lose weight’; it’s a goal or desire rather than a strategy. Likewise, I was struggling to know how to define what ‘We need to do DevOps’ really meant to our organization.

The turning point came when I overheard someone describe DevOps as the answer to the question: ‘What do we have to change in our processes to be able to deploy to production ten times a day?’

The implication, of course, is that when the collaboration is done right, it is possible to deploy to production ten times a day.

Excited, I posed the question to a few people, only to be greeted with a lot of worried looks. Firstly, we have to get approval to go to production and that is currently a very manual process. Surely, this was a blocker? True, and my immediate response was that we needed to remove the manual approval step!

Cue more worried looks. It turns out that as soon as you talk about removing manual checks and making statements to the business such as ‘We want to be able to go to production every 45 minutes’, the business people tend to worry, a lot, and with good reason. After all, we were just starting on this journey and really had no idea about best practices. As Indiana Jones says in Raiders of the Lost Ark, ‘We’re making this up as we go along’. In other words, we were going to make mistakes.

Identifying the weak links

The truth was that our deployment process was not well-automated. Each deployment was a big event and took time, so the team tended to delay doing them. This meant that changes arrived too late in the process, and the operations team often needed to do a lot of last-minute tweaking to get the new version running smoothly. This was the real, and perfectly valid, reason why the business felt the need for the manual approval step to get to production.

Surely then, the deployment process was the first weak link we needed to tackle. Maybe we couldn’t deploy to production ten times a day, but what if we could automate and streamline the process of deploying to pre-production to the point where it was easy?

Our pre-production environment was as close to production as possible. Same hardware, same setup, basically the same everything, but only the IT staff used it. If it went down or we messed up, we weren’t stopping our users from doing their jobs. It was also the first environment where the servers were locked down just as they would be in production. In other words, pre-production was a perfect test bed for our applications and if we could make deploying to pre-production a non-event, I saw a couple of immediate benefits:

Proactive instead of reactive troubleshooting. Let’s say we needed to troubleshoot a specific performance-critical feature. Instead of the development team making some changes, running a few tests and throwing the new code over to operations, we’d collaborate. We’d work with the Web Admins on determining what the current performance level was, and we’d work with a DBA to identify potential bottlenecks on the database. Then we’d make the changes, deploy to pre-production, and re-measure.

Better testing and higher code quality. After each deployment to pre-production, we could run a set of tests, or a load test could run at night. In addition to collaborating with the business and QA, by collaborating with operations we could create tests that would much more closely match the real world (a sketch of such a post-deployment test run follows).
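As a rough illustration – the server and database names are assumptions, not our actual setup – the post-deployment test run could be as simple as invoking the database’s tSQLt test suite against the freshly deployed pre-production environment:

# Minimal sketch: run the database's tSQLt unit tests after a pre-production
# deployment. tSQLt.RunAll executes every installed test class; a failing
# test raises an error, which -ErrorAction Stop turns into a pipeline failure.
Invoke-Sqlcmd -ServerInstance 'preprod-sql01' -Database 'AppDb' `
    -Query 'EXEC tSQLt.RunAll;' -ErrorAction Stop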

We refined our statement of DevOps intent to the following: ‘What needs to change in our processes in order to be able to deploy to pre-production every 45 minutes?’

Why every 45 minutes? My theory was that most people can handle doing something once a day, even if it sucks. There is always some grumbling, but rarely enough to force change. Inertia will kick in for enough people and no progress will be made. One example in our process is that every database deployment to pre-production automatically generates a deployment script, which is then manually approved by a DBA.

However, if you do something that sucks, like manually approving a deployment script every 45 minutes, everybody will scream, ‘Please make it stop, I’ll do anything to make this suck fest stop’.

The start of the DevOps journey

With our goals in mind, I posed the ‘What needs to change?’ question in our monthly development and operations meeting. Everyone in the room had their own ideas on how to make the 45-minute goal a reality, and it quickly turned into a great brainstorming session.

The DBAs wanted to be involved earlier in the development process, automate the checking of deployment scripts, and introduce a tool to verify database guidelines were being followed during development.

The Web Admins wanted regular, realistic load testing for new code, and for us to be proactive about performance so that, even if the performance of the search function dropped by just 5% following a change, the development team would step in and correct it.

The developers were just as enthusiastic. They talked about automated testing to ensure deployments were successful, and breaking apart large applications into smaller components so that only those components that changed would need to be deployed.

The next steps

After the meeting, we refined our ideas still further, and proposed the first four specific steps we would take to help us get to pre-production every 45 minutes:

1. Deployment script verification – we were using Redgate’s DLM Automation for our deployments, which produces a script of the changes. Could we run some regular expressions using PowerShell to look for certain key phrases we knew would cause issues when going to production? Our goal is to run this verification on every deployment to every environment, and stop the deployment if a ‘bad change’ is about to happen (a sketch of such a check follows this list).

2. tSQLt unit tests – we considered what kind of verification these tests could provide out of the box, as well as any other tests we could add. These tests would run during the build and help prevent any bad changes reaching deployment.

3. Scheduled T-SQL script verification – the DBAs and enterprise architects created a set of scripts that can tear through a database and check to see if guidelines are being followed. Where guidelines are not being followed, the scripts will spit out a SQL statement to update the database so that it does follow guidelines. We’re looking at ways we can schedule them, and where we can include them in the process.

4. Smaller builds – we looked into the large builds that take over 15 minutes to complete. Do all the components in the build really need to be built on every check-in to version control? What steps are redundant? Can it be separated into smaller builds?
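As a rough illustration of the first step – a sketch only, with illustrative file paths and patterns rather than our actual rules – a PowerShell verification step on the build server might look like this:

# Minimal sketch of deployment script verification: scan the generated change
# script for phrases we know cause trouble in production, and stop the
# deployment if any are found. Path and patterns are illustrative.
$scriptPath  = 'C:\deployments\latest-change-script.sql'
$badPatterns = @(
    'DROP\s+TABLE',       # dropping a table outright risks data loss
    'TRUNCATE\s+TABLE',   # truncation can't be selectively undone
    'DROP\s+DATABASE'     # should never appear in a routine deployment
)

$violations = Select-String -Path $scriptPath -Pattern $badPatterns

if ($violations) {
    $violations | ForEach-Object { Write-Warning "Suspect change: $($_.Line.Trim())" }
    exit 1   # a non-zero exit code tells the build/release server to stop
}
Write-Output 'Deployment script passed verification.'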

While working on these ideas, we’re going to focus on getting more and more people on board with the idea of DevOps.

Conclusion

After that first meeting, where we discussed as a group what needed to change in order to be able to deploy to pre-production every 45 minutes, one of the DBAs stopped me in the hall and said he was all for DevOps, and that most of the DBAs were on board as well, but that we had to be careful not to ram this down their throats.

I couldn’t agree more. During the time I’d spent trying to push through these proposals, I’d learned a lot.

Firstly, change must occur on all sides of the DevOps equation. It can’t just be the operations people being forced into change.

Secondly, in any enterprise, change takes time. If you rush in proposing to remove this step and that manual check in order to reach your goal, expect a lot of push back, often for very valid reasons. You need to prove that your processes can be trusted.

Finally, while most of the time people are for the change, most of the time they are also very busy. Despite itching to get started on my proposed process changes, for the past six months our team’s focus has been solely on delivering a major new project to the business. Any other changes being proposed had to be postponed. It is a similar story for all of the other development teams in the organization, and each team has their own set of priorities.

That said, I’ve been very encouraged by much of the feedback I’ve received so far. People are very excited to get started on this collaboration, which is great and makes it feel much less of an uphill struggle.


So what’s the real state of database DevOps?

Insight

{ matt hilbert }

Good question. Redgate recently ran a State of Database DevOps survey among 1,000 software professionals. The survey included developers, Database Administrators, and those at management level, and over half of the companies involved employed more than 500 people. Three of the most surprising findings were:

• Within two years, 80% of companies will have adopted DevOps

• The biggest driver for including the database in DevOps is to increase the speed of delivery of database changes

• The key obstacles to implementing DevOps are a shortage of skills and a lack of alignment between development and operations teams

Those are big headlines in themselves, but the deeper you dive into the survey, the more fascinating it becomes, particularly for those who are wondering how on earth to bring the database into the DevOps fold. Some of the key questions – and answers – I found are as follows.

Does your team include any developers who work across both databases and applications?

Across every company surveyed, the answer was broadly the same: 75% of developers are responsible for both database and application development. So developers who are not already familiar with databases should be thinking about honing their SQL coding skills. 60% of developers also build database deployment scripts, and 39% are responsible for deploying database changes to production. In those companies which have already adopted DevOps, the numbers rise, with 74% of developers building scripts, and 45% also deploying them.

How frequently does your team deploy database changes?

Remember the biggest driver for including the database in DevOps being to increase the speed of delivery of database changes? Here’s why. 44% of those companies who practice database DevOps deploy database changes daily or, at the very least, more than once a week. Among those who have no plans to adopt DevOps, this falls to 28%. Now the reason here may be that they face less pressure to shorten the release cycle, but consider the answer to the next question before you make up your mind.

What’s the greatest drawback of traditional siloed database development?

The number one answer to this question was the increased risk of failed deployments or downtime when introducing changes. The next three drawbacks were just as telling: slow development and release cycles; the inability to respond to changing business requirements; and poor communication between development and operations teams. So in other words, companies know there are problems with traditional database development. That said, though, what’s holding them back?

What’s the biggest challenge integrating database changes into the DevOps process?

No gold stars to hand out here. It’s the same challenge that prompted the move to DevOps in the first place: problems synchronizing application and database changes, and overcoming the different approaches teams have to development. Once you introduce DevOps practices into application development, however, the inclusion of the database in the process becomes easier because many of the issues are part of the way to being resolved.

How long will it take you to introduce a fully automated process for deploying database changes?

This question, for me, provided the most telling answer of the whole survey. Of those who haven’t yet adopted DevOps, 43% say it will take longer than two years to apply the same practices to the database. Now that, perhaps, isn’t very surprising. What is a surprise is that 68% of those who already practice DevOps say it will take less than a year. Given the fact that this was a broad survey covering many different industry sectors and many company sizes, there is no bias in this result. The reason appears to be that once you start doing DevOps, it becomes a lot easier to extend it across all of your development and operations processes.


Bringing database DevOps to Visual Studio 2017

Product

{ matt hilbert }

The launch of Visual Studio 2017 on March 7 was a big event. Hundreds of thousands of people watched the virtual event online, there were 700,000 downloads in the first two days, and Microsoft delivered on its goal to support any developer, working on any app, on any platform.

There was a surprise waiting inside the Enterprise edition of Visual Studio 2017 too. As part of the Data storage and processing workload, Redgate Data Tools could be installed during the standard installation process. At no cost, users could start working with ReadyRoll Core, SQL Prompt Core, and SQL Search.

All of which begs the question: why?

It goes back to that goal of supporting any developer and any app. Microsoft has traditionally supported both application and database development with Visual Studio and SQL Server Management Studio (SSMS), but the lines are beginning to blur. The recent State of Database DevOps Report revealed that 75% of companies have developers in their team who work across both applications and databases.

The survey of 1,000 software professionals worldwide also revealed that 47% of companies have already adopted a DevOps approach to some or all of their projects – and a further 33% plan to adopt it during the next two years.

As part of this transformation, the development of database code is increasingly being seen as akin to the development of application code. It should be version controlled, tested and included alongside application code in processes like continuous integration and continuous delivery.

That’s an issue. Developers tend to have a favored IDE. They’re familiar with it, they work with it most of the time, and they’re highly productive. Ask them to switch to another IDE, even for a short time, and it breaks the flow. Those handy shortcuts aren’t there, it’s a different navigation experience, efficiency falls.

Redgate has been a leading Microsoft data platform tools vendor for 17 years and popular tools like SQL Prompt, SQL Compare, and SQL Source Control already integrate with and plug into SSMS, the IDE of choice for database developers.

By integrating Redgate Data Tools with Visual Studio, Microsoft now gives application developers the opportunity to switch from application code to database code using their favorite IDE. They can call on SQL Search to find SQL objects quickly, SQL Prompt Core to write SQL faster and more efficiently, and ReadyRoll Core to develop, source control, and safely automate deployments of database changes alongside application changes.

The result? DevOps practices can be extended to SQL Server and Azure SQL databases from inside Visual Studio, increasing the productivity of developers doing database development.

To make it as straightforward as possible to include the database in that DevOps transformation, ReadyRoll Core also works with existing build and release tools such as VSTS, TFS, Octopus Deploy, Jenkins, etc. So developers can stay inside their comfort zone, working with the tools they use every day, but widening that usage to the database.

Simon Galbraith, Redgate CEO, is enthusiastic about the partnership with Microsoft. “We’ve been developing software for the Microsoft data platform for 17 years,” he says, “and we’ve always had a good relationship. Shipping the Redgate Data Tools in Visual Studio Enterprise 2017 is a sign of how strong that relationship is – and also a recognition that database DevOps is now moving into the mainstream.”

Shawn Nandi, Senior Director for Cloud App Development and Data product marketing at Microsoft Corp., agrees: “Redgate tools are a great option for customers who work with SQL Server and Azure SQL databases because they plug into the software they already use for application development. By including three of these tools in Visual Studio Enterprise 2017, Redgate has made it easier for customers to implement DevOps practices when building, deploying and maintaining their databases. We think this is an important step forward to help companies extend DevOps practices to all parts of their applications.”

The inclusion of Redgate Data Tools in Visual Studio Enterprise 2017 creates a natural platform for both application and database development. One of the tools, SQL Search, is also included in the Community and Professional versions of Visual Studio 2017, so all users can benefit from increased productivity when working with SQL.


The why (and more importantly the how) of automated database deployment

Research

{ elizabeth ayer }

Reliability, traceability, speed: these are the top three motivators for automating the deployment of database changes, and bringing DevOps to the database. Yet there is no compromising on the level of scrutiny that may just prevent something disastrous happening, especially when it comes to production.

This need for quality has a strange effect. At Redgate we talked recently to 30 teams at various stages of automating database deployment, and we were surprised that for many teams the need for quality deployments prevented them from automating at first. After all, if something absolutely has to be right, you do it yourself. You tend to every step, you take your time, you use the full force of your experience, all the good things humans bring to a job, right? Moreover, production changes are a highly visible part of the Database Administrator (DBA) role; it’s easy to think that automation would make a statement to the rest of the organization that the role isn’t important.

Then, suddenly, something happens and those same people flip. Sometimes it’s a series of failed deployments, a recognition of human limitations, or maybe a new person joins the team who has worked in a better way. That same quality imperative now generates the opposite behavior, and the teams recognize a simple truth: if you love something, automate it.

This is not to say developers should go crazy with the database. There’s an analogy with the old saw that if you love something, you should set it free; but the opposite is true here. (After all, free-range databases have a way of turning feral.) A high quality process constraining the flow of change eases further development, rather than inhibiting it.

But where do you start? In our research calls, process changes were rolled out in three different ways:

Top-down

Some teams are (un?)lucky enough to have a top-down initiative to bring an army of wayward databases into line. In this case, a new system might be designed up-front, with all the components in place. This approach is favored where speed and efficiency can be tied directly to business value. Take one consumer-facing insurance website: new features and bug fixes were being delivered in an environment of intense competitive pressure. Speedy delivery into production was essential, so they worked hard up-front to bring the database into their continuous delivery process. If communication between developers and DBAs is poor, this may be the only approach that can get both sides working together to make the change.

From the ‘left’

For teams who bring in change incrementally, automation solutions lend themselves nicely to being rolled out in stages. First, developers start tracking database changes in source control, solving issues around visibility and conflict. Once they’re basking in the warm light of traceability, the teams start running changes through their continuous integration system, so that each database change triggers basic sanity checks. Just like continuous integration in application development, this gives fast feedback to developers about their changes. For each successful build, some teams push changes straight to an integration or test environment, or just make the changes available for others to pick up. Either approach, push or pull, can extend to downstream environments like UAT, Staging or Production, but in most situations, a release management tool such as Octopus Deploy formalizes the relationship between environments and gives visibility into what versions are where. It might sound like this approach is always driven by developers, but we’ve seen a surprising number of DBAs instigate the change. Or maybe it’s not that surprising; repeated rehearsals are going to surface most problems in test rather than production, increasing overall reliability.

Outside-in

Another pattern I’ve seen is to do automation by pincer movement, but I’ll be honest, I’m not sure this gives the best chance of a good outcome. In this case, automation comes in from both ends of the development lifecycle: development and production. The motivation for this is to ease the keenest pain points with tools or scripts, then let automation grow as everyone comes to trust the systems. The reason I’m not so sure about this is that, from the teams I’ve talked to, it’s easy to get stuck at this point. Sure, they address the biggest problems first – generally a good approach. The downside is that the chaos is compressed into integration and test, areas which are notoriously difficult to budget effort to fix.

Conclusion

For the teams we talked to, getting environments and the process into shape was much more challenging than the automation itself, but even so, we couldn’t find anyone who would accept a step backwards away from deployment automation. Even moving jobs, they would insist on applying their automation knowledge to new situations. Quite simply, putting the work into the deployment process, rather than individual deployments, means that all the love and care is repaid hundreds of times over.


DevOps – a DBA’s perspective

Viewpoint

{ paul ibison }

In a DevOps pipeline setup, the methodology is sometimes to treat the database as an isolated ‘black box’ and, in that way, deploy changes to it in the same manner as normal code. To achieve this, the process may look something like the following steps (a minimal sketch follows the list):

1. Compare the current state of the database to a baseline state, using a tool to create a comparison file. Some changes, like splitting a single column into two new columns, risk data loss, but tools like those from Redgate allow you to mitigate this. The tool may also compare reference data, depending on the requirements. Alongside this, scripts can be created using unit test frameworks like tSQLt to test the code elements of the database, such as stored procedures and functions.

2. The database comparison files are published with the other elements of the software release. These files should be checked into an artefact repository at this stage to ensure there is always a record of what was changed with that release, and the base schema and other database code should be in source control too.

3. The database package is then deployed automatically and updates the database, ideally using the same or similar deployment methods to the code.
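As a minimal sketch of this compare-and-script flow – using Microsoft’s sqlpackage as a stand-in for the comparison tool, since no exact command line is given above, and with all paths and names illustrative – the first step might look like this:

# Minimal sketch: compare a baseline (the build output) against a target
# database and generate an upgrade script, rather than applying changes
# directly. The generated script becomes the comparison file that is
# published with the release (step 2) and applied automatically (step 3).
$arguments = @(
    '/Action:Script'                          # generate a script; don't apply it
    '/SourceFile:C:\build\Database.dacpac'    # the baseline state from the build
    '/TargetServerName:staging-sql01'         # the database to be upgraded
    '/TargetDatabaseName:AppDb'
    '/OutputPath:C:\deployments\upgrade.sql'  # the upgrade script to publish
)
& sqlpackage @arguments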

In the ideal deployment pipeline scenario, it really is this straightforward with databases.

But what if the deployment pipeline scenario isn’t ideal?

I don’t want to trivialize the role of the DBA (especially as I’m one), but in the instance I’ve described, the DBA isn’t really involved in the deployment process in a significant way. This of course leaves them free to concentrate on the (very important) core operational tasks – backups, re-indexing, high-availability, security, optimization, troubleshooting, and so on.

Problems can arise when the database is dependent on other bits of what we can call ‘peripheral architecture’. These extra ‘bits’ could exist in the current database, another user database, one of the system databases, or even further afield. To avoid getting overwhelmed by all these variations, I like to separate this peripheral architecture into two classes.

Basic configurations are usually a single row in a system table; they’re quite easy to script, and they often come from system databases. The master database might include logins, server triggers, linked servers, configurations and encryption, while the msdb database will have the jobs, operators, alerts, messages, etc.

Complex configurations involve entries in many different system tables in system databases, and scripting them might involve a fair bit of work. Examples include the setup of replication, log-shipping, SSIS packages, availability groups, and so on.

One thing both configurations have in common is that having them in scope as far as the deployment pipeline is concerned means the database can no longer be treated as a simple black box.

When there is a requirement to address the needs of basic or complex configurations, we end up with a list which contains definite dependencies and potential dependencies for the deployment.

One example of a definite dependency is a build script for a new user, which requires the basic configuration of a login. This is solved by creating a script that can check for the login during the deployment and create it if it doesn’t exist.
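For instance – a minimal sketch, with an illustrative login name and no real password handling – the deployment pipeline could run something like this before the database changes go on, assuming the SqlServer module’s Invoke-Sqlcmd:

# Minimal sketch of resolving a 'definite dependency': create the login only
# if it doesn't already exist, so the deployment is safe to re-run.
$ensureLogin = @"
IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'AppServiceLogin')
BEGIN
    CREATE LOGIN [AppServiceLogin] WITH PASSWORD = N'placeholder-only';
END
"@
Invoke-Sqlcmd -ServerInstance 'staging-sql01' -Query $ensureLogin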

A potential dependency might be an SSIS package that needs updating, but we’re not sure.

At this stage, let me say that totally ignoring the list of potential dependencies often works. The build runs, deployments happen, and users are happy. That is, until it doesn’t work… Then questions are asked: why is this job that failed not also being updated and deployed? Why didn’t the SSIS package get updated during the build?

What we need is an automated build process which takes the finger-crossing out of the equation as much as we can. So how do we deal with all these configurations and dependencies, and have a build that always works, or at least gets us nearer to this goal? There are some guidelines we can follow:

Make a list of the configurations which are considered to be in scope

In particular, examine the list of potential dependencies and clarify which are truly relevant. Are there jobs which refer to user tables? Are the SSIS packages importing into our tables? Are the replicated tables the same as our deployment tables? Are we using SMTP? Are these linked servers being used?



Decide to what extent each environment should aim to have the same configurations

If there is replication in Production, then why not have it in Development also? Ideally, all environments should be identical so we can thoroughly test the build before deploying to Production, but there may be financial constraints which make this impossible. Clustering is an obvious case – often the cluster exists in the Pre-production and Production environments but not in Development, Test, and UAT due to the expense.

Make a recreate/update decision for each configuration

For a configuration like the database mail setup, this can be done quite nicely, but for some configurations it can be bad practice. We won’t want to lose the job history each time we recreate the jobs, for example. In other cases, it is simply impractical. I doubt anyone would agree to having to reinitialize all those replication subscribers or set up log shipping destinations as part of each deployment.

Then again, do we always recreate, or only create if it doesn’t exist and update if it does? It’s easiest to drop the extra architecture and recreate it each time we deploy. The scripting is easier this way and we ensure the configuration is identical.

Create deployment scripts for the configurations

These scripts will deploy all the peripheral architecture and will most likely be environment-specific. For example, linked servers in Development are unlikely to be the same as those in Production, logins are often set up differently in environments by design for security purposes, the SMTP profiles in Development shouldn’t send emails to Production support, and so on. Third party tools might help create some scripts here as a starting point.

Create test scripts for some of the complex configurations

One of the deployment steps checks that replication still works, that the SSIS packages work, that jobs still run, that emails still get sent, and so on. This is a validation – a functional smoke test / integration test – that nothing is broken. A minimal sketch of one such check follows.
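By way of illustration only – the job name and server are assumptions – a simple post-deployment check on a SQL Agent job might look like this:

# Minimal sketch of a post-deployment smoke test: confirm a SQL Agent job
# still succeeds after the release by checking its most recent outcome.
$jobQuery = @"
SELECT TOP (1) h.run_status        -- 1 = succeeded
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
WHERE j.name = N'NightlyETL'       -- illustrative job name
  AND h.step_id = 0                -- step 0 rows record the overall job outcome
ORDER BY h.instance_id DESC;
"@
$lastRun = Invoke-Sqlcmd -ServerInstance 'staging-sql01' -Query $jobQuery

if (-not $lastRun -or $lastRun.run_status -ne 1) {
    Write-Error 'Smoke test failed: NightlyETL has no recent successful run.'
    exit 1
}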

To summarize, there are some elements of a database that can be thought of as code and other elements that can be thought of as infrastructure. When we’re working with these elements, the complexity comes with preserving data and the need for a more holistic view of the database. While it isn’t simple, there are already well-established ways of managing all this and we just need to implement the process to put it in place.

The big picture

From a wider DevOps perspective, much of what I’ve discussed is related to the First Way of DevOps – Systems Thinking. That is, always looking at the holistic ‘Big Picture’. In this case, looking beyond the simple database-level solutions and seeking to provide a holistic solution that covers all areas of the database platform (replication, log shipping, SSIS packages, availability groups, clustering, mirroring, and so on).

Paul Ibison is a SQL Server DBA at DevOpsGuys www.devopsguys.com


Continuous Delivery – how do application and database development really compare?

Analysis

{ kate duggan }

Our recent State of Database DevOps survey showed that DevOps practices like continuous delivery are still far more commonplace in application development than in database development. So why is that – and why should teams care?

Building great software is never just about the code. It’s also about managing multiple teams, timelines, and frequently changing business or customer requirements. As organizations face increasing pressure to create and release software more rapidly, DevOps has emerged as an effective way to help teams collaborate and speed up the delivery cycle.

At the heart of the DevOps approach is the concept of continuous delivery. Coupled with the cultural change that DevOps requires, continuous delivery enables teams to work together to produce software in short cycles so they no longer need to rely on ‘big bang’ releases to provide value to customers. Instead, they have reliable and repeatable processes in place to provide a steadier stream of more frequent deployments, both increasing efficiency and reducing risk in the software release process.

So what could that mean for database teams? No more repetitive and tedious development and testing processes, no more crossing fingers and hoping for the best when deploying to production, no more pain from trying to roll back after a catastrophic failure when you have no idea what state the database was previously in. Sounds good, right?

Yet our recent State of Database DevOps report confirmed that continuous delivery practices like version control, continuous integration, and automated test and deployment are still far more commonplace for application development than they are for the database.


Only 20% of respondents practice continuous integration for the database, compared with 38% for the application, while 22% automate database deployments, again compared with a much higher figure of 38% when it comes to releasing applications. So there are still a lot of database professionals out there who aren’t yet seeing the advantages that continuous delivery can bring.

Why is that?It’s unlikely to be a love for the tradition of laborious, time-consuming, complicated, and error-prone database change management. It’s more probably down to the fact that database professionals are concerned about the protection of their organization’s data.

This was also evident in the results of our survey. ‘Preserving and protecting business critical data’ was cited as one of the top three challenges in integrating database changes into a DevOps process.

Unlike applications, databases contain state that needs to be managed and preserved as part of rolling out new or updating existing software. Yesterday’s database can’t simply be thrown away or overwritten when changes are deployed. To do so risks the loss of business-critical data.

So processes are needed to safeguard that data as well as the behavior of applications. Otherwise, database development can fall out of step with application development, leading to a fragmented workflow and getting in the way of the combined goal of delivering great software faster.

The ideal scenario is to have application and database development teams working in parallel and coordinating changes through the same processes. This increases the likelihood of releasing more stable, consistent builds. It also helps remove unnecessary delays to software delivery.

And the benefits can be enormous. By providing a common series of development, testing, and release processes for application and database development teams, organizations see much greater collaboration on application features and their database requirements as well as opportunities to continuously learn and improve the way software is delivered. These are cornerstones of the DevOps approach, contributing to improved quality over the course of the software lifecycle and reduced risk at the point of release.

The foundations are there for many already. More than half of the respondents to our survey were already versioning their database objects alongside their application code. By putting database changes into source control, teams benefit from a single source of truth on which they can base development and deployment processes. The rest can be built from there.


[Chart: Rates of adoption of DevOps practices]


Why 2017 is the year of database DevOps, and why database professionals should lead the charge

Viewpoint

{ simon galbraith }

At Redgate, we think DevOps is now sufficiently mainstream that it’s worth businesses thinking of moving to a database DevOps type of world.

One of the reasons is the 2016 State of DevOps Report from Puppet Labs. They interviewed 4,500 people about their experiences of DevOps, and the report provides strong evidence that you should now be taking DevOps seriously. Firstly, companies who use DevOps report a third of the rate of failure when making changes compared to those that don’t. That’s a really important benefit and enough on its own to justify adopting DevOps. Secondly, organizations which practice DevOps recover from an incorrect change 24 times faster. That’s also very compelling. Thirdly, people who follow DevOps report that it’s 2,500 times quicker getting changes to customers. That is an unbelievable saving for a business, especially when you think of the working capital tied up in changes that have yet to hit the market.

Those three things – the reduced risk of failure, the faster response time to failure, and getting added value into the hands of customers sooner – are seriously valuable advantages for a business. There’s another soft benefit, which is how people feel when they’re working for you. Teams which do DevOps find it’s easier to recruit good people, easier to keep them, and easier to make the team really successful. Apparently those teams also spend less time on unplanned work like resolving unexpected security issues.



I suspect some of you are thinking you’ve heard this before, but the evidence shows DevOps is now becoming the norm rather than the exception. 50% of businesses doing DevOps have more than 500 employees, it’s easily placed on a Microsoft or LAMP stack, and it’s not just for guys with ponytails and flip-flops any more.

The way we see Redgate fitting into this is by saying, on the Microsoft stack, we’re there to help you deliver the database side of DevOps. Whether you’re provisioning a working environment, building a database, releasing it, monitoring it, or backing it up, we offer the technology to make database DevOps possible and real.

But there’s a final point I’d like to make, which is that people working with databases are in a much better position than application developers were when they were adopting DevOps a couple of years ago.

Research from Dell, for example, shows that database development is already enormously cross-functional. In fact, there’s a 60% overlap between people working on both development and operations, even without a widespread adoption of DevOps. We know this from our customers too – when we survey people, we find the majority work in both Visual Studio and SQL Server Management Studio for database development and administration, and split their time about half and half between them. So really, you’re already doing a lot of this. Although the database is often described as the bit that’s hard to get working, it turns out that the really hard part of DevOps – collaboration, working across organizational divisions, the bit that most companies struggle the most with – is already there. So now the technology is coming through, we’re hopeful you’re going to decide to take DevOps to the database.


DevOps and Shift Left for Databases

Deep dive

{ stephanie herr }

The easiest way of explaining how a DevOps approach to database deployment can dramatically speed up the process is to use the term ‘Shift Left’. By performing, or at least preparing for, all the processes and tasks that are necessary for deployment as soon as possible in the development cycle, it is possible to move from big, infrequent releases to ‘little and often’. Stephanie Herr explains how shifting left can get deployment pipelines moving.

Database development increasingly involves a pipeline, or set of databases across your delivery environments, through which your code flows. We make changes to the database in the Development environment, such as altering a stored procedure or adding a new column to a table, save the changes in source control, and then deploy these changes through subsequent environments in the pipeline, such as Build, Integration, Staging and, ultimately, through to Production.

The idea of a pipeline is appropriate because a steady and continuous flow of changes through the environments constantly delivers value to users.

To achieve this, we need to start thinking earlier about database changes and the impact they’ll have. We need to shift left the database build, test, integration and release processes, performing them as early as possible in the application development cycle, so that we deliver database changes to each subsequent environment, frequently and reliably.

What processes need to be shifted left?

Too often database changes are integrated late in the application development lifecycle and the database build, integration, and deployment processes are right-sided – they’re devised and implemented close to the time when the changes need to be delivered to users, often without thorough testing. The Database Administrators (DBAs) and System Administrators are frequently left to deal with and repair the inevitable bugs, security vulnerabilities, and performance issues, with the delays in deployment costing organizations a lot of time and money.

In order to avoid this situation, we need to start deploying database changes more frequently – and this means we need to imple-ment them earlier, effectively shifting them left on the project timeline:

Database build – the team should be able to reconstruct any version of a database from scratch using the scripts in version control. Database integration – having established a working version of the database early in the development cycle, the team continue to verify regularly that it remains in a working state by integrating and testing all changes. Database testing – unit testing and perfor-mance testing will give the team confidence that what they are building will work and perform reliably. Database release – a reliable, repeatable, and auditable process is required to deploy changes to production, and practice on a production-like environment will ensure early feedback on potential deployment, security, configuration or other operational issues.

If we can start to shift these processes over to the left, we can identify and fix problems quickly, as well as test and refine the build and deployment processes as early as possible in the timeline. All of this will help to minimize nasty surprises when releasing to Production.

For many organizations, shifting left will be a long term process, but it can be tackled in small, manageable steps.

DevOps and Shift Left for Databases

The easiest way of explaining how a DevOps approach to database deploy-ment can dramatically speed up the process is to use the term ‘Shift Left’.

By performing, or at least preparing for, all the processes and tasks that are necessary for deployment as soon as possible in the development cycle, it is possible to move from big, infrequent releases to ‘little and often’. Stephanie Herr explains how shifting left can get deployment pipelines moving.

{ stephanie herr }

Deep dive

“For many organizations, shifting left will be a long term process, but it can be tackled in

small, manageable steps.”


Getting DBAs involved earlier

A key concept in DevOps is breaking down barriers between roles and working together towards a common goal to deliver valuable, reliable software to users quickly.

So, ask DBAs to help with database changes when they are happening in Development. DBAs can make sure schema changes are designed correctly from the beginning, for example. They can get involved in complex queries to make sure they’re optimized and will work well on larger, more realistic data sets. They can think about what indexes may be needed. They can see the bigger picture and know what other systems may be impacted by the change. They can start to think about servers, network capacity, backups and high availability.
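To make that concrete, here is the kind of quick check a DBA might run while reviewing changes in Development. It is a sketch only, using SQL Server’s missing-index DMVs; the suggestions it returns need a DBA’s judgment rather than blind application:

    -- A sketch: ask SQL Server which indexes its optimizer wished it had.
    -- The DMVs reset on instance restart, so treat results as hints, not orders.
    SELECT TOP (10)
        d.[statement]          AS table_name,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_user_impact      -- estimated % cost reduction if the index existed
    FROM sys.dm_db_missing_index_group_stats AS s
    JOIN sys.dm_db_missing_index_groups AS g
        ON g.index_group_handle = s.group_handle
    JOIN sys.dm_db_missing_index_details AS d
        ON d.index_handle = g.index_handle
    ORDER BY s.user_seeks * s.avg_user_impact DESC;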

This also allows DBAs to understand what changes are being made to the database, which can be extremely helpful when there’s a problem with Production.

Version control your database

Introducing database version control is the first step on the shift left journey because the Version Control System (VCS) becomes a single source of truth for any database version. If your database is not in version control, then it will be very difficult to perform any subsequent build, integration, test, and deployment processes frequently or in a repeatable and automated fashion.

Without database version control, a lot of teams rely on production database backups to get the latest version of a database, or to revert an environment to a previous database version, from an older backup, if needed. Restoring production backups can be time-consuming for big databases. If the database contains sensitive data, the DBA may need to remove or sanitize it, requiring more time and effort. Also, what happens if database development takes a wrong turn and the team need to revert to a previous development database version? Are the development databases being backed up? If so, how frequently?

By contrast, with the database under version control, the development team can reconstruct any version of a database, and migrate from one version to another, just from the source-controlled database object scripts. In this way, version control gives the team a safety net. They are less afraid to make database changes, because they know they can revert and undo a mistake easily.

Everything related to the application project should be in version control: the code, configurations, documentation, infrastructure, and of course the database. For the database, teams need to version control database creation scripts, configuration scripts, schema objects (tables, stored procedures, views, and so on), reference data, and migration scripts for deployments. In short, everything needed to recreate a working version of the database.
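As an illustration, a guarded migration script checked into version control might look something like this – the table, column, and constraint names are invented for the example:

    -- Hypothetical migration script as stored in version control.
    -- Guarded so it can be re-run safely against an environment
    -- that has already received the change.
    IF NOT EXISTS (
        SELECT 1 FROM sys.columns
        WHERE object_id = OBJECT_ID(N'dbo.Customer')
          AND name = N'LoyaltyTier'
    )
    BEGIN
        ALTER TABLE dbo.Customer
            ADD LoyaltyTier tinyint NOT NULL
                CONSTRAINT DF_Customer_LoyaltyTier DEFAULT (0);
    END;
    GO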

Having achieved this, every database change can be seen and understood. This makes it easier and quicker for busy DBAs to understand and review database changes, gives them more confidence in the changes when it comes time to release to Production, and reduces the amount of time they spend trying to supply development and test environments with a new version of a database.

A good working set-up for many development teams is for each developer to work on their own ‘sandbox database’, testing and committing database changes to a local source control repository. This also creates an additional step early on where developers will need to deploy their changes from their sandbox to an integration or test server, which tests a deployment of their changes as early as possible.



Shift left to include your database in automated testing

If the database is built and deployed on each change, assuming the unit tests pass, then automated integration tests can run across both the application and the database, ensuring the tests are working against the latest version of each. Without this, integration tests may mock out the database, which isn’t testing the entire system, leaving this to be done at a later stage (further to the right).

We can think about performance testing or load testing sooner too. For upgrades to existing databases, you might be able to use a copy of the production data, a data generation tool, or simply import a standard data set. To test the performance of a new database, you’ll need some way to estimate how much data you’re likely to collect in a set period. This means you can test against that sort of volume of data and make sure performance isn’t going to deteriorate.
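As a minimal sketch of generating that sort of volume without a dedicated tool – the dbo.Orders table and the one-million-row target are invented for the example:

    -- Generate a million plausible-looking rows from a numbers CTE.
    -- A proper data generation tool will give more realistic distributions.
    WITH n AS (
        SELECT TOP (1000000)
            ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS i
        FROM sys.all_objects AS a
        CROSS JOIN sys.all_objects AS b
    )
    INSERT INTO dbo.Orders (CustomerId, OrderDate, Amount)
    SELECT
        (i % 50000) + 1,                             -- spread across 50,000 customers
        DATEADD(DAY, -(i % 730), SYSUTCDATETIME()),  -- orders over the last two years
        CAST((i % 900) + 100 AS decimal(10,2))       -- amounts from 100 to 999
    FROM n;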

Shift left to include your database in continuous integration processes

As soon as a developer commits application changes to version control, a continuous integration (CI) server will typically build the application and run automated unit tests to see if anything is broken. This gives teams a fast feedback loop so problems are found quickly and can be fixed more easily. If the integration fails, then what was changed and why is fresh in the developer’s mind, and they can fix it more easily, or revert the change, if needed. Even more important is that the release of the changes through the pipeline stops: because the CI build failed, the release never reaches Production, where it would have negatively impacted your users.

Similarly, we need to achieve a fast feedback loop as database changes are made to find any problems, and to prevent breaking database changes from ever reaching production, where they can cause problems for users or for the production data, or both. In short, we need the database to be included in the CI process.

Developers can run database unit tests locally on their changes. If the tests pass, they commit the change to the VCS, and add their tests to the broader suite of tests for the application so that the database changes and database unit tests are also included as part of the CI process.
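On SQL Server, the open-source tSQLt framework (which underpins Redgate’s SQL Test) is one common choice for those unit tests. A test might look something like this – the dbo.GetOrderTotal function and dbo.OrderLine table are invented for the example:

    -- A hypothetical tSQLt unit test. FakeTable isolates the test from real data,
    -- so it behaves the same on a developer's sandbox and on the CI server.
    EXEC tSQLt.NewTestClass 'OrderTests';
    GO
    CREATE PROCEDURE OrderTests.[test GetOrderTotal sums line amounts]
    AS
    BEGIN
        EXEC tSQLt.FakeTable 'dbo.OrderLine';

        INSERT INTO dbo.OrderLine (OrderId, Amount)
        VALUES (1, 10.00), (1, 2.50);

        DECLARE @expected decimal(10,2) = 12.50;
        DECLARE @actual   decimal(10,2) = dbo.GetOrderTotal(1);

        EXEC tSQLt.AssertEquals @expected, @actual;
    END;
    GO
    -- Run locally before committing:
    EXEC tSQLt.Run 'OrderTests';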

Shift left to include your database in the build process

Without the database in version control, it’s actually rather hard to ‘build’ a database from scratch, at the required version. What tends to happen instead is that the development team try to come up with a list of just the database changes that are needed to support the new version, generate a set of change scripts, and run them against the database in the test environment. Inevitably scripts will be missed, adapted on the fly to fix issues, and so on. It’s messy and uncontrolled and makes repeatable builds hard to achieve.

Working from the VCS, however, the build process can create a database at any particular version. It will create a new database, create its objects, and load any reference data, checking that the SQL syntax is valid and dependencies are in order.
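A build at a given version can also run simple sanity checks. As a sketch, this query asks SQL Server’s dependency metadata for references that failed to bind, which often points to a script that was missed or run out of order:

    -- List object references that no longer resolve to anything.
    -- (referenced_id is also NULL for cross-database references,
    -- so some results may need filtering in practice.)
    SELECT
        OBJECT_NAME(d.referencing_id) AS referencing_object,
        d.referenced_entity_name      AS unresolved_reference
    FROM sys.sql_expression_dependencies AS d
    WHERE d.referenced_id IS NULL
      AND d.is_ambiguous = 0;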

As part of shift left, the team needs to start perfecting the art of the database build as early as possible in the pipeline, as it in turn enables early integration and testing, so that problems can be found and fixed more easily, long before those changes hit production.



Shift left to practice database deployments before deploying to Production

Practice makes perfect. We want to practice our deployments as much as possible and have a repeatable, reliable process to reduce the risk when deploying to Production, so we don’t cause any costly downtime or upset users. By deploying as early as possible to a production-like environment, we can perform early acceptance testing, and verify that the security, capacity, availability and performance characteristics of the production system are acceptable to the business or user.

The first step is to introduce a temporary staging database into the pipeline, or ideally a staging environment that is configured as closely as possible to mimic the production environment. This staging environment must be under the same security regime as Production.

This way, we are testing our deployment script before it runs on Production, which gives us more confidence that the Production deployment will go smoothly. This could be done by restoring the latest Production backup on another server or doing some kind of database copy. We then apply to the Staging database the exact deployment script that will subsequently be applied to the Production database.
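One plausible shape for that refresh-and-rehearse step, sketched in T-SQL – the database, logical file names and paths are all invented for the example, and sensitive data may need masking before the restored copy is used more widely:

    -- Refresh Staging from the latest Production backup, then rehearse
    -- the exact deployment script destined for Production.
    RESTORE DATABASE StagingDB
    FROM DISK = N'\\backups\Production\ProductionDB_Full.bak'
    WITH
        MOVE N'ProductionDB_Data' TO N'S:\Data\StagingDB.mdf',
        MOVE N'ProductionDB_Log'  TO N'S:\Logs\StagingDB.ldf',
        REPLACE,
        STATS = 10;
    GO
    -- Then, in sqlcmd mode, apply the same script that Production will receive:
    -- :r .\Deploy\Release.sql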

Summary

This may seem like a lot, but you don’t have to make all these changes right away. First, start encouraging developers and DBAs to collaborate earlier so database changes can be done right the first time and DBAs aren’t surprised by last-minute production deployments.

Next, get your database in version control so developers and DBAs can understand the database changes that are happening. Version controlling your database is also the first step if you want to start doing continuous integration, automated testing, and/or automated deployments.

Finally, setting up a Staging environment to test deployments in a production-like environment before going to Production is extremely important to reduce risk and protect your production data and your users from any problems.



Review

After hearing that anyone who wants to know what DevOps really means should read ‘The Phoenix Project’, Claire Brooking picked up a copy of the novel. This parable of an IT project on the brink of destruction is told with humor and insight. Claire reviews the book, finding that conflict, incidents and mistakes are inevitable – what counts is how the team members grow to manage and resolve them.

{ claire brooking }

I’ve spoken with others about ‘The Phoenix Project’ and the way it puts the DevOps philosophy into sharp definition. Yes, it’s a novel about this and other IT themes – and for some this seems to raise an eyebrow at first – but those I’ve spoken with agree that its status as a novel doesn’t translate into ‘DevOps lite’. W. Edwards Deming’s manufacturing philosophies, in addition to Lean and Agile development methodologies, feature heavily – concepts that sit at the roots of the DevOps philosophy.

In essence, the authors of the novel – Kevin Behr, Gene Kim, and George Spafford – have written a DevOps parable, and strive to make it recognizable at every turn of the page. Indeed, having all worked in organizations of varying sizes, maturity and sectors, they’ve also taken pains to place characters in an easily identifiable setting, and embrace the human elements of the story. So, quite apart from the technical challenges (such as the hauntingly familiar server outages and lagging database performance), you’ll find certain home truths: donuts sweeten meetings and pizza sustains chaotic all-nighters. The evolution Bill and his team undergo is pretty amazing – but then this is also DevOps in fast motion. Like many novels, it takes the reader on an unrelenting journey with the underdog, Bill, as he is thrown into battle, meets a (sort of) wizard also known as Erik, and finally triumphs with the organization at his side.

A brief synopsis

The synopsis is simple. Bill, the protagonist, is the Director of Midrange Operations at Parts Unlimited, a US-based $4 billion per year manufacturing and retail company. Bill is swiftly pulled into the spotlight by the CEO and hoodwinked into taking up the post of VP of IT Operations. It soon becomes clear that among standard responsibilities, Bill and his team are also responsible for making the launch of the doomed Project Phoenix a success. Project Phoenix not only seems to be hugely over-scoped for its ambitious – and imminent – timelines, but it also faces enormous pressure elsewhere.

This is mainly because the future of the company appears to hang almost entirely on Project Phoenix. The political storm brewing behind the project, led by Sarah the marketing manager, adds further tension. Faced with the conflict of Project Phoenix requirements against firefighting and competing projects, Bill and his team enter a spiral of problems, mistakes, gloom and despair. Thanks to a growing insight into Lean, Agile and DevOps concepts, Bill and his team gradually evolve their way of working. By the time Project Unicorn arrives later in the story, they’re able to release and support the project reliably and efficiently, and emerge from the other side with strengthened morale.

IT becomes an organization-wide capability

At both the detailed level of insight into leaner, more efficient ways of working and at the wider level of an organization evolving its approach towards IT, I believe everyone with an IT requirement has something to gain from reading ‘The Phoenix Project’. It’s true that there’s a fair amount of technical detail on the challenges that the IT team face, which may be daunting for some business team readers. It’s also true that the novel describes a big adjustment in behavior and ways of working between the development and operations teams specifically, and it’s this that lays many of the foundations for better collaboration.

But it’s when both the IT and business arms of Parts Unlimited reconsider their biases, share responsibility and better understand each other’s challenges (yes, that also means the business teams working harder to understand the technical risks facing the organization) that the cogs become better oiled. That’s not in any way to suggest that sales, marketing, finance, and other business teams are somehow the secret sauce in the success of IT. They’re not. The point is that in the book we see IT growing from an isolated function to an integral capability across the organization, and everyone has a shared part to play in this.

Overturning preconceptions

The setting, the IT challenges and the characters at times feel caricatured, but to dwell too long on the realism of this journey and its existence as a novel risks missing two of the book’s other key points – that this is a parable, and one with an enterprising spirit at its heart. Via the medium of fiction based on reality, it unconventionally presents DevOps concepts, and it feels like it also wants to dare the reader to be open-minded. Being a fictionalized depiction, it’s easy for the book to ask you to consider turning accepted practices and attitudes on their head in the search for progress.

Evolution rather than transformation

In the movement of IT from an isolated function to a central capability – both metaphorically and physically – almost all teams are challenged to look beyond their department walls.

This isn’t just about the dev and ops teams turning around an unhealthy relationship that’s damaged by (a) what Jez Humble would perhaps describe as ‘risk management theatre’ and (b) throwing, as one character describes it, “the pig over the wall” for the next team to tackle. This is also about an evolution of attitudes and approaches across the business.

In the book, Bill tries a range of processes to bring his dev and ops teams closer together and, as these efforts bring results, his and his teams’ focus starts to shift to the rest of the organization. In fact, one of the book’s surprises is the transformation of the Chief Information Security Officer, John, outcast and ‘political commissar’, into a team member who is instrumental in bringing the business teams and organization objectives closer to IT. As Deming would describe it, John is essential in helping the various teams gain an appreciation for ‘the system’ that IT works within.

DevOps across the organization

For me, John’s series of actions at his point of transformation also raises one of the most important practical examples in the book of DevOps reaching throughout the organization.

Bill and John meet with managers in the business teams, from sales to finance, to understand performance measures across the organization, how they rely on IT, and the business risks IT brings for each measure. At first this approach feels worryingly like we’re headed down a one-way street. Several business team members, for example, are antagonistic towards the open approach Bill and his IT co-workers encourage.

Yet by the end of the book Maggie, part of the business team in charge of promotions and pricing roadmaps, is attending stand-ups for the IT project linked to her objectives. She’s even demoing results to the team in the kind of cross-functional approach to projects that really sings when everyone has a shared goal, and there’s a sense of energy in the team again.

Early feedback loop

This approach of shared responsibility and ensuring projects are clearly linked with the organization’s objectives also highlights another interesting DevOps message in the book – that an early feedback loop in the lifecycle, involving all stakeholders and project members, feeds efficiency and quality.

As Bill and his team adopt a more Agile approach to prioritizing their backlog of projects and gathering requirements, they avoid the shifting goalposts of requirements and timelines that they suffer earlier in the story when releasing Project Phoenix.

Automate (what you can) and review

By the time Project Unicorn arrives, Bill and his team are ready to run shorter, iterative development lifecycles that mean they can identify issues earlier, get results out to the business, and move work off the line sooner. They also set in place a series of automated practices to standardize deployment processes and build up a release pipeline that eliminates the majority of variance that caused the teams grief previously. Cycle times shorten and any fixes can be developed, tested, reviewed and released much quicker than they were with Project Phoenix.

Quite beyond being a series of examples for how to run an IT project with Agile and DevOps practices, this is also a lesson generally in PDCA (Plan, Do, Check, Act). Essentially, larger projects run better when:

• They’re broken down into manageable chunks
• We make the effort to check back with others across different teams
• We check results
• We make adjustments to the project as needed
• We learn how to standardize (even automate processes for repeatability and greater accuracy if possible), improve for next time, and communicate that experience

Avoiding isolation

This willingness to open up a project much earlier for feedback touches on another example of DevOps in the book that also carries parallels for the wider organization. In the closing chapters of the book, John’s security team works closely with the development team, integrating security tests into the build procedures, testing and reviewing them earlier in the cycle. As a result, we see a radical shift in approach from earlier chapters, when John and his team were viewed as a disruption to be avoided.

As John and his team make efforts to integrate with other teams, they even start to use the same testing process that the QA team uses. As the security and QA teams jointly open themselves up to the possibility of exposing errors, they gain a greater ability to resolve any issues found head-on. John no longer needs to parachute in at the last minute with his dreaded black ring binder of problems to register project shortcomings. Fundamentally, John’s team are no longer ‘special’ or exempt from agreed practices, nor are they isolated and overspecialized in their own departmental universe.

Concluding thoughts

At its most raw, this book can easily feel like a lesson in humility. It’s not the case that, by the end of the book, DevOps and Agile ways of working have made all the challenges facing Parts Unlimited magically evaporate. Conflict, incidents and mistakes are inevitable – what counts is how Bill and other team members grow to manage and resolve them. By the close, Bill, his team, and Parts Unlimited have structure, process, and a more open attitude to change and adaptation to stand them in good stead.

At its most practical, ‘The Phoenix Project’ is an illustrative series of examples and suggestions for ways to evolve IT from a function that’s viewed as a bottleneck to one that’s widely agreed to be an indispensable capability. And at both levels, the novel makes it clear – DevOps includes the wider organization, and the wider organization can learn a lot from DevOps.


Thanks to Redgate, you have the best tools. Now make the most of them.

We help teams deploy with confidence by automating infrastructure, code, and database delivery.

Contact us to go from “How do I use this?” to “Let’s do this!” faster.

Educate Accelerate Transform

www.nebbiatech.com | [email protected] | 407.930.2400

We’re Hiring!

We Transform and Accelerate the Way That Organisations Deliver Software

Educate | Automate | Transform

0800 368 7378

@DevOpsGuys

[email protected]

www.devopsguys.com



Partners

We do DevOps for databases

“DevOps is the union of people, process, and products to enable continuous delivery of value to our end users.” – Donovan Brown, Principal DevOps Program Manager, Microsoft

More and more companies and organizations are exploring DevOps. But while you have the people, and Redgate and Microsoft have the products, there is often a gap in the process. Step in Redgate’s Certified Partners. We have a growing number of consultants across the globe who can help you with the installation, customization and training of our products to get you on the road to database DevOps.

Europe

Architecture & Performance, Lyon – [email protected]
Axians IT Solutions GmbH, Ulm – [email protected]
DataMovements, London – [email protected]
DevOpsGuys, Cardiff – [email protected]
DLM Consultants, Cambridge – [email protected]
Drost Consulting, Amsterdam – [email protected]
GDS Business Intelligence GmbH, Essen – [email protected]
Jarrin Consultancy, Newcastle-upon-Tyne – [email protected]
xTEN, Leeds – [email protected]

USA

Centino Systems, Oxford – [email protected]
Crafting Bytes, San Diego – [email protected]
Iowa Computer Gurus, West Des Moines – [email protected]
Nebbia Technology, Orlando – [email protected]
Northwest Cadence, Bellevue – [email protected]

Asia

Human Interactive Technology Inc., Tokyo – [email protected]
SQL Maestros, Bangalore – [email protected]

If you’d like advice on which partner is best able to help you, talk to us. We know them, we’ve trained them, and we can recommend which of our experts is the most suitable for your particular requirements. Contact [email protected].

If you’re interested in becoming a Redgate Certified Partner, we’d love to hear from you. Visit www.red-gate.com/consultant for more details.

What did you do that was wonderful today?

We’re looking for Account Executives in Pasadena CA and Austin TX.

We’re looking for Software Engineers and UX Designers in Cambridge UK.

red-gate.com/careers