Page 1
Container through the ages
Christoph Glaubitz, Cloud Architect at SysEleven GmbH
ContainerDays Hamburg, 28.06.2016
Page 2
WHAT CAN YOU EXPECT FROM THIS TALK?
A little bit of history on containers
How we used containers in the past
What we did to manage our infrastructure
Page 3
What is a container?
https://www.flickr.com/photos/sewm/
Page 4
Simplified: Each container gets its own runtime environment, but all share the same kernel.
Page 5
With the advantage of just running one kernel.
Page 6
But the disadvantage of just running one kernel.
Page 7
Same kernel version for all the containers.
Page 8
It is not possible to load different kernel modules for individual containers.
Page 9
The OS type in containers must be the same as on the host.
(Which is not true any more!)
Page 10
The simplest form of containerization is a traditional chroot environment. Introduced in 1979.
Page 11
Which isolates runtime environments, but does not isolate process space and devices, and does nothing about resource control.
Page 12
In 2000, FreeBSD jails were introduced. chroot was extended with further isolation mechanisms, like process namespaces. Processes in one jail can only see processes in the same jail.
Page 13
Around that time, Virtuozzo started with containers on Linux.
Page 14
Without much popularity at first, similar concepts landed in mainline Linux over the years.
Page 15
Namespaces limit the things processes can see.
Page 16
cgroup
IPC (2.6.30)
Network (2.6.24)
Mount (2.4.19)
PID
User
UTS
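A quick way to see these namespaces in action on any Linux box is the proc filesystem; a minimal sketch:

```shell
# Every process belongs to exactly one instance of each namespace type.
# The kernel exposes them as symlinks under /proc/<pid>/ns; two processes
# share a namespace if and only if the symlinks point to the same inode.
ls /proc/self/ns
readlink /proc/self/ns/uts   # prints something like uts:[4026531838]
```

Containers work by giving a process tree fresh instances of (some of) these namespaces, so its symlinks point elsewhere than the host's.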
Page 17
cgroups limit the resources a process can use.
Page 18
cpu
memory
devices
blkio
[…]
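cgroup membership is likewise visible from userspace; a small sketch (paths differ between cgroup v1, where each controller has its own hierarchy, and cgroup v2, which has a single unified one):

```shell
# Each process's cgroup membership is listed in /proc/self/cgroup.
cat /proc/self/cgroup
# The controllers themselves are mounted under /sys/fs/cgroup; on v1 you
# find per-controller trees (cpu, memory, blkio, ...), on v2 a single tree
# with files like memory.max and cpu.max.
ls /sys/fs/cgroup
```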
Page 19
My first contact with containers was around 2007
https://en.wikipedia.org/wiki/Star_Trek:_First_Contact#/media/File:Star_Trek_08-poster.png
Page 20
When I worked in the Tivoli Storage Manager team at IBM, where we had to support AIX WPARs and Solaris Zones.
Page 21
The ideas are still the same, even in the age of application containers.
Use the same kernel.
Isolate environments as strongly as possible using the mentioned namespaces and cgroups.
Page 22
WHY DID CONTAINERS GET SO MUCH TRACTION IN THE PAST FEW YEARS?
Page 23
I think it is the easy way to create and deploy images.
https://www.flickr.com/photos/smemon/
Page 24
We can run the image (almost) everywhere!
Page 25
… in a chroot without isolation and resource control, using rkt fly by CoreOS
Page 26
… in a small KVM (with an injected kernel), again using rkt, thanks to CoreOS and Intel
Page 27
… on FreeBSD using JetPack
Page 28
… even on Windows!
Page 29
And all this without any modification to the image!
Page 30
It changed the way we think about software.
Page 31
One application in one container.
Page 32
Only one process where possible.
Page 33
Applications being stateless…
Page 34
… and more fault tolerant.
Page 35
Smaller chunks of source code doing one thing…
Page 36
… which can be deployed independently and often.
Page 37
BUT THERE WILL BE DRAGONS!
Page 38
Unfortunately it is also very easy to build crappy images!!
https://www.flickr.com/photos/57402879@N00/
Page 39
In my experience, deployments fail more often because of misconfigured next hops (like the DB) than because of crappy software.
Page 40
Plugging services together is even more difficult in a floating environment.
Page 41
More management software is needed!
Page 42
I strongly encourage you to read up on this.
https://charity.wtf/2016/05/31/wtf-is-operations-serverless/
https://charity.wtf/2016/05/31/operational-best-practices-serverless/
Page 43
"Microservices: because solving business problems is hard but building loosely coupled fault-tolerant distributed systems is easy."
https://twitter.com/neil_conway/status/743086761493008384
Page 44
SO… WHAT ARE YOU GUYS AT SYSELEVEN DOING?
Page 45
RUNNING TWO TYPES OF INFRASTRUCTURE!
Page 46
A NEW ONE, BASED ON OPENSTACK…
Page 47
… AN OLD ONE, BASED ON PARALLELS VIRTUOZZO…
Page 48
… but most important: provide, maintain, and monitor the infrastructure into which our customers deploy their code.
Page 50
1. The hardware and hypervisor "layer".
Page 51
2. The running Virtual Environments/containers, with all the services in them.
Page 52
LET'S STEP INTO THE VIRTUOZZO PLATFORM
Page 53
In the past, Virtuozzo was basically
an image format
local storage
network isolation
a Linux-based container runtime (but maintained outside of mainline)
Page 54
… treating containers like ordinary virtual machines
with a complete Linux running inside, starting with the init system (sure, except for the kernel)
the target distribution has to be supported by Virtuozzo to set up the network, hostname, etc.
this is extensible via shell scripts
Page 55
But with a huge performance benefit over VMs
https://www.flickr.com/photos/17612257@N00/
Page 56
SOME MORE HISTORY
Page 57
From the early days, we have had decent trending and alerting for all the services running in the containers.
Page 58
We even built a web frontend that gathered data from multiple Nagios instances to give us a single overview.
Page 59
… and an AdminTV with relevant graphs
Page 60
We run a pool of hardware nodes…
… on which we schedule the containers of all customers.
https://www.flickr.com/photos/prinsotel/
Page 61
In the beginning, we tracked the containers and hardware nodes in some kind of spreadsheet.
Page 62
Creating a new container was like…
Page 63
Having a look into the sheet and selecting a host that does not run the same kind of instance for this customer
Page 64
Having a look at the metrics to verify the host still has free resources
Page 65
ssh into the host and create the container, using the Virtuozzo tools
Page 66
Assigning an IP and a hostname like customer.project.app1
Page 67
Entering the container and installing stuff by hand
Page 68
Registering the new container and its services with the monitoring by hand
Page 69
… we developed some tools to automate most of this stuff
Page 70
… with a recommender system that returns the best hardware node for the requested container.
https://www.flickr.com/photos/jm3/
Page 71
Hostnames are partly calculated. customer.project is pre-filled, but the name app2 has to be selected by hand.
Page 72
We came up with a tagging system to be a bit more flexible.
Page 73
But we still totally lack a customer facing API.
Page 74
Everything works via phone and ticketing system.
https://www.flickr.com/photos/nekudo/
Page 75
Over time, we built billing, a local DNS-API, SSL-Cert"shop", some own metering-endpoints, Backups…
Page 76
At some point, Configuration Management became cool
Page 77
… and we added a Puppet Master to our infrastructure and registered all the containers with it. Based on this, we automated a lot of things, like provisioning basics to the containers and registering services with the monitoring.
Page 78
We wrote a lot of glue code.
https://www.flickr.com/photos/samcatchesides/
Page 79
But we still treat the containers like computers.
Page 81
THESE DAYS WE ARE RUNNING
Page 82
HOW DOES A TRADITIONAL SETUP LOOK
Page 83
MySQL master/slave setup
Admin server
a bunch of app servers
maybe some kind of search engine
Page 84
We start the containers, using our internal toolchain.
Page 85
This also triggers Puppet to install the required software in the container, set up the configuration, and register it with our monitoring.
Page 86
We hand over exactly this container to the customer.
https://www.flickr.com/photos/lindzgraham/
Page 87
Customers deploy their software to the admin server. From there it is rsynced to the app servers.
https://www.flickr.com/photos/auxesis/
Page 88
Same thing for dev or test setups.
https://www.flickr.com/photos/christianjann/
Page 89
All this worked very well for a long period of time!
Page 90
But it is very inflexible!
Page 91
Installation of containers is strongly coupled to our infrastructure.
Page 92
So changes to the infra hurt.
Page 93
We have startup dependencies rather than build dependencies.
Page 94
Don't get me wrong!
Page 95
You can build such problems in the shiny new container world as well!
Page 96
Often a new container is just a clone of an old one!
https://www.flickr.com/photos/arenamontanus/
Page 97
With things like hostname and monitoring stuff changed
Page 98
There is
in it.
https://www.flickr.com/photos/trevorandmarjee/
Page 99
So we keep the containers alive, no matter what happens. In the worst case, we clone one back from the nightly backup.
https://www.flickr.com/photos/vagawi/
Page 100
Updates have to go to all containers…
https://www.flickr.com/photos/bovinity/
Page 101
… rather than building one new image and replacing old containers.
https://www.flickr.com/photos/nicowa/
Page 102
Because customers' software is not packaged in any way…
Page 103
… there is no overview of the versions of the deployed software.
Page 104
Rollbacks to a defined set of installed software and deployed software are nearly impossible.
https://www.flickr.com/photos/thejesse/
Page 105
Anyway.
This again worked fine for a very long time…
Page 106
But these days we need a public facing API
Page 107
We didn't have the manpower to develop our own…
Page 108
… and would have to replace too much of the inner core.
Page 109
And we did not want to build another proprietary
again!
https://www.flickr.com/photos/simuh/
Page 110
So we came up with OpenStack, and created…
Page 112
We run managed setups on SysEleven Stack.
Page 113
Some customers choose to run production fully managed, and dev on self-managed instances.
Page 114
The core concept of running the managed setups is pretty much the same as before.
https://www.flickr.com/photos/yoroy/
Page 115
But with lessons learned from the old platform!
https://www.flickr.com/photos/pictoquotes/
Page 116
The setups are flexible, and we can also run them on AWS.
Page 117
It is based on Ansible, which creates VMs using the respective API and runs the deployment in them.
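Such a step could be sketched as an Ansible task against the 2016-era OpenStack modules; `os_server` is a real module name, but all names and values below are illustrative, not taken from the talk:

```yaml
# Sketch: create an OpenStack VM for a managed setup.
# All names and values are made up for illustration.
- name: Create an app server
  os_server:
    name: customer-project-app1
    image: ubuntu-16.04
    flavor: m1.small
    network: customer-net
    key_name: deploy-key
    state: present
```

The same playbook structure works against AWS by swapping the module, which is what makes the setups portable across clouds.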
Page 118
We enable customers to do the relevant tasks on their own and get the full power out of the cloud.
Page 119
Despite this…
Page 120
… in the past half year, one question kept coming up…
Page 122
Basically, you can run any OS you like. Except Windows, for now.
Page 124
Just install docker into the VM.
Page 125
Sure. The containers run in VMs. But IMHO there is no container vs. VM; VMs are just one possible infrastructure for containers.
Page 126
Are there future plans to provide managed container services?
Page 128
In a first step, we will help you to get the orchestration layer up and running.
Page 129
We help you with setting up monitoring.
Page 130
We already know what containers are. We know the tools and how to debug in the container world.
Page 131
We will help you to get your deployment chains to work.
Page 132
But that is not "managed container services" at all!
Page 133
We think about providing managed orchestration layers.
Page 135
… or Docker Swarm
Page 136
We would manage and monitor all the services of the conductor.
Page 137
We may manage additional services like a GitLab, Jenkins, or a private image registry.
Page 138
The customer is still responsible for all the container workloads.
Page 139
Maybe in a far future…
https://www.flickr.com/photos/79909830@N04/
Page 140
We could provide managed services running in containers.
Page 141
But there are many things to think about!
https://www.flickr.com/photos/dharmabum1964/
Page 142
SOME RESOURCES
https://charity.wtf/2016/05/31/wtf-is-operations-serverless/
https://charity.wtf/2016/05/31/operational-best-practices-serverless/
https://twitter.com/neil_conway/status/743086761493008384
http://kubernetes.io/
https://docs.docker.com/swarm/
https://github.com/chrigl/heat-examples
https://chrigl.de/slides/sysconf15-docker/#/ecosystem
https://chrigl.de/slides/sysconf15-docker/#/resources
Page 144
THANKS! QUESTIONS?
Contact me: [email protected]
Get Awesome Hosting: SysEleven.de