Docker for Ruby Developers
NYC.rb, August 2015
Frank Macreery, CTO, Aptible (@fancyremarker)
https://speakerdeck.com/fancyremarker
Docker: What's it all about?
Docker: Containers, the new virtual machines?
https://www.docker.com/whatisdocker
Getting Started with Docker
boot2docker
Kitematic
Getting Started with Docker: Install boot2docker via http://boot2docker.io/#installation
echo 'eval "$(boot2docker shellinit)"' >> ~/.bashrc
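For reference, the eval above works because boot2docker shellinit prints environment exports along these lines (the IP address and cert path depend on your VM, so treat these values as placeholders):

```shell
# Illustrative output of `boot2docker shellinit`; values vary per VM.
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/you/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
```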
Getting Started with Docker: Alternatively (if you have Homebrew and VirtualBox installed)…

brew install boot2docker
brew install docker
Getting Started with Docker: Images and containers
docker pull quay.io/aptible/nginx:latest

- quay.io/aptible/nginx:latest is a reference to a Docker image
- Image "repositories" can have many "tags"
- docker pull pulls an image from a Docker "registry"
- A single Docker image consists of many "layers"
docker images
docker run -d -p 80:80 -p 443:443 \
  quay.io/aptible/nginx:latest

- Launches a new container from an existing image
- -p forwards container ports to the host (host port:container port)
- -d runs the container in the background
docker ps
Simplifying SOA: Service-oriented architectures without the complexity
Dev/prod parity?
What makes dev/prod parity so hard?

- One production deployment, but many development/staging environments
- SOA simplifies each service's responsibilities, but often at the cost of additional deployment complexity
- The more services you have, the harder it is to achieve dev/prod parity
- The more engineers you have, the harder it is to standardize development environments
README != Automation
Dev/prod parity via Docker: Define services in terms of Docker images
# docker-compose.yml for Aptible Auth/API
auth:
  build: auth.aptible.com/
  ports:
    - "4000:4000"
  links:
    - postgresql
  environment:
    RAILS_ENV: development
    DATABASE_URL: postgresql://postgresql/aptible_auth_development

api:
  build: api.aptible.com/
  ports:
    - "4001:4001"
  links:
    - redis
    - postgresql
  environment:
    RAILS_ENV: development
    DATABASE_URL: postgresql://postgresql/aptible_api_development
    REDIS_URL: redis://redis
    APTIBLE_AUTH_ROOT_URL: http://docker:4000

redis:
  image: quay.io/aptible/redis:latest
  ports:
    - "6379:6379"

postgresql:
  image: quay.io/aptible/postgresql:aptible-seeds
  ports:
    - "5432:5432"
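Each service consumes this wiring purely through ENV: the linked postgresql container is reachable at the hostname embedded in DATABASE_URL. A shell sketch of pulling the pieces apart (illustrative only, not Aptible's code):

```shell
# Extract the host and database name from a compose-style DATABASE_URL.
DATABASE_URL="postgresql://postgresql/aptible_api_development"

host="${DATABASE_URL#postgresql://}"   # strip the scheme
host="${host%%/*}"                     # keep everything before the first slash
dbname="${DATABASE_URL##*/}"           # keep everything after the last slash

echo "$host"    # postgresql -- the linked container's hostname
echo "$dbname"  # aptible_api_development
```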
Dev/prod parity via Docker: Use the same service/image configuration in production as in development (Docker Compose, Swarm, Kubernetes…)
Containerized SSL: Infrastructure management made easy
[Diagram: an Elastic Load Balancer (ELB) forwards TCP/HTTPS to NGiNX running on each EC2 instance; NGiNX proxies plain HTTP to the application.]
How to configure NGiNX with multiple dynamic upstreams? Chef? Salt? Ansible?
ENV configuration: $UPSTREAM_SERVERS
docker run -d -p 80:80 -p 443:443 \
  -e UPSTREAM_SERVERS=docker:4000,docker:4001 \
  quay.io/aptible/nginx:latest
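Inside the container, the startup script can expand $UPSTREAM_SERVERS into NGiNX server directives. A sketch of the idea (not the actual quay.io/aptible/nginx template):

```shell
# Turn a comma-separated UPSTREAM_SERVERS into an nginx upstream block.
UPSTREAM_SERVERS="docker:4000,docker:4001"

echo "upstream app {"
for server in $(echo "$UPSTREAM_SERVERS" | tr ',' ' '); do
  echo "  server $server;"
done
echo "}"
```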
ENV configuration: Makes testing easier
https://github.com/sstephenson/bats
# Dockerfile

# Install and configure NGiNX...
# ...

ADD test /tmp/test
RUN bats /tmp/test
https://github.com/aptible/docker-nginx (image: quay.io/aptible/nginx)
#!/usr/bin/env bats
# /tmp/test/nginx.bats

@test "It should accept a list of UPSTREAM_SERVERS" {
  simulate_upstream
  UPSTREAM_SERVERS=localhost:4000 wait_for_nginx
  run curl localhost 2>/dev/null
  [[ "$output" =~ "Hello World!" ]]
}

simulate_upstream() {
  nc -l -p 4000 127.0.0.1 < upstream-response.txt
}
ENV configuration: Abstracts implementation details (the proxy could be NGiNX, HAProxy, …)
ENV configuration: Simplifies configuration management; the central store doesn't need to know parameters in advance
ENV configuration: $UPSTREAM_SERVERS, $FORCE_SSL, $DISABLE_WEAK_CIPHER_SUITES, (…)
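Boolean toggles work the same way: a variable like FORCE_SSL can gate a fragment of the generated config. A hypothetical sketch (the variable name comes from the list above; the redirect snippet is illustrative):

```shell
# Hypothetical FORCE_SSL toggle: emit an HTTP -> HTTPS redirect only when set.
FORCE_SSL=true

if [ "$FORCE_SSL" = "true" ]; then
  echo 'return 301 https://$host$request_uri;'
fi
```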
Vulnerability Response: Fix, test, docker push, restart
Heartbleed
POODLEbleed
xBleed???
Integration Tests: Document and test every vulnerability response
#!/usr/bin/env bats
# /tmp/test/nginx.bats

@test "It should pass an external Heartbleed test" {
  install_heartbleed
  wait_for_nginx
  Heartbleed localhost:443
  uninstall_heartbleed
}

install_heartbleed() {
  export GOPATH=/tmp/gocode
  export PATH=${PATH}:/usr/local/go/bin:${GOPATH}/bin
  go get github.com/FiloSottile/Heartbleed
  go install github.com/FiloSottile/Heartbleed
}
Integration tests happen during each image build

- Images are built automatically via Quay Build Triggers
- Build status is easy to verify at a glance
- Quay Time Machine lets us roll back an image to any previous state
Database Deployment: Standardizing an "API" across databases
New databases mean new dependencies

- How to document setup steps for engineers?
- How to deploy in production?
- How to perform common admin tasks? Backups? Replication? CLI access? Read-only mode?
Wrap Databases in a Uniform API: Standardizing an "API" across databases
#!/bin/bash
# run-database.sh

command="/usr/lib/postgresql/$PG_VERSION/bin/postgres -D "$DATA_DIRECTORY" -c config_file=/etc/postgresql/$PG_VERSION/main/postgresql.conf"

if [[ "$1" == "--initialize" ]]; then
  chown -R postgres:postgres "$DATA_DIRECTORY"

  su postgres <<COMMANDS
    /usr/lib/postgresql/$PG_VERSION/bin/initdb -D "$DATA_DIRECTORY"
    /etc/init.d/postgresql start
    psql --command "CREATE USER ${USERNAME:-aptible} WITH SUPERUSER PASSWORD '$PASSPHRASE'"
    psql --command "CREATE DATABASE ${DATABASE:-db}"
    /etc/init.d/postgresql stop
COMMANDS

elif [[ "$1" == "--client" ]]; then
  [ -z "$2" ] && echo "docker run -it aptible/postgresql --client postgresql://..." && exit
  psql "$2"

elif [[ "$1" == "--dump" ]]; then
  [ -z "$2" ] && echo "docker run aptible/postgresql --dump postgresql://... > dump.psql" && exit
  pg_dump "$2"

elif [[ "$1" == "--restore" ]]; then
  [ -z "$2" ] && echo "docker run -i aptible/postgresql --restore postgresql://... < dump.psql" && exit
  psql "$2"
fi
--initialize: Initialize data directory
--client: Start a CLI client
--dump: Dump database to STDOUT
--restore: Restore from dump
--readonly: Start database in RO mode
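The uniform API amounts to dispatching on the first argument, which run-database.sh implements with an if/elif chain. A toy dispatcher that just names each mode (echoes only; the real entrypoint does the work):

```shell
#!/bin/bash
# Toy version of the uniform database API: each branch only reports what
# the real entrypoint would do for that flag.
db_api() {
  case "${1:---server}" in
    --initialize) echo "initialize data directory" ;;
    --client)     echo "start CLI client" ;;
    --dump)       echo "dump database to STDOUT" ;;
    --restore)    echo "restore from dump on STDIN" ;;
    --readonly)   echo "start database read-only" ;;
    *)            echo "run database server" ;;
  esac
}

db_api --initialize
db_api
```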
db-launch () {
  container=$(head -c 32 /dev/urandom | md5)
  passphrase=${PASSPHRASE:-foobar}
  image="${@: -1}"
  docker create --name $container $image
  docker run --volumes-from $container \
    -e USERNAME=aptible -e PASSPHRASE=$passphrase \
    -e DB=db $image --initialize
  docker run --volumes-from $container $@
}
http://bit.ly/aptible-dblaunch

1. Create a "volume container":

docker create --name $container $image

2. Initialize the database data volume:

docker run --volumes-from $container \
  -e USERNAME=aptible -e PASSPHRASE=$passphrase \
  -e DB=db $image --initialize

3. Run the database:

docker run --volumes-from $container $@
Thank you
@fancyremarker [email protected]
https://speakerdeck.com/fancyremarker