
Deployment guide for OpenShift Origin v2. Taken from https://github.com/openshift/origin-server/blob/openshift-origin-release-3/documentation/oo_deployment_guide_comprehensive.adoc


= OpenShift Origin Comprehensive Deployment Guide
OpenShift Origin Documentation Project
v2.0, July 2013
:data-uri:
:toc2:
:icons:
:numbered:

[float]
== Overview

Platform as a Service is changing the way developers approach developing software. Developers typically use a local sandbox with their preferred application server and only deploy locally on that instance. For instance, developers typically start JBoss EAP locally using the startup.sh command, drop their .war or .ear file in the deployment directory, and they are done. Developers have a hard time understanding why deploying to the production infrastructure is such a time-consuming process.

    System Administrators understand the complexity of not only deploying the code, but procuring, provisioning and maintaining a production level system. They need to stay up to date on the latest security patches and errata, ensure the firewall is properly configured, maintain a consistent and reliable backup and restore plan, monitor the application and servers for CPU load, disk IO, HTTP requests, etc.

    OpenShift Origin provides developers and IT organizations an open source auto-scaling cloud application platform for quickly deploying new applications on secure and scalable resources with minimal configuration and management headaches. This means increased developer productivity and a faster pace in which IT can support innovation.

[float]
=== The _Comprehensive_ Deployment Guide

This guide goes into excruciating detail about deploying OpenShift Origin. You will become wise in the ways of OpenShift if you choose this path. However, if you are looking for a faster way to get up and running, consider the link:oo_deployment_guide_puppet.html[Puppet-based deployment] or the pre-built link:oo_deployment_guide_vm.html[OpenShift Origin virtual machine].

[float]
=== Getting up and Running with OpenShift Origin

OpenShift Origin is "infrastructure agnostic". That means that you can run OpenShift on bare metal, virtualized instances, or on public/private cloud instances. The only thing that is required is Fedora Linux or Red Hat Enterprise Linux as the underlying operating system. We require this in order to take advantage of SELinux so that you can ensure your installation is rock solid and secure.

    What does this mean? This means that in order to take advantage of OpenShift Origin, you can use any existing resources that you have in your hardware pool today. It doesn't matter if your infrastructure is based on EC2, VMware, RHEV, Rackspace, OpenStack, CloudStack, or even bare metal as long as your CPUs are 64 bit processors.

**Many possible configurations** +
This document covers one possible OpenShift topology, specifically:

* All necessary services on one host
* Hosted applications on another host

This is a good reference configuration for proof-of-concept. However, _many other topologies and combinations of platforms are supported_. At a minimum, a production installation of OpenShift Origin would probably include four hosts:

* Broker
* Applications Node
* MongoDB
* ActiveMQ

    For help with your specific setup, you can ask the OpenShift team at IRC channel #openshift-dev on FreeNode, or check out the OpenShift forums.

This document assumes that you have a working knowledge of SSH, git, and yum, and are familiar with a Linux-based text editor like vi or emacs. Additionally, you will need to be able to install and/or administer the systems described in the next section.

[float]
=== Installation Prerequisites

Before OpenShift Origin can be installed, the following services must be available in your network:

* DNS
* MongoDB
* ActiveMQ

    And the hosts (or nodes) in your system must have the following clients installed:

* NTP
* MCollective

    This document includes chapters on how to install and configure these services and clients on a single host, along with the OpenShift Origin _broker_ component. However, in a production environment these services may already be in place, and it may not be necessary to modify them.

[float]
=== Electronic version of this document

This document is available online at http://openshift.github.io/documentation/oo_deployment_guide_comprehensive.html

    == Prerequisite: Preparing the Host Systems

    The following steps are required for both Broker and Node hosts.

    === Setup Yum repositories

    Configure the openshift-dependencies RPM repository:

.RHEL6
----
cat
----

.Fedora
----
cat
----

==== Update the Operating System

*Tools used:*

* SSH
* `yum`

    First, you need to update the operating system to have all of the latest packages that may be in the yum repository for Fedora. This is important to ensure that you have a recent update to the SELinux packages that OpenShift Origin relies on. In order to update your system, issue the following command:

----
yum clean all
yum -y update
----

NOTE: Depending on your connection and the speed of your broker host, this installation may take several minutes.

    ==== Configure the Clock to Avoid Time Skew

    *Server used:*

    * broker host

    *Tools used:*

* SSH
* `ntpdate`

OpenShift Origin requires NTP to synchronize the system and hardware clocks. This synchronization is necessary for communication between the broker and node hosts; if the clocks are too far out of synchronization, MCollective will drop messages. Every MCollective request (discussed in a later chapter) includes a time stamp, provided by the sending host's clock. If a sender's clock is substantially behind a recipient's clock, the recipient drops the message. This is often referred to as clock skew and is a common problem that users encounter when they fail to sync all of the system clocks.

.RHEL6
----
yum install -y ntpdate ntp
ntpdate clock.redhat.com
chkconfig ntpd on
service ntpd start
----

.Fedora
----
yum install -y ntpdate ntp
ntpdate clock.redhat.com
systemctl enable ntpd.service
systemctl start ntpd.service
----
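Once ntpd is running, you can optionally confirm that the host is talking to its configured time sources. This is a quick sanity check, assuming the default ntpd configuration; `ntpq` ships with the ntp package, and a small offset (milliseconds) indicates the clock is in sync:

----
ntpq -p
date
----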

    ==== Setting up the Ruby Environment

If you are running on a RHEL system, you will need to install and set up https://access.redhat.com/site/documentation/en-US/Red_Hat_Developer_Toolset/1/html-single/Software_Collections_Guide/index.html[SCL] Ruby193. This will provide you with a Ruby 1.9.3 environment, which we will use for the rest of the setup.

NOTE: Fedora installations should skip this section.

.RHEL
----
yum install -y ruby193

cat
----

== Prerequisite: BIND / DNS

To proceed, ensure that bind and the bind utilities have been installed on the broker host:

----
yum install -y bind bind-utils
----

==== Create DNS environment variables and a DNSSEC key file

OpenShift recommends that you set an environment variable for the domain name that you will be using, to facilitate faster configuration of BIND. This section describes the process of setting that up.

    First, run this command, replacing "example.com" with your domain name. This sets the bash environment variable named "$domain" to your domain:

----
domain=example.com
----

DNSSEC, which stands for DNS Security Extensions, is a method by which DNS servers can verify that DNS data is coming from the correct place. You create a private/public key pair to determine the authenticity of the source domain name server. In order to implement DNSSEC on your new PaaS, you need to create a key file, which will be stored in /var/named. For convenience, set the "$keyfile" variable now to the location of this key file:

----
keyfile=/var/named/${domain}.key
----

    Now create a DNSSEC key pair and store the private key in a variable named "$KEY" by using the following commands:

----
pushd /var/named
rm K${domain}*
dnssec-keygen -a HMAC-MD5 -b 512 -n USER -r /dev/urandom ${domain}
KEY="$(grep Key: K${domain}*.private | cut -d ' ' -f 2)"
popd
----

    Verify that the key was created properly by viewing the contents of the $KEY variable:

----
echo $KEY
----

    You must also create an rndc key, which will be used by the init script to query the status of BIND when you run _service named status_:

----
rndc-confgen -a -r /dev/urandom
----

    Configure the ownership, permissions, and SELinux contexts for the keys that you've created:

----
restorecon -v /etc/rndc.* /etc/named.*
chown -v root:named /etc/rndc.key
chmod -v 640 /etc/rndc.key
----

==== Create a forwarders.conf file for host name resolution

The DNS forwarding facility of BIND can be used to create a large site-wide cache on a few servers, reducing traffic over links to external name servers. It can also be used to allow queries by servers that do not have direct access to the Internet, but wish to look up exterior names anyway. Forwarding occurs only on those queries for which the server is not authoritative and does not have the answer in its cache.

    Create the forwarders.conf file with the following commands:

    ----echo "forwarders { 8.8.8.8; 8.8.4.4; } ;" >> /var/named/forwarders.confrestorecon -v /var/named/forwarders.confchmod -v 640 /var/named/forwarders.conf----

==== Configure subdomain resolution and create an initial DNS database

To ensure that you are starting with a clean _/var/named/dynamic_ directory, remove this directory if it exists:

----
rm -rvf /var/named/dynamic
mkdir -vp /var/named/dynamic
----

Issue the following command to create the _$\{domain}.db_ file (before running this command, verify that the $domain variable that you set earlier is still available):

----
cat <<EOF > /var/named/dynamic/${domain}.db
...
EOF
----

The resulting zone file (shown here with "example.com" as the domain) should look like this:

----
$ORIGIN .
$TTL 1 ; 1 second
example.com             IN SOA  ns1.example.com. hostmaster.example.com. (
                                2011112916 ; serial
                                60         ; refresh (1 minute)
                                15         ; retry (15 seconds)
                                1800       ; expire (30 minutes)
                                10         ; minimum (10 seconds)
                                )
                        NS      ns1.example.com.
                        MX      10 mail.example.com.
$ORIGIN example.com.
ns1                     A       127.0.0.1
----
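Before moving on, you can sanity-check the zone file with `named-checkzone`, which is included in the bind package. This is an optional check; the example below assumes example.com as the domain and the file path used above:

----
named-checkzone example.com /var/named/dynamic/example.com.db
----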

Now we need to install the DNSSEC key for our domain:

----
cat <<EOF > ${keyfile}
key ${domain} {
  algorithm HMAC-MD5;
  secret "${KEY}";
};
EOF
----

Next, create the _/etc/named.conf_ file. Fill in the beginning of the _options_ block (listen addresses, directory, query permissions, and so on) to suit your environment in place of the ellipsis; the parts that matter to OpenShift are the key includes, the forwarders, and the dynamic zone definition:

----
cat <<EOF > /etc/named.conf
options {
    ...

    bindkeys-file "/etc/named.iscdlv.key";

    // set forwarding to the next nearest server (from DHCP response)
    forward only;
    include "forwarders.conf";
};

logging {
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
};

// use the default rndc key
include "/etc/rndc.key";

controls {
    inet 127.0.0.1 port 953
    allow { 127.0.0.1; } keys { "rndc-key"; };
};

include "/etc/named.rfc1912.zones";

include "${domain}.key";

zone "${domain}" IN {
    type master;
    file "dynamic/${domain}.db";
    allow-update { key ${domain} ; } ;
};
EOF
----

    Finally, set the permissions for the new configuration file that you just created:

----
chown -v root:named /etc/named.conf
restorecon /etc/named.conf
----
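At this point it is also worth validating the syntax of the new configuration with `named-checkconf` (part of the bind package); it prints nothing when the file parses cleanly:

----
named-checkconf /etc/named.conf
----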

==== Configure host name resolution to use the new _BIND_ server

Now you need to update the resolv.conf file to use the local _named_ service that you just installed and configured. Open up your _/etc/resolv.conf_ file and add the following entry *as the first nameserver entry in the file*:

----
nameserver 127.0.0.1
----

    We also need to make sure that _named_ starts on boot and that the firewall is configured to pass through DNS traffic:

.RHEL6
----
lokkit --service=dns
chkconfig named on
----

.Fedora
----
firewall-cmd --add-service=dns
firewall-cmd --permanent --add-service=dns
systemctl enable named.service
----

NOTE: If you get an unknown locale error when running _lokkit_, consult the troubleshooting section at the end of this manual.

==== Start the _named_ service

Now you are ready to start up your new DNS server and add some updates.

.RHEL6
----
service named start
----

.Fedora
----
systemctl start named.service
----

    You should see a confirmation message that the service was started correctly. If you do not see an OK message, run through the above steps again and ensure that the output of each command matches the contents of this document. If you are still having trouble after trying the steps again, refer to your help options.

=== Add the Broker Node to DNS

If you configured and started a BIND server per this document, or you are working against a BIND server that was already in place, you now need to add a record for your broker node (or host) to BIND's database. To accomplish this task, you will use the `nsupdate` command, which opens an interactive shell. Replace "broker.example.com" with your preferred hostname:

----
# nsupdate -k ${keyfile}
> server 127.0.0.1
> update delete broker.example.com A
> update add broker.example.com 180 A 10.4.59.x
> send
----

    Press control-D to exit from the interactive session.

    In order to verify that you have successfully added your broker node to your DNS server, you can perform:

----
ping broker.example.com
----

    and it should resolve to the local machine that you are working on. You can also perform a dig request using the following command:

----
dig @127.0.0.1 broker.example.com
----

    === DHCP Client and Hostname

*Server used:*

    * broker host

    *Tools used:*

* text editor
* Commands: hostname

==== Create _dhclient-eth0.conf_

In order to configure your broker host to use a specific DNS server, you will need to edit the _/etc/dhcp/dhclient-\{$network device}.conf_ file, or create the file if it does not exist. Without this step, the DNS server information in _/etc/resolv.conf_ would default back to the server returned by your DHCP server on the next boot of the server.

For example, if you are using eth0 as your default ethernet device, you would need to edit the following file:

----
/etc/dhcp/dhclient-eth0.conf
----

    If you are unsure of which network device that your system is using, you can issue the _ifconfig_ command to list all available network devices for your machine.

    NOTE: the _lo_ device is the loopback device and is not the one you are looking for.

    Once you have the correct file opened, add the following information making sure to substitute the IP address of the broker host:

----
prepend domain-name-servers 10.4.59.x;
supersede host-name "broker";
supersede domain-name "example.com";
----

Ensure that you do not have any typos. Common errors include forgetting a semicolon, putting in the node's IP address instead of the broker's, or typing "server" instead of "servers."

    ==== Update network configuration

Update your network scripts to use the DNS server. Update the _/etc/sysconfig/network-scripts/ifcfg-_ file for your network device (for example, _ifcfg-eth0_) and add the following information, making sure to substitute the IP address of the broker host:

----
PEERDNS="no"
DNS1=10.4.59.x
----

==== Set the host name for your server

You need to set the hostname of your broker host. We need to change this to reflect the new hostname that we are going to apply to this server. For this chapter, we will be using broker.example.com.

.RHEL6
====
In order to accomplish this task, edit the _/etc/sysconfig/network_ file and locate the section labeled _HOSTNAME_. The line that you want to replace should look like this:

----
HOSTNAME=localhost.localdomain
----

Change the _/etc/sysconfig/network_ file to reflect the following change:

----
HOSTNAME=broker.example.com
----
====

.Fedora
====
----
echo "broker.example.com" > /etc/hostname
----
====

Now that we have configured our hostname, we also need to set it for our current session by using the following command:

----
hostname broker.example.com
----
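As a quick sanity check (assuming broker.example.com as the hostname and the local BIND instance from the previous section), the hostname and the DNS record should now agree:

----
hostname
dig +short @127.0.0.1 broker.example.com
----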

    == Prerequisite: MongoDB

    *Server used:*

    * broker host

    *Tools used:*

* text editor
* yum
* mongo
* chkconfig
* service
* lokkit
* firewall-cmd

OpenShift Origin makes heavy use of MongoDB for storing internal information about users, gears, and other necessary items. If you are not familiar with MongoDB, you can read up on it at the official MongoDB site (http://www.mongodb.org). For the purpose of OpenShift Origin, you need to know that MongoDB is a document data storage system that uses JavaScript for the command syntax and stores all documents in a JSON format.

=== Install the _mongod_ server

In order to use MongoDB, you will need to install the mongod server:

----
yum install -y mongodb-server
----

    At the time of this writing, you should see the following packages being installed:

.RHEL6
----
Package Name            Arch     Package Version    Repo                          Size
mongodb-server          x86_64   2.0.2-2.el6op      rhel-server-ose-infra-6-rpms  3.8 M
boost-program-options   x86_64   1.41.0-11.el6_1.2  rhel-6-server-rpms            105 k
boost-thread            x86_64   1.41.0-11.el6_1.2  rhel-6-server-rpms            105 k
libmongodb              x86_64   1.41.0-11.el6_1.2  rhel-6-server-rpms            41 k
boost-program-options   x86_64   2.0.2-2.el6op      rhel-server-ose-infra-6-rpms  531 k
mongodb                 x86_64   2.0.2-2.el6op      rhel-server-ose-infra-6-rpms  21 M
----

.Fedora
----
Package Name            Arch     Package Version      Repo     Size
mongodb-server          x86_64   2.2.4-2.fc19         fedora   3.3 M
boost-filesystem        x86_64   1.53.0-7.fc19        updates  64 k
boost-program-options   x86_64   1.53.0-7.fc19        updates  151 k
boost-system            x86_64   1.53.0-7.fc19        updates  35 k
boost-thread            x86_64   1.53.0-7.fc19        updates  53 k
gperftools-libs         x86_64   2.0-11.fc19          fedora   270 k
libicu                  x86_64   50.1.2-5.fc19        fedora   6.8 M
libmongodb              x86_64   2.2.4-2.fc19         fedora   441 k
libunwind               x86_64   1.1-2.fc19           fedora   61 k
mongodb                 x86_64   2.2.4-2.fc19         fedora   23 M
snappy                  x86_64   1.1.0-1.fc19         fedora   40 k
v8                      x86_64   1:3.14.5.10-1.fc19   fedora   3.0 M
----

=== Configure _mongod_

MongoDB uses a configuration file for its settings. This file can be found at _/etc/mongodb.conf_. You will need to make a few changes to this file to ensure that MongoDB handles authentication correctly and that it is enabled to use small files.

==== Setup MongoDB _smallfiles_ option

To enable small files support, add the following line to _/etc/mongodb.conf_ (if the line is already present but commented out, simply remove the hash mark _(#)_ at the beginning of the line to enable the setting):

----
smallfiles=true
----

Setting _smallfiles=true_ configures MongoDB not to pre-allocate a huge database, which wastes a surprising amount of time and disk space and is unnecessary for the comparatively small amount of data that the broker will store in it. It is not absolutely necessary to set _smallfiles=true_, but for a new installation it saves a minute or two of initialization time and a fair amount of disk space.
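If you prefer to script this change rather than edit the file by hand, a small sketch like the following (assuming the stock _/etc/mongodb.conf_ location) appends the setting only when it is not already present:

----
grep -q '^smallfiles=true' /etc/mongodb.conf || echo "smallfiles=true" >> /etc/mongodb.conf
----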

    ==== Setup MongoDB authentication

    To set up MongoDB, first ensure that auth is turned off in the _/etc/mongodb.conf_ file. Edit the file and ensure that _auth=true_ is commented out.

----
#auth=true
----

    Start the MongoDB server so that we can run commands against the server.

----
service mongod start
----

Create the OpenShift broker user, supplying a password of your choosing in place of the empty string:

----
/usr/bin/mongo localhost/openshift_broker_dev --eval 'db.addUser("openshift", "")'
/usr/bin/mongo localhost/admin --eval 'db.addUser("openshift", "")'
----

    Stop the MongoDB server so that we can continue with other configuration.

----
service mongod stop
----

Edit the configuration file again and re-enable authentication by ensuring the following line is present and uncommented:

----
auth=true
----

    === Firewall setup

If MongoDB is set up on a machine that is not running the broker, you will need to ensure that MongoDB is configured to listen on the external IP and that the firewall allows MongoDB connections to pass through.

Edit the mongodb.conf file and update the bind_ip setting.

----
bind_ip=127.0.0.1,10.4.59.x
----

    Enable MongoDB access on the firewall.

.RHEL6
----
lokkit --port=27017:tcp
----

.Fedora
----
firewall-cmd --add-port=27017/tcp
firewall-cmd --permanent --add-port=27017/tcp
----
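Optionally, once mongod is running again, you can confirm the port is reachable from another machine. This is just a sketch; it assumes 10.4.59.x is the address you bound MongoDB to, that the mongo client is installed on the remote host, and that you substitute the password you chose earlier:

----
mongo 10.4.59.x:27017/admin --eval 'db.auth("openshift", "<password>")'
----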

=== Set _mongod_ to Start on Boot

MongoDB is an essential part of the OpenShift Origin platform. Because of this, you must ensure that mongod is configured to start on system boot:

.RHEL6
----
chkconfig mongod on
----

.Fedora
----
systemctl enable mongod.service
----

    By default, when you install _mongod_ via the yum command, the service is not started. You can verify this with the following:

.RHEL6
----
service mongod status
----

.Fedora
----
systemctl status mongod.service
----

This should return "_mongod is stopped_". In order to start the service, simply issue:

.RHEL6
----
service mongod start
----

.Fedora
----
systemctl start mongod.service
----

Now verify that mongod was installed and configured correctly. To do this, use the `mongo` shell client tool. If you are familiar with MySQL or Postgres, this is similar to the mysql client's interactive SQL shell. However, because MongoDB is a NoSQL database, it does not respond to traditional SQL-style commands.

    In order to start the mongo shell, enter the following command:

----
mongo admin
----

You should see a confirmation message that you are using MongoDB shell version: x.x.x and that you are connecting to the admin database. Authenticate against the database with the user you created above, supplying the password you chose earlier in place of the empty string:

----
db.auth('openshift', "")
----

    To verify even further, you can list all of the available databases that the database currently has:

----
show dbs
----

    You will then be presented with a list of valid databases that are currently available to the mongod service.

----
admin                 0.203125GB
local                 (empty)
openshift_broker_dev  0.203125GB
test                  (empty)
----

    To exit the Mongo shell, you can simply type exit:

----
exit
----

== Prerequisite: ActiveMQ

ActiveMQ is a fully open source messenger service that is available for use across many different programming languages and environments. OpenShift Origin makes use of this technology to handle communications between the broker host and the node hosts in the deployment. In order to make use of this messaging service, you need to install and configure ActiveMQ on your broker node.

    *Server used:*

    * broker host

    *Tools used:*

* text editor
* yum
* wget
* lokkit
* firewall-cmd
* chkconfig
* service

=== Installation

Installing ActiveMQ on Fedora is a fairly easy process as the packages are included in the rpm repositories that are already configured on your broker node. You need to install both the server and client packages by using the following command:

----
yum install -y activemq activemq-client
----

    NOTE: This will also install all of the dependencies required for the packages if they aren't already installed. Notably, Java 1.6 and the libraries for use with the Ruby programming language may be installed.

=== Configuration

ActiveMQ uses an XML configuration file that is located at _/etc/activemq/activemq.xml_. This installation guide is accompanied by a template version of activemq.xml that you can use to replace this file. *But first: back up the original file*:

----
cd /etc/activemq
mv activemq.xml activemq.orig
----

Copy the link:files/activemq.xml[basic configuration template] in to /etc/activemq/activemq.xml.

----
curl -o /etc/activemq/activemq.xml <URL of the activemq.xml template>
----

Copy the link:files/jetty.xml[jetty template] in to /etc/activemq/jetty.xml.

----
curl -o /etc/activemq/jetty.xml <URL of the jetty.xml template>
----

Copy the link:files/jetty-realm.properties[jetty auth template] in to /etc/activemq/jetty-realm.properties.

----
curl -o /etc/activemq/jetty-realm.properties <URL of the jetty-realm.properties template>
----

    Once you have the configuration template in place, you will need to make a few minor changes to the configuration.

First, replace the hostname provided (activemq.example.com) with the FQDN of your broker host wherever it appears in the template; in particular, the broker element's hostname (the line that also contains _$\{activemq.data}_) should name your broker, for example broker.example.com.

NOTE: The _$\{activemq.data}_ text should be entered as stated as it does not refer to a shell variable.

The second change is to provide your own credentials for authentication. The authentication information is stored inside the authentication plugin block of the configuration; change the user names and passwords defined there to values of your choosing.

Next, modify /etc/activemq/jetty-realm.properties and set a password for the admin user:

----
admin: [password], admin
----

=== Firewall Rules / Start on Boot

The broker host firewall rules must be adjusted to allow MCollective to communicate on port 61613:

.RHEL6
----
lokkit --port=61613:tcp
----

.Fedora
----
firewall-cmd --add-port=61613/tcp
firewall-cmd --permanent --add-port=61613/tcp
----

    Finally, you need to enable the ActiveMQ service to start on boot as well as start the service for the first time.

----
chkconfig activemq on
service activemq start
----

NOTE: The activemq server has not transitioned to systemd startup scripts yet.
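As a quick, optional check that the broker process is up and that the Jetty console credentials work, you can probe the ActiveMQ web console. This sketch assumes the console is enabled on the default port 8161 and uses the admin password you set in jetty-realm.properties; an HTTP 200 response indicates the console is answering:

----
curl --head --user admin:[password] http://localhost:8161/admin/
----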

    === Tmpfs setup

On Fedora systems, /var/run is a tmpfs mount and needs some additional configuration. Create a _/etc/tmpfiles.d/activemq.conf_ file. This step can be skipped on RHEL 6.4 systems.

.Fedora
----
cat <<EOF > /etc/tmpfiles.d/activemq.conf
...
EOF
----

== Prerequisite: MCollective client

    *Server used:*

    * broker host

    *Tools used:*

* text editor
* yum

    For communication between the broker host and the gear nodes, OpenShift Origin uses MCollective. You may be wondering how MCollective is different from ActiveMQ. ActiveMQ is the messenger server that provides a queue of transport messages. You can think of MCollective as the client that actually sends and receives those messages. For example, if we want to create a new gear on an OpenShift Origin node, MCollective would receive the "create gear" message from ActiveMQ and perform the operation.

=== Installation

In order to use MCollective, first install it via yum:

----
yum install -y mcollective-client
----

=== Configuration

Replace the contents of the _/etc/mcollective/client.cfg_ with the following information:

.Fedora
----
cat <<EOF > /etc/mcollective/client.cfg
...
EOF
----

.RHEL
----
cat <<EOF > /etc/mcollective/client.cfg
libdir = /opt/rh/ruby193/root/usr/libexec/mcollective
#logfile = /var/log/mcollective-client.log
loglevel = debug

# Plugins
securityprovider = psk
plugin.psk = unset

connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = localhost
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = marionette
EOF
----

Update the _plugin.activemq.pool.1.password_ password to match what you set up in the ActiveMQ configuration.

Now you have configured the MCollective client to connect to ActiveMQ running on the local host. In a typical deployment, you will configure MCollective to connect to ActiveMQ running on a remote server by putting the appropriate hostname in the plugin.activemq.pool.1.host setting.

    == The Broker

    *Server used:*

    * broker host

    *Tools used:*

* text editor
* yum
* sed
* chkconfig
* lokkit
* openssl
* ssh-keygen
* fixfiles
* restorecon

=== Install Necessary Packages

In order for users to interact with the OpenShift Origin platform, they will typically use client tools or the web console. These tools communicate with the broker via a REST API that is also accessible for writing third party applications and tools. In order to use the broker application, we need to install several packages from the OpenShift Origin repository.

----
yum install -y openshift-origin-broker openshift-origin-broker-util \
    rubygem-openshift-origin-auth-remote-user \
    rubygem-openshift-origin-auth-mongo \
    rubygem-openshift-origin-msg-broker-mcollective \
    rubygem-openshift-origin-dns-avahi \
    rubygem-openshift-origin-dns-nsupdate \
    rubygem-openshift-origin-dns-route53 \
    rubygem-passenger mod_passenger
----

NOTE: Depending on your connection and the speed of your broker host, this installation may take several minutes.

=== Configure the Firewall and Enable Services at Boot

The broker application requires a number of services to be running in order to function properly. Configure them to start at boot time:

.RHEL6
----
chkconfig network on
chkconfig sshd on
----

.Fedora
----
systemctl enable network.service
systemctl enable sshd.service
----

Additionally, modify the firewall rules to ensure that the traffic for these services is accepted:

.RHEL6
----
lokkit --service=ssh
lokkit --service=https
lokkit --service=http
----

.Fedora
----
firewall-cmd --add-service=ssh
firewall-cmd --add-service=http
firewall-cmd --add-service=https
firewall-cmd --permanent --add-service=ssh
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
----

=== Generate access keys

Now you will need to generate access keys that will allow some of the services (Jenkins for example) to communicate with the broker.

----
openssl genrsa -out /etc/openshift/server_priv.pem 2048
openssl rsa -in /etc/openshift/server_priv.pem -pubout > /etc/openshift/server_pub.pem
----
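If you want to confirm that the keypair was written correctly, `openssl` can check the private key and report the public key details; this is purely an optional sanity check using the paths from the commands above:

----
openssl rsa -in /etc/openshift/server_priv.pem -noout -check
openssl rsa -in /etc/openshift/server_pub.pem -pubin -noout -text | head -n 1
----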

    You will also need to generate a ssh key pair to allow communication between the broker host and any nodes that you have configured. For example, the broker host will use this key in order to transfer data between nodes when migrating a gear from one node host to another.

    NOTE: Remember, the broker host is the director of communications and the node hosts actually contain all of the application gears that your users create.

    In order to generate this SSH keypair, perform the following commands:

----
ssh-keygen -t rsa -b 2048 -f ~/.ssh/rsync_id_rsa
----

Press Enter when prompted for the passphrase. This generates a passwordless key, which is convenient for machine-to-machine authentication but is inherently less secure than other alternatives. Finally, copy the private and public key files to the openshift directory:

----
cp ~/.ssh/rsync_id_rsa* /etc/openshift/
----

    Later, during configuration of the node hosts, you will copy this newly created key to each node host.

=== Configure SELinux

SELinux has several variables that we want to ensure are set correctly. These variables include the following:

.SELinux Boolean Values
[options="header"]
|===
| Variable Name | Description

| httpd_unified | Allow the broker to write files in the "http" file context
| httpd_can_network_connect | Allow the broker application to access the network
| httpd_can_network_relay | Allow the SSL termination Apache instance to access the backend broker application
| httpd_run_stickshift | Enable passenger-related permissions
| named_write_master_zones | Allow the broker application to configure DNS
| allow_ypbind | Allow the broker application to use ypbind to communicate directly with the name server
| httpd_verify_dns | Allow Apache to query NS records
| httpd_enable_homedirs | Allow Apache to access home directories
| httpd_execmem | Allow httpd to execute programs that require memory addresses that are both executable and writeable
|===

    In order to set all of these variables correctly, enter the following:

----
setsebool -P httpd_unified=on httpd_can_network_connect=on httpd_can_network_relay=on \
    httpd_run_stickshift=on named_write_master_zones=on allow_ypbind=on \
    httpd_verify_dns=on httpd_enable_homedirs=on httpd_execmem=on
----
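You can confirm that the booleans took effect with `getsebool`; each of the variables listed above should be reported as "on":

----
getsebool httpd_unified httpd_can_network_connect httpd_can_network_relay \
    httpd_run_stickshift named_write_master_zones allow_ypbind \
    httpd_verify_dns httpd_enable_homedirs httpd_execmem
----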

    You will also need to set several files and directories with the proper SELinux contexts. Issue the following commands:

----
fixfiles -R rubygem-passenger restore
fixfiles -R mod_passenger restore
restorecon -rv /var/run
----

The _fixfiles_ command updates SELinux's database that associates pathnames with SELinux contexts. The _restorecon_ command uses this database to update the SELinux contexts of the specified files on the file system itself so that those contexts will be in effect when the kernel enforces policy. See the manual pages of the _fixfiles_ and _restorecon_ commands for further details.

    === Understand and Change the Broker Configuration

    ==== Gear Sizes

    The OpenShift Origin broker uses a configuration file to define several of the attributes for controlling how the platform as a service works. This configuration file is located at _/etc/openshift/broker.conf_. For instance, the valid gear types that a user can create are defined using the _VALID_GEAR_SIZES_ variable.

----
# Comma separated list of valid gear sizes
VALID_GEAR_SIZES="small,medium"
----

    ==== Cloud Domain

    Edit this file and ensure that the _CLOUD_DOMAIN_ variable is set to correctly reflect the domain that you are using to configure this deployment of OpenShift Origin.

----
# Domain suffix to use for applications (Must match node config)
CLOUD_DOMAIN="example.com"
----

==== MongoDB settings

Edit the mongo variables to connect to the MongoDB server:

----
#Set to true if MongoDB is set up in replica set mode
MONGO_REPLICA_SETS=false

# Comma separated list of replica set servers. Eg: "<host-1>:<port-1>,<host-2>:<port-2>,..."
MONGO_HOST_PORT="<host>:27017"

#Mongo DB user configured earlier
MONGO_USER="openshift"

#Password for user configured earlier
MONGO_PASSWORD="<password>"

#Broker metadata database
MONGO_DB="openshift_broker_dev"
----

    ==== Authentication Salt

    Generate some random bits which we will use for the broker auth salt.

----
openssl rand -base64 64
----

Output from this command should look like:

----
ds+R5kYI5Jvr0uanclmkavrXBSl0KQ34y3Uw4HrsiUNaKjYjgN/tVxV5mYPukpFRradl1SiQ5lmr41zDo4QQww==
----

    Copy this value and set the AUTH_SALT variable in the _/etc/openshift/broker.conf_ file.

----
AUTH_SALT="ds+R5kYI5Jvr0uanclmkavrXBSl0KQ34y3Uw4HrsiUNaKjYjgN/tVxV5mYPukpFRradl1SiQ5lmr41zDo4QQww=="
----

    Note: If you are setting up a multi-broker infrastructure, the authentication salt must be the same on all brokers.

    ==== Session Secret

    Generate some random bits which we will use for the broker session secret.

----
openssl rand -base64 64
----

    Copy this value and set the SESSION_SECRET variable in the _/etc/openshift/broker.conf_ file.

----
SESSION_SECRET="rFeKpEGI0TlTECvLgBPDjHOS9ED6KpztUubaZFvrOm4tJR8Gv0poVWj77i0hqDj2j1ttWTLiCIPRtuAfxV1ILg=="
----

    Note: If you are setting up a multi-broker infrastructure, the session secret must be the same on all brokers.

    While you are in this file, you can change any other settings that need to be configured for your specific installation.

    == Broker Plugins

    *Server used:*

    * broker host

    *Tools used:*

* text editor
* cat
* echo
* environment variables
* pushd
* semodule
* htpasswd
* mongo
* bundler
* chkconfig
* service

OpenShift Origin uses a plugin system for core system components such as DNS, authentication, and messaging. In order to make use of these plugins, you need to configure them and provide the correct configuration items to ensure that they work correctly. The plugin configuration files are located in the _/etc/openshift/plugins.d_ directory. Begin by changing to that directory:

----
cd /etc/openshift/plugins.d
----

Once you are in this directory, you will see that OpenShift Origin provides several example configuration files for you to use to speed up the process of configuring these plugins. You should see three example files:

* openshift-origin-auth-remote-user.conf.example
* openshift-origin-dns-nsupdate.conf.example
* openshift-origin-msg-broker-mcollective.conf.example

=== Create Configuration Files

To begin, copy the .example files to actual configuration files that will be used by OpenShift Origin:

----
cp openshift-origin-auth-remote-user.conf.example openshift-origin-auth-remote-user.conf
cp openshift-origin-msg-broker-mcollective.conf.example openshift-origin-msg-broker-mcollective.conf
----

The broker application will check the plugins.d directory for files ending in .conf. The presence of a .conf file enables the corresponding plug-in. Thus, for example, copying the openshift-origin-auth-remote-user.conf.example file to openshift-origin-auth-remote-user.conf enables the auth-remote-user plug-in.

=== Configure the DNS plugin

If you installed a DNS server on the same host as the broker by following the instructions earlier in this document, you can create a DNS configuration file using the `cat` command instead of starting with the example DNS configuration file. You can do that by taking advantage of the $domain and $keyfile environment variables that you created during that process. If you no longer have these variables set, you can recreate them with the following commands:

----
domain=example.com
keyfile=/var/named/${domain}.key
cd /var/named
KEY="$(grep Key: K${domain}*.private | cut -d ' ' -f 2)"
----

    To verify that your variables were recreated correctly, echo the contents of your keyfile and verify your $KEY variable is set correctly:

----
cat $keyfile
echo $KEY
----

If you performed the above steps correctly, you should see output similar to this:

----
key example.com {
  algorithm HMAC-MD5;
  secret "3RH8tLp6fvX4RVV9ny2lm0tZpTjXhB62ieC6CN1Fh/2468Z1+6lX4wpCJ6sfYH6u2+//gbDDStDX+aPMtSiNFw==";
};
----

and

----
3RH8tLp6fvX4RVV9ny2lm0tZpTjXhB62ieC6CN1Fh/2468Z1+6lX4wpCJ6sfYH6u2+//gbDDStDX+aPMtSiNFw==
----

Now that you have your variables set up correctly, you can create the _openshift-origin-dns-nsupdate.conf_ file. *Ensure that you are still in the _/etc/openshift/plugins.d_ directory* and issue the following command:

----
cd /etc/openshift/plugins.d
cat <<EOF > openshift-origin-dns-nsupdate.conf
BIND_SERVER="127.0.0.1"
BIND_PORT=53
BIND_KEYNAME="${domain}"
BIND_KEYVALUE="${KEY}"
BIND_ZONE="${domain}"
EOF
----

After running this command, cat the contents of the file and ensure they look similar to the following:

----
BIND_SERVER="127.0.0.1"
BIND_PORT=53
BIND_KEYNAME="example.com"
BIND_KEYVALUE="3RH8tLp6fvX4RVV9ny2lm0tZpTjXhB62ieC6CN1Fh/2468Z1+6lX4wpCJ6sfYH6u2+//gbDDStDX+aPMtSiNFw=="
BIND_ZONE="example.com"
----

=== Configure an Authentication Plugin

OpenShift Origin supports various different authentication systems for authorizing a user. In a production environment, you will probably want to use LDAP, Kerberos, or some other enterprise-class authorization and authentication system. For this reference system we will use a system called Basic Auth that relies on a _htpasswd_ file to configure authentication. OpenShift Origin provides several example authentication configuration files in the _/var/www/openshift/broker/httpd/conf.d/_ directory:

.Authentication Sample Files
[options="header"]
|===
| Authentication Type | Sample File

| Mongo Auth | openshift-origin-auth-mongo.conf.sample
| Basic Auth | openshift-origin-auth-remote-user-basic.conf.sample
| Kerberos | openshift-origin-auth-remote-user-kerberos.conf.sample
| LDAP | openshift-origin-auth-remote-user-ldap.conf.sample
|===

    Using Basic Auth, you need to copy the sample configuration file to the actual configuration file:

----
cp /var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user-basic.conf.sample \
    /var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user.conf
----

    This configuration file specifies that the _AuthUserFile_ is located at _/etc/openshift/htpasswd_. At this point, that file doesn't exist, so you will need to create it and add a user named _demo_.

----
htpasswd -c /etc/openshift/htpasswd demo
----

    NOTE: The -c option to htpasswd creates a new file, overwriting any existing htpasswd file. If your intention is to add a new user to an existing htpasswd file, simply drop the -c option.

    After entering the above command, you will be prompted for a password for the user _demo_. Once you have provided that password, view the contents of the htpasswd file to ensure that the user was added correctly. Make a note of the password as you will need it later.

----
cat /etc/openshift/htpasswd
----

    If the operation was a success, you should see output similar to the following:

----
demo:$apr1$Q7yO3MF7$rmSZ7SI.vITfEiLtkKSMZ/
----

=== Verify the Ruby Bundler

The broker Rails application depends on several gem files in order to operate correctly. You need to ensure that the Ruby bundler can find the appropriate gem files.

----
cd /var/www/openshift/broker
bundle --local
----

    You should see a lot of information scroll by letting you know what gem files the system is actually using. The last line of output should be:

----
Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed.
----

=== Set Services to Start on Boot

The last step in configuring our broker application is to ensure that all of the necessary services are started and that they are configured to start upon system boot.

.RHEL6
----
chkconfig openshift-broker on
----

.Fedora
----
systemctl enable openshift-broker.service
----

This will ensure that the broker starts upon next system boot. However, you also need to start the broker application to run now.

.RHEL6
----
service openshift-broker start
----

.Fedora
----
systemctl start openshift-broker.service
----
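The openshift-origin-broker-util package also ships a diagnostic script, `oo-accept-broker`, that looks for common broker misconfigurations. If it is present on your broker host, running it here in verbose mode is a useful optional check before testing the REST API:

----
oo-accept-broker -v
----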

    === Verify the Broker REST API

In order to verify that the REST API is functioning for the broker host, you can use the following _curl_ command, supplying the credentials you added to the htpasswd file:

----
curl -u demo:<password> http://localhost:8080/broker/rest/api.json
----

You should see output similar to the following:

----
{
  "api_version": 1.5,
  "data": {
    "API": { "href": "https://broker.example.com/broker/rest/api", "method": "GET", "optional_params": [], "rel": "API entry point", "required_params": [] },
    "GET_ENVIRONMENT": { "href": "https://broker.example.com/broker/rest/environment", "method": "GET", "optional_params": [], "rel": "Get environment information", "required_params": [] },
    "GET_USER": { "href": "https://broker.example.com/broker/rest/user", "method": "GET", "optional_params": [], "rel": "Get user information", "required_params": [] },
    "LIST_DOMAINS": { "href": "https://broker.example.com/broker/rest/domains", "method": "GET", "optional_params": [], "rel": "List domains", "required_params": [] },
    "ADD_DOMAIN": { "href": "https://broker.example.com/broker/rest/domains", "method": "POST", "optional_params": [], "rel": "Create new domain", "required_params": [{ "description": "Name of the domain", "invalid_options": [], "name": "id", "type": "string", "valid_options": [] }] },
    "LIST_CARTRIDGES": { "href": "https://broker.example.com/broker/rest/cartridges", "method": "GET", "optional_params": [], "rel": "List cartridges", "required_params": [] },
    "LIST_AUTHORIZATIONS": { "href": "https://broker.example.com/broker/rest/user/authorizations", "method": "GET", "optional_params": [], "rel": "List authorizations", "required_params": [] },
    "SHOW_AUTHORIZATION": { "href": "https://broker.example.com/broker/rest/user/authorization/:id", "method": "GET", "optional_params": [], "rel": "Retrieve authorization :id", "required_params": [{ "description": "Unique identifier of the authorization", "invalid_options": [], "name": ":id", "type": "string", "valid_options": [] }] },
    "ADD_AUTHORIZATION": { "href": "https://broker.example.com/broker/rest/user/authorizations", "method": "POST", "optional_params": [{ "default_value": "userinfo", "description": "Select one or more scopes that this authorization will grant access to:\n\n* session\n Grants a client the authority to perform all API actions against your account. Valid for 1 day.\n* read\n Allows the client to access resources you own without making changes. Does not allow access to view authorization tokens. Valid for 1 day.\n* userinfo\n Allows a client to view your login name, unique id, and your user capabilities. Valid for 1 day.", "name": "scope", "type": "string", "valid_options": ["session", "read", "userinfo"] }, { "default_value": null, "description": "A description to remind you what this authorization is for.", "name": "note", "type": "string", "valid_options": [] }, { "default_value": -1, "description": "The number of seconds before this authorization expires. Out of range values will be set to the maximum allowed time.", "name": "expires_in", "type": "integer", "valid_options": [] }, { "default_value": false, "description": "Attempt to locate and reuse an authorization that matches the scope and note and has not yet expired.", "name": "reuse", "type": "boolean", "valid_options": [true, false] }], "rel": "Add new authorization", "required_params": [] },
    "LIST_QUICKSTARTS": { "href": "https://broker.example.com/broker/rest/quickstarts", "method": "GET", "optional_params": [], "rel": "List quickstarts", "required_params": [] },
    "SHOW_QUICKSTART": { "href": "https://broker.example.com/broker/rest/quickstart/:id", "method": "GET", "optional_params": [], "rel": "Retrieve quickstart with :id", "required_params": [{ "description": "Unique identifier of the quickstart", "invalid_options": [], "name": ":id", "type": "string", "valid_options": [] }] }
  },
  "messages": [],
  "status": "ok",
  "supported_api_versions": [1.0, 1.1, 1.2, 1.3, 1.4, 1.5],
  "type": "links",
  "version": "1.5"
}
----

    === Start apache

Start the Apache server on the broker host to proxy web traffic to the broker application.

.RHEL
----
chkconfig httpd on
service httpd start
----

.Fedora
----
systemctl enable httpd.service
systemctl start httpd.service
----

    In order to verify that the REST API is functioning for the broker host, you can use the following _curl_ command:

----
curl -u demo:<password> -k https://broker.example.com/broker/rest/api.json
----

    At this point you have a fully functional Broker. In order to work with it, proceed through the Web Console installation.

    == The Web Console

    *Server used:*

    * broker host

    *Tools used:*

* text editor
* yum
* service
* chkconfig

The OpenShift Origin Web Console is written in Ruby and will provide a graphical user interface for users of the system to create and manage application gears that are deployed on the gear hosts.

=== Install the Web Console RPMs

The installation of the web console can be performed with a simple _yum install_ command, but note that it will pull in many dependencies from the Ruby programming language. At the time of this writing, executing the following command installed 77 additional packages.

----
yum install -y openshift-origin-console
----

    NOTE: Depending on your connection and speed of your broker host, this installation may take several minutes.

=== Configure Authentication for the Console

If you are building the reference configuration described in this document, then you have configured the broker application for Basic Authentication. What you actually configured was authentication for the Broker REST API. The console application uses a separate authentication scheme for authenticating users to the web console. This will enable you to restrict which users you want to have access to the REST API and keep that authentication separate from the web-based user console.

The openshift-console package created some sample authentication files for us. These files are located in the _/var/www/openshift/console/httpd/conf.d_ directory. For this reference configuration, you will use the same htpasswd file that you created when you set up authentication for the Broker application. In order to do this, issue the following commands:

----
cd /var/www/openshift/console/httpd/conf.d
cp openshift-origin-auth-remote-user-basic.conf.sample openshift-origin-auth-remote-user-basic.conf
----

=== Verify the Ruby Bundler

The console Rails application depends on several gem files in order to operate correctly. You need to ensure that the Ruby bundler can find the appropriate gem files.

----
cd /var/www/openshift/console
bundle --local
----

    You should see a lot of information scroll by letting you know what gem files the system is actually using. The last line of output should be:

----
Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed.
----

=== Set Console to Start on Boot

Start the service and ensure it starts on boot:

.RHEL6
----
chkconfig openshift-console on
service openshift-console start
----

.Fedora
----
systemctl enable openshift-console.service
systemctl start openshift-console.service
----

    Once completed, the console will prompt the user to provide their login credentials as specified in the _/etc/openshift/htpasswd_ file.

    NOTE: Seeing an error page after authenticating to the console is expected at this point. The web console will not be fully active until you add a node host to the Origin system

    == The Node Host

    *Servers used:*

    * Node host

* Broker host

    *Tools used:*

* text editor
* yum
* ntpdate
* dig
* oo-register-dns
* cat
* scp
* ssh

=== Register a DNS entry for the Node Host

*SSH to your broker application host* and set a variable that points to your keyfile. The following command should work after you replace "example.com" with the domain that you are going to use.

NOTE: You can skip this section if you are building an all-in-one environment.

----
keyfile=/var/named/example.com.key
----

    In order to configure your DNS to resolve your node host, we need to tell our BIND server about the host. Run the following command and *replace the IP address with the correct IP address of your node*.

    *Execute the following on the broker host*:

----
oo-register-dns -h node -d example.com -n 10.4.59.y -k ${keyfile}
----

    Now that you have added your node host to the DNS server, the broker application host should be able to resolve the node host by referring to it by name. Let's test this:

----
dig @127.0.0.1 node.example.com
----

This should resolve to the 10.4.59.y IP address that you specified for the node host in the _oo-register-dns_ command.

=== Configure SSH Key Authentication

While on the broker application host, you need to copy the SSH key that you previously created over to the node. This will enable operations to work from inside of OpenShift Origin without requiring a password. Once you connect to the broker host, copy the key with the following command:

*Execute the following on the broker host*:

.Separate Broker and Node Setup
----
scp /etc/openshift/rsync_id_rsa.pub root@node.example.com:/root/.ssh
----

.All-In-One Setup
----
cp -f /etc/openshift/rsync_id_rsa.pub /root/.ssh/
----

    Once you enter that command, you will be prompted to authenticate to the node host.

    At this point, you need to login to your node host to add the newly copied key to our authorized_keys. SSH into your node host and run the following:

    *Execute the following on the node host*:

----
cat /root/.ssh/rsync_id_rsa.pub >> /root/.ssh/authorized_keys
----

Now that your key has been copied from your broker application host to your node host, let's verify that it was copied correctly and was added to the authorized_keys file. Once you issue the following command, you should be authenticated to the node host without having to specify the root user password.

    *Verify the key by executing the following on the broker host*:

.Separate Broker and Node Setup
----
ssh -i /root/.ssh/rsync_id_rsa root@node.example.com
----

.All-In-One Setup
----
ssh -i /root/.ssh/rsync_id_rsa root@broker.example.com
----

=== Configure DNS Resolution on the Node

Now you need to configure the node host to use the BIND server that was installed and configured on the broker application host. This is a fairly straightforward process of adding the IP address of the DNS server to the _/etc/resolv.conf_ on the node host.

NOTE: You can skip this section if you are building an all-in-one environment.

    Edit this file and add the following line, making sure to use the correct IP address of your broker host:

    *Perform this change on the node host*:

----
nameserver 10.4.59.x
----
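To confirm that the node can now resolve hosts through the broker's BIND server, you can query it directly from the node (assuming 10.4.59.x is the broker's address):

----
dig @10.4.59.x broker.example.com
----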

=== Configure the DHCP Client and Hostname

On the node host, configure your system settings to prepend the DNS server to the resolv.conf file on system boot. This will allow the node host to resolve references to broker.example.com and ensure that all pieces of OpenShift Origin can communicate with one another. This process is similar to setting up the _dhclient-eth0.conf_ configuration file for the broker application.

NOTE: You can skip this section if you are building an all-in-one environment.

NOTE: This step assumes that your node host is using the eth0 device for network connectivity. If that is not the case, replace eth0 with the correct Ethernet device for your host.

Edit the _/etc/dhcp/dhclient-eth0.conf_ file, or create it if it doesn't exist, and add the following information, ensuring that you replace the IP address with the correct IP of your broker application host:

----
prepend domain-name-servers 10.4.59.x;
supersede host-name "node";
supersede domain-name "example.com";
----

Update your network scripts to use the DNS server. Update the _/etc/sysconfig/network-scripts/ifcfg-_ file for your network device (for example, _ifcfg-eth0_) and add the following information, making sure to substitute the IP address of the broker host:

----
PEERDNS="no"
DNS1=10.4.59.x
----

    Now set the hostname for node host to correctly reflect node.example.com.

.RHEL6
====
Edit the _/etc/sysconfig/network_ file and change the _HOSTNAME_ entry to the following:

----
HOSTNAME=node.example.com
----
====

.Fedora
====
----
echo "node.example.com" > /etc/hostname
----
====

    Finally, set the hostname for your current session by issuing the hostname command at the command prompt.

----
hostname node.example.com
----

    Verify that the hostname was set correctly by running the `hostname` command. If the hostname was set correctly, you should see _node.example.com_ as the output of the hostname command.

----
hostname
----

    === MCollective on the Node Host

    *Server used:*

* node host

    *Tools used:*

* text editor
* yum
* chkconfig
* service
* mco ping

    MCollective is the tool that OpenShift Origin uses to send and receive messages via the ActiveMQ messaging server. In order for the node host to send and receive messages with the broker application, you need to install and configure MCollective on the node host to communicate with the broker application.

==== Install MCollective

In order to install MCollective on the node host, you will need to install the _openshift-origin-msg-node-mcollective_ package that is provided by the OpenShift Origin repository:

----
yum install -y openshift-origin-msg-node-mcollective
----

NOTE: Depending on your connection and the speed of your node host, this installation may take several minutes.

=== Configure MCollective

Configure the MCollective service on the node to communicate with the broker application host. In order to accomplish this, replace the contents of the MCollective server.cfg configuration file to point to your correct stomp host. Edit the _/etc/mcollective/server.cfg_ file and add the following information. If you used a different hostname for your broker application host, ensure that you provide the correct stomp host. You also need to ensure that you use the same username and password that you specified in your ActiveMQ configuration.

.Fedora
----
cat <<EOF > /etc/mcollective/server.cfg
...

# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
EOF
----

.RHEL
----
cat <<EOF > /etc/mcollective/server.cfg
...
EOF
----

On Fedora, a systemd unit file is also created for the mcollective service; the final lines of that unit file are:

----
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
----

Reload the Systemd service files:

----
systemctl --system daemon-reload
----

    Now ensure that MCollective is set to start on boot and also start the service for our current session.

.RHEL6
----
chkconfig mcollective on
service mcollective start
----

.Fedora
----
systemctl enable mcollective.service
systemctl start mcollective.service
----

    At this point, MCollective on the node host should be able to communicate with the broker application host. You can verify this by running the _mco ping_ command on the broker.example.com host.

----
mco ping
----

    If MCollective was installed and configured correctly, you should see node.example.com in the output from the previous command.
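The exact output varies, but a healthy setup lists each node host along with its response time, roughly like the following (the timing value is illustrative):

----
node.example.com                         time=100.02 ms
----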

    === Node Host Packages

    *Server used:*

    * node host

    *Tools used:*

* text editor
* yum
* lokkit
* chkconfig

Just as we installed specific packages that provide the source code and functionality for the broker application to work correctly, the node host also has a set of packages that need to be installed to properly identify the host as a node that will contain application gears.

==== Install the Core Packages

The following packages are required for your node host to work correctly:

* rubygem-openshift-origin-node
* rubygem-passenger-native
* openshift-origin-port-proxy
* openshift-origin-node-util
* rubygem-openshift-origin-container-selinux

Installing these packages can be performed in one yum install command.

----
yum install -y rubygem-openshift-origin-node \
    rubygem-passenger-native \
    openshift-origin-port-proxy \
    openshift-origin-node-util \
    rubygem-openshift-origin-container-selinux
----

NOTE: Depending on your connection and the speed of your node host, this installation may take several minutes.

    ==== Select and Install Built-In Cartridges to be SupportedCartridges provide the functionality that a consumer of the PaaS can use to create specific application types, databases, or other functionality. OpenShift Origin provides a number of built-in cartridges as well as an extensive cartridge API that will allow you to create your own custom cartridge types for your specific deployment needs.

    At the time of this writing, the following optional application cartridges are available for consumption on the node host.

* openshift-origin-cartridge-python: Python cartridge
* openshift-origin-cartridge-ruby: Ruby cartridge
* openshift-origin-cartridge-nodejs: Provides Node.js
* openshift-origin-cartridge-perl: Perl cartridge
* openshift-origin-cartridge-php: PHP cartridge
* openshift-origin-cartridge-diy: DIY cartridge
* openshift-origin-cartridge-jbossas: Provides JBossAS7 support
* openshift-origin-cartridge-jenkins: Provides Jenkins-1.4 support

If you want to provide scalable PHP applications for your consumers, you would want to install the openshift-origin-cartridge-haproxy and the openshift-origin-cartridge-php cartridges.

    For database and other system related functionality, OpenShift Origin provides the following:

* openshift-origin-cartridge-cron: Embedded cron support for OpenShift
* openshift-origin-cartridge-jenkins-client: Embedded Jenkins client support for OpenShift
* openshift-origin-cartridge-mongodb: Embedded MongoDB support for OpenShift
* openshift-origin-cartridge-10gen-mms-agent: Embedded 10gen MMS agent for performance monitoring of MongoDB
* openshift-origin-cartridge-postgresql: Provides embedded PostgreSQL support
* openshift-origin-cartridge-mariadb: Provides embedded MariaDB support (Fedora 19 systems only)
* openshift-origin-cartridge-mysql: Provides embedded MySQL support (RHEL systems only)
* openshift-origin-cartridge-phpmyadmin: phpMyAdmin support for OpenShift

    The only required cartridge is the openshift-origin-cartridge-cron package.

NOTE: If you are installing a multi-node configuration, it is important to remember that each node host _must_ have the same cartridges installed.

    Start by installing the cron package, which is required for all OpenShift Origin deployments.

----
yum install -y openshift-origin-cartridge-cron
----

If you are planning to install the openshift-origin-cartridge-jenkins* packages, you will first need to configure and install Jenkins:

.Jenkins
----
curl -o /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
rpm --import http://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key
yum install -y jenkins-1.510
----

NOTE: The OpenShift Jenkins plugin currently requires jenkins-1.510. If a newer version is already installed, you may need to downgrade for it to work: _yum downgrade -y jenkins-1.510_.

    As an example, this additional command will install the cartridges needed for scalable PHP applications that can connect to MySQL:

.Fedora
----
yum install -y openshift-origin-cartridge-haproxy openshift-origin-cartridge-php openshift-origin-cartridge-mariadb
----

.RHEL
----
yum install -y openshift-origin-cartridge-haproxy openshift-origin-cartridge-php openshift-origin-cartridge-mysql
----

For a complete list of all cartridges that you are entitled to install, you can perform a search using the yum command, which will output all OpenShift Origin cartridges:

----
# yum search origin-cartridge
----

To install all cartridge RPMs, run:

----
yum install -y openshift-origin-cartridge-\*
----

Finally, run the following to install the cartridges:

----
/usr/sbin/oo-admin-cartridge --recursive -a install -s /usr/libexec/openshift/cartridges/
----

=== Start Required Services
The node host will need to allow HTTP, HTTPS, and SSH traffic to flow through the firewall. We also want to ensure that the httpd, network, and sshd services are set to start on boot.

.RHEL6
----
lokkit --service=ssh
lokkit --service=https
lokkit --service=http
lokkit --port=8000:tcp
lokkit --port=8443:tcp
chkconfig httpd on
chkconfig network on
chkconfig sshd on
chkconfig oddjobd on
chkconfig openshift-node-web-proxy on
----

.Fedora
----
firewall-cmd --add-service=ssh
firewall-cmd --add-service=http
firewall-cmd --add-service=https
firewall-cmd --add-port=8000/tcp
firewall-cmd --add-port=8443/tcp
firewall-cmd --permanent --add-service=ssh
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --permanent --add-port=8000/tcp
firewall-cmd --permanent --add-port=8443/tcp
systemctl enable httpd.service
systemctl enable network.service
systemctl enable sshd.service
systemctl enable oddjobd.service
systemctl enable openshift-node-web-proxy.service
----
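To double-check that the firewall changes took effect before moving on, you can list the active rules. This is an optional verification step, not part of the configuration itself.

.Fedora
----
# Services and ports currently allowed in the default zone
firewall-cmd --list-all
----

.RHEL6
----
# Running iptables rules on RHEL 6
iptables -L INPUT -n
----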

    == Configuring Multi-Tenancy on the Node Host

    *Server used:*

    * node host

    *Tools used:*

* text editor
* sed
* restorecon
* chkconfig
* service
* mount
* quotacheck
* augtool

    === Install augeas tools

Augeas is a very useful toolset for performing scripted updates to configuration files. Run the following to install it:

----
yum install -y augeas
----

=== Configure PAM Modules
The pam_namespace PAM module sets up a private namespace for a session with _polyinstantiated_ directories. A polyinstantiated directory provides a different instance of itself based on user name or, when using SELinux, user name, security context, or both. OpenShift Origin ships with its own PAM configuration, and we need to configure the node to use it.

----
cat <<EOF | augtool
...
set /files/etc/pam.d/su/01/argument[1] no_unmount_on_close
set /files/etc/pam.d/su/01/#comment 'Managed by openshift_origin'
set /files/etc/pam.d/system-auth-ac/01/type session
set /files/etc/pam.d/system-auth-ac/01/control required
set /files/etc/pam.d/system-auth-ac/01/module pam_namespace.so
set /files/etc/pam.d/system-auth-ac/01/argument[1] no_unmount_on_close
set /files/etc/pam.d/system-auth-ac/01/#comment 'Managed by openshift_origin'
save
EOF
----
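If you would like to confirm that the pam_namespace entries were written as expected, you can inspect the resulting files. This optional check uses the same augeas tooling installed above.

----
# Print the managed PAM entries through augeas
augtool print /files/etc/pam.d/system-auth-ac | grep pam_namespace
# Or look at the files directly
grep pam_namespace /etc/pam.d/su /etc/pam.d/system-auth-ac
----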

=== Configure Disk Quotas
When a consumer of OpenShift Origin creates an application gear, you will need to be able to control and set the amount of disk space that the gear can consume. This configuration is located in the _/etc/openshift/resource_limits.conf_ file. The two settings of interest are quota_files and quota_blocks. The quota_files setting specifies the total number of files that a gear / user is allowed to own. The quota_blocks setting is the actual amount of disk storage that the gear is allowed to consume, where 1 block is equal to 1024 bytes.

In order to enable _usrquota_ on the filesystem, you will need to add the _usrquota_ option to the _/etc/fstab_ entry for the mount of /var/lib/openshift. In this chapter, the /var/lib/openshift directory is mounted as part of the root filesystem. The corresponding line in the /etc/fstab file looks like:

.RHEL
----
/dev/mapper/VolGroup-lv_root / ext4 defaults 1 1
----

.Fedora
----
/dev/mapper/fedora-root / ext4 defaults 1 1
----

    In order to add the usrquota option to this mount point, change the entry to the following:

.RHEL
----
/dev/mapper/VolGroup-lv_root / ext4 defaults,usrquota 1 1
----

.Fedora
----
/dev/mapper/fedora-root / ext4 defaults,usrquota 1 1
----

    For the usrquota option to take effect, you can reboot the node host or simply remount the filesystem:

----
mount -o remount /
----

    And then generate user quota info for the mount point:

----
quotacheck -cmug /
----
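As an optional sanity check, you can confirm that the usrquota option is in place and that the quota database exists; exact output varies by platform.

----
# The usrquota option should appear on the root filesystem entry
grep usrquota /etc/fstab
grep ' / ' /proc/mounts
# Report per-user quota usage for the filesystem
repquota /
----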

    === Configure SELinux and System Control Settings

    *Server used:*

    * node host

    *Tools used:*

* text editor
* setsebool
* fixfiles
* restorecon
* sysctl

==== Configuring SELinux
The OpenShift Origin node requires several SELinux boolean values to be set in order to operate correctly.

.SELinux Boolean Values
[options="header"]
|===
| Variable Name | Description

| httpd_run_stickshift | Enable passenger-related permissions
| httpd_execmem | Allow httpd to execute programs that require memory addresses that are both executable and writeable
| httpd_unified | Allow the broker to write files in the "http" file context
| httpd_can_network_connect | Allow the broker application to access the network
| httpd_can_network_relay | Allow the SSL termination Apache instance to access the backend Broker application
| httpd_read_user_content | Allow the node to read application data
| httpd_enable_homedirs | Allow the node to read application data
| allow_polyinstantiation | Allow polyinstantiation for gear containment
|===

    To set these values and then relabel files to the correct context, issue the following commands:

----
setsebool -P httpd_unified=on httpd_can_network_connect=on httpd_can_network_relay=on \
    httpd_read_user_content=on httpd_enable_homedirs=on httpd_run_stickshift=on \
    allow_polyinstantiation=on httpd_execmem=on
restorecon -rv /var/run
restorecon -rv /usr/sbin/mcollectived /var/log/mcollective.log /var/run/mcollectived.pid
restorecon -rv /var/lib/openshift /etc/openshift/node.conf /etc/httpd/conf.d/openshift
----
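You can verify that the booleans are persistently enabled with getsebool; this is an optional check.

----
getsebool httpd_run_stickshift httpd_execmem httpd_unified httpd_can_network_connect \
    httpd_can_network_relay httpd_read_user_content httpd_enable_homedirs allow_polyinstantiation
----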

==== Configuring System Control Settings
You will need to modify the _/etc/sysctl.conf_ configuration file to increase the number of kernel semaphores (to allow many httpd processes), increase the number of ephemeral ports, and increase the connection tracking table size. Edit the file in your favorite text editor and add the following lines to the bottom of the file:
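The exact values depend on your release and on how many gears you expect per node; the lines below are a sketch using values commonly cited for OpenShift Origin v2 deployments. Treat them as starting points rather than requirements, and keep the ephemeral port range below the 35531-65535 range used by the port proxy later in this chapter.

----
cat <<EOF >> /etc/sysctl.conf
# Added for OpenShift Origin
# Kernel semaphores, to accommodate many httpd processes
kernel.sem = 250 32000 32 4096
# Ephemeral port range, kept below the external port-proxy range
net.ipv4.ip_local_port_range = 15000 35530
# Connection-tracking table size
net.netfilter.nf_conntrack_max = 1048576
EOF
----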

Once the settings have been added, apply them to the running system:

----
sysctl -p /etc/sysctl.conf
----

    You may see error messages about unknown keys. Check that these error messages did not result from typos in the settings you have added just now. If they result from settings that were already present in _/etc/sysctl.conf_, you can ignore them.

    === Configure SSH, OpenShift Port Proxy, and Node Configuration

    *Server used:*

    * node host

    *Tools used:*

* text editor
* perl
* lokkit
* chkconfig
* service
* openshift-facts

==== Configuring SSH to Pass Through the _GIT_SSH_ Environment Variable
Edit the _/etc/ssh/sshd_config_ file and add the following lines:

----
cat <<EOF >> /etc/ssh/sshd_config
AcceptEnv GIT_SSH
EOF
----

    When a developer pushes a change up to their OpenShift Origin gear, an SSH connection is created. Because this may result in a high number of connections, you need to increase the limit of the number of connections allowed to the node host.
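The sshd directives that govern this are MaxSessions and MaxStartups. As a sketch, the limits can be raised with the perl one-liners below; the value of 40 is an illustrative assumption, so tune it to the number of concurrent developers you expect.

----
# Raise the number of sessions allowed per network connection
perl -p -i -e "s/^#MaxSessions .*$/MaxSessions 40/" /etc/ssh/sshd_config
# Raise the number of concurrent unauthenticated connections
perl -p -i -e "s/^#MaxStartups .*$/MaxStartups 40/" /etc/ssh/sshd_config
----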

==== Configuring the Port Proxy
Multiple application gears can and will reside on the same node host. In order for these applications to receive HTTP requests to the node, you need to configure a proxy that will pass traffic to the gear application that is listening for connections on the loopback address. To do this, you need to open up a range of ports that the node can accept traffic on, as well as ensure the port proxy is started on boot.

.RHEL6
----
lokkit --port=35531-65535:tcp
chkconfig openshift-port-proxy on
service openshift-port-proxy start
----

.Fedora
----
firewall-cmd --add-port=35531-65535/tcp
firewall-cmd --permanent --add-port=35531-65535/tcp
systemctl enable openshift-port-proxy.service
systemctl restart openshift-port-proxy.service
----

    If a node is restarted, you want to ensure that the gear applications are also restarted. OpenShift Origin provides a script to accomplish this task, but you need to configure the service to start on boot.

.RHEL6
----
chkconfig openshift-gears on
----

.Fedora
----
systemctl enable openshift-gears.service
----

==== Configuring Node Settings for Domain Name
Edit the _/etc/openshift/node.conf_ file and *specify the correct settings for your _CLOUD_DOMAIN_, _PUBLIC_HOSTNAME_, and _BROKER_HOST_ IP address*. For example:

.Separate Broker and Node Setup
----
PUBLIC_HOSTNAME="node.example.com"      # The node host's public hostname
PUBLIC_IP="10.4.59.y"                   # The node host's public IP address
BROKER_HOST="broker.example.com"        # IP or DNS name of broker host for REST API
EXTERNAL_ETH_DEV='enp0s5'               # Update to match name of external network device
----

.All-In-One Setup
----
PUBLIC_HOSTNAME="broker.example.com"    # The node host's public hostname
PUBLIC_IP="10.4.59.x"                   # The node host's public IP address
BROKER_HOST="broker.example.com"        # IP or DNS name of broker host for REST API
EXTERNAL_ETH_DEV='enp0s5'               # Update to match name of external network device
----

NOTE: Ensure that EXTERNAL_ETH_DEV and PUBLIC_IP have accurate values, or the node will be unable to create gears.
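Once node.conf has been populated, you can sanity-check the node configuration. Assuming the openshift-origin-node-util package installed earlier provides the oo-accept-node utility (as it does in OpenShift Origin v2), run it and resolve anything it reports:

----
oo-accept-node -v
----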

    === Update login.defs

    Update the minimum UID and GID for the machine to match GEAR_MIN_UID from node.conf. This value is 500 by default.
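One way to do this, shown as a sketch that assumes the default GEAR_MIN_UID of 500, is to rewrite the UID_MIN and GID_MIN lines with sed:

----
sed -i -e "s/^UID_MIN.*/UID_MIN                  500/" \
       -e "s/^GID_MIN.*/GID_MIN                  500/" /etc/login.defs
----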


== Testing the Configuration
If everything to this point has been completed successfully, you can now test your deployment of OpenShift Origin. To run a test, first set up an SSH tunnel to enable communication with the broker and node hosts. This will allow you to connect to localhost on your desktop machine and forward all traffic to your OpenShift Origin installation. In the next section, you will update your local machine to point directly to your DNS server, but for now, an SSH tunnel will suffice.

    NOTE: You can also just use the IP address of your broker node instead of using port forwarding.

    On your local machine, issue the following command, replacing the IP address with the IP address of your broker node:

----
sudo ssh -f -N -L 80:broker.example.com:80 -L 8161:broker.example.com:8161 -L 443:broker.example.com:443 root@10.4.59.x
----

We have to use the sudo command in order to allow forwarding of low-range ports. Once you have entered the above command and authenticated correctly, you should be able to view the web console by pointing your local browser to:

----
http://127.0.0.1
----

    You will notice that you may, depending on your browser settings, have to accept the SSL certificate. In Firefox, the page will look similar to this:

    image:cert.png[image]

Once you have accepted and added the SSL certificate, you will be prompted to authenticate to the OpenShift console. Use the credentials that we created in a previous chapter, which should be:

* Username: demo
* Password: demo

    After you have authenticated, you should be presented with the OpenShift web console as shown below:

    image:console.png[image]

    If you do not see the expected content, consult the troubleshooting section at the end of this manual.

    == Configuring local machine for DNS resolution

    *Server used:*

    * local machine

    *Tools used:*

* text editor
* networking tools

At this point, you should have a complete and correctly functioning OpenShift Origin installation. During the next portion of this guide, we will be focusing on administration and usage of the OpenShift Origin PaaS. To make performing these tasks easier, it is suggested that you add the DNS server that we created in a previous chapter as the first nameserver that your local machine uses to resolve hostnames. The process for this varies depending on the operating system. This guide covers the configuration for both the Linux and Mac operating systems. If you are using a Microsoft Windows operating system, consult your operating system's documentation for how to configure its DNS servers.

=== Configure example.com resolution for Linux
If you are using Linux, the process for updating your name server is straightforward. Simply edit the _/etc/resolv.conf_ configuration file and add the IP address of your broker node as the first entry. For example, add the following at the top of the file, replacing the 10.4.59.x IP address with the correct address of your broker node:

----
nameserver 10.4.59.x
----

    Once you have added the above nameserver, you should be able to communicate with your OpenShift Origin PaaS by using the server hostname. To test this out, ping the broker and node hosts from your local machine:

----
$ ping broker.example.com
$ ping node.example.com
----
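If the ping fails, it can help to query the broker's name server directly; dig is provided by the bind-utils package. This separates a DNS problem from a resolver-configuration problem:

----
dig @10.4.59.x broker.example.com
----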

=== Configure example.com resolution for OS X
If you are using OS X, you will notice that the operating system has an _/etc/resolv.conf_ configuration file. However, the operating system does not respect this file and requires users to edit the DNS servers via the _System Preferences_ tool.

    Open up the _System Preferences_ tool and select the _Network_ utility:

    image:network.png[image]

On the bottom left-hand corner of the _Network_ utility, ensure that the lock button is unlocked to enable user modifications to the DNS configuration. Once you have unlocked the system for changes, locate the ethernet device that is providing connectivity for your machine and click the _Advanced_ button:

    image:network2.png[image]

    Select the DNS tab at the top of the window:

    image:network3.png[image]

    NOTE: Make a list of the current DNS servers that you have configured for your operating system. When you add a new one, OS X removes the existing servers forcing you to add them back.

    Click the _+_ button to add a new DNS server and enter the 10.4.59.x IP address of your broker host.

    image:network4.png[image]

    NOTE: Add your existing nameservers back that you made a note of above.

After you have applied the changes, you can test that name resolution is working correctly. To do so, ping the broker and node hosts from your local machine:

----
$ ping broker.example.com
$ ping node.example.com
----