
Ceph Storage for Oracle® Linux Release 1.0

Release Notes

E66514-04 May 2016


Oracle Legal Notices

Copyright © 2016, Oracle and/or its affiliates. All rights reserved.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information about content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services, except as set forth in an applicable agreement between you and Oracle.

Documentation Accessibility

For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support

Oracle customers that have purchased support have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.

Abstract

This document contains information about the Ceph release available from Oracle. It describes the differences from the upstream version, includes notes on installing and configuring Ceph, and provides a statement of what is supported.

Document generated on: 2016-05-10 (revision: 3724)


Table of Contents

Preface
1 Release Notes
  1.1 About Ceph Storage for Oracle Linux Release 1.0
  1.2 About Ceph
  1.3 Differences from the Upstream Release
  1.4 Enabling Access to the Ceph Packages
  1.5 Preparing the Storage Cluster Nodes Before Installing Ceph
  1.6 Installing and Configuring Ceph on the Storage Cluster Deployment Node
  1.7 Installing Ceph on the Other Storage Cluster Nodes
  1.8 Configuring the Storage Cluster
  1.9 Configuring an Object Gateway for OpenStack and Swift Access
    1.9.1 Install and Configure Apache and FastCGI
    1.9.2 Install and Configure a Simple Ceph Object Gateway
  1.10 Installing a Ceph Client
  1.11 Creating a Storage Pool
  1.12 Configuring a Block Device on a Ceph Client
  1.13 Known Issues
    1.13.1 ceph-deploy Debugging and Warning Messages for Non-existent Packages
    1.13.2 ceph-deploy Reports Errors
    1.13.3 ceph-deploy Reports an Error on Exit
    1.13.4 ceph-deploy purge Does Not Remove Dependent Packages
    1.13.5 ceph-deploy Python Errors on Oracle Linux 6
    1.13.6 ceph-radosgw Service Fails to Start on Oracle Linux 7
    1.13.7 ceph-radosgw Service Produces Error and Warning Messages when Starting on Oracle Linux 6
    1.13.8 Mounting a CephFS File System Produces a Spurious Error Message
    1.13.9 OSD Daemon Fails to Start
    1.13.10 radosgw-admin Generates JSON Escape Characters in Keys
    1.13.11 sosreport Takes a Long Time to Run on a Storage Cluster Node
Ceph Terminology


Preface

This document provides details of Ceph Storage for Oracle Linux Release 1.0.

Audience

This document is written for developers who want to use Ceph with Oracle Linux 6 or Oracle Linux 7. It is assumed that readers have a general understanding of the Linux operating system.

Related Documents

The latest version of this document and other documentation for this product are available at:

http://www.oracle.com/technetwork/server-storage/linux/documentation/index.html

Conventions

The following text conventions are used in this document:

Convention Meaning

boldface Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.

italic Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.

monospace Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.


Chapter 1 Release Notes

1.1 About Ceph Storage for Oracle Linux Release 1.0

Ceph Storage for Oracle Linux Release 1.0 is currently available for Oracle Linux 6 Update 7 (x86_64) or later and Oracle Linux 7 Update 1 (x86_64) or later with the Unbreakable Enterprise Kernel Release 4 and later or the Unbreakable Enterprise Kernel Release 3 Quarterly Update 6 and later.

Note

The source RPMs for Ceph are available from Oracle Public Yum at http://public-yum.oracle.com.

Ceph Storage for Oracle Linux Release 1.0 is based on the Ceph Community Firefly release (v0.80). The supported features include the Object Store, Block Device and Storage Cluster components. The Ceph file system (CephFS) and using a federated architecture with the Ceph Object Gateway are technology preview features.

For a quick-start guide to using Ceph, see http://ceph.com/docs/master/start/quick-ceph-deploy/.

For more information about Ceph, go to http://ceph.com/.

1.2 About Ceph

Ceph presents a uniform view of object and block storage from a cluster of multiple physical and logical commodity-hardware storage devices. Ceph can provide fault tolerance and enhance I/O performance by replicating and striping data across the storage devices in a Storage Cluster. Ceph's monitoring and self-repair features minimize administration overhead. You can configure a Storage Cluster on non-identical hardware from different manufacturers.

1.3 Differences from the Upstream Release

The differences between the Oracle version of the software and the upstream release include the addition of Oracle Linux GPG keys, and the addition of Oracle Linux as a supported operating system to the ceph-deploy command and its Python libraries.

1.4 Enabling Access to the Ceph Packages

The ceph-deploy package is available on Oracle Public Yum or ULN. This procedure describes how to enable access to Oracle Public Yum for the system that will act as the Storage Cluster deployment node.

Perform the following steps on the system that will act as the Storage Cluster deployment node:

1. Use a command such as curl or wget to download a yum repository file that includes the Ceph repository:

• For example, using wget with Oracle Linux 6:

# wget -O /etc/yum.repos.d/public-yum-ol6.repo \
  http://public-yum.oracle.com/public-yum-ol6.repo

• For example, using wget with Oracle Linux 7:

# wget -O /etc/yum.repos.d/public-yum-ol7.repo \
  http://public-yum.oracle.com/public-yum-ol7.repo

The downloaded file contains entries for the ol6_ceph10 or ol7_ceph10 Ceph repository, as appropriate.

2. Edit the downloaded yum repository file and set the value of the enabled parameter for the Ceph repository to 1.
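For example, on Oracle Linux 6 the Ceph stanza in the downloaded file should end up containing enabled=1 (only the changed line is shown here; leave the other fields as downloaded):

[ol6_ceph10]
enabled=1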

The downloaded yum repository file also contains disabled entries for additional repositories that contain packages on which Ceph depends:

• Oracle Linux 6: ol6_latest

• Oracle Linux 7: ol7_latest and ol7_optional_latest

3. If the system does not already have access to the latest repositories on Public Yum or ULN, you can enable access by editing the downloaded yum repository file:

• On Oracle Linux 6, set the value of the enabled parameter for the ol6_latest repository to 1.

• On Oracle Linux 7, set the values of the enabled parameters for the ol7_latest and ol7_optional_latest repositories to 1.

Caution

Do not enable access to the latest repositories on both Public Yum and ULN.

4. For Oracle Linux 7, enable access to the ol7_addons repository on Public Yum or ULN.

You can now prepare the Storage Cluster nodes for Ceph installation. See Section 1.5, “Preparing the Storage Cluster Nodes Before Installing Ceph”.

1.5 Preparing the Storage Cluster Nodes Before Installing Ceph

Note

For data integrity, a Storage Cluster should contain two or more nodes for storing copies of an object.

For high availability, a Storage Cluster should contain three or more nodes that store copies of an object.

In the example used in the following steps, the administration node is ceph-node1.mydom.com (192.168.1.51).

To prepare the Storage Cluster nodes:

1. Perform the following steps on each system that will act as a Storage Cluster node:

a. If SELinux is enabled, disable it and reboot the system:

# sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
# reboot

You can re-enable SELinux on each node after configuring Ceph.

b. If the NTP service is not already configured, install and start it. See the Oracle Linux 6 Administrator's Guide or the Oracle Linux 7 Administrator's Guide as appropriate.
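For example, a minimal sketch on Oracle Linux 6, assuming the standard ntp package, is:

# yum install ntp
# service ntpd start
# chkconfig ntpd on

On Oracle Linux 7, the chrony service may be used instead; see the Administrator's Guide for details.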

Page 9: Ceph Storage for Oracle® Linux Release 1.0 - Release Notes · Ceph Storage for Oracle Linux Release 1.0 is based on the Ceph Community Firefly release (v0.80). The supported features

Preparing the Storage Cluster Nodes Before Installing Ceph

3

Note

Use the hwclock --show command to ensure that all nodes agree on the time. By default, the Ceph monitors report health HEALTH_WARN clock skew detected on mon errors if the clocks on the nodes differ by more than 50 milliseconds.

c. Stop and disable the firewall service.

For Oracle Linux 6, enter:

# service iptables stop
# service ip6tables stop
# chkconfig iptables off
# chkconfig ip6tables off

For Oracle Linux 7, enter:

# systemctl stop firewalld
# systemctl disable firewalld

You can restart, re-enable and reconfigure the firewall on each node after configuring Ceph.

d. Edit /etc/hosts and add entries for the IP address and host name of all of the nodes in the Storage Cluster, for example:

192.168.1.51 ceph-node1.mydom.com ceph-node1
192.168.1.52 ceph-node2.mydom.com ceph-node2
192.168.1.53 ceph-node3.mydom.com ceph-node3
192.168.1.54 ceph-node4.mydom.com ceph-node4

Note

Although you can use DNS to configure host name to IP address mapping, Oracle recommends that you also configure /etc/hosts in case the DNS service becomes unavailable.

2. Enable SSH on the nodes:

a. On the administration node, generate the SSH key, specifying an empty passphrase:

# ssh-keygen

b. Copy the key to the other nodes in the Storage Cluster, for example:

# ssh-copy-id root@ceph-node2
# ssh-copy-id root@ceph-node3
# ssh-copy-id root@ceph-node4

3. To prevent errors when running ceph-deploy as a user with passwordless sudo privileges, use visudo to comment out the Defaults requiretty setting in /etc/sudoers or change it to Defaults:ceph !requiretty, as in the sketch below.
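For example, after editing with visudo, the relevant part of /etc/sudoers might read either:

#Defaults    requiretty

or:

Defaults:ceph !requiretty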

You can now install and configure the Storage Cluster deployment node, which is usually the same system as the administration node. See Section 1.6, “Installing and Configuring Ceph on the Storage Cluster Deployment Node”.


1.6 Installing and Configuring Ceph on the Storage Cluster Deployment Node

Note

In the example used in the following steps, the deployment node is ceph-node1.mydom.com (192.168.1.51), which is the same as the administration node.

Perform the following steps on the deployment node:

1. Install the ceph-deploy package.

# yum install ceph-deploy

2. Create a Ceph configuration directory for the Storage Cluster and change to this directory, for example:

# mkdir /var/mydom_ceph
# cd /var/mydom_ceph

3. Use the ceph-deploy command to define the members of the Storage Cluster, for example:

# ceph-deploy --cluster mydom_ceph new ceph-node{1,2,3,4}

Note

If you do not intend to run more than one Storage Cluster on the same hardware, you do not need to specify a cluster name using the --cluster option.

4. Edit /etc/ceph/ceph.conf and set the default number of replicas, for example:

osd pool default size = 3

You can now install Ceph on the remaining Storage Cluster nodes. See Section 1.7, “Installing Ceph on the Other Storage Cluster Nodes”.

1.7 Installing Ceph on the Other Storage Cluster Nodes

Having installed and configured the Ceph deployment node, you can use this node to install Ceph on the other nodes.

To install Ceph on all the Storage Cluster nodes, run the following command on the deployment node:

# ceph-deploy install ceph-node{1,2,3,4}

You can now configure the Storage Cluster. See Section 1.8, “Configuring the Storage Cluster”.

1.8 Configuring the Storage Cluster

To configure the Storage Cluster, perform the following steps on the administration node:

1. Initialize Ceph monitoring and deploy a Ceph Monitor on one or more nodes in the Storage Cluster, for example:

# ceph-deploy mon create-initial
# ceph-deploy mon create ceph-node{2,3,4}


Note

For high availability, Oracle recommends that you configure at least three nodes as Ceph Monitors.

2. Gather the monitor keys and the OSD and MDS bootstrap keyrings from one of the Ceph Monitors, for example:

# ceph-deploy gatherkeys ceph-node3

3. Use the following command to prepare the back-end storage devices for each node in the Storage Cluster:

# ceph-deploy osd --zap-disk --fs-type fstype create node:device

Note

This command deletes all data on the specified device.

The supported file system types (fstype) are btrfs and xfs.

For example, prepare a btrfs file system as the back-end storage device on /dev/sdb1 for all nodes in a Storage Cluster:

# ceph-deploy osd --zap-disk --fs-type btrfs create ceph-node{1,2,3,4}:sdb1

4. When you have configured the Storage Cluster and established that it works correctly, re-enable SELinux in enforcing mode on each of the nodes where you previously disabled it and then reboot each node.

# sed -i '/SELINUX/s/disabled/enforcing/' /etc/selinux/config
# reboot

5. Restart, re-enable, and reconfigure the firewall service on each of the nodes where you previously disabled it.

For Oracle Linux 6:

a. Restart and re-enable the firewall service.

# service iptables start
# service ip6tables start
# chkconfig iptables on
# chkconfig ip6tables on

b. Allow access to TCP ports 6800 through 7300 that are used by the Ceph OSD, for example:

# iptables -A INPUT -i interface -p tcp -s network-address/netmask \
  --match multiport --dports 6800:7300 -j ACCEPT

c. If a node runs a Ceph Monitor, allow access to TCP port 6789, for example:

# iptables -A INPUT -i interface -p tcp -s network-address/netmask \
  --dport 6789 -j ACCEPT

d. If a node is configured as an Object Gateway, allow access to port 7480 (or an alternate port that you have configured), for example:

# iptables -A INPUT -i interface -p tcp -s network-address/netmask \
  --dport 7480 -j ACCEPT

e. Save the configuration:

# service iptables save

For Oracle Linux 7:

a. Restart and re-enable the firewall service.

# systemctl start firewalld
# systemctl enable firewalld

b. Allow access to TCP ports 6800 through 7300 that are used by the Ceph OSD, for example:

# firewall-cmd --permanent --zone=zone --add-port=6800-7300/tcp

c. If a node runs a Ceph Monitor, allow access to TCP port 6789, for example:

# firewall-cmd --permanent --zone=zone --add-port=6789/tcp

d. If a node is configured as an Object Gateway, allow access to port 7480 (or an alternate port that you have configured), for example:

# firewall-cmd --permanent --zone=zone --add-port=7480/tcp

6. Use the following command to check the status of the Storage Cluster:

# ceph status

It usually takes several minutes for the Storage Cluster to stabilize before its health is shown as HEALTH_OK.

1.9 Configuring an Object Gateway for OpenStack and Swift Access

If you want to enable access by OpenStack and Swift to Ceph, configure at least one Storage Cluster node as an Object Gateway. You may use the admin node within the cluster to host the Object Gateway.

If you are running the Ceph Object Storage service within a single data center, you are able to configure a simple Ceph Object Gateway that does not involve any configuration of regions or zones. It is also possible to configure a federated Ceph Object Gateway to handle a geographically distributed storage service deployed for fault tolerance and failover. In this case you must configure different regions and zones. Ceph Object Gateway federation configuration is in technology preview and is not described further in this documentation.

An Apache web server daemon that is configured to load the FastCGI module is required to run a simple Ceph Object Gateway.

The Ceph Object Gateway is a client of the Ceph Storage Cluster. As a Ceph Storage Cluster client, it requires:

• A running Ceph Storage Cluster

• A name for the gateway instance

• A storage cluster user name with appropriate permissions in a keyring

• Pools to store its data

• A data directory for the gateway instance


• An instance entry in the Ceph Configuration file

• A configuration file for the web server to interact with FastCGI

1.9.1 Install and Configure Apache and FastCGI

The Ceph Object Gateway is a FastCGI service that provides a RESTful HTTP API to store objects and metadata. Therefore, it is necessary to install and configure a web service that can run the FastCGI module to expose the API over HTTP. This installation and setup is performed on the gateway host.

The gateway host must already be configured as a node within the Ceph Storage Cluster. You can install the Ceph Object Gateway on the admin node in the cluster, or you may use an alternate node if required.

Installation and Configuration of Apache and FastCGI

Install Apache and FastCGI

1. Install the Apache web server daemon on the gateway host:

# yum install httpd

2. If the gateway host is running Oracle Linux 6, you must also install the separate mod_proxy_fcgi package:

# yum install mod_proxy_fcgi

You do not need to install this package if the gateway host is running Oracle Linux 7.

Configure Apache for FastCGI

Edit the Apache web server configuration on the gateway host. As the root user, open /etc/httpd/conf/httpd.conf in an editor and make the following changes:

1. Make sure that the ServerName directive has been uncommented and that you have set this value to the fully qualified domain name of the gateway host.

ServerName gw.example.org

where gw.example.org must be replaced with the fully qualified domain name of the gateway host.

2. By default, the Listen directive usually does not specify an IP address and only specifies the port, which allows Apache to bind to all interfaces. Force Apache to bind to the public IP address of the host by making sure that you specify the IP address as part of the Listen directive. For example:

Listen 198.51.100.1:80

where 198.51.100.1 must be replaced with the public facing IP address of the host.

3. To load the mod_proxy_fcgi module, make sure that the following lines exist in your configuration file and that they are uncommented:

<IfModule !proxy_fcgi_module>
    LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so
</IfModule>

Restart Apache

Start Apache if it is not already running, or restart it using the following command if the gateway host is running Oracle Linux 6:


# service httpd restart

Alternately, if the gateway host is running Oracle Linux 7, use the following command:

# systemctl restart httpd

1.9.2 Install and Configure a Simple Ceph Object Gateway

Install and Set Up the Ceph Object Gateway

Install the Ceph Object Gateway Daemon

Install and configure the Ceph Object Gateway daemon on the gateway host:

# yum install ceph-radosgw

Note that for federated architectures, you would additionally install the synchronization agent, radosgw-agent, to handle the metadata synchronization between zones and regions.

Additional Basic Requirements To Run The Object Gateway Daemon

Some actions are not performed automatically during the installation of the Ceph Object Gateway daemon, since these steps may vary if you are configuring a federated gateway or you have chosen an alternative deployment approach. If you have followed the instructions provided so far, continue by performing the following steps:

1. Create the Ceph Object Gateway data directory manually, if it does not already exist:

# mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.gateway

2. Update the ownership of the socket directory to allow the Object Gateway daemon to write to it. The daemon runs as the unprivileged Apache UID. To grant permissions to the default socket location, do the following on the gateway host:

# chown apache:apache /var/run/ceph

3. The root user owns the log file, by default, but since the Object Gateway daemon runs as the unprivileged Apache UID, ownership of this file must be changed to allow the Object Gateway to write to it. Do the following on the gateway host:

# chown apache:apache /var/log/radosgw/client.radosgw.gateway.log

Create a User and Keyring To Authenticate The Object Gateway To The Ceph Storage Cluster

The Ceph Object Gateway must have a user name and key to communicate with a Ceph Storage Cluster. In the following steps, an admin node in the Ceph Storage Cluster is used to create a keyring. A client user name and key is then created for the Ceph Object Gateway. The key is added to the Ceph Storage Cluster. Finally, the keyring is copied to the node running the Ceph Object Gateway, so that it can use it to access the Ceph Storage Cluster.

Execute the following steps on the admin node of your cluster:

1. Create a keyring for the gateway:

# ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring


# chmod +rw /etc/ceph/ceph.client.radosgw.keyring

2. Generate a Ceph Object Gateway user name and key and add it to the keyring:

# ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.gateway --gen-key

Note that the Ceph Object Gateway user name is set to client.radosgw.gateway.

3. Add capabilities to the key:

# ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring

4. Add the key to the Ceph Storage Cluster to enable Ceph Object Gateway access:

# ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.gateway -i /etc/ceph/ceph.client.radosgw.keyring

5. If the admin node of the cluster is not also used to host the Object Gateway, you must copy the keyring to the gateway host. Typically, this is done using scp, for example:

# scp /etc/ceph/ceph.client.radosgw.keyring root@gw.example.com:

6. On the gateway host, you must ensure that the keyring is moved to the correct location, at /etc/ceph/ceph.client.radosgw.keyring:

# mv ceph.client.radosgw.keyring /etc/ceph/ceph.client.radosgw.keyring

Create Pools for the Object Gateway

Ceph Object Gateways require Ceph Storage Cluster pools to store specific gateway data. If the user you created has permissions, the gateway daemon creates the pools automatically. If you intend to allow the gateway daemon to create pools automatically, you should ensure that an appropriate default number of placement groups per pool is set in your Ceph configuration file (/etc/ceph/ceph.conf) on the admin node. Ceph Object Gateways have multiple pools, so you should keep the default number of placement groups per pool low to maximize performance. An example setting is shown below.

You can manually create the pools if you wish to use alternative values for the number of placement groups per pool. The default pool names for an Object Gateway are as follows:

• .rgw.root

• .rgw.control

• .rgw.gc

• .rgw.buckets

• .rgw.buckets.index

• .log

• .intent-log


• .usage

• .users

• .users.email

• .users.swift

• .users.uid

Use the following command to manually create each of these pools, substituting poolname with the name of the pool that you are creating, pg-num with the number of placement groups to create for the pool, and pgp-num for the number of placement groups for placement (usually the same or greater than the number of placement groups).

# ceph osd pool create poolname pg-num pgp-num

You can list the available pools and check that pools have been createdfor the gateway by running:

# rados lspools

Note that if you have not yet started the gateway daemon, only pools that you have manually created are listed at this point.

Add the Ceph Gateway Configuration Details To The Ceph Configuration File

Details of the Ceph Object Gateway configuration must be made available to the Ceph Storage Cluster. Edit the Ceph Configuration file on the admin node of the cluster. Create a configuration entry that identifies the Ceph Object Gateway instance, provides the short hostname of the gateway host, provides a path to the keyring file, provides a path to a log file and specifies the socket information for FastCGI. This entry is slightly different depending on whether you are running Ceph on Oracle Linux 6 or on Oracle Linux 7, since Oracle Linux 6 uses localhost TCP for the FastCGI socket, while Oracle Linux 7 uses Unix Domain Sockets.

For Oracle Linux 6, append the following configuration to /etc/ceph/ceph.conf on the admin node of the Ceph Storage Cluster:

[client.radosgw.gateway]
host = hostname
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = ""
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0

For Oracle Linux 7, append the following configuration to /etc/ceph/ceph.conf on the admin node of the Ceph Storage Cluster:

[client.radosgw.gateway]
host = hostname
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw print continue = false


Replace hostname with the short hostname of the gateway host. You can obtain this value by running hostname -s on the gateway host.

Update Ceph Configuration Across All Cluster Nodes

The updated Ceph configuration file must be copied to all Ceph cluster nodes. This is achieved using the ceph-deploy command on the admin node of the cluster.

First, copy the ceph.conf file to the root directory used by the cluster on the admin node:

# cp /etc/ceph/ceph.conf /var/mydom_ceph

Next, pull the configuration from the cluster directory into the admin node:

# ceph-deploy --overwrite-conf config pull hostname

Substitute hostname with the short hostname of the Ceph admin node. You can obtain this value by running hostname -s on the admin node. These commands cause the contents of the ceph.conf file to be overwritten.

Finally, push the updated configuration from the admin node to all other nodes in the cluster including the gateway host. Run the following command for each host in the cluster:

# ceph-deploy --overwrite-conf config push hostname

Substitute hostname with the short hostname of each node in the cluster, including the gateway host. You may run this as a single command by substituting hostname with a space-separated list of all of the hostnames that you wish to push the configuration update to, as in the example below.
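For example, using the node names from this document, the configuration can be pushed to all of the nodes with a single command:

# ceph-deploy --overwrite-conf config push ceph-node{1,2,3,4}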

Create a CGI wrapper script

A wrapper CGI script is used to provide an interface between Apache and the Object Gateway daemon. You must create the script yourself in a location that is accessible to Apache on the gateway host.

1. Create /var/www/html/s3gw.fcgi and open it in an editor. Add the following content to the file:

#!/bin/sh
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway

2. Provide execute permissions to the script and update ownership so that Apache is able to execute the script:

# chmod +x /var/www/html/s3gw.fcgi
# chown apache:apache /var/www/html/s3gw.fcgi

Start The Object Gateway Daemon

Start the Ceph Object Gateway daemon on the gateway host.

On Oracle Linux 6, run:

# service ceph-radosgw start
# chkconfig ceph-radosgw on

On Oracle Linux 7, run:


# systemctl start ceph-radosgw
# systemctl enable ceph-radosgw

Create An Apache VirtualHost Entry For the Ceph Object Gateway

Apache must be configured to provide access to the FastCGI wrapper script so that the Ceph Object Gateway is able to function. To do this, create a VirtualHost entry in your Apache configuration. It is best to create this as an individual configuration file in /etc/httpd/conf.d. The VirtualHost entry differs slightly depending on the version of Oracle Linux you are using. Instructions are provided for both.

• On Oracle Linux 6, create the file /etc/httpd/conf.d/rgw.conf and add the following content:

<VirtualHost 198.51.100.1:80>
ServerName gw.example.com
DocumentRoot /var/www/html
ErrorLog /var/log/httpd/rgw_error.log
CustomLog /var/log/httpd/rgw_access.log combined
RewriteEngine On
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
SetEnv proxy-nokeepalive 1
ProxyPass / fcgi://localhost:9000/
</VirtualHost>

where 198.51.100.1 must be replaced with the public facing IP address of the host. Substitute gw.example.com so that the ServerName directive points to the hostname or fully qualified domain name of the gateway host.

• On Oracle Linux 7, create the file /etc/httpd/conf.d/rgw.conf and add the following content:

<VirtualHost 198.51.100.1:80>
ServerName gw.example.com
DocumentRoot /var/www/html
ErrorLog /var/log/httpd/rgw_error.log
CustomLog /var/log/httpd/rgw_access.log combined
RewriteEngine On
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
SetEnv proxy-nokeepalive 1
ProxyPass / unix:///var/run/ceph/ceph.radosgw.gateway.fastcgi.sock|fcgi://localhost:9000/
</VirtualHost>

where 198.51.100.1 must be replaced with the public facing IP address of the host. Substitute gw.example.com so that the ServerName directive points to the hostname or fully qualified domain name of the gateway host.

Restart Apache

The httpd service needs to be restarted to use the new configuration.

On Oracle Linux 6, do:

# service httpd restart
# chkconfig httpd on

Alternately, on Oracle Linux 7, do:

# systemctl restart httpd
# systemctl enable httpd


At this point, the Ceph Object Gateway should be running and the REST interfaces available. You must create an initial Ceph Object Gateway user for the S3 interface and a subuser for the Swift interface, as in the sketch below.
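For example, a minimal sketch using radosgw-admin, with a hypothetical user ID of testuser, is:

# radosgw-admin user create --uid=testuser --display-name="Test User"
# radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full

The first command creates the S3 user and prints its generated access and secret keys; the second creates a Swift subuser with full access.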

1.10 Installing a Ceph Client

To install a Ceph Client:

1. Perform the following steps on the system that will act as a Ceph Client:

a. If SELinux is enabled, disable it and then reboot the system.

# sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
# reboot

b. Stop and disable the firewall service.

For Oracle Linux 6, or for Oracle Linux 7 where iptables is used instead of firewalld, enter:

# service iptables stop
# service ip6tables stop
# chkconfig iptables off
# chkconfig ip6tables off

For Oracle Linux 7, enter:

# systemctl stop firewalld
# systemctl disable firewalld

2. On the administration node of the Storage Cluster, copy the SSH key to the Ceph Client system, for example:

# ssh-copy-id root@ceph-client

Note

This example assumes that you have configured entries for the Ceph Client system in DNS and/or in /etc/hosts.

3. On the deployment node (which is usually the same as the administration node), use ceph-deploy to install Ceph on the Ceph Client system, for example:

# ceph-deploy install ceph-client

4. On the administration node, copy the Ceph configuration file and the Ceph keyring to the Ceph Client system, for example:

# ceph-deploy admin ceph-client

You can now configure a Block Device on the Ceph Client. See Section 1.12, “Configuring a Block Device on a Ceph Client”.

5. When you have established that the Ceph Client works with a Storage Cluster, re-enable SELinux in enforcing mode if you previously disabled it and then reboot the system.

# sed -i '/SELINUX/s/disabled/enforcing/' /etc/selinux/config
# reboot

1.11 Creating a Storage Pool

To create a storage pool for Block Devices in the OSD, use the following command:


# ceph osd pool create datastore 150 150

This example creates a pool named datastore with a placement group value of 150 (specified for both pg-num and pgp-num).

1.12 Configuring a Block Device on a Ceph Client

Note

Ensure that the Storage Cluster is active and healthy before configuring a block device.

To configure a Block Device on a Ceph Client:

1. Use the rbd command to create a Block Device image in the pool, for example:

# rbd create --size 4096 --pool datastore vol01

This example creates a 4096 MB volume named vol01 in the datastore pool.

Note

If you do not specify a storage pool, rbd uses the default rbd pool:

# rbd create --size 4096 vol01

2. Use the rbd command to map the image to a Block Device, for example:

# rbd map vol01 --pool datastore

Ceph creates the Block Device under /dev/rbd/pool/volume.

The rbd ls command lists the images that you have mapped for a storage pool, for example:

# rbd ls -p datastore
vol01

You can make a file system on the Block Device and mount this file system on a suitable mount point, for example:

# mkfs.ext4 -m0 /dev/rbd/datastore/vol01
# mkdir /var/vol01
# mount /dev/rbd/datastore/vol01 /var/vol01

1.13 Known Issues

The following sections describe known issues in this release.

1.13.1 ceph-deploy Debugging and Warning Messages for Non-existent Packages

Messages such as the following might be seen when using the ceph-deploy install and ceph-deploy purge commands:

[DEBUG ] No package ceph-mon available.
[DEBUG ] No package ceph-osd available.
[WARNIN] No Match for argument: ceph-mon
[WARNIN] No Match for argument: ceph-osd

You can ignore these messages as the ceph-mon and ceph-osd packages do not exist.


(Bug ID 21193694)

1.13.2 ceph-deploy Reports Errors

If ceph-deploy reports errors when run as a user with passwordless sudo privileges, run visudo, locate the Defaults requiretty entry, and either change this entry to Defaults:ceph !requiretty or comment out the entry.

1.13.3 ceph-deploy Reports an Error on Exit

Under Oracle Linux 6, ceph-deploy reports the error Error in sys.exitfunc: when it exits. This error is produced by Python's threading module and can safely be ignored. To prevent the error from being reported, set CEPH_DEPLOY_TEST=YES in the shell environment, as shown below. (Bug ID 20755074)
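For example:

# export CEPH_DEPLOY_TEST=YES
# ceph-deploy install ceph-node2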

1.13.4 ceph-deploy purge Does Not Remove Dependent Packages

The ceph-deploy purge command removes the ceph and ceph-common packages from a server but does not remove other dependent packages such as python-ceph, librbd1, librados2, and libcephfs1. The workaround is to use rpm -e to remove dependent packages, as in the example below.
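For example, one way to remove the dependent packages listed above (assuming nothing else on the system requires them) is:

# rpm -e python-ceph librbd1 librados2 libcephfs1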

(Bug ID 21193844)

1.13.5 ceph-deploy Python Errors on Oracle Linux 6

Under Oracle Linux 6, ceph-deploy reports errors such as the following in syslog and saves a Python crash dump:

timestamp server abrt: detected unhandled Python exception in '/usr/bin/ceph-deploy'
timestamp server abrtd: New client connected
timestamp server abrt-server[PID']: Saved Python crash dump of pid PID to /var/spool/abrt/pyhook-timestamp-PID
timestamp server abrtd: Directory 'pyhook-timestamp-PID' creation detected
timestamp server abrtd: Duplicate: UUID
timestamp server abrtd: DUP_OF_DIR: /var/spool/abrt/pyhook-timestamp-PID'
timestamp server abrtd: Deleting problem directory pyhook-timestamp-PID (dup of pyhook-timestamp-PID')
timestamp server abrtd: Sending an email...
timestamp server abrtd: Email was sent to: root@localhost

Despite the apparent errors, the command succeeds. To prevent the errors from being reported and to stop the crash dump file from being written, set CEPH_DEPLOY_TEST=YES in the shell environment. (Bug ID 21089938)

1.13.6 ceph-radosgw Service Fails to Start on Oracle Linux 7

If the owner of /var/run/ceph is root, the ceph-radosgw service fails to start on Oracle Linux 7, for example:

# systemctl start ceph-radosgw
Starting ceph-radosgw (via systemctl):  Job for ceph-radosgw.service failed.
See 'systemctl status ceph-radosgw.service' and 'journalctl -xn' for details.
[FAILED]

The workaround is to run visudo, locate the Defaults requiretty entry, and either change this entry to Defaults:ceph !requiretty or comment out the entry. (Bug ID 21082202)


1.13.7 ceph-radosgw Service Produces Error and Warning Messages when Starting on Oracle Linux 6

Under Oracle Linux 6, you might see the following warnings when you start the ceph-radosgw service:

# service ceph-radosgw start

Starting radosgw instance(s)...
/usr/bin/dirname: extra operand `-n'
Try `/usr/bin/dirname --help' for more information.
timestamp ID -1 WARNING: libcurl doesn't support curl_multi_wait()
timestamp ID -1 WARNING: cross zone / region transfer performance may be affected
Starting client.radosgw.gateway...                         [ OK ]
/usr/bin/radosgw is running.

The curl_multi_wait function was added in libcurl version 7.28.0, which is not available on Oracle Linux 6 systems. This error does not prevent the service from starting. (Bug ID 20881228)

The dirname extra operand error and other warnings do not prevent the service from starting. (Bug ID 20876085)

1.13.8 Mounting a CephFS File System Produces a Spurious Error Message

Attempting to mount a CephFS file system on a client might produce the error mount: error writing /etc/mtab: Invalid argument. This error can safely be ignored.

Note

Although CephFS is provided, it is an unsupported technical preview feature.

(Bug ID 20469655)

1.13.9 OSD Daemon Fails to Start

Under Oracle Linux 7 Update 2, the OSD daemon might fail to start and fail to mount the OSD disk partition if you specify the --zap-disk option to the ceph-deploy osd command. A workaround is to mount the OSD disk partition and start the OSD daemon manually, for example:

# mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
# /etc/init.d/ceph start osd

An alternative workaround is to reboot the server and restart the OSD daemon manually, for example:

# reboot
# /etc/init.d/ceph restart osd

(Bug ID 22264455)

1.13.10 radosgw-admin Generates JSON Escape Characters in Keys

The radosgw-admin user create command to create an Object Storage service user can generate JSON escape characters (\) in a key that some clients are not able to handle. Suggested workarounds include:

• Remove the escape character and encapsulate the string in quotes.

• Regenerate the key until it does not contain an escape character.


• Use radosgw-admin key create to create the access and secret keys manually.
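For example, a hypothetical invocation of the third workaround, generating a new S3 key pair for an existing user ID of testuser, is:

# radosgw-admin key create --uid=testuser --key-type=s3 --gen-access-key --gen-secret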

(Bug ID 21384788)

1.13.11 sosreport Takes a Long Time to Run on a Storage Cluster Node

The sosreport command can take up to 45 minutes to run on a storage cluster node. (Bug ID 20523517)


Ceph Terminology

Block Device

A Ceph component that provides access to Ceph storage as a thinly provisioned block device. When an application writes to a Block Device, Ceph implements data redundancy and enhances I/O performance by replicating and striping data across the Storage Cluster.

Also known as a RADOS Block Device or RBD.

Ceph OSD
A Ceph component that provides access to an OSD.

Also known as a Ceph OSD Daemon.

Client
A host that can access the data stored in a Storage Cluster. A Ceph Client need not be a member node of a Storage Cluster.

Monitor (MON)
A Ceph component used for tracking active and failed nodes in a Storage Cluster.

Node
A system that is a member of a Storage Cluster.

Object Gateway
A Ceph component that provides a RESTful gateway that can use the Amazon S3 and OpenStack Swift compatible APIs to present OSD data to Ceph Clients, OpenStack, and Swift clients. An Object Gateway is configured on a node of a Storage Cluster.

Also known as a RADOS Gateway or RGW.

Object Storage Device (OSD)
Storage on a physical device or logical unit (LUN). Typically, data on an OSD is configured as a btrfs file system to take advantage of its snapshot features. However, other file systems such as XFS can also be used.

Storage Cluster
A Ceph component that stores MON and OSD data across cluster nodes.

Also known as a Ceph Object Store, RADOS Cluster, or Reliable Autonomic Distributed Object Store.
