
IBM® Virtualization Engine TS7700 Series Best Practices

Cache Management in the TS7720 V1.6

Jim Fisher
[email protected]
IBM Advanced Technical Skills – North America

March 2013

© Copyright IBM Corporation, 2008, 2013

Contents

1 Introduction
1.1 Change History
1.2 Release 1.6 Disk Cache Capacity
1.3 Release 1.7 Cache Capacity
1.4 Release 2.1 - iRPQ 8B3604 for Second Expansion Frame
1.5 Release 3.0 CS9/XS9 Base Disk Cache
2 Monitoring Cache Usage
2.1 Using the TS7700 Management Interface
2.2 Using Host Console Request
2.3 Using DISPLAY SMS, LIB
2.4 Attention Messages
3 Managing Cache Usage
3.1 Overwriting Existing Volumes
3.2 Expiring Volume Data on Return to Scratch
3.3 Ejecting Volumes
3.4 Altering Copy Consistency Points
3.5 Retain Copy Mode
3.6 Removal Policies
3.6.1 Automatic Removal Policy
3.6.2 Enhanced Removal Policies
3.7 Temporary Removal Threshold
4 Impact of Being in the Out of Cache Resource State
References
Disclaimers


1 Introduction

Similar to the TS7740, whose capacity is limited by the number of physical tapes, the TS7720's capacity is limited by the size of its disk cache. In both cases the customer needs to keep the amount of data stored below the limit imposed by the number of physical tapes or by the cache size. The intent of this document is to make recommendations for managing cache usage in the TS7720. It details the monitoring of cache usage, the messages and attentions presented as the cache approaches the full state, the consequences of reaching the full state, and the methods used for managing the amount of data stored in the cache.

With TS7700 Release 1.6, hybrid grids are supported. With this support, a cache management policy called the Automatic Removal Policy is added to help keep the TS7720 cache from overflowing. The policy removes the oldest logical volumes from the TS7720 cache as long as a consistent copy exists elsewhere in the grid; it applies only to hybrid grids. A Temporary Removal Threshold is also added to prevent the TS7720 cache from filling while a TS7740 is in service.

With Release 1.7 the TS7720 Cache Removal Policies are enhanced to include pinning of logical volumes and preferring to keep or remove a logical volume from cache. Also, the removal of volumes from the TS7720 can now occur in a homogeneous grid. With R1.6 the grid had to contain a TS7740 before Automatic Removal took place. This is now allowed since, with the larger available cache sizes, there may be a diversity of TS7720 cache sizes in the grid.

With Release 2.1 an RPQ was made available to add a second expansion frame containing the CS8 based disk cache. This increased the maximum disk cache capacity from 440TB to 580TB.

With Release 3.0 a new disk cache was made available. The new disk cache, the CS9, provides a maximum disk cache capacity of 624TB with just the base frame and a single expansion frame. The CS9 based disk cache in a base frame requires the VEB virtualization engine. In an expansion frame, the CS9 based disk cache is supported by either the VEA or the VEB virtualization engine.

The cache can be filled with volumes that have been written directly from host I/O, or from copy activity between clusters when the TS7720 is part of a grid configuration. The following methods will be described for removing data from the cache to provide room for new data:

• Overwrite existing volumes
• Expiring volume data
• Ejecting volumes
• Altering copy consistency points
• Automatic Removal Policy
• Enhanced Removal Policies
• Temporary Removal Threshold

Note: This document and the TS7700 Management Interface panels display disk capacity in decimal format. This means that 1 GB is equal to 1,000,000,000 bytes, and 1 TB is equal to 1,000,000,000,000 bytes. The document uses the binary representation of a megabyte for logical volume size. In this case the notation MiB will be used, where 1 MiB = 1024 x 1024 = 1,048,576 bytes.

1.1 Change History

Version 1.6 – March 2013
• Add details concerning the Automatic Removal threshold.
• Add discussion of the new Host Console Request command SETTING CACHE REMVTHR, which allows the automatic removal threshold to be changed.
• Add discussion of the new Host Console Request command SETTING ALERT REMOVMSG, which allows the automatic removal CBR message to be disabled.
• Add the Management Interface panels for R3.0 code.

Version 1.5
• Change minimum TS7720 cache configuration from 1 controller + 1 drawer to just 1 controller.
• Add cache configurations with second expansion frame and CS8 based disk cache.
• Add cache configurations with CS9 based disk cache.

Version 1.4 – July 2010
• Add description of how Storage Class actions are handled in a hybrid grid.

Version 1.3 – June 2010
• Update for Release 1.7 and R1.7 PGA1
  o Add enhanced removal policies
  o Add table of possible cache sizes (in the Introduction section above)

Version 1.2 – December 2009
• Update for Release 1.6
  o Add discussion of Automatic Removal Policy
  o Add discussion of Temporary Removal Threshold
  o Add pointer to the Hybrid Grid Best Practices white paper on Techdocs for Retain Copy Mode

Version 1.1 – December 2008
• Original Release

1.2 Release 1.6 Disk Cache Capacity

With Release 1.6, the RAID6 capacity of the TS7720 is 40 TB or 70 TB. With overhead, the file system provides either 39,213 GB or 68,622 GB of customer usable space. The MI displays the amount of allocated cache as 39,000 GB or 68,000 GB; this field is rounded down to the nearest TB on the TS7720 panels. These cache sizes result from disk drawers (either 4 or 7) containing sixteen 1 TB SATA disk drives that are configured in RAID 6 arrays (5 data + 2 parity) along with two spare drives per drawer. Customer data stored on the cache takes advantage of Host Bus Adapter data compression which, with a 3 to 1 compression ratio, can result in an effective usable cache size of up to 205 TB with the 7 drawer configuration.
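The effective capacity quoted above is simply the usable file system size multiplied by the assumed compression ratio. A minimal sketch of that arithmetic (Python is used here purely for illustration; it is not part of the product):

def effective_capacity_gb(usable_gb: float, compression_ratio: float) -> float:
    """Uncompressed host data that fits in the cache at the given compression ratio."""
    return usable_gb * compression_ratio

# 7-drawer R1.6 configuration: 68,622 GB usable at 3:1 compression.
print(effective_capacity_gb(68_622, 3.0))  # 205,866 GB, roughly 205 TB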

1.3 Release 1.7 Cache Capacity

With Release 1.7 a new cache controller with larger drives is available, along with larger drives in the cache drawers. A second cache expansion frame is also available, providing the potential for 441TB of cache. The tables below list the usable cache sizes available in the base frame and in the expansion frame with R1.7. The base frame must contain 1 cache controller and 6 cache drawers before the expansion frame can be attached.

• The CS7 cache controller contains 1TB drives and provides 9.8TB of usable storage.
• The CS8 cache controller contains 2TB drives and provides 19.84TB of usable storage.
• The XS7 cache drawer containing 1TB drives provides 9.8TB of usable storage.
• The XS7 cache drawer containing 2TB drives and attached to the CS7 controller provides 19.68TB of usable storage.
• The XS7 cache drawer containing 2TB drives and attached to the CS8 controller provides 23.84TB of usable storage.

Base Frame
Description                                               Size in TB
1 CS7 + 3 XS7 with 1TB drives                             39.2
1 CS7 + 6 XS7 with 1TB drives                             68.6
1 CS7 + 3 XS7 with 1TB drives + 1 XS7 with 2TB drives     58.88
1 CS7 + 3 XS7 with 1TB drives + 2 XS7 with 2TB drives     78.56
1 CS7 + 3 XS7 with 1TB drives + 3 XS7 with 2TB drives     98.24
1 CS8                                                     19.84
1 CS8 + 1 XS7 with 2TB drives                             43.68
1 CS8 + 2 XS7 with 2TB drives                             67.52
1 CS8 + 3 XS7 with 2TB drives                             91.36
1 CS8 + 4 XS7 with 2TB drives                             115.2
1 CS8 + 5 XS7 with 2TB drives                             139.04
1 CS8 + 6 XS7 with 2TB drives                             162.88

Expansion Frame
Description                                               Size in TB
2 CS8                                                     39.68
2 CS8 + 1 XS7 with 2TB drives                             63.52
2 CS8 + 2 XS7 with 2TB drives                             87.36
2 CS8 + 3 XS7 with 2TB drives                             111.2
2 CS8 + 4 XS7 with 2TB drives                             135.04
2 CS8 + 5 XS7 with 2TB drives                             158.88
2 CS8 + 6 XS7 with 2TB drives                             182.72
2 CS8 + 7 XS7 with 2TB drives                             206.56
2 CS8 + 8 XS7 with 2TB drives                             230.4
2 CS8 + 9 XS7 with 2TB drives                             254.24
2 CS8 + 10 XS7 with 2TB drives                            278.08


The following figures illustrate the possible disk cache configurations with R1.7 and the CS7 and CS8 based disk cache.


1.4 Release 2.1 - iRPQ 8B3604 for Second Expansion Frame

With release 2.1, an iRPQ was made available to support a second expansion frame for a CS8 based TS7720. The second expansion frame adds a fourth CS8 disk controller and five XS7 expansion drawers. The fourth controller is possible since the VEA or VEB Virtualization Engine has four fibre ports available for communications with the disk controllers. The second expansion frame adds 139.04TB for a maximum total capacity of 580TB. The following figure shows the full 580TB configuration.


1.5 Release 3.0 CS9/XS9 Base Disk Cache

With Release 3.0 a new model of disk cache was made available, the 3956-CS9 disk controller and 3956-XS9 expansion drawer. The CS9/XS9 disk cache can be installed in a TS7720 Encryption Capable Base (FC7331) or Encryption Capable Expansion (FC7332) frame. When installed in the Encryption Capable Base frame, only the 3957-VEB is supported. When installed in the Encryption Capable Expansion frame, the Virtualization Engine can be either the 3957-VEA or the 3957-VEB engine.

The TS7720 Encryption Capable Base frame can house between 23.86TB and 239.86TB in 24TB increments. The frame houses one CS9 controller and 0 through 9 XS9 expansion drawers. The TS7720 Encryption Capable Expansion frame can house between 24TB and 384TB in 24TB increments. The frame houses one CS9 controller and 0 through 15 XS9 expansion drawers. The Encryption Capable Base frame must be fully populated before the expansion frame can be added. The maximum capacity with both the Encryption Capable Base and Expansion frames is 623.86TB.

The CS9 based Encryption Capable Expansion frame can be used to expand the disk cache of prior generation base frames. The base frame containing the prior generation of disk cache does not have to be filled before adding the Encryption Capable Expansion Frame. However, it is recommended that the base frame be filled first, assuming expansion drawers are available.


Note: Encryption cannot be enabled on the expansion frame when there are prior generations of disk cache in the base frame.

With the first generation of TS7720 disk cache, the CS7/XS7 disk cache was made available in two sizes, 39.2TB and 68.6TB. The CS9/XS9 based expansion frame can be added to either of these configurations. The Encryption Capable Expansion frame can contain between 24TB and 384TB, in 24TB increments. The following figure shows the maximum disk cache capabilities when adding the CS9/XS9 based expansion frame to the two CS7/XS7 based base frames.


The 39.2TB version of the first generation of TS7720 disk cache allowed from one to three second generation XS7 expansion drawers to be added. The CS9/XS9 based Encryption Capable Expansion frame can be added to this base frame. The Encryption Capable Expansion frame is configured with one CS9 controller and zero to fifteen XS9 expansion drawers. The base frame does not have to have three of the second generation XS7 expansion drawers in order for the Encryption Capable Expansion frame to be added.

The second generation CS8/XS7 based disk cache in the base frame is configured with one CS8 controller and zero to six second generation XS7 expansion drawers. The CS9/XS9 based Encryption Capable Expansion frame can be added to this base frame as well. The Encryption Capable Expansion frame is configured with one CS9 controller and zero to fifteen XS9 expansion drawers. The base frame does not have to have six of the second generation XS7 expansion drawers in order for the Encryption Capable Expansion frame to be added. The following figures show the maximum disk cache configurations as discussed above.


2 Monitoring Cache Usage

This section describes the methods that are provided by the host and the TS7720 for monitoring the cache utilization in the TS7720:

1. The TS7720 Management Interface (MI) provides panels displaying the Tape Volume Cache usage in various forms.

2. Host Command Line Request (Library Request Command) for CACHE status provides the total cache available and amount of cache used value.

3. The host console "DISPLAY SMS,LIBRARY(libname),DETAIL" command provides the "Cache percentage used" for a distributed library.

4. Attention messages are surfaced to the host upon entering or leaving the limited free cache space and out of cache resources states.


2.1 Using the TS7700 Management Interface

For the pre-R3.0 code the Health & Monitoring -> Tape Volume Cache menu item provides the following summary of the Tape Volume Cache. This panel is representative of the actual panel and may have slight differences from your panel. The Used size field indicates the amount and percentage of tape volume cache that is being used.


Figure 1: Pre-R3.0 Tape Volume Cache Panel

For the R3.0 or higher code this information is found primarily on the Cluster Summary page.

• Moving the mouse over the disk cache tube will show the installed, available, and allocated cache size as well as the used size. Additionally the temporary removal threshold is shown in the cache tube if it has been enabled and can also be viewed from the Grid Summary panel > Actions > TS7720 Temporary Removal Thresholds panel. The Physical Cache status pod at the bottom of the page provides information concerning the disk cache.

• The copy queue size is shown next to the cluster when there is a copy queue present. The Copy Queue is also displayed at all times in the middle status pod at the bottom of the cluster summary panel.

• The host write throttle and copy throttle are displayed in an icon on the upper right of the cluster picture when there is any throttling occurring.

• The removal threshold is not displayed on the R3.0 interface.


Figure 2: R3.0 Cluster Summary

Figure 3: R3.0 Cluster Summary Throttle Indicator


For the pre-R3.0 code the Performance & Statistics -> Cache Utilization -> Number of logical volumes currently in cache item provides a graph and a numerical value of logical volumes in cache. This panel is representative of the actual panel and may have slight differences from your panel.

Figure 4: Pre-R3.0 Cache Utilization Panel - Number of Logical Volumes Currently in Cache

For R3.0 and higher the panel is accessed via the Monitor icon > Performance > Cache Utilization > Number of virtual volumes currently in cache.


Figure 5: R3.0 Cache Utilization Panel - Number of Logical Volumes Currently in Cache


For the pre-R3.0 code the Performance & Statistics -> Cache Utilization -> Total amount of data currently in cache item provides a graph and numerical value of the cache usage. This panel is representative of the actual panel and may have slight differences from your panel.

Figure 6: Pre-R3.0 Cache Utilization Panel - Total Amount of Data Currently in Cache


For R3.0 and higher the panel is accessed via the Monitor icon > Performance > Cache Utilization > Total amount of data currently in cache.

Figure 7: R3.0 Cache Utilization Panel - Total Amount of Data Currently in Cache


For the pre-R3.0 code the Performance & Statistics -> Cache Utilization -> Median duration that logical volumes have remained in cache item provides a graph and numerical values of the length of time logical volumes have remained in cache. This panel is representative of the actual panel and may have slight differences from your panel.

Figure 8: Pre-R3.0 Cache Utilization Panel - Median Duration That Volumes Have Remained in Cache


For R3.0 and higher the panel is accessed via the Monitor icon > Performance > Cache Utilization > Median duration that virtual volumes have remained in cache.

Figure 9: R3.0 Cache Utilization Panel - Median Duration That Volumes Have Remained in Cache


2.2 Using Host Console Request

From a host with software supporting the Host Command Line Request, you can issue the LIBRARY REQUEST libname CACHE command to receive the following information on the current cache utilization for a distributed library:

TAPE VOLUME CACHE STATE V1
INSTALLED/ENABLED GBS 68000/ 68000
PARTITION  ALLOC  USED   PG0  PG1    PMIGR  COPY  PMT  CPYT
        0  68000  28466    0  28466      0     0    0     0

2.3 Using DISPLAY SMS, LIB

Using DISPLAY SMS,LIB from the host will provide the following output that includes the percentage of cache used.

Display SMS,LIBRARY(BARR86A),DETAIL
CBR1110I OAM LIBRARY STATUS:
TAPE     LIB  DEV       TOT  ONL  AVL  TOTAL  EMPTY  SCRTCH  ON  OP
LIBRARY  TYPE TYPE      DRV  DRV  DRV  SLOTS  SLOTS  VOLS
BARR86A  VDL  3957-VEA    0    0    0      0      0       0   Y   Y
----------------------------------------------------------------------
Composite Library: BARR86
----------------------------------------------------------------------
LIBRARY ID: BA86A
CACHE PERCENTAGE USED: 41
OPERATIONAL STATE: AUTOMATED
----------------------------------------------------------------------
(status lines)

The status lines indicate if one of the following states is active:
– Limited Cache Free Space - Warning State
– Out of Cache Resources - Critical State

2.4 Attention Messages

The host, when told by the TS7700 that either the warning or critical cache state has been entered for a distributed library, will post one of the following messages to the host console:

CBR3792E Library library-name has entered the limited cache free space warning state
CBR3794A Library library-name has entered the out of cache resources critical state

These messages are highlighted and held on the console for the operator to take action. When the warning or critical cache state is exited, one of the following messages will be displayed on the host console:


CBR3793I Library library-name has left the limited cache free space warning state
CBR3795I Library library-name has left the out of cache resources critical state

• The limited cache free space warning state occurs when the amount of free cache drops below 2 TB plus 5% of the usable cache. In the R1.7 PGA1 code level the 5% value changed to 1 TB regardless of the size of the cache. The warning state became a fixed value of 3 TB.

• The out of cache resources critical state is entered when the amount of free cache drops below 5% of the usable cache. This became a fixed value of 1 TB in the R1.7 PGA1 code level.

• The left the limited cache free space warning state message is surfaced when the amount of free cache has risen to at least 2.5 TB above the 5% of usable cache level. This provides a 0.5 TB range between entering and leaving the state. This became a fixed value of 3.5 TB in the R1.7 PGA1 code level.

• The left the out of cache resources critical state message is surfaced when the amount of available cache has risen to 2.5 TB above the 5% of usable cache level. This became a fixed value of 3.5 TB in the R1.7 PGA1 code level.

The table below describes the cache free space levels for entering and exiting the Limited Cache and Out of Cache states for the pre-R1.7 cache.

Cache Size          Limited Cache Free Space     Out of Cache Resources
                    Warning State                Critical State
                    Enter      Exit              Enter      Exit
40 TB (39,213 GB)   3.96 TB    4.46 TB           1.96 TB    4.46 TB
70 TB (68,622 GB)   5.43 TB    5.93 TB           3.43 TB    5.93 TB

For pre-R1.7 PGA1 code the other cache sizes use the following formulas to calculate the four values above. Use the tables in section 1 for the amount of usable cache.

• Warning State Entry = (Usable TB * 0.05) + 2 TB
• Warning State Exit = (Usable TB * 0.05) + 2.5 TB
• Critical State Entry = (Usable TB * 0.05)
• Critical State Exit = (Usable TB * 0.05) + 2.5 TB

For example, with a usable cache size of 233.28 TB, the thresholds will be crossed when the amount of available cache crosses these values:

• Warning State Entry = (233.28 * 0.05) + 2 TB = 13.66 TB
• Warning State Exit = (233.28 * 0.05) + 2.5 TB = 14.16 TB
• Critical State Entry = (233.28 * 0.05) = 11.66 TB
• Critical State Exit = (233.28 * 0.05) + 2.5 TB = 14.16 TB

As described above, R1.7 PGA1 code uses fixed values for the thresholds:


Limited Cache Free Space     Out of Cache Resources
Warning State                Critical State
Enter      Exit              Enter      Exit
3 TB       3.5 TB            1 TB       3.5 TB
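The entry and exit points above can be computed for any cache size. The sketch below (Python, used here only for illustration) evaluates both the pre-R1.7 PGA1 formulas and the later fixed values:

def cache_state_thresholds(usable_tb, r17_pga1_or_later=True):
    """Free-space thresholds (TB) for entering/exiting the warning and critical states."""
    if r17_pga1_or_later:
        # Fixed values introduced with R1.7 PGA1.
        return {"warn_enter": 3.0, "warn_exit": 3.5, "crit_enter": 1.0, "crit_exit": 3.5}
    five_pct = usable_tb * 0.05
    return {
        "warn_enter": five_pct + 2.0,   # Limited Cache Free Space warning entered
        "warn_exit": five_pct + 2.5,    # warning state exited
        "crit_enter": five_pct,         # Out of Cache Resources state entered
        "crit_exit": five_pct + 2.5,    # critical state exited
    }

# The 233.28 TB example from the text, using the pre-R1.7 PGA1 formulas:
print(cache_state_thresholds(233.28, r17_pga1_or_later=False))
# roughly: warn_enter 13.66, warn_exit 14.16, crit_enter 11.66, crit_exit 14.16 (TB)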


3 Managing Cache Usage

The TS7720 can be implemented in one of two ways. The first method is where all of the compressed data will fit within the TS7720 disk cache. The second allows active logical volumes to be removed from the TS7720 disk cache, as long as at least one consistent copy exists elsewhere in the grid.

When the first method is used, the most important aspect of managing the cache is to stay out of the cache full state because of its impact on continued operation (see the Impact of Being in the Out of Cache Resource State section for the details). When the second method is used, there must be sufficient space available in the grid to house all of the active data and all of the copies of that data. The grid could contain another TS7720 with a larger disk cache or a TS7740 with sufficient back-end tape. The following sections describe several ways to manage cache usage when all of the data must fit in the TS7720 disk cache.

3.1 Overwriting Existing Volumes

There are two approaches to this method, the first being the more conservative. Both rely on keeping the number of logical volumes (along with their size) at a point where they will not fill up the cache. As volume data is no longer needed, the volume is returned to the scratch category and eventually gets overwritten with new data.

The first approach bases the number of logical volumes on the assumption that every logical volume is filled to capacity with compressed host data. For example, the Data Class specifies a logical volume size of 6000 MiB (6000 x 1024 x 1024 bytes), or 6,291,456,000 bytes. This means the logical volume is 6000 MiB after compression. Assume a cache size of 623,860 GB, of which a maximum of 620,860 GB should be used (to stay below the limited cache free space threshold). The maximum number of logical volumes should be set to 620,860 GB / 6,291,456,000 bytes per volume = 98,683 logical volumes.

The second approach bases the number of logical volumes on the average host file size. Assuming an average host volume size of 750 MB uncompressed (750,000,000 bytes), a compression ratio of 2.5, and a cache size of 623,860 GB (620,860 GB to avoid the limited cache free space threshold), the maximum number of logical volumes would be 620,860 GB / (750 MB / 2.5) = 2,069,533 logical volumes. Exposures for the second approach include the average volume size growing over time, and the average compression ratio shrinking.

For both approaches, the volumes do not need to be expired when they are returned to the scratch (i.e. fast ready) category, because there is sufficient space in cache for all of the logical volumes to contain data that is actively managed by the TS7720.
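Both sizing approaches reduce to a simple division. The following sketch (Python, purely illustrative, using the decimal GB and binary MiB conventions noted in the Introduction) reproduces the two calculations:

GB = 1_000_000_000   # decimal gigabyte, as used by the Management Interface
MIB = 1024 * 1024    # binary mebibyte, as used for logical volume sizes

def max_volumes_worst_case(usable_gb, volume_size_mib):
    """Approach 1: assume every logical volume is completely full of compressed data."""
    return (usable_gb * GB) // (volume_size_mib * MIB)

def max_volumes_average(usable_gb, avg_host_mb, compression_ratio):
    """Approach 2: size on the average uncompressed host volume and compression ratio."""
    avg_compressed_bytes = (avg_host_mb * 1_000_000) / compression_ratio
    return int(usable_gb * GB // avg_compressed_bytes)

usable = 620_860  # 623,860 GB cache minus the 3 TB warning threshold
print(max_volumes_worst_case(usable, 6000))   # 98683 volumes
print(max_volumes_average(usable, 750, 2.5))  # 2069533 volumes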


3.2 Expiring Volume Data on Return to Scratch

This method reduces cache usage as volumes are returned to a scratch category that has the fast ready attribute and, optionally, expire time and expire hold enabled. It allows more volumes to be inserted into the system than there is cache space to hold them, because expired volumes have no data associated with them and thus do not consume space in the cache. An expired volume is one that has been returned to scratch and has been deleted by delete expired processing, or has been allocated to satisfy a scratch mount. A volume that has just been returned to scratch but not expired still takes up room in the cache.

If you set the expire time for a category with the fast ready attribute set, but do not select the hold option, the volume's data will be managed by the TS7720 until either the expire time passes or the logical volume is allocated for a new mount, whichever comes first. If scratch volumes are expiring before they are being allocated, you can reduce the expire time in order to free up cache space earlier. Be sure to balance a reduced expire time with your need to keep scratch volume data around in case you want to return it to private.

If you set the expire time for a category with the fast ready attribute set and select the hold option, the TS7720 will continue to manage a scratch logical volume's data until the expire time has transpired. Also, the volume will not be allocated for a new mount during the expire-time period. For this situation, you can reduce the expire time in order to free up cache space earlier. Again, balance a reduced expire time with your need to keep scratch volume data around in case you want to return it to private.

Note: The minimum expire time is one hour; however, the TS7700 only flags expired logical volumes every 12 hours.

Note: With expire hold, there is a potential for a period where mounts cannot be performed. This would occur if expire hold is set and all scratch logical volumes have not yet expired. Since all the scratch volumes are in the hold period, none of them can be mounted. If there is enough cache available you can add more scratch volumes. However, if you are in an out-of-cache-resources state, you should not add more logical volumes; you will need to wait for the existing expire hold scratch logical volumes to expire.

Currently, a maximum of 1000 volumes per hour are expired by the TS7720. If 24,000 volumes are returned to scratch with expire hold enabled and a hold time of 24 hours is specified, it will be 48+ hours before all of the data associated with these volumes has been deleted from cache.

Note: With the initial release level of TS7720 code (8.5.0.154), the amount of cache freed up by the deleted volumes can take up to 6 hours to be reflected in the cache utilization numbers. This means there will be a delay in reporting the exit of the cache warning and critical states. This delay no longer exists in later code levels, starting with 8.5.0.171.

One exposure with this approach is that if the host return-to-scratch job does not run, the cache can fill, and it can take over a day to recover.
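The 1000 volumes per hour rate and the expire hold time make it possible to estimate when cache space will actually come back. A rough sketch of that estimate (Python, illustrative only; the 12-hour flagging interval mentioned above can add further delay and is not modelled here):

def hours_until_scratch_data_deleted(volumes_returned,
                                     expire_hold_hours=0.0,
                                     expire_rate_per_hour=1000):
    """Estimate hours before data for all returned-to-scratch volumes is deleted from cache.

    Expire hold delays the start of delete-expire processing; volumes are then
    expired at roughly expire_rate_per_hour.
    """
    return expire_hold_hours + volumes_returned / expire_rate_per_hour

# 24,000 volumes returned with a 24-hour expire hold: 24 + 24 = 48+ hours,
# matching the example in the text.
print(hours_until_scratch_data_deleted(24_000, expire_hold_hours=24))  # 48.0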


Using the numbers from the Overwriting Existing Volumes section above, where 98,683 volumes was a safe value for the volume count, the number of logical volumes could be raised to 118,683 as long as 20,000 scratch logical volumes are always in the expired state.


For pre-R3.0 code the Logical Volumes -> Fast Ready Categories panel on the TS7720 MI is used to define fast ready categories, to set an Expire Time, and to set the Expire Hold attribute. A panel showing the creation of a fast ready category is also included. These panels are representative of the actual panel and may have slight differences from your panel.

Figure 10: Pre-R3.0 Fast Ready Category Panel


For R3.0 the panel is accessed via the Virtual volume icon > Categories.

Figure 11: R3.0 Category Panel


Figure 12: Pre-R3.0 Add a Fast Ready Category Panel – Adding Category 2, 48 Hour Expire Time, Hold Enabled


Figure 13: R3.0 Add a Fast Ready Category Panel – Adding Category 1235, 3 Day Expire Time, Hold Enabled


3.3 Ejecting Volumes

This method reduces cache usage by ejecting volumes from the library. The logical volumes to be deleted need to be moved to a fast-ready category, and then the host needs to issue an eject command for them. Volume data is deleted from cache when the volume is ejected. If you find yourself ejecting logical volumes to manage your cache, consider adding more disk cache, adding another cluster and creating a grid with the existing cluster, or moving some of your workload to another tape library. Adding another cluster to increase your capacity only makes sense if you will not be replicating all data between the two clusters.

3.4 Altering Copy Consistency Points

Copy Consistency Points (CCPs) are used to define which clusters in a multi-cluster grid are to contain a copy of a volume's data. You can alter the CCPs to reduce the amount of data kept in cache on each cluster. Volume data contained in cache on a cluster can be the result of host writes or due to copy activity within the grid. Future cache usage can be reduced if copy consistency points are changed to eliminate the need to create a copy on a cluster. Changing the CCPs will only affect future host writes. The simplest change is to not create a copy to another cluster's cache. Evaluate your need for multiple copies of data (test data, etc.) and change to use a CCP that does not create a copy.

3.5 Retain Copy Mode

In a multi-cluster grid where a volume is created with fewer copies than the number of clusters in the grid, Retain Copy Mode may be needed. Refer to the IBM® Virtualization Engine TS7700 Series Best Practices - Hybrid Grid white paper on Techdocs for more details concerning Retain Copy Mode.

3.6 Removal Policies

The following sections describe the removal policies for the TS7720 in a grid at the different code release levels. For Release 1.6 the Automatic Removal Policy is the sole means for removal. With Release 1.7 the Automatic Removal Policy is replaced by a set of enhanced removal policies.

3.6.1 Automatic Removal Policy

This TS7720 policy, introduced in release 1.6, supports grid configurations where there is a mixture of TS7720 and TS7740 clusters. The policy does not apply to homogeneous TS7720 grids. Since the TS7720 has a maximum storage space that is the size of its tape volume cache, once that cache fills, this removal policy allows logical volumes to be automatically removed from cache as long as there is another consistent copy in the grid, such as on physical tape associated with a TS7740 or in another TS7720 tape volume cache. In essence, when coupled with the copy policies, it provides an automatic data removal function for the TS7720s.

This removal policy is a fixed solution in the 1.6 release and is not customer tunable. Volumes are removed in least recently used (LRU) order. When the TS7720 determines that additional cache space is needed, those volumes which have already been replicated to another TS7700 will be automatically removed from the TS7720's cache. The TS7720 confirms that a consistent copy of the logical volume exists in another cluster by communicating with the other cluster that contains the copy. The TS7720 will prefer to remove volumes which have been returned to a fast-ready category over private volumes.

Refer to Section 2.4 - Attention Messages for a discussion of the thresholds at which removal from the TS7720 cache begins and ends. Prior to the release 2.1 code level, the automatic removal threshold is equal to the Limited Cache Free Space Warning State threshold. For release 1.7 through release 2.0, the Cache Free Space Warning threshold and Automatic Removal Threshold are set to 3TB. Starting with the release 2.1 code level, the automatic removal threshold is set to 4TB on newly installed clusters. The automatic removal threshold will remain at 3TB when an existing cluster is upgraded to release 2.1 or higher. Starting with release 2.1, the Automatic Removal Threshold can be adjusted using the SETTING CACHE REMVTHR host console request command.

The TS7700 Management Interface displays Logical Volume Details. A logical volume in a TS7720 will have one of the following states:

• Normal - Indicates a volume is a candidate for removal but hasn't been processed yet.
• Retained - Indicates the volume is the only copy in the grid and therefore is not eligible for removal.
• Deferred - Indicates the volume was processed for removal but wasn't eligible yet. Copies to other clusters haven't occurred yet.
• Removed - Indicates the volume has been removed from this TS7720's cache. A timestamp of when the removal occurred is provided.

The host console receives CBR messages when automatic removal begins and ends. Notice that the exiting automatic removal message is actually the exiting limited cache free space warning message:

• CBR3750I Message from library lib_name: Auto removal of volumes has begun on this disk-only cluster.
• CBR3793I Message from library lib_name: Library library-name has left the limited cache free space warning state


Starting with the release 3.0 code level, the auto removal message can be disabled using the SETTING ALERT REMOVMSG host console request command. The message should be disabled on a TS7720 that is expected to automatically remove logical volumes, in order to avoid repeated alert messages. It should be left enabled for a TS7720 that is not expected to reach the automatic removal threshold.

3.6.2 Enhanced Removal Policies

This set of TS7720 policies, introduced in release 1.7, supports grid configurations where there is a TS7720 cluster in the grid. The policies apply to homogeneous TS7720 grids as well as heterogeneous grids. Since the TS7720 has a maximum storage space that is the size of its tape volume cache, once that cache fills, the set of removal policies allows logical volumes to be automatically removed from cache as long as there is another consistent copy in the grid, such as on physical tape associated with a TS7740 or in another TS7720 tape volume cache. In essence, when coupled with the copy policies, it provides a variety of automatic data removal functions for the TS7720s.

These removal policies require a valid copy consistency point configuration at two or more clusters, where one is a TS7720, in order for the policy to be carried out. In addition, when automatic removal does take place, it implies an override to the current copy consistency policy, which means the total number of consistency points will be reduced below the customer's original configuration. When automatic removal starts, all volumes in a fast-ready category are removed first, since these volumes are scratch volumes. To account for any private-to-scratch mistakes, fast-ready volumes have to meet the same copy count criteria in a grid as the non-fast-ready volumes. The pinning option and minimum duration time criteria discussed below are ignored for fast-ready volumes.

Customers need to have some level of control over which volumes are removed and when. To help customers guarantee that data will always reside in a TS7720, or will reside there for at least a minimal amount of time, a minimum retention time, or temporary pin time, must be associated with each removal policy. This minimum retention time, in hours, allows volumes to remain in a TS7720 tape volume cache for at least X hours before they become candidates for removal, where X is between 0 and 65,535. The duration is added to the current time each time a volume is mounted, independent of whether a write occurs. The update also occurs at each cluster within a grid, independent of the mount cluster or the chosen TVC cluster. A minimum retention time of zero indicates no minimal retention requirement. In addition to the minimum retention time, three options are available for each volume within a TS7720. The three policies are configured at the distributed library level and are refreshed at each cluster during any mount operation.

These options are:

• Pinned-The copy of the volume is not removed from this TS7720 cluster as long as the volume is non-fast-ready or is not selected to satisfy a category mount. The minimum retention time is not applicable and is implied as infinite. Once a pinned volume is moved to scratch, it becomes a priority candidate for removal similar to the next two options. This feature must be used judiciously to prevent a TS7720 cache from filling.


• Prefer Remove - The copy of a private volume is removed as long as at least one other copy exists on a peer cluster, the minimum retention time (in X hours) has elapsed since last access and the available free space on the cluster has fallen below the removal threshold. The order of which volumes are removed under this policy is based on their least recently used (LRU) access times. Volumes with this policy are removed prior to the removal of volumes with the Prefer Keep policy except for any volumes in Fast Ready categories. Archive and backup data would be a good candidate for this removal group since it won't likely be accessed once written.

• Prefer Keep - The copy of a private volume is removed as long as at least one other copy exists on a peer cluster, the minimum retention time (in X hours) has elapsed since last access, the available free space on the cluster has fallen below a threshold and volumes with the Prefer Remove policy have been exhausted. The order volumes are removed under this policy is based on their least recently used (LRU) access times. Volumes with the Prefer Remove policy are removed prior to the removal of volumes with the Prefer Keep policy except for any volumes in Fast Ready categories.

Note: For migration from pre-release 1.7, Prefer Keep with a minimum retention time of zero is the default fixed policy.

The Prefer Remove and Prefer Keep policies are similar to cache preference groups PG0 and PG1 with the exception that removal treats both groups as LRU versus using the volume size.

In addition to these policies, volumes assigned to a Fast Ready category that have not been previously delete-expired are also removed from cache when the free space on a cluster has fallen below a threshold. Volumes assigned to Fast Ready categories, regardless of their assigned removal polices, are always removed before any other removal candidates in volume size descending order. The minimum retention time is also ignored for Fast Ready volumes. Only when the removal of Fast Ready volumes does not adequately lower the cache free space below the required threshold will Prefer Remove and then possibly Prefer Keep candidates be analyzed for removal.

Though fast ready volumes are preferred for removal first, without regard to pinning or minimum retention time, there is still a requirement that at least one copy exist elsewhere within the grid. If one or more peer copies cannot be validated, the Fast Ready volume is not removed. If the Fast Ready volume has completed its delete-expire or expire-hold grace period and has already been deleted, then it is no longer a candidate for removal, since the disk space it utilized has already been freed.
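The selection order described above (fast-ready candidates first, largest first; then Prefer Remove and finally Prefer Keep volumes in least recently used order; pinned volumes and volumes without a validated peer copy are skipped) can be summarised in a short sketch. This is purely an illustration of the documented ordering, assuming Python and hypothetical field names; it is not the product's internal implementation:

from dataclasses import dataclass

@dataclass
class Volume:
    volser: str
    size_gb: float
    last_access: float         # epoch seconds; lower means less recently used
    policy: str                # "PIN", "PREFER_REMOVE", or "PREFER_KEEP"
    fast_ready: bool           # assigned to a fast-ready (scratch) category
    peer_copy_validated: bool  # at least one consistent copy exists elsewhere in the grid
    retention_expired: bool    # minimum retention time has elapsed (ignored for fast-ready)

def removal_order(volumes):
    """Order removal candidates per the documented TS7720 priority."""
    eligible = [v for v in volumes if v.peer_copy_validated]
    scratch = sorted((v for v in eligible if v.fast_ready),
                     key=lambda v: v.size_gb, reverse=True)      # largest first
    def lru(policy):
        return sorted((v for v in eligible
                       if not v.fast_ready and v.policy == policy and v.retention_expired),
                      key=lambda v: v.last_access)               # least recently used first
    return scratch + lru("PREFER_REMOVE") + lru("PREFER_KEEP")   # pinned volumes are never selected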

Only when all TS7700 machines within a Grid are at level R1.7 or later will these new policies be made visible within the Management Interface. All logical volumes created prior to this time will be given the default Prefer Keep policy and be assigned a zero minimum retention time duration.

With the addition of the Enhanced Removal Policies for the TS7720, the Storage Class actions are different for the TS7720 and the TS7740. The TS7720 has the three removal policies listed above. The TS7740 has the existing PG0 and PG1 policies. In a hybrid grid, the actions defined at each cluster are used to determine removal. The Storage Class name used at the TS7740 would also be bound to the volume at the TS7720. In other words, when a logical volume is mounted on a TS7740 cluster and subsequently copied to a TS7720, the Storage Class actions as defined on the TS7740 are followed on the TS7740 copy (PG0 or PG1) and the Storage Class actions as defined on the TS7720 are followed on the TS7720 copy (Pinned, Prefer Remove, Prefer Keep).

For example, there are three storage class names:

• KEEPME
• NORMAL
• SACRFICE

On a two cluster, hybrid grid where Cluster 0 is a TS7740 and Cluster 1 is a TS7720:

On Cluster 0 (TS7740) the Storage Class actions are defined as follows:

• KEEPME PG1
• NORMAL PG1
• SACRFICE PG0

On Cluster 1 (TS7720) the Storage Class actions are defined as follows:

• KEEPME Pinned
• NORMAL Prefer Keep
• SACRFICE Prefer Remove

With the Storage Class definitions shown above:

• Any job that uses the Storage Class KEEPME and writes to either TS7700 in the grid will be PG1 in the TS7740 and pinned in the TS7720.

• Any job that uses the Storage Class NORMAL and writes to either TS7700 in the grid will be PG1 in the TS7740 and be set to Prefer Keep in the TS7720.

• Any job that uses the Storage Class SACRFICE and writes to either TS7700 in the grid will be PG0 in the TS7740 and be set to Prefer Remove in the TS7720.

Below is a figure that illustrates the order in which volumes are removed from the TS7720 cache:


Figure 14 - TS7720 Cache Removal Priority

Host Command Line Query capabilities are supported that help override removal behavior, as well as the ability to disable automatic removal within a TS7720 cluster. Please refer to the "IBM® Virtualization Engine TS7700 Series z/OS Host Command Line Request User's Guide" on Techdocs for more information. The new and modified Host Console Requests are:

• LVOL {VOLSER} REMOVE - This command will immediately remove a volume from the target TS7720 assuming at least one copy exists on another cluster. Pinned volumes and volumes that are still retained due to the minimum retention time can also immediately be removed.

• LVOL {VOLSER} REMOVE PROMOTE - This command will move a removal candidate within the target TS7720 to the front of the queue for removal. Volumes that are pinned or in fast-ready categories are not candidates for promotion. The removal threshold must still be crossed before removal takes place. In addition, volumes in fast-ready categories will be removed first.

• LVOL {VOLSER} PREFER - This existing command, which normally targets preference group updates in a TS7740, will now also be used to update the access time associated with a volume in a TS7720 so that it is further back in the queue for removal. Any associated minimum retention time is also refreshed, thus emulating a mount access for read. The assigned removal policy is not modified.

• SETTING CACHE REMOVE {DISABLE|ENABLE} - This command will either enable or disable the automatic removal function within the target TS7720. The default is ENABLE.

The TS7700 Management Interface displays Logical Volume Details. A logical volume in a TS7720 will have one of the following removal residency states:

• Removed – The volume was removed from this TS7720’s cache. “Removal Time” will display when it was removed.

• No Removal Attempted – This volume is a candidate for removal, but a removal has not yet been attempted.

• Retained – An attempt was made to remove the volume and the TS7720 determined it couldn’t and likely never will.

• Deferred – An attempt was made to remove the volume, but conditions were not optimal and another attempt will be made later.

• Pinned – This volume is currently pinned in cache and will only be a candidate for removal if it exists within a fast-ready category.

• Held – This volume is currently held due to the assigned Minimum Retention value. Once it elapses, it will become a candidate for removal. The “Removal Time” will state the time when the hold will expire. If within a fast-ready category, it is still a candidate for removal.

The removal policy is set using the Storage Class panel on the TS7720 Management Interface as shown below. The policy type and retention time can be entered.


Figure 15 – Pre R3.0 TS7720 Storage Class Panel - Removal Policy Entry


The R3.0 panel is accessed via the Constructs icon > Storage Class.

Figure 16: R3.0 Storage Class Panel

3.7 Temporary Removal Threshold

The temporary removal process introduced by release 1.6 is used in hybrid grids to allow a TS7740 to be taken into service mode for a period of time without having the TS7720 cache fill up. A temporary removal threshold is used to free up enough of the TS7720 cache so that it will not fill up while the TS7740 cluster is in service. This temporary threshold value sets a lower threshold for when the Automatic Removal Policy removes volumes from the TS7720 cache. Temporary removal is typically used when the last or only TS7740 in the grid is to be taken down. The threshold setting will need to be planned such that there is enough free space in the TS7720 cache to contain new volumes written to it for the duration of the service period. Each TS7720 can independently set this removal threshold using the Management Interface.

Logical volumes may need to be removed before one or more clusters enter Service mode. When a cluster in the grid enters Service mode, the remaining clusters can lose their ability to make or validate volume copies, preventing the removal of an adequate number of logical volumes. This scenario can quickly lead to the TS7720 cache reaching its maximum capacity. The lower threshold creates additional free cache space, which allows the TS7720 Virtualization Engine to accept any host requests or copies during the service outage without reaching its maximum cache capacity. The Temporary Removal Threshold value must be equal to or greater than the expected amount of compressed host workload written, copied, or both to the TS7720 Virtualization Engine during the service outage. The default Temporary Removal Threshold is 4 TB, providing that 5 TB (4 TB plus 1 TB) of free space exists. You can lower the threshold to any value between 3 TB and full capacity minus 3 TB.

Progress of the removal process can be monitored using the Management Interface. The operations history posts periodic messages that describe the progress. Also, the Tape Volume Cache panel can be used to view the amount of available space.

The figure below shows a two-cluster hybrid grid with the TS7720 attached to the host and the TS7740 as a DR cluster. The TS7740 is to be put into service mode.

Figure 17 - Temporary Removal Threshold (diagram: the TS7720 cache and the TS7740 connected over the LAN/WAN, showing the default free space plus the additional temporary free space in the TS7720 cache, with the TS7740 in a planned service outage)
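Planning the Temporary Removal Threshold amounts to estimating how much compressed data will land in the TS7720 while the TS7740 is unavailable. A back-of-the-envelope sketch, assuming Python and hypothetical input numbers, is shown below:

def required_temporary_threshold_tb(outage_hours,
                                    host_write_mb_per_sec,
                                    compression_ratio,
                                    inbound_copy_tb=0.0,
                                    safety_margin=1.2):
    """Estimate the free space (TB) to reserve in the TS7720 cache for a service outage.

    Per the guidance above, the threshold should be at least the compressed host
    workload written during the outage plus any copies received from peers.
    """
    compressed_tb = (host_write_mb_per_sec * 3600 * outage_hours) / compression_ratio / 1_000_000
    return (compressed_tb + inbound_copy_tb) * safety_margin

# Hypothetical example: 12-hour outage, 200 MB/s of host writes, 2.5:1 compression.
print(round(required_temporary_threshold_tb(12, 200, 2.5), 1))  # roughly 4.1 TB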


1. The first step is to set the Temporary Removal Threshold for the TS7720 using the TS7720's Management Interface.
2. Next, at the Management Interface of the TS7740 that is going to enter Service, turn on the Temporary Removal process.
3. The TS7720 starts to actively remove volumes from its cache that have consistent copies in the TS7740.
4. Scratch volumes are removed first, then private volumes.
5. Monitor the TS7720 Management Interface for the temporary threshold to be reached.
6. The TS7740 enters service prep and eventually reaches service mode. While in service prep, copies to the TS7740 continue. Once in service mode, the removal stops and the temporary threshold is turned off.
7. During the service period the TS7720 cache begins to fill again.
8. The TS7740 leaves service mode with TS7720 cache to spare.
9. All is well; the TS7720 cache did not fill up.

The temporary removal threshold is set independently for each TS7720 in the grid using the Management Interface. The Service Mode panel contains a button labeled "Lower Threshold". When pressed, a second panel appears showing a summary of the cache along with the Temporary Removal Threshold field. After entering the temporary threshold, press the "Submit Changes" button.

Figure 18 – Pre-R3.0 Setting Temporary Removal Threshold

The Temporary Removal mode is initiated by selecting the "Lower Threshold" button on the Management Interface Service Mode panel of the TS7740 that will be put in service. The screen shown below allows the activation of the Temporary Removal Threshold to be confirmed. The Operational History panel is used to cancel the removal task if you decide not to go to service mode.

Figure 19 – Pre-R3.0 Initiating Temporary Removal

For R3.0 and higher the Temporary Removal Threshold is accessed from the Grid Summary panel via the Actions pull-down menu.

Figure 20: R3.0 Temporary Removal Threshold


With R3.0 the following panel is presented to allow the removal thresholds to be set and the temporary thresholds to be activated.

Figure 21: R3.0 Setting Temporary Removal Thresholds


4 Impact of Being in the Out of Cache Resource State

Prior to Release 1.7, once a single TS7720 in a grid is in the Out-of-Cache state, new fast-ready mounts and writes to newly mounted volumes are failed. Specific (private) mounts for read of existing volumes are still allowed. All clusters in a grid remain in this state until at least 2.5 TB is made available below the 95% mark for all clusters in the grid. This 2.5 TB value is meant to be big enough to prevent toggling in and out of the state over short time durations.

Release 1.7 introduces TS7720 Cache Full redirection. Prior to R1.7, once a TS7720 becomes full (95% or higher), all scratch mounts into all TS7720 clusters will fail independent of how full other TS7720 clusters are. With R1.7, cache full conditions will be treated like back end library degraded conditions such as “Out of physical scratch”. When a TS7720 becomes full, only that cluster will no longer accept writes into its disk cache. During TVC selection, a TS7720 (including the mount point) which is full is viewed as an invalid TVC candidate. Only when all other candidates (TS7720 or TS7740) are also invalid will the mount fail. Otherwise, an alternative TVC will be chosen in a non-full TS7720 or TS7740 which has a copy policy mode of R or D.

When in a grid-wide Out of Cache Resource state, scratch mounts (fast ready) will be failed by the TS7720. This results in OAM generating host console messages indicating the reason the mount failed (via CBR4171I messages) and that it can be retried (via the CBR4196D message) when cache space has been made available.

CBR4171I MOUNT FAILED. LVOL=logical-volume, LIB=library-name, PVOL=physical-volume, REASON=reason-code.
CBR4196D Job job-name, drive device-number, volser volser, error code error-code. Reply 'R' to retry or 'C' to cancel.

For example:

CBR4171I MOUNT FAILED. LVOL=??????, LIB=ATLCOMP1, PVOL=??????, REASON=40.
JOB03911 *73 CBR4196D JOB TB451QA3, DRIVE 6A0A, VOLSER ??????, ERROR CODE 140167. REPLY 'R' TO RETRY, 'W' TO WAIT, OR 'C' TO CANCEL.

Attempts to append to specifically mounted volumes will fail. An IOS000I message will be issued with sense data indicating write protect.

If the already active devices and copy activity were to cause the cache to reach the 97.5% cache utilization level, then host throttling will occur, similar to what happens in the TS7740. For example, even for the case of the 39,213 GB cache, this means that roughly 1 TB of data (about 2.5% of the cache) would be needed (256 devices each writing 4 GB volumes) to get from the 95% level to the 97.5% level. It is highly unlikely the TS7720 will get to the point where throttling comes into play. As noted in the Managing Cache Usage section, recovering from the out of cache resources condition will take at best hours, if not days.
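The headroom figure in the example above is straightforward to reproduce. A small sketch (Python, illustrative only) showing how much data separates the 95% out-of-cache level from the 97.5% throttling level for a given cache size:

def throttle_headroom_gb(usable_gb):
    """Data (GB) that open devices could still write between the 95% and 97.5% levels."""
    return usable_gb * (0.975 - 0.95)

# 39,213 GB cache: about 980 GB of headroom, i.e. roughly 256 devices each appending a 4 GB volume.
print(throttle_headroom_gb(39_213))  # about 980 GB
print(256 * 4)                       # 1024 GB of potential writes from already-open volumes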


References: None

Disclaimers:

Copyright © 2008, 2013 by International Business Machines Corporation.

No part of this document may be reproduced or transmitted in any form without written permission from IBM Corporation.

Product data has been reviewed for accuracy as of the date of initial publication. Product data is subject to change without notice. This information could include technical inaccuracies or typographical errors. IBM may make improvements and/or changes in the product(s) and/or program(s) at any time without notice. References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does business. Any reference to an IBM Program Product in this document is not intended to state or imply that only that program product may be used. Any functionally equivalent program, that does not infringe IBM's intellectual property rights, may be used instead. It is the user's responsibility to evaluate and verify the operation of any non-IBM product, program or service.

THE INFORMATION PROVIDED IN THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IBM EXPRESSLY DISCLAIMS ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON INFRINGEMENT. IBM shall have no responsibility to update this information. IBM products are warranted according to the terms and conditions of the agreements (e.g., IBM Customer Agreement, Statement of Limited Warranty, International Program License Agreement, etc.) under which they are provided. IBM is not responsible for the performance or interoperability of any non-IBM products discussed herein. The customer is responsible for the implementation of these techniques in its environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. Unless otherwise noted, IBM has not tested those products in connection with this publication and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or copyrights. Inquiries regarding patent or copyright licenses should be made, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

Trademarks

The following are trademarks or registered trademarks of International Business Machines in the United States, other countries, or both: IBM, TotalStorage, DFSMS/MVS, S/390, z/OS, and zSeries. Other company, product, or service names may be the trademarks or service marks of others.


End of Document