POWER IT Pro - Oct. 2012

Managing IBM Power System Compute Nodes | Secrets of an AIX Administrator, Part 4 | How to Customize IBM Systems Director | Video Tip: How Do I Build an LPAR? | Your Pure, AIX, and IBM i Authority | A Penton Publication | October 2012 / Vol. 1 / No. 6 | Workload Management | Plus >>

Description

POWER IT Pro offers an array of resources, news, and perspectives on IBM Power systems and servers, including Pure, AIX, and IBM i.

Transcript of POWER IT Pro - Oct. 2012

Page 1: POWER IT Pro - Oct. 2012

Managing IBM Power System Compute Nodes
Secrets of an AIX Administrator, Part 4
How to Customize IBM Systems Director
Video Tip: How Do I Build an LPAR?

Your Pure, AIX, and IBM i Authority

A Penton Publication | October 2012 / Vol. 1 / No. 6

Workload Management

Plus >>

Page 2: POWER IT Pro - Oct. 2012
Page 3: POWER IT Pro - Oct. 2012
Page 4: POWER IT Pro - Oct. 2012

Workload Management
Mel Beckman

IBM’s PureFlex and Power systems offer a range of workload management tools, from IBM i’s logical partitions (LPARs), to AIX’s workload partitions (WPARs), to the PowerVM hypervisor’s micropartitioning. There is also a slew of cross-platform management tools and interfaces. Each has strengths and weaknesses, and no single workload management toolset fits all missions. Learn the pros and cons of the various Power workload controls so you can select the correct tool for the job.

Cover Story ▼

Access articles online at www.POWERITPro.com.

Features
39 Managing IBM Power System Compute Nodes with Flex System Manager, Greg Hintermeister
49 Secrets of an AIX Administrator, Part 4, Christian Pruett

Power at Work
53 Customizing IBM Systems Director, Erwin Earley
65 System Auditing with AIX, David Tansley
77 IVM: A Cost-Effective Entry to PowerVM Virtualization, Anthony English
81 How Do I Build an LPAR? Rob McNelly
82 How to Recover from a Lost Root Password, David Tansley
83 Move Multiple Small Files with AIX, Anthony English

October 2012 | Vol. 1, No. 6


In Every Issue
5 Power News
11 New Products
15 Industry Issues: Is There a PureSystems Server Solution in Your Future? Chris Maxcer
23 The Future of the IT Pro, Seamus Quinn
27 Workload Management (Cover Story), Mel Beckman
88 Hot or Not: Software Defined Hype, Sean Chandler
91 Advertising Index

Page 5: POWER IT Pro - Oct. 2012
Page 6: POWER IT Pro - Oct. 2012

27 | www.POWERITPro.com | POWER IT Pro / October 2012

If memory, CPU, network, and storage capacity were infinite, and users were infinitely patient, IT technologists would never have to worry about managing workloads to fit available capacity and performance needs. Alas, there never seems to be quite enough physical compute resources or user patience. Left to their own devices, the applications we manage would gradually slow down to a stodgy pace, at which point user patience would approach zero.

The solution to the problem of finite resources and seemingly unlimited user demand is workload management: the explicit allocation of resources, on a priority basis, to applications in order to meet specific performance and reliability objectives. IBM’s Power Systems were engineered from the ground up to provide high-quality workload management tools. Some of these tools are specific to each OS platform (i, AIX, and Linux); others are integrated into PowerVM, the hardware-based hypervisor that now underpins all Power-based machines. Depending on your environment, you might be able to exploit several (or all) of the available workload management tools. To use these capabilities, however, you must know what tools exist, and the pros and cons of each.

Mel Beckman is senior technical editor for POWER IT Pro.


Cover Story

Workload Management

The Power platform delivers loads of tools

Page 7: POWER IT Pro - Oct. 2012



Objectives, Planning, and Instrumentation

Any workload management process must start with objectives. There’s no single “optimum” workload management methodology, and thus no single set of objectives for every environment. You might have specific performance objectives for high-priority applications, or a cost objective for energy consumption, or an uptime objective of 99.999 percent. Objectives are usually related to the importance of each application you administer. For example, email, Customer Relationship Management (CRM), and materials requirements planning (MRP) are typically mission-critical applications, whereas business intelligence and accounting are less performance-sensitive. To begin workload management planning, identify all the objectives you want to meet and prioritize applications in the order you want them to perform.

Don’t be concerned if the objectives you want aren’t realized in your current system. One benefit of workload management is that it improves the performance of existing runtime environments while reducing costs. It’s quite possible that unbalanced resource utilization, unused islands of capacity, unnecessary resource contention, or a combination of all three pathologies hampers your current system performance. However, you want to be able to measure the results of any workload management process so that you can demonstrate improvements. To do that, you must have good instrumentation in place.

To track gross performance improvements across a system, you need to monitor only a few key metrics: CPU, memory, and network utilization; I/O operations per second (IOPS); and application response times. Setting up instrumentation for these isn’t complex, but it’s outside the scope of this article. You can set up basic ad-hoc monitoring for each workload using the tools enumerated in Table 1. Or you can monitor them more conveniently in IBM’s Systems Director (see “Meet Your Data Center Sidekick: IBM Systems Director 6.3”), which provides historical graphing (Figure 1) and alerting. You can learn about these instrumentation techniques in the IBM Redbook “IBM PowerVM Virtualization Managing and Monitoring.”

Page 8: POWER IT Pro - Oct. 2012



Table 1: Tools for Monitoring Resources in PowerVM

AIX
  CPU: topas, nmon, PM for Power Systems
  Memory: topas, nmon, PM for Power Systems
  Storage: topas, nmon, iostat, fcstat, PM for Power Systems
  Network: topas, nmon, entstat, PM for Power Systems

IBM i
  CPU: WRKSYSACT; IBM Performance Tools for IBM i, IBM Systems Director, PM for Power Systems
  Memory: WRKSYSSTS, WRKSHRPOOL; IBM Performance Tools for IBM i, IBM Systems Director, PM for Power Systems
  Storage: WRKDSKSTS; IBM Performance Tools for IBM i, IBM Systems Director, PM for Power Systems
  Network: WRKTCPSTS; IBM Performance Tools for IBM i, IBM Systems Director, PM for Power Systems

Linux
  CPU: topas, nmon, sar
  Memory: /proc/meminfo
  Storage: iostat
  Network: netstat, iptraf

VIOS
  CPU: topas, viostat
  Memory: topas, vmstat, svmon
  Storage: topas, fcstat, viostat
  Network: topas, entstat

System-wide
  CPU: topas (AIX, VIOS); IBM Systems Director (all platforms); System i Navigator (IBM i)
  Memory: topas (AIX, VIOS); System i Navigator (IBM i)
  Storage: topas (AIX, VIOS); System i Navigator (IBM i)
  Network: topas (AIX, VIOS); System i Navigator (IBM i)

Figure 1: Example Systems Director Performance Measurement Graph

Once you have a monitoring scheme up and running, you should run it across all major production periods (e.g., a day, a week, a month, month-end) to establish performance baselines against which you can measure your workload management changes.

Table 1 adapted from the IBM Redbook “IBM PowerVM Virtualization Managing and Monitoring”
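To make “establish a baseline” concrete, here is a minimal sketch of boiling collected samples down to baseline figures. The metric, the sample values, and the helper names are illustrative only; they are not part of nmon, Systems Director, or any other IBM tool.

```python
# Baseline summary from sampled metrics: a minimal sketch.
# Assumes samples were already exported (e.g., from nmon or Systems
# Director) into plain lists of numbers, one list per metric.
from statistics import mean

def percentile(samples, pct):
    """Nearest-rank percentile of a list of numeric samples."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[rank]

def baseline(samples):
    """Summarize one metric's samples into a baseline record."""
    return {
        "mean": round(mean(samples), 1),
        "p95": percentile(samples, 95),
        "peak": max(samples),
    }

# Hypothetical CPU-utilization samples (percent) over a production day
cpu_busy = [22, 35, 41, 87, 90, 38, 30, 25, 76, 44]
print(baseline(cpu_busy))  # -> {'mean': 48.8, 'p95': 90, 'peak': 90}
```

Recording the mean, 95th percentile, and peak for each metric and each production period gives you a simple before/after yardstick for any workload management change.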

Page 9: POWER IT Pro - Oct. 2012



Logical Partitions

The essence of workload management is moving workloads and resources around to marry up needy (and entitled) applications with the resources necessary to meet your objectives. For example, if you have an objective of sub-second response time on an eCommerce application, and that objective isn’t being met, you can add the appropriate resource (CPU, memory, or network capacity) to open whatever bottleneck is constraining performance. The instrumentation described in the previous section will let you identify the resource that requires relief.

Ideally, you could use one workload management toolset to accomplish all your goals, but the state of the art hasn’t quite arrived there. For now, you’ll use a mix of tools, some specific to the platform you’re running (AIX, i, or Linux) and some cross-platform. Let’s look at workload management at the platform level first. It’s easiest to understand the big picture in light of the history of each platform converging on the Power Systems architecture, so we’ll look at them in that order.

The earliest workload management tools originated with the i platform, in the form of logical partitions. An LPAR is a virtual machine (VM) dedicated to a single OS, which in the beginning was just IBM i. Originally, the LPAR hypervisor ran in a dedicated “primary” OS/400 partition, which managed subsidiary LPARs using IBM i commands. Beginning with Power5 and i5/OS, the hypervisor (today called PowerVM) became independent from all OSs, running in its own dedicated, protected memory. Over time, IBM added support for Linux LPARs, and ultimately extended the concept to AIX. Today, IBM eschews the term LPAR, preferring the more generic virtual server, but for our discussion I’ll stick to the LPAR nomenclature. (An LPAR is a VM. Why create yet another term?)

The word “partition” means “part of a whole,” but that’s not quite consistent with the way LPARs actually work in the Power world. Originally, you managed LPARs from within IBM i, but in modern Power systems (Power6 and Power7) you manage them from the Hardware

Page 10: POWER IT Pro - Oct. 2012



Management Console (HMC), the Integrated Virtualization Manager (IVM), or the Systems Director Management Console (SDMC).

All of PowerVM’s management interfaces let you slice out a dedicated portion of CPU cycles for each LPAR. Two or more LPARs can also share CPU, which is useful for related application components such as a web and database server. The former approach guarantees a specific performance level, whereas the latter optimizes utilization. You choose the approach that works best for your objectives.

But wait, there’s more! Instead of specifically configuring LPARs to share CPUs, you can assign LPARs to a CPU pool. In this scenario, you allocate a CPU “entitlement” to each LPAR, which it’s guaranteed to have available when needed. If an LPAR goes over its entitlement, it can “borrow” unused CPU cycles from the pool. Any CPU cycles an LPAR doesn’t need are “donated” to the pool for others to use.
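The entitlement-and-borrowing arithmetic above can be sketched with a toy model. The pool size, LPAR names, and the allocate helper below are hypothetical illustrations, not a PowerVM interface.

```python
# Toy model of shared-pool CPU allocation: each LPAR is guaranteed its
# entitlement; demand above entitlement is served from whatever capacity
# the other LPARs left unused. Names and numbers are illustrative only.

def allocate(pool_capacity, lpars):
    """lpars: dict of name -> (entitlement, demand), in processor units.
    Returns dict of name -> CPU actually granted this interval."""
    # Every LPAR first receives the lesser of its entitlement and demand.
    granted = {n: min(ent, dem) for n, (ent, dem) in lpars.items()}
    spare = pool_capacity - sum(granted.values())
    # Over-entitlement demand "borrows" from the spare capacity, in turn.
    for name, (ent, dem) in lpars.items():
        if dem > ent and spare > 0:
            extra = min(dem - ent, spare)
            granted[name] += extra
            spare -= extra
    return granted

# Two LPARs in a 4.0-processor pool: "web" is quiet, "db" is busy.
print(allocate(4.0, {"web": (2.0, 0.5), "db": (2.0, 3.0)}))
# -> {'web': 0.5, 'db': 3.0}: db borrows the cycles web donated.
```

Note how the quiet LPAR loses nothing by donating: its full entitlement is returned the moment its own demand rises.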

PowerVM doles out CPU capacity in virtual processors, which can be allocated in tenths of a physical processor—a capability called micropartitioning. You allocate a number of virtual processors to each LPAR, which dictates the number of symmetrical multiprocessing tasks in hardware, and set the micropartition size to the number of tenths across all the virtual processors. For a single virtual processor, the micropartition capacity can be from 0.1 to 1.0. For three virtual processors, the capacity can be from 0.3 to 3.0. The number of virtual processors in a micropartition isn’t necessarily related to the number of physical processors in the machine. A single physical processor can support up to 10 virtual processors. PowerVM manages this “overbooking” of physical processors by using wait time (e.g., when a processor is blocked for I/O completion) to spread unused CPU cycles among VPs.
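A small checker can encode the micropartitioning rules just stated: 0.1 to 1.0 units of capacity per virtual processor, and at most 10 virtual processors per physical processor. The function is a hypothetical illustration, not IBM tooling.

```python
# Sanity-check a micropartition configuration against the rules above.
# Hypothetical helper for illustration; not part of HMC/IVM/SDMC.

def check_micropartition(virtual_procs, capacity, physical_procs):
    """Return a list of rule violations (empty list means valid)."""
    errors = []
    if capacity < 0.1 * virtual_procs:
        errors.append("capacity below 0.1 per virtual processor")
    if capacity > 1.0 * virtual_procs:
        errors.append("capacity above 1.0 per virtual processor")
    if virtual_procs > 10 * physical_procs:
        errors.append("more than 10 virtual processors per physical processor")
    return errors

# Three virtual processors: capacity must fall between 0.3 and 3.0.
print(check_micropartition(3, 1.5, physical_procs=1))  # valid
print(check_micropartition(3, 0.2, physical_procs=1))  # too little capacity
```

Running the checker on a capacity of 0.2 with three virtual processors flags the first rule, exactly as the 0.3-to-3.0 range in the text implies.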

The shared processor pool consists of all physical processors not already assigned to a CPU-dedicated LPAR. This means that if a dedicated LPAR shuts down, its CPU capacity is automatically donated to the pool. PowerVM automatically manages sharing, so until CPU capacity is saturated, each workload gets dedicated processing power. This means you don’t pay a penalty for putting workloads in a pool until the