Historical Data Collection Guide
for OMEGAMON® XE and CandleNet Command Center®
CandleNet Portal® (Version 195)
Candle Management Workstation® (Version 350)
GC32-9181-00
May 2004
Candle Corporation
100 North Sepulveda Blvd.
El Segundo, California 90245
2 Historical Data Collection Guide for OMEGAMON XE and CCC
Registered trademarks and service marks of Candle Corporation: AF/OPERATOR, AF/REMOTE, Availability Command Center, Candle, Candle CIRCUIT, Candle Command Center, Candle Direct logo, Candle eDelivery, Candle Electronic Customer Support, Candle logo, Candle Management Server, Candle Management Workstation, CandleLight, CandleNet, CandleNet Command Center, CandleNet eBusiness Platform, CandleNet Portal, CL/CONFERENCE, CL/SUPERSESSION, CommandWatch, CT, CT/Data Server, CT/DS, DELTAMON, DEXAN, eBA, eBA*ServiceMonitor, eBA*ServiceNetwork, eBusiness at the speed of light, eBusiness Assurance, eBusiness Institute, ELX, EPILOG, ESRA, ETEWatch, IntelliWatch, IntelliWatch Pinnacle, MQSecure, MQView, OMEGACENTER, OMEGAMON, OMEGAMON II, OMEGAMON Monitoring Agent, OMEGAMON Monitoring Agents, OMEGAVIEW, OMEGAVIEW II, PQEdit, Response Time Network, Roma, SitePulse, Solutions for Networked Applications, Solutions for Networked Businesses, TMA2000, Transplex, and Volcano.
Trademarks and service marks of Candle Corporation: AF/Advanced Notification, AF/PERFORMER, Alert Adapter, Alert Adapter Plus, Alert Emitter, AMS, Amsys, AutoBridge, AUTOMATED FACILITIES, Availability Management Systems, Business Services Composer, Candle Alert, Candle Business Partner Logo, Candle Command Center/SentinelManager, Candle CommandPro, Candle eSupport, Candle Insight, Candle InterFlow, Candle Managing what matters most, Candle Service Suite, Candle Technologies, CandleNet, CandleNet 2000, CandleNet Conversion, CandleNet eBP, CandleNet eBP Access for S.W.I.F.T., CandleNet eBP Administrator, CandleNet eBP Broker Access for Mercator or MQSI, CandleNet eBP Configuration, CandleNet eBP Connector, CandleNet eBP File Transfer, CandleNet eBP Host Connect, CandleNet eBP Object Access, CandleNet eBP Object Browser, CandleNet eBP Secure Access, CandleNet eBP Service Directory, CandleNet eBP Universal Connector, CandleNet eBP Workflow Access, CandleNet eBusiness Assurance, CandleNet eBusiness Exchange, CandleNet 
eBusiness Platform Administrator, CandleNet eBusiness Platform Connector, CandleNet eBusiness Platform Connectors, CandleNet eBusiness Platform Powered by Roma Technology, CandleNet eBusiness Platform Service Directory, Candle Vision, CCC, CCP, CCR2, CEBA, CECS, CICAT, CL/ENGINE, CL/GATEWAY, CL/TECHNOLOGY, CMS, CMW, Command & Control, Connect-Notes, Connect-Two, CSA ANALYZER, CT/ALS, CT/Application Logic Services, CT/DCS, CT/Distributed Computing Services, CT/Engine, CT/Implementation Services, CT/IX, CT/Workbench, CT/Workstation Server, CT/WS, !DB Logo, !DB/DASD, !DB/EXPLAIN, !DB/MIGRATOR, !DB/QUICKCHANGE, !DB/QUICKCOMPARE, !DB/SMU, !DB/Tools, !DB/WORKBENCH, Design Network, e2e, eBA*SE, eBAA, eBAAuditor, eBAN, eBANetwork, eBAAPractice, eBP, eBusiness Assurance Network, eBusiness at the speed of light, eBusiness at the speed of light logo, eBusiness Exchange, eBX, End-to-End, eNotification, ENTERPRISE, Enterprise Candle Command Center, Enterprise Candle Management Workstation, Enterprise Reporter Plus, ER+, ERPNet, ETEWatch Customizer, HostBridge, InterFlow, Candle InterFlow, Lava Console, Managing what matters most, MessageMate, Messaging Mastered, Millennium Management Blueprint, MMNA, MQADMIN, MQEdit, MQEXPERT, MQMON, NBX, NC4, NetGlue, NetGlue Extra, NetMirror, NetScheduler, New Times, New Team, New Readiness, OMA, OMC Gateway, OMC Status Manager, OMEGACENTER Bridge, OMEGACENTER Gateway, OMEGACENTER Status Manager, OMEGAMON/e, OMEGAMON Management Center, OSM, PathWAI, PC COMPANION, Performance Pac, Powered by Roma Technology, PowerQ, PQConfiguration, PQScope, Roma Application Manager, Roma Broker, Roma BSP, Roma Connector, Roma Developer, Roma FS/A, Roma FS/Access, RomaNet, Roma Network, Roma Object Access, Roma Secure, Roma WF/Access, Roma Workflow Access, RTA, RTN, SentinelManager, Somerset, Somerset Systems, Status Monitor, The Millennium Alliance, The Millennium Alliance logo, The Millennium Management Network Alliance, Tracer, Unified Directory Services, 
WayPoint, and ZCopy.
Trademarks and registered trademarks of other companies: AIX, DB2, MQSeries and WebSphere are registered trademarks of International Business Machines Corporation. Citrix, WinFrame, and ICA are registered trademarks of Citrix Systems, Inc. Multi-Win and MetaFrame are trademarks of Citrix Systems, Inc. SAP is a registered trademark and R/3 is a trademark of SAP AG. UNIX is a registered trademark in the U.S. and other countries, licensed exclusively through X/Open Company Ltd. HP-UX is a trademark of Hewlett-Packard Company. SunOS is a trademark of Sun Microsystems, Inc. All other company and product names used herein may be trademarks or registered trademarks of their respective owners.
Copyright © April 2004, Candle Corporation, a California corporation. All rights reserved. International rights secured.
Threaded Environment for AS/400, Patent No. 5,504,898; Data Server with Data Probes Employing Predicate Tests in Rule Statements (Event Driven Sampling), Patent No. 5,615,359; MVS/ESA Message Transport System Using the XCF Coupling Facility, Patent No. 5,754,856; Intelligent Remote Agent for Computer Performance Monitoring, Patent No. 5,781,703; Data Server with Event Driven Sampling, Patent No. 5,809,238; Threaded Environment for Computer Systems Without Native Threading Support, Patent No. 5,835,763; Object Procedure Messaging Facility, Patent No. 5,848,234; End-to-End Response Time Measurement for Computer Programs, Patent No. 5,991,705; Communications on a Network, Patent Pending; Improved Message Queuing Based Network Computing Architecture, Patent Pending; User Interface for System Management Applications, Patent Pending.
NOTICE: This documentation is provided with RESTRICTED RIGHTS. Use, duplication, or disclosure by the Government is subject to restrictions set forth in the applicable license agreement and/or the applicable government rights clause. This documentation contains confidential, proprietary information of Candle Corporation that is licensed for your internal use only. Any unauthorized use, duplication, or disclosure is unlawful.
Contents

Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
About This Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Adobe Portable Document Format . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Documentation Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Candle Customer Service and Satisfaction . . . . . . . . . . . . . . . . . . . . 33
What’s New . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Chapter 1. Overview of Historical Data Collection . . . . . . . . . . . . . . . . . . . 37
About Historical Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Historical Collection Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Performance Impact of Historical Data Requests . . . . . . . . . . . . . . . . 42
Chapter 2. Planning Collection of Historical Data . . . . . . . . . . . . . . . . . . . . 45
Developing a Strategy for Historical Data Collection . . . . . . . . . . . . . 46
Chapter 3. Configuring Historical Data Collection on CandleNet Portal . . . 51
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Configuring Historical Data Collection . . . . . . . . . . . . . . . . . . . . . . . . 54
Starting and Stopping Historical Data Collection . . . . . . . . . . . . . . . . 58
Chapter 4. Configuring Historical Data Collection on CMW . . . . . . . . . . . . 61
Invoking the HDC Configuration Program . . . . . . . . . . . . . . . . . . . . . 62
Using the Configuration Dialog to Control Historical Data Collection . . 65
Defining Data Collection Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Using the Advanced History Configuration Options Dialog . . . . . . . . 71
Chapter 5. Warehousing Your Historical Data . . . . . . . . . . . . . . . . . . . . . . . 75
Prerequisites to Warehousing Historical Data . . . . . . . . . . . . . . . . . . . 76
Configuring Your Warehouse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Preventing Historical Data File Corruption . . . . . . . . . . . . . . . . . . . . 80
Error Logging for Warehoused Data . . . . . . . . . . . . . . . . . . . . . . . . . 82
Chapter 6. Converting History Files to Flat Files (Windows and OS/400) . . 83
Conversion Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Archiving Procedure using LOGSPIN . . . . . . . . . . . . . . . . . . . . . . . . 85
Archiving Procedure using the Windows AT Command . . . . . . . . . . . 87
Converting Files Using krarloff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
AS/400 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Location of the Windows Executables and Historical Data Collection Table Files . . . . . 92
Chapter 7. Converting History Files to Delimited Flat Files (MVS) . . . . . . . 93
Automatic Conversion and Archiving Process . . . . . . . . . . . . . . . . . . 94
Location of the MVS Executables and Historical Data Table Files . . . 98
Manual Archiving Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Chapter 8. Converting History Files to Delimited Flat Files (UNIX Systems) . . . 101
Understanding History Data Conversion . . . . . . . . . . . . . . . . . . . . . 102
Performing the History Data Conversion . . . . . . . . . . . . . . . . . . . . . 103
Chapter 9. Converting History Files to Delimited Flat Files (HP NonStop Kernel Systems) . . . 107
Conversion Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Appendix A. Maintaining the Persistent Data Store (CT/PDS) . . . . . . . . . . 109
About the Persistent Data Store . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Components of the CT/PDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Overview of the Automatic Maintenance Process . . . . . . . . . . . . . . 115
Making Archived Data Available . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Exporting and Restoring Persistent Data . . . . . . . . . . . . . . . . . . . . . 123
Data Record Format of Exported Data . . . . . . . . . . . . . . . . . . . . . . 125
Extracting CT/PDS Data to Flat Files . . . . . . . . . . . . . . . . . . . . . . . . 131
Command Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Appendix B. Disk Space Requirements for Historical Data Tables . . . . . . . . 141
Historical Data Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
OMEGAMON XE for CICS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
OMEGAMON XE for DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
OMEGAMON XE for DB2 Universal Database . . . . . . . . . . . . . . . . 170
OMEGAMON XE for NetWare . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
OMEGAMON XE for ORACLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
OMEGAMON XE for Sybase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
OMEGAMON XE for MS SQL Server . . . . . . . . . . . . . . . . . . . . . . . 207
OMEGAMON XE for OS/390 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
OMEGAMON XE for OS/390 UNIX System Services . . . . . . . . . . . 226
OMEGAMON XE for OS/400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
OMEGAMON XE for R/3™ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
OMEGAMON XE for Sysplex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
OMEGAMON XE for Tuxedo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
OMEGAMON XE for UNIX Systems . . . . . . . . . . . . . . . . . . . . . . . . 269
OMEGAMON XE for WebSphere Application Server . . . . . . . . . . . 274
OMEGAMON XE for WebSphere Application Server for OS/390 . . 285
OMEGAMON XE for WebSphere Integration Brokers . . . . . . . . . . . 301
OMEGAMON XE for WebSphere MQ Configuration . . . . . . . . . . . . 314
OMEGAMON XE for WebSphere MQ Monitoring . . . . . . . . . . . . . . 316
OMEGAMON XE for Windows Servers . . . . . . . . . . . . . . . . . . . . . . 329
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
List of Figures
Figure 1. CandleNet Portal History Collection Configuration Configuration Tab . . . . . 56
Figure 2. CandleNet Portal History Collection Configuration Status Tab . . . . . 59
Figure 3. The Configure History Icon in the Administration Window . . . . . 62
Figure 4. CMW History Configuration Dialog . . . . . 64
Figure 5. CMS Selection Portion of Dialog . . . . . 67
Figure 6. Table or Group selection portion of dialog . . . . . 69
Figure 7. Advanced History Configuration Options dialog . . . . . 71
List of Tables
Table 1. Contents of this guide . . . . . 25
Table 2. Symbols in Command Syntax . . . . . 32
Table 3. Logfile parameter values . . . . . 86
Table 4. krarloff Parameters . . . . . 90
Table 5. DD Names Required . . . . . 95
Table 6. KPDXTRA parameters . . . . . 96
Table 7. History conversion parameters . . . . . 104
Table 8. Determining the medium for dataset backup . . . . . 117
Table 9. Section 1 Data Record Format . . . . . 126
Table 10. Section 2 Data Record Format . . . . . 127
Table 11. Section 2 Table Description Record . . . . . 128
Table 12. Section 2 Column Description Record . . . . . 128
Table 13. Section 3 Record Format . . . . . 130
Table 14. Contents of the performance attribute tables . . . . . 143
Table 15. OMEGAMON XE for CICS historical data tables . . . . . 144
Table 16. OMEGAMON XE for CICS table record sizes . . . . . 145
Table 17. Bottleneck Analysis (CICSBNA) worksheet . . . . . 148
Table 18. Connection Analysis (CON) worksheet . . . . . 148
Table 19. DBCTL Summary (CICSDLS) worksheet . . . . . 149
Table 20. DB2 Summary (CICSD2S) worksheet . . . . . 149
Table 21. DB2 Task Activity (CICSD2T) worksheet . . . . . 149
Table 22. Dump Analysis (CICSDAT) worksheet . . . . . 149
Table 23. Enqueue Analysis (CICSNQA) worksheet . . . . . 150
Table 24. File Control Analysis (CICSFCA) worksheet . . . . . 150
Table 25. Intercommunication Summary (CICSICO) worksheet . . . . . 150
Table 26. Internet Status (CICSIST) worksheet . . . . . 150
Table 27. Journal Analysis (CICSJAT) worksheet . . . . . 151
Table 28. Log Stream Analysis (CICSLSA) worksheet . . . . . 151
Table 29. LSR Pool Status (CICSLPS) worksheet . . . . . 151
Table 30. MQ Connection Details (MQCONN) worksheet . . . . . 152
Table 31. Region Overview (CICSROV) worksheet . . . . . 152
Table 32. Response Time Elements (CICSRTE) worksheet . . . . . 152
Table 33. Response Time Analysis (CICSRTS) worksheet . . . . . 153
Table 34. RLS Lock Analysis (RLS) worksheet . . . . . 153
Table 35. Storage Analysis (CICSSTOR) worksheet . . . . . 153
Table 36. System Initialization Table (CICSSIA) worksheet . . . . . 154
Table 37. Task Class Analysis (CICSTCA) worksheet . . . . . 154
Table 38. Temporary Storage Detail (CICSTSD) worksheet . . . . . 154
Table 39. Temporary Storage Summary (CICSTSS) worksheet . . . . . 155
Table 40. Terminal Storage Violations (CICSTSV) worksheet . . . . . 155
Table 41. Transaction Analysis (TRAN) worksheet . . . . . 155
Table 42. Transaction Storage Violations (CICSXSV) worksheet . . . . . 156
Table 43. Transient Data Queues (CICSTDQ) worksheet . . . . . 156
Table 44. Transient Data Summary (CICSTDS) worksheet . . . . . 156
Table 45. UOW Analysis (CICSUWA) worksheet . . . . . 157
Table 46. UOW Enqueue Analysis (CICSUWE) worksheet . . . . . 157
Table 47. VSAM Analysis (VSAM) worksheet . . . . . 157
Table 48. OMEGAMON XE for CICS disk space summary . . . . . 158
Table 49. OMEGAMON XE for CICS disk space summary worksheet . . . . . 160
Table 50. OMEGAMON XE for DB2 historical data tables . . . . . 162
Table 51. OMEGAMON XE for DB2 table record sizes . . . . . 163
Table 52. DB2_Thread_Exceptions worksheet . . . . . 164
Table 53. DB2_System_States worksheet . . . . . 164
Table 54. DB2_CICS_Exceptions worksheet . . . . . 164
Table 55. DB2_CICS_Threads worksheet . . . . . 165
Table 56. DB2_SRM_Subsystem worksheet . . . . . 165
Table 57. DB2_SRM_Log_Manager worksheet . . . . . 165
Table 58. DB2_SRM_EDM worksheet . . . . . 166
Table 59. DB2_SRM_UTL worksheet . . . . . 166
Table 60. DB2_SRM_BPM worksheet . . . . . 166
Table 61. DB2_SRM_BPD worksheet . . . . . 167
Table 62. DB2_DDF_STAT worksheet . . . . . 167
Table 63. DB2_DDF_CONV worksheet . . . . . 167
Table 64. DB2_IMS_Connections worksheet . . . . . 168
Table 65. DB2_IMS_Regions worksheet . . . . . 168
Table 66. DB2_Volume_Activity worksheet . . . . . 168
Table 67. OMEGAMON XE for DB2 disk space summary worksheet . . . . . 169
Table 68. OMEGAMON XE for DB2 Universal Database historical data tables . . . . . 170
Table 69. OMEGAMON XE for DB2 Universal Database table record sizes . . . . . 171
Table 70. All tables worksheet . . . . . 171
Table 71. Application worksheet . . . . . 172
Table 72. Database worksheet . . . . . 172
Table 73. System Overview worksheet . . . . . 172
Table 74. Locking Conflict worksheet . . . . . 173
Table 75. Buffer Pool worksheet . . . . . 173
Table 76. OMEGAMON XE for DB2 Universal Database disk space summary worksheet . . . . . 174
Table 77. OMEGAMON XE for NetWare historical data tables . . . . . 175
Table 78. OMEGAMON XE for NetWare table record sizes . . . . . 175
Table 79. Server (SERVER) worksheet . . . . . 176
Table 80. Volume (VOLUME) worksheet . . . . . 176
Table 81. Volume Usage (VOLUSAGE) worksheet . . . . . 176
Table 82. Queue Jobs (QJOB) worksheet . . . . . 177
Table 83. Connections (CONNECT) worksheet . . . . . 177
Table 84. OMEGAMON XE for NetWare disk space summary worksheet . . . . . 177
Table 85. OMEGAMON XE for ORACLE historical data tables . . . . . 179
Table 86. OMEGAMON XE for ORACLE table record sizes . . . . . 181
Table 87. Alert Log Details (KORALRTD) worksheet . . . . . 182
Table 88. Alert Log Summary (KORALRTS) worksheet . . . . . 183
Table 89. Cache Totals (KORCACHE) worksheet . . . . . 183
Table 90. Configuration (KORCONFS) worksheet . . . . . 183
Table 91. Contention Summary (KORLOCKS) worksheet . . . . . 184
Table 92. Databases Summary (KORDB) worksheets . . . . . 184
Table 93. Files (KORFILES) worksheet . . . . . 184
Table 94. Library Cache Usage (KORLIBCU) worksheet . . . . . 185
Table 95. Lock Conflicts (KORLCONF) worksheet . . . . . 185
Table 96. Logging Summary (KORLOGS) worksheet . . . . . 185
Table 97. Process Detail (KORPROCD) worksheet . . . . . 186
Table 98. Process Summary (KORPROCS) worksheet . . . . . 186
Table 99. Rollback Segments (KORRBST) worksheet . . . . . 186
Table 100. Segments (KORSEGS) worksheet . . . . . 187
Table 101. Server (KORSRVR) worksheet . . . . . 187
Table 102. Server Options (KORSRVRD) worksheet . . . . . 187
Table 103. Session Detail (KORSESSD) worksheet . . . . . 188
Table 104. Session Summary (KORSESSS) worksheet . . . . . 188
Table 105. SGA Memory Summary (KORSGA) worksheet . . . . . 188
Table 106. SQL Text Full (KORSQLF) worksheet . . . . . 189
Table 107. Statistics Detail (KORSTATD) worksheet . . . . . 189
Table 108. Statistics Summary (KORSTATS) worksheet . . . . . 189
Table 109. Tablespaces (KORTS) worksheet . . . . . 190
Table 110. Trans Blocking Rollback Segment Wrap (KORTBRSW) worksheet . . . . . 190
Table 111. OMEGAMON XE for ORACLE disk space summary worksheet . . . . . 190
Table 112. OMEGAMON XE for Sybase historical data tables . . . . . 192
Table 113. OMEGAMON XE for Sybase table record sizes . . . . . 194
Table 114. Cache Detail (KOYCACD) worksheet . . . . . 195
Table 115. Cache Summary (KOYCACS) worksheet . . . . . 196
Table 116. Configuration (KOYSCFG) worksheet . . . . . 196
Table 117. Databases Detail (KOYDBD) worksheet . . . . . 196
Table 118. Databases Summary (KOYDBS) worksheet . . . . . 197
Table 119. Devices Detail (KOYDEVD) worksheet . . . . . 197
Table 120. Engine Detail (KOYENGD) worksheet . . . . . 197
Table 121. Engine Summary (KOYENGS) worksheet . . . . . 197
Table 122. Lock Conflict Detail (KOYLOCK) worksheet . . . . . 198
Table 123. Lock Conflict Summary (KOYLOCKS) worksheet . . . . . 198
Table 124. Lock Detail (KOYLCK) worksheet . . . . . 199
Table 125. Lock Summary (KOYLCKS) worksheet . . . . . 199
Table 126. Log Detail (KOYLOGD) worksheet . . . . . 199
Table 127. Log Summary (KOYLOGS) worksheet . . . . . 200
Table 128. Physical Device Detail (KOYSDEVD) worksheet . . . . . 200
Table 129. Problem Detail (KOYPROBD) worksheet . . . . . 200
Table 130. Problem Summary (KOYPROBS) worksheet . . . . . 200
Table 131. Process Detail (KOYPRCD) worksheet . . . . . 201
Table 132. Process Summary (KOYPRCS) worksheet . . . . . 201
Table 133. Remote Servers (KOYSRVR) worksheet . . . . . 202
Table 134. Server Detail (KOYSRVD) worksheet . . . . . 202
Table 135. Server Summary (KOYSRVS) worksheet . . . . . 202
Table 136. SQL Detail (KOYSQLD) worksheet . . . . . 203
Table 137. Statistics Detail (KOYSTATD) worksheet . . . . . 203
Table 138. Statistics Summary (KOYSTATS) worksheet . . . . . 203
Table 139. Task Detail (KOYTSKD) worksheet . . . . . 204
Table 140. OMEGAMON XE for Sybase disk space summary worksheet . . . . . 205
Table 141. OMEGAMON XE for MS SQL Server historical data tables . . . . . 207
Table 142. OMEGAMON XE for MS SQL Server table record sizes . . . . . 208
Table 143. Configuration (KOQSCFG) worksheet . . . . . 209
Table 144. Database Detail (KOQDBD) worksheet . . . . . 209
Table 145. Database Summary (KOQDBS) worksheet . . . . . 210
Table 146. Device Detail (KOQDEVD) worksheet . . . . . 210
Table 147. Lock Conflict Detail (KOQLOCK) worksheet . . . . . 211
Table 148. Lock Detail (KOQLOCKS) worksheet . . . . . 211
Table 149. Problem Detail (KOQPROBD) worksheet . . . . . 211
Table 150. Problem Summary (KOXPROBS) worksheet . . . . . 212
Table 151. Process Detail (KOQPRCD) worksheet . . . . . 212
Table 152. Process Summary (KOQPRCS) worksheet . . . . . 212
Table 153. Remote Servers (KOQSRVR) worksheet . . . . . 213
Table 154. Server Detail (KOQSRVD) worksheet . . . . . 213
Table 155. Server Summary (KOQSRVS) worksheet . . . . . 213
Table 156. Statistics Detail (KOQSTATD) worksheet . . . . . 214
Table 157. Statistics Summary (KOQSTATS) worksheet . . . . . 214
Table 158. Text (KOQSQL) worksheet . . . . . 214
Table 159. OMEGAMON XE for MS SQL Server disk space summary worksheet . . . . . 215
Table 160. OMEGAMON XE for OS/390 historical data tables . . . . . 216
Table 161. OMEGAMON XE for OS/390 table record sizes . . . . . 218
Table 162. Address Space (ASCPUUTIL) worksheet . . . . . 219
Table 163. Address Space Real Storage (ASREALSTOR) worksheet . . . . . 219
Table 164. Address Space Virtual Storage (ASVIRTSTOR) worksheet . . . . . 219
Table 165. Channel Paths (CHNPATHS) worksheet . . . . . 220
Table 166. Common Storage (COMSTOR) worksheet . . . . . 220
Table 167. DASD MVS Devices (DASDMVSDEV) worksheet . . . . . 220
Table 168. DASD MVS (DASDMVS) worksheet . . . . . 221
Table 169. Enclave Table (ENCTABLE) worksheet . . . . . 221
Table 170. Enqueues (ENQUEUE) worksheet . . . . . 221
Table 171. LPAR Clusters (LPCLUST) worksheet . . . . . 222
Table 172. Operator Alerts (OPERALRT) worksheet . . . . . 222
Table 173. Page Dataset Activity (PAGEDS) worksheet . . . . . 222
Table 174. Real Storage (REALSTOR) worksheet . . . . . 223
Table 175. System Paging Activity (PAGING) worksheet . . . . . 223
Table 176. System CPU Utilization (SYSCPUUTIL) worksheet . . . . . 223
Table 177. Tape Drives (TAPEDRVS) worksheet . . . . . 224
Table 178. User Response Time (URESPTM) worksheet . . . . . 224
Table 179. WLM Service Class Resources (MWLMPR) worksheet . . . . . 224
Table 180. OMEGAMON XE for OS/390 disk space summary worksheet . . . . . 225
Table 181. OMEGAMON XE for OS/390 UNIX System Services historical data tables . . . . . 226
Table 182. OMEGAMON XE for OS/390 UNIX System Services table record sizes . . . . . 227
Table 183. USS Address Spaces (ASRESRC2) worksheet . . . . . 227
Table 184. USS Kernel (OEKERNL2) worksheet . . . . . 228
Table 185. USS Processes (OPS2) worksheet . . . . . 228
Table 186. USS Logged on Users (OUSERS2) worksheet . . . . . 228
Table 187. USS Mounted File Systems (MOUNTS2) worksheet . . . . . 229
Table 188. USS BPXPRMxx Values (BPXPRM2) worksheet . . . . . 229
Table 189. USS Threads (THREAD2) worksheet . . . . . 229
Table 190. USS HFS ENQ Contention (HFSENQC2) worksheet . . . . . 230
Table 191. OMEGAMON XE for OS/390 UNIX System Services disk space summary worksheet . . . . . 230
Table 192. OMEGAMON XE for OS/400 historical data tables . . . . . 232
Table 193. OMEGAMON XE for OS/400 table record sizes . . . . . 233
Table 194. Async Performance (KA4ASYNC) worksheet . . . . . 234
Table 195. Bsync Performance (KA4BSYNC) worksheet . . . . . 235
Table 196. Controller Description (KA4CTLD) worksheet . . . . . 235
Table 197. Device Description (KA4DEVD) worksheet . . . . . 235
Table 198. Disk Performance (KA4DISK) worksheet . . . . . 236
Table 199. Ethernet Performance (KA4ENET) worksheet . . . . . 236
Table 200. IOP Performance (KA4PFIOP) worksheet . . . . . 236
Table 201. Job Performance (KA4PFJOB) worksheet . . . . . 237
Table 202. Line Description (KA4LIND) worksheet . . . . . 237
Table 203. Network Attributes (KA4NETA) worksheet . . . . . 237
Table 204. Pool Activity (KA4POOL) worksheet . . . . . 238
Table 205. SDLC Performance (KA4SDLC) worksheet . . . . . 238
Table 206. System Status (KA4SYSTS) worksheet . . . . . 238
List of Tables 15
Table 207. System Values (KA4SVAL) worksheet  239
Table 208. System Values: Activity (KA4SVACT) worksheet  239
Table 209. System Values: Device (KA4SVDEV) worksheet  239
Table 210. System Values: IPL (KA4SVIPL) worksheet  240
Table 211. System Values: Performance (KA4SVPRF) worksheet  240
Table 212. System Values: Problems (KA4SVPRB) worksheet  240
Table 213. System Values: Users (KA4SVUSR) worksheet  241
Table 214. Token-Ring Performance (KA4TKRNG) worksheet  241
Table 215. X.25 Performance (KA4X25) worksheet  241
Table 216. OMEGAMON XE for R/3™ historical data tables  242
Table 217. OMEGAMON XE for R/3™ table record sizes  242
Table 218. Instance Configuration (KSASYS) worksheet  243
Table 219. Service Response (KSAPERF) worksheet  243
Table 220. Alerts (KSAALERTS) worksheet  243
Table 221. Operating System and LAN (KSAOSP) worksheet  243
Table 222. File System (KSAFSYSTEM) worksheet  244
Table 223. Buffer Performance (KSABUFFER) worksheet  244
Table 224. OMEGAMON XE for R/3™ disk space summary worksheet  245
Table 225. OMEGAMON XE for Sysplex historical data tables  246
Table 226. OMEGAMON XE for Sysplex table record sizes  247
Table 227. Service Class Address Spaces (MADDSPC) worksheet  248
Table 228. CF Path (MCFPATH) worksheet  249
Table 229. CF Structure to MVS System (MCFSMVS) worksheet  249
Table 230. CF Structures (MCFSTRCT) worksheet  249
Table 231. Sysplex DASD Device (MDASD_DEV) worksheet  250
Table 232. Sysplex DASD Group (MDASD_GRP) worksheet  250
Table 233. Sysplex DASD (MDASD_SYS) worksheet  250
Table 234. Global Enqueues (MGLBLENQ) worksheet  251
Table 235. Resource Groups (MRESGRP) worksheet  251
Table 236. Report Classes (MRPTCLS) worksheet  251
Table 237. Sysplex WLM Service Class Period (MSRVCLS) worksheet  252
Table 238. Service Definition (MSRVDEF) worksheet  252
Table 239. Service Class Subsys Workflow Analysis (MSSWFA) worksheet  253
Table 240. Service Class Enqueue Workflow Analysis (MWFAENQ) worksheet  253
Table 241. Service Class I/O Workflow Analysis (MWFAIO) worksheet  254
Table 242. XCF Paths (MXCFPATH) worksheet  254
Table 243. XCF System Statistics (MXCFSSTA) worksheet  255
Table 244. XCF System (MXCFSYS) worksheet  255
Table 245. OMEGAMON XE for Sysplex disk space summary worksheet  256
Table 246. OMEGAMON XE for Tuxedo historical data tables  258
Table 247. OMEGAMON XE for Tuxedo table record sizes  259
Table 248. Application Queues (TXAPPQ) worksheet  261
Table 249. Application Server Queues (TXSVRQ) worksheet  261
Table 250. Machine Configuration (TXMCONF) worksheet  262
Table 251. Machine Stats (TXMSTATS) worksheet  262
Table 252. Queue Load (TXQLOAD) worksheet  262
Table 253. Server Groups (TXSRVGP) worksheet  263
Table 254. Service Group (TXSVCGRP) worksheet  263
Table 255. System Message Queues (TXSYSMQ) worksheet  263
Table 256. Tuxedo App Queue Msgs (TCQMSG) worksheet  264
Table 257. Tuxedo App Queue Spcs (TCQSPCS) worksheet  264
Table 258. Tuxedo App Queue Trans (TXQTRAN) worksheet  264
Table 259. Tuxedo BBL Statistics (TXBBLCFG) worksheet  265
Table 260. Tuxedo Client Conversations (TXCONV) worksheet  265
Table 261. Tuxedo Client Statistics (TXSTATS) worksheet  265
Table 262. Tuxedo Clients (TXCLIENTS) worksheet  266
Table 263. Tuxedo Domain Configuration (TXDOMCFG) worksheet  266
Table 264. Tuxedo Servers (TXSERVER) worksheet  266
Table 265. Tuxedo Server Statistics (TXSRVSTAT) worksheet  267
Table 266. Tuxedo Transactions (TXTRANSACT) worksheet  267
Table 267. Tuxedo User Logs (TXLOGS) worksheet  267
Table 268. OMEGAMON XE for Tuxedo disk space summary worksheet  268
Table 269. OMEGAMON XE for UNIX historical data tables  269
Table 270. OMEGAMON XE for UNIX systems table record sizes  269
Table 271. System (UNIXOS) worksheet  270
Table 272. Filesystem space (UNIXDISK) worksheet  270
Table 273. Disk Performance space (UNIXDPERF) worksheet  270
Table 274. Network Interface space (UNIXNET) worksheet  271
Table 275. Online Users space (UNIXUSER) worksheet  271
Table 276. Running Processes (UNIXPS) worksheet  271
Table 277. Network Filesystem (UNIXNFS) worksheet  272
Table 278. Unix Processor/CPU (UNIXCPU) worksheet  272
Table 279. OMEGAMON XE for UNIX disk space summary worksheet  273
Table 280. OMEGAMON XE for WebSphere Application Server historical data tables  274
Table 281. OMEGAMON XE for WebSphere Application Server table record sizes  276
Table 282. All Workloads (KWEWKLDS) worksheet  277
Table 283. Application Server (KWEAPPSRV) worksheet  278
Table 284. Application Server Errors (KWEASERR) worksheet  278
Table 285. Container Object Pools (KWEEBOP) worksheet  279
Table 286. Container Transactions (KWETRANS) worksheet  279
Table 287. DB Container Pools (KWEDBCONP) worksheet  279
Table 288. EJB Containers (KWECONTNR) worksheet  280
Table 289. Enterprise Java Bean Methods (KWEEJBMTD) worksheet  280
Table 290. Enterprise Java Beans (KWEEJB) worksheet  280
Table 291. JVM Garbage Collector Activity (KWEGC) worksheet  281
Table 292. Longest Running Workloads (KWEWKLEX) worksheet  281
Table 293. Product Events (KWEPREV) worksheet  282
Table 294. Selected Workload Delays (KWEWKLDD) worksheet  282
Table 295. Servlets/JSPs (KWESERVLT) worksheet  283
Table 296. Web Applications (KWEAPP) worksheet  283
Table 297. OMEGAMON XE for WebSphere Application Server disk space summary worksheet  284
Table 298. OMEGAMON XE for WebSphere Application Server for OS/390 historical data tables  285
Table 299. OMEGAMON XE for WebSphere Application Server for OS/390 table record sizes  287
Table 300. Application Server Error Log (KWWERRLG) worksheet  289
Table 301. Application Server Instance (KWWAPPSV) worksheet  289
Table 302. Application Server Instance SMF Interval Statistics (KWWAPPSM) worksheet  290
Table 303. Datasource Detail (KWWDATAS) worksheet  290
Table 304. HTTP Session Detail for WAS OS/390 (KWWHTTPS) worksheet  291
Table 305. J2EE Server Bean Methods (KWWJBMTH) worksheet  291
Table 306. J2EE Server Beans (KWWJBEAN) worksheet  292
Table 307. J2EE Server Containers (KWWJCONT) worksheet  292
Table 308. JVM Garbage Collector Activity for WAS OS/390 (KWWGC) worksheet  293
Table 309. MOFW Server Classes (KWWMCLAS) worksheet  293
Table 310. MOFW Server Containers (KWWMCONT) worksheet  294
Table 311. MOFW Server Methods (KWWMMETH) worksheet  294
Table 312. MQSeries Access for WAS OS/390 (KWWMQSAC) worksheet  295
Table 313. Product Events for WAS OS/390 (KWWPREV) worksheet  295
Table 314. Server Instance Status (KWWAPSST) worksheet  296
Table 315. Workload Exception for WAS OS/390 (KWWWKLEX) worksheet  296
Table 316. Workload Degradation Detail for WAS OS/390 (KWWWKLDD) worksheet  297
Table 317. Workload Degradation Summary for WAS OS/390 (KWWWKLDS) worksheet  298
Table 318. OMEGAMON XE for WebSphere Application Server for OS/390 disk space summary worksheet  299
Table 319. OMEGAMON XE for WebSphere Integration Brokers historical data tables  301
Table 320. OMEGAMON XE for WebSphere Integration Brokers table record sizes  303
Table 321. Components (kqitcomp) worksheet  304
Table 322. Product Events (kqitprev) worksheet  305
Table 323. Broker Information (kqitbrkr) worksheet  305
Table 324. Broker Events (kqitbrev) worksheet  305
Table 325. Message Flow Events (kqitflev) worksheet  306
Table 326. Broker Statistics (kqitstbr) worksheet  306
Table 327. Execution Group Statistics (kqitsteg) worksheet  306
Table 328. Message Flow Statistics (kqitstmf) worksheet  307
Table 329. Sub-Flow Statistics (kqitstsf) worksheet  307
Table 330. CandleMonitor Node Statistics (kqitstfn) worksheet  308
Table 331. Message Flow Accounting (kqitasmf) worksheet  308
Table 332. Thread Accounting (kqitasth) worksheet  308
Table 333. Node Accounting (kqitasnd) worksheet  309
Table 334. Terminal Accounting (kqitastr) worksheet  309
Table 335. Execution Group Information (kqitdfeg) worksheet  309
Table 336. Message Flow Information (kqitdfmf) worksheet  310
Table 337. Message Processing Node Information (kqitdffn) worksheet  310
Table 338. Neighbors (kqitdsen) worksheet  310
Table 339. Subscriptions (kqitdses) worksheet  311
Table 340. Retained Publications (kqitdser) worksheet  311
Table 341. ACL Entries (kqitdsea) worksheet  311
Table 342. OMEGAMON XE for WebSphere Integration Brokers disk space summary worksheet  312
Table 343. OMEGAMON XE for WebSphere MQ Configuration historical data table  314
Table 344. OMEGAMON XE for WebSphere MQ Configuration disk space summary worksheet  315
Table 345. OMEGAMON XE for WebSphere MQ Monitoring historical data tables  317
Table 346. OMEGAMON XE for WebSphere MQ Monitoring table record sizes  318
Table 347. Application Statistics (QM_APAL) worksheet  320
Table 348. Application Queue Statistics (QM_APQL) worksheet  320
Table 349. Application Transaction/Program Statistics (QM_APTL) worksheet  320
Table 350. Buffer Pools (QMLHBM) worksheet  321
Table 351. Channel Initiator (QMCHIN_LH) worksheet  321
Table 352. Channel Statistics (QMCH_LH) worksheet  321
Table 353. Error Log (QMERRLOG) worksheet  322
Table 354. Event Log (QMEVENTH) worksheet  322
Table 355. Log Manager (QMLHLM) worksheet  322
Table 356. Message Manager (QMLHMM) worksheet  323
Table 357. Message Statistics (QMSG_STAT) worksheet  323
Table 358. Page Sets (QMPS_LH) worksheet  324
Table 359. Queue Statistics (QMQ_LH) worksheet  324
Table 360. Queue Sharing Group CF Structure Backups (QSG_CFBKUP) worksheet  324
Table 361. Queue Sharing Group CF Structure Statistics (QSG_CFSTR) worksheet  325
Table 362. Queue Sharing Group Channel Statistics (QSG_CHANS) worksheet  325
Table 363. Queue Sharing Group Queue Statistics (QSG_QUEUES) worksheet  325
Table 364. Queue Sharing Group Queue Managers (QSG_QMGR) worksheet  326
Table 365. Queue Sharing Group CF Structure Connection Statistics (QSG_CFCONN) worksheet  326
Table 366. OMEGAMON XE for WebSphere MQ Monitoring disk space summary worksheet  327
Table 367. OMEGAMON XE for Windows Servers historical data tables  329
Table 368. OMEGAMON XE for Windows Servers table record sizes  331
Table 369. Logical Disk (WTLOGCLDSK) worksheet  334
Table 370. System (WTSYSTEM) worksheet  334
Table 371. Physical Disk (WTPHYSDSK) worksheet  334
Table 372. Memory (WTMEMORY) worksheet  334
Table 373. Process (WTPROCESS) worksheet  335
Table 374. Processor (NTPROCSSR) worksheet  335
Table 375. Page File (NTPAGEFILE) worksheet  335
Table 376. Objects (WTOBJECTS) worksheet  335
Table 377. Monitored Logs (NTLOGINFO) worksheet  336
Table 378. Event Log (NTEVTLOG) worksheet  336
Table 379. Active Server Pages (ACTSRVPG) worksheet  336
Table 380. HTTP Content Index (HTTPCNDX) worksheet  336
Table 381. HTTP Server (HTTPSRVC) worksheet  337
Table 382. FTP Server (FTPSTATS) worksheet  337
Table 383. Internet Information Server (IISSTATS) worksheet  337
Table 384. UDP (UDPSTATS) worksheet  337
Table 385. TCP (TCPSTATS) worksheet  338
Table 386. ICMP (ICMPSTAT) worksheet  338
Table 387. Network Interface (NETWRKIN) worksheet  338
Table 388. Network Segment (NETSEGMT) worksheet  338
Table 389. Gopher Services (GOPHRSVC) worksheet  339
Table 390. MSMQ Information Store (MSMQIS) worksheet  339
Table 391. MSMQ Queue (MSMQQUE) worksheet  339
Table 392. MSMQ Service (MSMQSVC) worksheet  339
Table 393. MSMQ Sessions (MSMQSESS) worksheet  340
Table 394. RAS Port (KNTRASPT) worksheet  340
Table 395. RAS Total (KNTRASTOT) worksheet  340
Table 396. Cache (NTCACHE) worksheet  340
Table 397. Printer (NTPRINTER) worksheet  341
Table 398. Print Job (NTPRTJOB) worksheet  341
Table 399. Services (NTSERVICE) worksheet  341
Table 400. Service Dependencies (NTSVCDEP) worksheet  341
Table 401. Devices (NTDEVICE) worksheet  342
Table 402. Device Dependencies (NTDEVDEP) worksheet  342
Table 403. Indexing Service (INDEXSVC) worksheet  342
Table 404. Indexing Service Filter (INDEXSVCF) worksheet  342
Table 405. DHCP (DHCPSRV) worksheet  343
Table 406. DNS Memory (DNSMEMORY) worksheet  343
Table 407. DNS Zone Transfer (DNSZONET) worksheet  343
Table 408. DNS Dynamic Update (DNSDYNUPD) worksheet  343
Table 409. DNS Query (DNSQUERY) worksheet  344
Table 410. DNS WINS (DNSWINS) worksheet  344
Table 411. FTP Service (FTPSVC) worksheet  344
Table 412. Job Object (JOBOBJ) worksheet  344
Table 413. Job Object Details (JOBOBJD) worksheet  345
Table 414. NNTP Commands (NNTPCMD) worksheet  345
Table 415. NNTP Server (NNTPSRV) worksheet  345
Table 416. Print Queue (PRINTQ) worksheet  345
Table 417. SMTP (SMTPSRV) worksheet  346
Table 418. Web Service (WEBSVC) worksheet  346
Table 419. OMEGAMON XE for Windows Servers disk space summary worksheet  346
Preface
This document describes the use of the historical data collection capability in CandleNet Portal® (Version 195), the user interface for OMEGAMON® XE. It also describes the use of the historical data collection capability in the Candle Management Workstation® of the CandleNet Command Center® (Version 350).
You can use the historical data collection capability to:
n measure workload growth and change
n forecast hardware capacity
n balance application workload
n identify application CPU, input/output (I/O), disk space, and network characteristics
n detect chronic system anomalies
n perform trend analysis
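The last of these uses, trend analysis, can be as simple as fitting a line to samples drawn from the collected history. The following sketch is illustrative only: the sample values are invented, and it assumes the history has already been converted to a usable form (for example, a delimited flat file) as described in the conversion chapters.

```python
def linear_trend(samples):
    """Least-squares slope and intercept for equally spaced samples."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Invented daily averages of CPU busy percentage, oldest first.
cpu_busy = [41.0, 43.5, 44.2, 46.8, 48.1]
slope, intercept = linear_trend(cpu_busy)
projected = slope * len(cpu_busy) + intercept   # projection for the next day
print(f"growth {slope:.2f} points/day, next day about {projected:.2f}%")
```

A sustained positive slope over a long enough window is exactly the kind of signal that feeds the workload-growth and capacity-forecasting uses listed above.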
About This Book
Who should read this book
This document is for those who use, configure, and maintain Candle products that include the historical data collection feature. The History Configuration feature for historical data collection is automatically installed on the CMS for the following products:
n OMEGAMON XE for CICS (the historical data collection tables for this product also apply to OMEGAMON XE for CICSplex)
n OMEGAMON XE for Distributed Databases
– OMEGAMON XE for ORACLE
– OMEGAMON XE for MS SQL Server
– OMEGAMON XE for Sybase
n OMEGAMON XE for NetWare
n OMEGAMON XE for OS/390
n OMEGAMON XE for OS/390 UNIX System Services
n OMEGAMON XE for OS/400
n OMEGAMON XE for R/3™
n OMEGAMON XE for Sysplex
n OMEGAMON XE for Tuxedo
n OMEGAMON XE for UNIX® Systems
n OMEGAMON XE for WebSphere Application Server (the historical data collection tables apply to the distributed product)
n OMEGAMON XE for WebSphere Application Server for OS/390
n OMEGAMON XE for WebSphere Integration Brokers
n OMEGAMON XE for WebSphere MQ Configuration
n OMEGAMON XE for WebSphere MQ Monitoring
n OMEGAMON XE for Windows Servers™
Contents of this guide
This section describes the contents of each chapter of this guide. Use the table to understand the organization and content of this guide.
Table 1. Contents of this guide

“What’s New” on page 35: This chapter describes the new features of OMEGAMON XE and the new features of CandleNet Command Center (Version 350).
“Overview of Historical Data Collection” on page 37: This chapter provides an overview of historical data collection.
“Planning Collection of Historical Data” on page 45: This chapter provides information about historical data tables and historical data collection components.
“Configuring Historical Data Collection on CandleNet Portal” on page 51: You run the Historical Data Collection Configuration program to start or stop historical data collection on CandleNet Portal. This chapter provides instructions on invoking the program and using the History Collection Configuration panel to define data collection and warehousing rules.

“Configuring Historical Data Collection on CMW” on page 61: You run the Historical Data Collection Configuration program to start or stop historical data collection on CMW. This chapter provides instructions on invoking the program and using the CCC History Configuration panel to define data collection and warehousing rules.
“Warehousing Your Historical Data” on page 75: Several steps are required in order to warehouse your historical data to a supported relational database using ODBC. This chapter provides guidance on warehousing your historical data.
“Converting History Files to Delimited Flat Files (Windows and OS/400)” on page 83: The history files collected by the rules established in the HDC Configuration program can be converted to delimited flat files that can be used in a variety of popular applications to easily manipulate the data and create reports and graphs. This chapter describes how to schedule the conversion both automatically and manually.
“Converting History Files to Delimited Flat Files (MVS)” on page 93: The history files collected by the rules established in the HDC Configuration program can be converted to delimited flat files that can be used in a variety of popular applications to easily manipulate the data and create reports and graphs. Automatic scheduling of collection and conversion has been integrated into the standard maintenance procedures used by the Persistent Data Store (CT/PDS).
“Converting History Files to Delimited Flat Files (UNIX Systems)” on page 101: The history files collected by the rules established in the HDC Configuration program can be converted to delimited flat files that can be used in a variety of popular applications to easily manipulate the data and create reports and graphs. This chapter describes how to schedule the conversion both automatically and manually.
“Converting History Files to Delimited Flat Files (HP NonStop Kernel Systems)” on page 107: The history files collected by the rules established in the HDC Configuration program can be converted to delimited flat files that can be used in a variety of popular applications to easily manipulate the data and create reports and graphs. This chapter describes how to schedule the conversion.
“Maintaining the Persistent Data Store (CT/PDS)” on page 109: This appendix describes procedures for maintaining the Persistent Data Store (CT/PDS).
Documentation set information
For detailed information on OMEGAMON XE and its user interface, CandleNet Portal, see the Using OMEGAMON Products: CandleNet Portal document.
For detailed information on the CCC and its user interface, the CMW™, see the Candle Management Workstation Administrator’s Guide and the Candle Management Workstation User’s Guide.
See the Installing Candle Systems on Windows guide for instructions on installing the Candle Warehouse Proxy Agent.
Use of MVS terminology
The term MVS applies to the MVS, OS/390, and z/OS operating systems, unless specifically noted in this guide.
Table 1. Contents of this guide (continued)

“Disk Space Requirements for Historical Data Tables” on page 141: The installation manual for your system environment provides base disk space requirements for CandleNet Portal, for the Candle Management Server (CMS), and for the Candle Management Workstation (CMW). These disk space requirements do not include the additional space required for maintaining historical data files. Because of the variations in client distributed systems, system size, number of managed systems, and so on, it is difficult to provide actual additional disk space requirements for historical data collection. Rather, this chapter provides you with basic record sizes for each of the tables from which historical data is collected and suggests a methodology you can follow to estimate the required additional disk space.
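The estimation methodology that appendix suggests reduces to simple arithmetic: a table's record size multiplied by the rows written per collection interval, by the intervals per day, and by the number of days the history is retained. The figures in the sketch below are invented for illustration; the real record sizes come from the per-product worksheets listed in the appendix.

```python
def estimate_history_bytes(record_size, rows_per_interval,
                           intervals_per_day, days_retained):
    """Rough disk-space estimate, in bytes, for one historical data table."""
    return record_size * rows_per_interval * intervals_per_day * days_retained

# Invented example: 500-byte records, 120 rows per 15-minute interval
# (96 intervals per day), history kept for 7 days.
needed = estimate_history_bytes(500, 120, 96, 7)
print(f"{needed / (1024 * 1024):.1f} MB")   # about 38.5 MB
```

Repeating the calculation for each table you collect, and summing the results, gives a working estimate of the additional disk space to budget beyond the base installation requirements.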
Where to look for more information
For more information related to this product and other related products, see the following:
n technical documentation CD-ROM that came with your product
n technical documentation information available on the Candle web site at www.candle.com
n online help provided with this and the other related products
We would like to hear from you
Candle welcomes your comments and suggestions for changes or additions to the documentation set. A user comment form, located at the back of each manual, provides simple instructions for communicating with the Candle Information Development department. You can also send email to UserDoc@candle.com. Please include "Historical Data Collection Guide for OMEGAMON XE and CCC" in the subject line.
Adobe Portable Document Format
Printing this book
Candle supplies documentation in the Adobe Portable Document Format (PDF). The Adobe Acrobat Reader will print PDF documents with the fonts, formatting, and graphics in the original document. To print a Candle document, do the following:
1. Specify the print options for your system. From the Acrobat Reader Menu bar, select File > Page Setup… and make your selections. A setting of 300 dpi is highly recommended, as is duplex printing if your printer supports this option.
2. To start printing, select File > Print... on the Acrobat Reader Menu bar.
3. On the Print pop-up, select one of the Print Range options:
n All
n Current page
n Pages from: [ ] to: [ ]
4. (Optional). Select the Shrink to Fit option if you need to fit oversize pages to the paper size currently loaded on your printer.
Printing problems?
The print quality of your output is ultimately determined by your printer. Sometimes printing problems can occur. If you experience printing problems, potential areas to check are:
n settings for your printer and printer driver. (The dpi settings for both your driver and printer should be the same. A setting of 300 dpi is recommended.)
n the printer driver you are using. (You may need a different printer driver or the Universal Printer driver from Adobe. This free printer driver is available at www.adobe.com.)
n the halftone/graphics color adjustment for printing color on black and white printers (check the printer properties under Start > Settings > Printer). For more information, see the online help for the Acrobat Reader.
n the amount of available memory in your printer. (Insufficient memory can cause a document or graphics to fail to print.)
For additional information on printing problems, refer to the documentation for your printer or contact your printer manufacturer.
Contacting Adobe
If additional information is needed about Adobe Acrobat Reader or printing problems, see the Readme.pdf file that ships with Adobe Acrobat Reader or contact Adobe at www.adobe.com.
Adding annotations to PDF files
If you have purchased the Adobe Acrobat application, you can add annotations to Candle documentation in .PDF format. See the Adobe product documentation for instructions on using the Acrobat annotations tool and its features.
Documentation Conventions
Introduction
Candle documentation adheres to accepted typographical conventions for command syntax. Conventions specific to Candle documentation are discussed in the following sections.
Panels and figures
The panels and figures in this document are representations. Actual product panels may differ.
Required blanks
The slashed-b (␢) character in examples represents a required blank. The following example illustrates the location of two required blanks.

␢eBA*ServiceMonitor␢0990221161551000
Revision bars
Revision bars (|) may appear in the left margin to identify new or updated material.
Variables and literals in command syntax examples
In examples of command syntax for the OS/390, VM, OS/400, and NonStop Kernel platforms, uppercase letters indicate actual values (literals) that the user should type; lowercase letters indicate variables that represent data supplied by the user:
LOGON APPLID (cccccccc)
However, for the Windows and UNIX platforms, variables are shown in italics:
-candle.kzy.instrument.control.file=instrumentation_control_file_name
-candle.kzy.agent.parms=agent_control_file_name
Note: In ordinary text, variable names appear in italics, regardless of platform.
Symbols
The following symbols may appear in command syntax:
Table 2. Symbols in Command Syntax
Symbol Usage
| The “or” symbol is used to denote a choice. Either the argument on the left or the argument on the right may be used. Example:
YES | NO
In this example, YES or NO may be specified.
[ ] Denotes optional arguments. Those arguments not enclosed in square brackets are required. Example:
APPLDEST DEST [ALTDEST]
In this example, DEST is a required argument and ALTDEST is optional.
{ } Some documents use braces to denote required arguments, or to group arguments for clarity. Example:
COMPARE {workload} -
REPORT={SUMMARY | HISTOGRAM}
The workload variable is required. The REPORT keyword must be specified with a value of SUMMARY or HISTOGRAM.
_ Default values are underscored. Example:
COPY infile outfile -
[COMPRESS={YES | NO}]
In this example, the COMPRESS keyword is optional. If specified, the only valid values are YES or NO. If omitted, the default is YES.
Candle Customer Service and Satisfaction
Background
To assist you in making effective use of our products, Candle offers a variety of easy-to-use online support resources. The Candle Web site provides direct links to a variety of support tools that include a range of services. For example, you can find information about training, maintenance plans, consulting and services, and other useful support resources. Refer to the Candle Web site at www.candle.com for detailed customer service information.
Candle Customer Service and Satisfaction contacts
You will find the most current information about how to contact Candle Customer Service and Satisfaction by telephone or e-mail on the Candle Web site. Go to the support section of www.candle.com and choose the link to Support Contacts to locate your regional support center.
What’s New
Introduction
This section details the documentation changes made to this manual.
Documentation changes
New tables added
New tables were added to the space requirements appendix to reflect the added support for historical data collection in these products:
n OMEGAMON XE for OS/390
n OMEGAMON XE for Universal Database
Additionally, numerous tables were revised to include current data on disk space requirements. See “Disk Space Requirements for Historical Data Tables” on page 141.
Other changes were made throughout the document to improve the accuracy of the information and to bring the information up to date with recent changes.
Product rename
OMEGAMON XE for WebSphere MQ Integrator has been renamed to OMEGAMON XE for WebSphere Integration Brokers. The “Disk Space Requirements for Historical Data Tables” on page 141 appendix has been revised to reflect the name change.
Overview of Historical Data Collection
Introduction
This chapter provides an introduction to historical data collection.
Chapter Contents
About Historical Data Collection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Historical Collection Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Performance Impact of Historical Data Requests. . . . . . . . . . . . . . . . . . . . 42
About Historical Data Collection
Overview
The Historical Data Collection (HDC) Configuration program, invoked from either CandleNet Portal or from the Candle Management Workstation (CMW), begins the collection of historical data. The program allows you to specify the collection of historical data at either the Candle Management Server® (CMS™) or at the remote system where the Candle monitoring agent is installed.
For Candle Management Servers, you can optionally specify historical data to be warehoused. Candle monitoring agents can also warehouse data as long as they are connected to a CMS. The warehoused data is written to Microsoft’s SQL Server™ relational database on Windows. See “Warehousing Your Historical Data” on page 75.
Alternatively, you can continue to convert your historical data to delimited flat files or datasets using programs distributed with CandleNet Portal and with CMW. You can then use the converted historical data with any reporting tool from a third-party vendor, such as SAS® or Microsoft® Excel, or with other popular PC application tools to produce trend analysis reports and graphics.
You can also load the converted data into relational databases such as DB2®, ORACLE®, Sybase®, Microsoft SQL Server™, or others and produce customized history reports.
Managing your historical data
Important: It is vital that you either warehouse your historical data or convert your historical data to delimited flat files or datasets. Otherwise, your history data files will grow unchecked, using up valuable disk space.
If you choose not to warehouse your data, you must institute rolloff jobs to regularly convert and empty out the history data files. This task is in addition to the main function of the rolloff programs, which is to convert the binary history data into readable text files. See the Converting Files to Delimited Flat Files chapters, as appropriate for your platform, for instructions.
Collecting Short Term History
In addition to the Historical Data Collection reports, for which collection and conversion procedures are documented in this manual, CandleNet Portal and CMW provide a short term history reporting capability.
You can find information on how to request short term history reports and how to specify the time interval for which you want short term history displayed in the individual product manuals in the discussion of product reports. There is information about and illustrations of the available short term status history reports in the Candle Management Workstation User’s Guide. You can also find information on requesting history reports and on specifying time intervals in CandleNet Portal in the online help.
To collect the data required for the generation of short term history reporting, you must start historical data collection as documented in “Configuring Historical Data Collection on CMW” on page 61 or in “Configuring Historical Data Collection on CandleNet Portal” on page 51.
Historical Collection Options
Overview
To provide flexibility in using historical data collection, Candle permits you to:
n turn history collection on, or turn off all history collection for multiple selected Candle Management Servers and multiple selected tables for a product
n save the history file at the CMS or at the remote agent
n define what data to save; that is, select what columns of a history table should be collected
n define the periodic time interval to save data (05, 15, 30, or 60 minutes)
n define the number of intervals of history to retain before the data is warehoused to a relational database using ODBC, or use product-provided scripts to convert historical data to delimited flat files. These options are mutually exclusive.
Historical data collection can be specified for individual Candle Management Servers, products, and tables. However, all agents of the same type that report directly to the same CMS must have the same history collection options. Also, for a given history table, the same history collection options are applied to all Candle Management Servers for which that history table’s collection is currently enabled.
For example, if collection of UNIX Disk Performance (UNIXDPERF) is specified at the remote agent level, each UNIX agent running on a remote managed system collects historical data on that remote managed system.
For Candle Management Servers, you can optionally specify historical data to be warehoused. Candle monitoring agents can also warehouse data as long as they are connected to a CMS. The warehoused data is written to Microsoft’s SQL Server database on Windows.
Note: This document describes using Version 350 of the Candle Warehouse Proxy Agent to warehouse your historical data.
Some Candle agents do not provide history data for all of their tables and attribute groups. This is because the applications group for that agent has determined that collecting history data for certain tables is not appropriate, or would have a detrimental effect on performance. This could be due to the vast amount of data that would be generated.
Therefore, for each product, only tables that are available for history collection are listed in the History Collection Configuration dialog.
If, after you configure history data for a table and start history collection, you still do not see history data for that table, there is a problem either with the agent collection of that data, or with the history mechanism.
Performance Impact of Historical Data Requests
Overview
The impact of historical data collection and/or warehousing on Candle components is dependent on multiple factors, including collection interval, number and size of historical tables collected, amount of data, system size, and so on. This section describes some of these factors.
Impact on the CMS or the agent of large amounts of historical data
The component specified for collecting and/or warehousing history data (either at the CMS or the agent) can be negatively impacted when processing large amounts of data. This can occur because the historical warehouse process on the CMS or the agent must read the large row set from the history data file. The data must then be transmitted to the Warehouse Proxy agent. For large datasets, this sometimes impacts memory and CPU resources. Because of its ability to handle numerous requests simultaneously, the impact on the CMS is not as great as the impact on the agent.
Impact on the agent
For agents processing a large data request, the agent may be prevented from processing other requests until the time-consuming request has completed. This is important with most agents because an agent can usually process only one report, one situation, or one warehousing request at a time.
Requests for historical data from large tables
Requests for historical data from tables that collect a large amount of data will have a negative impact on the performance of the Candle components involved. To reduce the performance impact on your system, we recommend setting a longer collection interval for tables that collect a large amount of data. You specify this setting from the Configuration tab of the History Collection Configuration dialog. To find out the disk space requirements for tables in your OMEGAMON XE product, see “Disk Space Requirements for Historical Data Tables” on page 141.
When you are viewing a report or a workspace for which you would like historical data, you can set the Time Span interval to obtain data for previous samplings. Selecting a long time span interval for the report time span increases the amount of data being processed, and may have a negative impact on performance. The program must dedicate more memory and CPU cycles to process a large volume of report data. In this instance, we recommend specifying a shorter time span setting, especially for tables that collect a large amount of data.
If a report rowset is too large, the report request may be dropped, returning to CandleNet Portal or the CMW with no rows because the agent took too long to process the request. However, the agent continues to process the report data to completion, and remains blocked, even though the report data is not viewable.
There could also be cases where the historical report data from the MVS Persistent Data Store is not available. This can occur because the Persistent Data Store may not be available while its maintenance job is running.
Scheduling the warehousing of historical data
The same issues that apply to requesting large reports apply to scheduling the warehousing of historical data only once a day. The more data that is warehoused at once, the more resources are required to read the data into memory and to transmit it to the Candle Warehouse Proxy agent. If possible, we recommend making the warehousing rowset smaller by spreading the warehousing load over each hour; that is, by setting the warehouse interval to 1 hour.
Planning Collection of Historical Data
Introduction
This chapter provides information about
n selecting a strategy for historical data collection in your enterprise
n the components used by various platforms to accomplish historical data collection
n the tables used to collect historical data and their space requirements
n some specific requirements when collecting history for OMEGAMON XE for Sysplex.
Chapter Contents
Developing a Strategy for Historical Data Collection. . . . . . . . . . . . . . . . . 46
Developing a Strategy for Historical Data Collection
Overview
When developing a strategy for historical data collection, you must determine:
n The rules under which data will be collected; for example,
– How often do I want to collect historical data?
– Where do I want to collect the data—at the Candle Management Server or at the location where the Candle monitoring agent is running?
– What data do I want to collect?
n How often you want to warehouse collected data
n Whether scheduling of data conversion to delimited flat files should be automatic or manual
Defining data collection rules
Among the factors that should govern the frequency of historical data collection are:
n How much disk storage will be required to store the data being collected?
n What use will be made of the collected data?
For information about using the History Configuration Dialog to establish the rules under which data is collected, see “Defining Data Collection Rules” on page 67.
Warehousing collected data
The CCC History Configuration program permits you to warehouse collected historical data to a database using ODBC. For additional information, see “Specifying collection options” on page 69.
CandleNet Portal also allows you to warehouse collected historical data to a database using ODBC. See “Configuring collection of attribute data” on page 54. For instructions on configuring a database, see “Warehousing Your Historical Data” on page 75.
Note: This document describes using Version 350 of the Candle Warehouse Proxy Agent to warehouse your historical data.
Defining the data conversion process
Data can be scheduled for conversion to delimited flat files either manually or automatically. If you choose to continue to convert data to delimited flat files, Candle strongly recommends that you schedule data conversion to be automatic. You will want to perform data conversion on a regular basis even if you are collecting historical data only to support the short term history displayed on product reports, because any historical data collection consumes system resources.
Data conversion programs
Programs are called to convert the history files to delimited flat files. The program that performs the conversion depends on your system environment.
n UNIX component
– The program to convert the binary history file to a delimited flat file is called krarloff
n Windows 2000/ME components
– The program to convert the binary history file to a delimited flat file is called krarloff.
– The program used to simulate the UNIX crontab command to archive historical data collection files on Windows Candle Management Servers and remote managed systems is called LOGSPIN.EXE
n MVS components
– The program to convert the binary history file to a delimited flat file is called KPDXTRA.
Columns added to history data files and to meta description files
Four columns are automatically added to the history data files and to the meta description files. These columns are:
n TMZDIFF. The time zone difference from Universal Time (GMT). This value is shown in seconds.
n WRITETIME. The CT timestamp when the record was written. This is a 16-character value in the format: cyymmddhhmmssttt, where:
– c = century
– yymmdd = year, month, day
– hhmmssttt = hours, minutes, seconds, milliseconds
n SAMPLES. Incremental counter for the number of samples written since the agent started. All rows written during the same interval have the same number.
n INTERVAL. The time between samples, shown in milliseconds.
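Because the WRITETIME value is simple positional text, it can be decoded with ordinary string slicing. The following is a minimal sketch, not part of the product: the function name is illustrative, and it assumes the common convention that a century digit of 0 denotes 19xx and 1 denotes 20xx.

```python
from datetime import datetime

def parse_writetime(ct_timestamp):
    """Decode a 16-character CT timestamp of the form cyymmddhhmmssttt.

    Assumes century digit 0 = 19xx and 1 = 20xx (illustrative helper,
    not a Candle-supplied routine).
    """
    century = int(ct_timestamp[0])
    year = 1900 + 100 * century + int(ct_timestamp[1:3])
    month = int(ct_timestamp[3:5])
    day = int(ct_timestamp[5:7])
    hour = int(ct_timestamp[7:9])
    minute = int(ct_timestamp[9:11])
    second = int(ct_timestamp[11:13])
    millis = int(ct_timestamp[13:16])
    return datetime(year, month, day, hour, minute, second, millis * 1000)
```

For example, the string 1040519143025123 would decode to May 19, 2004 at 14:30:25.123.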
Note: The warehousing process, using the Candle Data Warehouse, adds only two columns (TMZDIFF and WRITETIME) to the warehouse database. See “Warehousing Your Historical Data” on page 75.
For a sample meta description file, see “Sample *.hdr meta description file” on page 49.
Meta description files
A meta description file describes the format of the data in the source files. Meta description files are generated at the start of the historical data collection process.
The various platforms use different file naming conventions. Here are the rules for some platforms:
n AS/400 and HP NonStop™ Kernel (formerly Tandem) - Description files use the name of the data file as the base. The last character of the name is ‘M’. For example, for table QMLHB, the history data file name is QMLHB and the description file name is QMLHBM.
n MVS - Description records are stored in the PDS facility, along with the data.
n UNIX - Uses the *.hdr file naming convention.
n Windows - Uses the *.hdr file naming convention.
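The naming rules above are mechanical enough to express in code. The following is a hedged sketch: the function name and platform keys are hypothetical, and MVS is excluded because its description records live in the Persistent Data Store rather than in a separate file.

```python
def description_file_name(table, platform):
    """Derive the meta description file name for a history table.

    Illustrative helper following the conventions described above;
    the platform keys used here are assumptions, not product terms.
    """
    if platform in ("as400", "nsk"):     # AS/400 and HP NonStop Kernel
        return table + "M"               # e.g. QMLHB -> QMLHBM
    if platform in ("unix", "windows"):
        return table + ".hdr"            # *.hdr naming convention
    raise ValueError("MVS keeps description records in the Persistent Data Store")

print(description_file_name("QMLHB", "as400"))  # QMLHBM
```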
Sample *.hdr meta description file
TMZDIFF(int,0,4)
WRITETIME(char,4,16)
QM_APAL.ORIGINNODE(char,20,128)
QM_APAL.QMNAME(char,148,48)
QM_APAL.APPLID(char,196,12)
QM_APAL.APPLTYPE(int,208,4)
QM_APAL.SDATE_TIME(char,212,16)
QM_APAL.HOST_NAME(char,228,48)
QM_APAL.CNTTRANPGM(int,276,4)
QM_APAL.MSGSPUT(int,280,4)
QM_APAL.MSGSREAD(int,284,4)
QM_APAL.MSGSBROWSD(int,288,4)
QM_APAL.INSIZEAVG(int,292,4)
QM_APAL.OUTSIZEAVG(int,296,4)
QM_APAL.AVGMQTIME(int,300,4)
QM_APAL.AVGAPPTIME(int,304,4)
QM_APAL.COUNTOFQS(int,308,4)
QM_APAL.AVGMQGTIME(int,312,4)
QM_APAL.AVGMQPTIME(int,316,4)
QM_APAL.DEFSTATE(int,320,4)
QM_APAL.INT_TIME(int,324,4)
QM_APAL.INT_TIMEC(char,328,8)
QM_APAL.CNTTASKID(int,336,4)
SAMPLES(int,340,4)
INTERVAL(int,344,4)
For example, an entry may have the form:
attribute_name(int,75,20)
where int identifies the data as an integer, 75 is the starting column in the data file, and 20 is the length of the field for this attribute in the file.
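Because every entry follows the name(type,start,length) pattern, a *.hdr file can be parsed with a short script. The sketch below is an assumption-based illustration, not a Candle-supplied tool; the function and pattern names are hypothetical.

```python
import re

# Matches entries of the form name(type,start,length); attribute names
# may be qualified with a table prefix, such as QM_APAL.QMNAME.
ENTRY = re.compile(r"([\w.]+)\((\w+),(\d+),(\d+)\)")

def parse_hdr(text):
    """Return (attribute, type, start_column, length) tuples from .hdr text."""
    return [(name, typ, int(start), int(length))
            for name, typ, start, length in ENTRY.findall(text)]

fields = parse_hdr("TMZDIFF(int,0,4)WRITETIME(char,4,16)")
# fields[0] -> ('TMZDIFF', 'int', 0, 4)
```

The same pattern works whether the entries appear one per line or concatenated in a single record.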
Estimating Space Required to Hold Historical Data Tables
Historical data is written to Performance Attribute Tables. These tables are defined, by product, in “Disk Space Requirements for Historical Data Tables” on page 141. Refer to that appendix for assistance in determining the names of the tables in which historical data is stored and their size, as well as which tables are defaults. Worksheets are provided to assist you in estimating the disk storage required to hold your enterprise’s historical data.
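The worksheet arithmetic amounts to multiplying the row length by the number of rows per sample and the number of collection intervals per day. A rough sketch follows; the row length and row count in the example are hypothetical illustrations, not values taken from the appendix.

```python
def estimate_daily_bytes(row_length, rows_per_interval, interval_minutes):
    """Rough daily file growth for one history table.

    row_length and rows_per_interval come from the product's
    space-requirements tables; interval_minutes is the configured
    collection interval (5, 15, 30, or 60).
    """
    intervals_per_day = (24 * 60) // interval_minutes
    return row_length * rows_per_interval * intervals_per_day

# Hypothetical example: 500-byte rows, 50 rows per sample, 15-minute interval
print(estimate_daily_bytes(500, 50, 15))  # 2,400,000 bytes per day
```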
Configuring Historical Data Collection on CandleNet Portal
Introduction
This chapter describes how to configure and manage the collection of historical data from CandleNet Portal, the user interface for OMEGAMON XE. See “Configuring Historical Data Collection on CMW” on page 61 for instructions for configuring and managing historical data collection from the CMW.
Before you begin
CMS start-up must be complete and the CMS must be running before you attempt to configure historical data collection. If you choose to warehouse your historical data rather than convert it to delimited flat files, you must have installed and configured the relational database to which you will roll off the data via ODBC.
Refer to Installing Candle Products on Windows for details on installing the database to which you will write historical data. See “Configuring Your Warehouse” on page 78 for configuration information.
Chapter Contents
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Configuring Historical Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Starting and Stopping Historical Data Collection . . . . . . . . . . . . . . . . . . . 58
Overview
Collecting historical data
The table view and the bar, pie, and plot charts in CandleNet Portal have a tool for setting a time span. This Time Span tool causes previous data samples to be reported up to the time specified. In addition, your Candle agent may have predefined workspaces with historical views. The historical data shown in these views is retrieved from binary history files set up through the History Collection Configuration dialog. The binary history files collect 24 hours’ worth of data. Beyond 24 hours, as new data arrives, the oldest data is deleted. If you have a data warehouse, the data is copied there before being deleted.
Some Candle agents do not provide history data for all of their tables and attribute groups. This is because the applications group for that agent has determined that collecting history data for certain tables is not appropriate, or would have a detrimental effect on performance. This could be due to the vast amount of data that would be generated. Therefore, for each product, only tables that are available for history collection are listed in the History Collection Configuration dialog. See “Configuring Historical Data Collection” on page 54.
If, after you configure history data for a table and start history collection, you still do not see history data for that table, there is a problem either with the agent collection of that data, or with the history mechanism.
Historical setup
Configuring historical data collection involves specifying the attribute groups for which data is collected, the collection interval, the roll-off interval (to a data warehouse), if any, and where the collected data is stored (at the agent or the CMS).
To ensure that data samplings are saved to populate your predefined historical workspaces, you must first configure and start historical data collection. This requirement does not apply to workspaces using attribute groups that are historical in nature and show all their entries without your starting data collection separately.
Requirements for invoking the HDC configuration program
In order to invoke the HDC Configuration program, you must have Configure History authority. The system administrator can grant this authority using the Administer Users, Permissions tab in CandleNet Portal. If you do not have the proper authority, you will not see the menu option or the toolbar option for historical configuration. See the Using OMEGAMON Products: CandleNet Portal document for more information.
Data roll off
Historical data is collected in binary files. These files grow as new data gets added at every sampling interval. Their size can increase quickly and take up a great deal of space on the hard drive. And the larger a history file is, the longer it takes to retrieve historical data into views. Candle has file conversion programs that move data out of the historical files to delimited text files.
See the Converting Files to Delimited Flat Files chapters, as appropriate for your platform, for instructions.
The long-term history feature offers a more permanent solution. The history files are maintained automatically because the data is periodically rolled off to an historical database (also called Candle Data Warehouse or data warehouse). To use long-term history, you must have configured your environment to include the Candle Warehouse Proxy agent and Candle Data Warehouse (historical database) for storing long-term historical data. See Installing Candle Products on Windows and “Warehousing Your Historical Data” on page 75 for instructions.
Configuring Historical Data Collection
Overview
You use the History Collection Configuration dialog to:
n review current configuration for historical data collection for a specific CMS or product
n start or stop historical data collection
n specify how historical data is to be collected for a specific product on a specific CMS
n change existing specifications for data collection
Accessing the History Collection Configuration dialog
You access the History Collection Configuration dialog by clicking the History Configuration icon on the toolbar or by selecting History Configuration from the Edit menu (Ctrl+H). If you do not see the icon or the menu option, your user ID does not have the proper authority.
Configuring collection of attribute data
The groups for which you want to collect data must be configured before you can start data collection. See “Starting historical data collection” on page 58.
Use the Configuration tab to set up historical data collection (see Figure 1 on page 56). From this tab, you can specify:
n the product for which data is to be collected
n the attribute group(s) for which data is to be collected
n the interval at which data for a particular attribute group is collected
n the location at which the data is stored (at agent or at the CMS)
n the interval at which data is warehoused, if any
In a future release, you will be able to specify the maximum number of days short term history data is maintained before it is deleted. Currently, if short term history data is not being warehoused, it accumulates indefinitely unless it is rolled off using the provided file conversion programs. If it is being warehoused, data older than 24 hours is automatically deleted.
You can view the attribute groups for a selected product for which data collection is recommended by clicking Show Default Groups.
Note: You cannot configure data collection for individual attributes from CandleNet Portal. If you want to exclude or include specific attributes in a group, you must configure collection from the CMW. See “Configuring Historical Data Collection on CMW” on page 61.
Configuration tab
To configure data collection for an attribute group or groups:
1. On the Configuration tab, select the product (agent type) for which you want to collect data. Result: The attribute groups for which you can collect historical data appear in a list box. When you select a product type, you are configuring collection for all agents of that type that report to the selected CMS.
2. Select one or more attribute groups, then use the radio buttons to select the interval for data collection, the location of data collection, and the interval for warehousing, if any.
Note: The controls show the default settings when you first open the dialog. As you select attribute groups from the list, the controls do not change to reflect the settings of the selected group. If you change the settings for a group, those changes continue to display no matter which group you select while the dialog is open. This enables you to adjust the configuration controls once and apply the same settings to any number of attribute groups (one after another; use Ctrl+click to select multiple groups, or Shift+click to select all groups from the first one selected through the current one). The true configuration settings are shown in the group list and on the Status tab.
3. Click Configure Group(s) to apply the configuration selections to the attribute group or groups. The values do not take effect unless you click this button. Changes made to the configuration of any group are automatically reflected on the Status tab for all Candle Management Servers on which collection for the changed groups is already started. It is not necessary to stop and then restart collection for a group whose configuration has changed.
Note: Clicking Unconfigure Group(s) automatically stops collection for that group on all Candle Management Servers first.
Figure 1. CandleNet Portal History Collection Configuration Configuration Tab
Configuring data collection for logs
The CCC Logs apply to all applications. If you want to save the information in these logs, you should configure them for warehousing. You can configure historical data collection for any of the CCC Logs.
Note: Although you can set up historical data collection for any of these logs, you can create a chart or table view for only TNODESTS (Managed System Change Log) and Situations Status Log. CandleNet Portal currently does not provide query support for KRAMESG (Universal Message Log), OPLOG (Operations Log), TEIBLOG (Enterprise Information Base Changes Log), or TWORKLST (Worklist Log).
Starting and Stopping Historical Data Collection
Overview
You start and stop historical data collection for a specific CMS from the Status tab of the History Collection Configuration dialog.
The attribute groups for which you want to collect data must be configured before you can start data collection. See “Configuring collection of attribute data” on page 54.
Starting historical data collection
Use the Status tab of the History Collection Configuration dialog to view the configuration and collection status for each attribute group of a selected product on a selected CMS (see Figure 2 on page 59). You also use the Status tab to start and to stop collection.
To start data collection for configured attribute groups:
1. On the Status tab, select a CMS from the dropdown list.
2. Select a product.
3. Select the attribute group or groups for which you want to start data collection. The attribute groups for which historical data collection has been configured are listed in the Collection Status table. Shift+click to select contiguous groups, or Ctrl+click to select noncontiguous groups.
4. Click Start Collection. Result: Two files are created for every attribute group selected: a configuration file with a .hdr extension and a binary history file with no extension. For example, if you select the Address Space CPU Utilization attribute group, the two history files are ASCPUUTIL.hdr and ASCPUUTIL.
Figure 2. CandleNet Portal History Collection Configuration Status Tab
Stopping data collection
To stop data collection:
1. On the Status tab, select a CMS from the dropdown list.
2. Select a product.
3. Select the attribute group or groups for which you want to stop data collection. Shift+click to select contiguous groups, or Ctrl+click to select noncontiguous groups.
4. Click Stop Collection.
Configuring Historical Data Collection on CMW
Introduction
You invoke the Historical Data Collection (HDC) Configuration program to start or to stop the collection of historical data. You define the rules for running the program using the CCC History Configuration dialog, illustrated in this chapter.
For information on configuring historical data collection on CandleNet Portal, see “Configuring Historical Data Collection on CandleNet Portal” on page 51.
Before you begin
The CMS start-up must be complete and the CMS must be running before you attempt to configure historical data collection. If you choose to warehouse your historical data rather than convert it to delimited flat files, you must have installed and configured the relational database to which you will roll off the data via ODBC.
Refer to Installing Candle Products on Windows for details on installing the database to which you will write historical data. See Chapter 4, “Configuring Your Warehouse” on page 78 for configuration information.
Chapter Contents
Invoking the HDC Configuration Program . . . . . . . . . . . . . . . . . . . . . . . . 62
Using the Configuration Dialog to Control Historical Data Collection . . . . 65
Defining Data Collection Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Using the Advanced History Configuration Options Dialog. . . . . . . . . . . . 71
Universal Agent History Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Invoking the HDC Configuration Program
Requirements for invoking the HDC Configuration program
To invoke the HDC Configuration program, you must have the appropriate authority to launch it. The system administrator can grant this authority using the Authority Settings window. If you do not have the appropriate authority to launch the Configure History program, the associated icon does not appear in the Administration - Icons window.
Steps to invoke the HDC Configuration Program
To invoke the HDC Configuration program:
1. Access the CMW Administration - Icons window (Figure 3).
2. From the CMW Administration - Icons window, double-click the Configure History icon. Result: CMW displays the CCC History Configuration dialog.
Figure 3. The Configure History Icon in the Administration Window
About the CCC History Configuration dialog
Using the CCC History Configuration dialog, you can:
n review current settings for historical data collection for a specific CMS or product
n start or stop historical data collection
n specify how historical data is to be collected for a specific product on a specific CMS or on multiple Candle Management Servers. You can now configure history for multiple servers and multiple tables at one time
n change existing specifications for data collection
Figure 4. CMW History Configuration Dialog
Using the Configuration Dialog to Control Historical Data Collection
Specifying configuration options
On the CCC History Configuration dialog, you can select:
n Display current configuration to display the collection status for each table for the currently selected product.
If you have selected multiple Candle Management Servers, the Tables list box shows the collection status for the first selected CMS. A button labelled Next... is visible; selecting it updates the Tables list box with the status for the next selected CMS. Continue selecting Next... until you have displayed the status for each selected server.
Note: If you use this dialog to change your current configuration, the changes you make may not be immediately reflected in the Tables list box, since the request must be transmitted to and processed by each CMS. You may need to refresh the status of the Tables list box after a few seconds by selecting the Display Configuration button before your changes become evident.
n Start default collection to begin historical data collection for those product tables defined as defaults. A confirmation message box pops up giving you the option of cancelling your request. If you select Cancel, the Tables list box is updated to show those tables that have been designated as defaults.
See "Disk Space Requirements for Historical Data Tables" on page 141 for information about the default historical tables for your installed Candle products.
n Stop all history collection to stop all historical data collection for the selected product on all selected Candle Management Servers.
n Start collection to begin collection for the tables that are currently selected
Note: Historical information will not be recorded unless you press Start collection.
n Advanced configuration to display a dialog that permits you to specify the subset of a table’s attributes that are to be collected. (By default, all of a table’s attributes are collected.) You can also access the Advanced History Configuration Options dialog by double-clicking a table or tables displayed in the Select Table(s) box.
n Help to receive information about panel options
n Quit to exit historical data collection. Selecting Quit stops the configuration program.
Defining Data Collection Rules
Overview
You can specify these historical data collection values:
n one or more Candle Management Servers to configure for a product that you select from a pulldown menu. Servers must be online to be configured.
n the product for which historical data is to be collected
n the name of the group(s) or table(s) for which historical data is to be collected
n the collection interval
n the location where data is to be collected--either at the CMS or at the location where the agent is running
n how often data is to be rolled off to a warehouse.
Selecting the target Candle Management Server(s)
On the CCC History Configuration dialog, the Select CMS target(s) field displays the identifier for the hub CMS and any Candle Management Servers attached to that hub. You can refresh the list of target Candle Management Servers by selecting the Rebuild CMS List pushbutton.
Selecting Rebuild CMS List refreshes the displayed list of available Candle Management Servers to include any CMS started or stopped since the list was last displayed.
Figure 5. CMS Selection Portion of Dialog
Selecting a product
The pulldown menu in the Select a Product field shows all of the Candle products installed in your environment. From this pulldown list, select the product or application for which you want historical data collected.
Selecting Group(s)
In the Select Table(s) field, you can control whether the list box displays the actual table name or the Group name (the default) for each table. By clicking the appropriate button, you can view the list by Group name or by Table name. Depending on your selection, a list is displayed that contains the available groups or tables for which historical data can be collected.
For each entry in the list, the following are displayed (Figure 6):
n Group Name or Table Name: Name of the group or table for which historical data will be collected
n Collection Interval: Collection interval currently specified for the named group or table, or OFF
n Collection Location: Collection location currently specified for the named group or table
n Warehouse Interval: The frequency at which historical data is rolled off to your Candle data warehouse
n Filename: Name of the binary file to which raw historical data is written at each collection interval
Figure 6. Table or Group selection portion of dialog
Specifying collection options
Using the Table or Group selection portion of the dialog (Figure 6), you can specify the following collection options for historical data.
n Collection Interval: The interval at which historical data is collected. For example, specifying 5 causes historical data to be collected at the end of every 5-minute period. You can specify values of 5, 15, or 30 minutes, or 1 hour. Using this field, select Off to turn off collection for the selected CMS target(s) and associated product without affecting historical data collection on other Candle Management Servers or agents.
n Collect Data At: The location at which data is to be collected--either at the remote agent or at the CMS to which the agent is connected.
Note: If you use the Advanced Configuration button to provide a custom definition, and if collection is started once a custom definition is in place, the history data will be collected at the CMS regardless of the setting of the Collection Location radio button.
n Warehouse every: The frequency at which historical data is rolled off to your Candle data warehouse. If you do not want to warehouse your historical data, select Off.
n Filename: Name of the binary file to which raw historical data is written each collection interval
Note: Historical information will not be recorded unless you press Start collection.
Note: Warehousing data to an ODBC data base is mutually exclusive with running data conversion programs on your historical data. If you choose to continue to run your data conversion scripts, you will want to select Off for the Warehouse every option.
Runtime Information
The message field at the bottom of the CCC History Configuration dialog can display status information pertaining to the current or most recently completed request.
Using the Advanced History Configuration Options Dialog
Overview
If you select Advanced configuration from the CCC History Configuration dialog, or if you double-click a table or tables displayed in the Select Table(s) box, the Advanced History Configuration Options dialog displays. Use this dialog to select the attributes for which you want historical data to be collected.
Note: To avoid the corruption of historical data files, you must roll off and delete existing history data files and meta files prior to modifying the Advanced History Configuration options when storing history data at the CMS. See “Preventing Historical Data File Corruption” on page 80.
Figure 7. Advanced History Configuration Options dialog
Use the Add and Remove buttons to move attributes between the Available Attributes and Selected Attributes lists. Add All and Remove All move the entire contents of one list to the other. You can also double-click an attribute in one list to move it to the other.
To obtain a list of the attributes currently being collected, click Current settings. Reset deletes any customized attribute subset you may have created, so that the next time collection is started for the table the default (all attributes) is selected.
When the Selected Attributes list is complete, select OK. This creates a local, custom configuration definition for the selected table that exists until the history configuration application terminates or you select the Reset button. This custom definition takes effect when historical data collection is next started for that table.
Every product, other than CCC Logs, requires that you specify at least the System_Name attribute as well as one other column.
Special considerations for CCC Logs
The CCC Logs, a group of enterprise information base (EIB) tables for which history is available, require that you specify the Global_Timestamp attribute and at least one other column. The collection interval and location, as well as the Warehouse interval, are fixed for the Status_History, EIB_Changes, Policy_Status, and System_Status logs, as follows:
n Collection Interval: once a day
n Collection Location: at the CMS
n Warehouse Interval: once a day
See the Candle Management Workstation User’s Guide for additional information on the CCC Logs. See also the Candle Management Workstation Administrator’s Guide for a detailed description of the Display Item attribute. This attribute is used to more easily differentiate situations. You can view the results in the Status History log.
Universal Agent History Configuration
Generally, each product is shipped with a file that is installed into the CMW's SQLLIB directory. This file contains all of the definitions required by the Historical Data Collection Configuration program to start and stop historical data collection. Because the tables and attributes collected by Universal Agents are defined by you, the history definition file is not available to the CMW. For Universal Agents, history definitions are created dynamically from the agent's attribute file. This file is retrieved from the agent by the CMW when the agent comes online. There are no default tables for Universal Agents.
If a new Universal Agent comes online after the Historical Data Collection Configuration application has started, you will need to restart this application before history collection can be configured for the new agent.
Warehousing Your Historical Data 75
Warehousing Your Historical Data
Introduction
Several steps are required to warehouse your historical data to a supported relational database using ODBC, and other considerations must also be addressed. This chapter provides guidance on warehousing historical data.
Important: This document describes using Version 350 of the Candle Warehouse Proxy Agent to warehouse your historical data.
Before you begin
Refer to Installing Candle Products on Windows for details on installing the database to which you will write historical data. That database must be installed before you can begin rolling off historical data to it.
Also, review “Configuring Historical Data Collection on CMW” on page 61 or “Configuring Historical Data Collection on CandleNet Portal” on page 51 for information about using the Historical Data Collection program on the appropriate user interface and using the history configuration dialogs.
Chapter Contents
Prerequisites to Warehousing Historical Data . . . . . 76
Configuring Your Warehouse . . . . . 78
Preventing Historical Data File Corruption . . . . . 80
Error Logging for Warehoused Data . . . . . 82
Prerequisites to Warehousing Historical Data
Overview
To use ODBC to warehouse historical data, your enterprise must first:
1. install Microsoft SQL Server.
2. define a user ID. The Candle Data Warehouse must be configured to be accessible using a user ID of Candle and a password of Candle.
Important: In SQL Server, the user ID Candle must be a member of the db_owner fixed database role, located in the Database/Roles menu. When the Candle user ID is a member of db_owner, all of the Candle Warehouse Proxy objects in the database have an owner ID of Candle, and new tables and columns are correctly inserted into the database.
3. use the ODBC Administrator to add and to configure a data source called Candle Data Warehouse. No other name is acceptable to the CandleNet Command Center. Configure the data source to point to the SQL Server that is to be used for warehousing historical data.
4. start the Candle Warehouse Proxy Agent on a Windows system in the network. Configure the Candle Data Warehouse ODBC data source on the same system.
You are now ready to use data warehousing.
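The steps above fix the data source name and credentials exactly. A minimal sketch of composing the resulting ODBC connection string; the helper function itself is illustrative and not part of any Candle tooling:

```python
def warehouse_connection_string(dsn: str = "Candle Data Warehouse",
                                uid: str = "Candle",
                                pwd: str = "Candle") -> str:
    """Compose the ODBC connection string for the warehouse data source.
    The DSN name and the Candle/Candle credentials are the values the
    steps above require."""
    return f"DSN={dsn};UID={uid};PWD={pwd}"

# Pass the result to any ODBC client (for example, pyodbc.connect) on the
# system where the Candle Data Warehouse data source is configured.
print(warehouse_connection_string())
```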
Note: For mainframe products, in addition to configuring ODBC and SQL Server, you must set up historical data collection by defining Persistent Data Store (CT/PDS) datasets. You must also set up the required maintenance tasks to ensure the availability of these datasets. See “Maintaining the Persistent Data Store (CT/PDS)” on page 109.
Important: Historical data collection can be configured to be stored at any combination of the CMS or the agents. To ensure that history data is received from all sources, you must configure a common shared network protocol between the Candle Warehouse Proxy Agent and the component that is sending history data to it (either from a CMS or from an agent).
For example, you might have a CMS configured to use both IP and IP.PIPE. In addition, one agent might be configured with IP and a second agent with IP.PIPE. In this example, the Candle Warehouse Proxy Agent must be configured to use both IP and IP.PIPE.
About the Candle Warehouse Proxy Agent
The Candle Warehouse Proxy Agent uses ODBC to write the historical data to a supported relational database. Only one Candle Warehouse Proxy Agent can be configured and running in your enterprise at one time. This proxy agent can handle warehousing requests from all managed systems in the enterprise. The proxy agent should be connected to the hub CMS. See Installing Candle Products on Windows for details regarding installation of the Candle Warehouse Proxy Agent.
Note: On Windows, we recommend, if possible, installing the proxy agent on the same machine on which the warehouse database resides.
Configuring Your Warehouse
Overview
You use the history data collection configuration program in CMW and in CandleNet Portal to specify how often data is rolled off to a relational database.
Naming of warehoused history tables
Warehoused history tables in the database have the same names as the group names of the history tables. For example, Windows Servers history for group name NT_System is collected in a binary file named WTSYSTEM. Historical data in this file is warehoused to the database in a table named NT_System.
The following UNIX history tables are exceptions. The User and Disk groups are exported to database tables named UNIXUSER and UNIXDISK, because User and Disk are reserved words in SQL Server. Tables named UNIXUSER and UNIXDISK cannot be queried using MS/Query.
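The naming rules above amount to a small lookup: the warehoused table name is the group name, except for the two UNIX substitutions. A sketch, where the helper name is hypothetical and only the NT_System example and the UNIX exceptions come from the text:

```python
# User and Disk are reserved words in SQL Server, so these UNIX groups
# are exported to the database under substitute table names.
UNIX_EXCEPTIONS = {"User": "UNIXUSER", "Disk": "UNIXDISK"}

def warehouse_table_name(group_name: str) -> str:
    """Warehoused table name for a history group: the group name itself,
    except for the documented UNIX exceptions."""
    return UNIX_EXCEPTIONS.get(group_name, group_name)

print(warehouse_table_name("NT_System"))  # NT_System
print(warehouse_table_name("Disk"))       # UNIXDISK
```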
Columns added to the warehouse database
Two columns are automatically added to the warehouse database. These are:
n TMZDIFF. The time zone difference from Universal Time (GMT). This value is shown in seconds.
n WRITETIME. The CT timestamp when the record was written. This is a 16-character value in the format: cyymmddhhmmssttt, where:
– c = century
– yymmdd = year, month, day
– hhmmssttt = hours, minutes, seconds, milliseconds
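A WRITETIME value can be decoded as sketched below. This assumes the century digit counts centuries from 1900 (so c = 1 means a 20yy year); that assumption is not stated in the text and should be verified against actual warehouse rows:

```python
from datetime import datetime

def parse_writetime(ts: str) -> datetime:
    """Decode a 16-character CT timestamp, cyymmddhhmmssttt.
    Assumes year = 1900 + 100*c + yy (an assumption, not documented here)."""
    if len(ts) != 16 or not ts.isdigit():
        raise ValueError("expected 16 digits: cyymmddhhmmssttt")
    year = 1900 + 100 * int(ts[0]) + int(ts[1:3])
    return datetime(year, int(ts[3:5]), int(ts[5:7]),             # date
                    int(ts[7:9]), int(ts[9:11]), int(ts[11:13]),  # time
                    int(ts[13:16]) * 1000)  # milliseconds to microseconds

print(parse_writetime("1040519143000123"))  # 2004-05-19 14:30:00.123000
```

TMZDIFF (in seconds) can then be applied to relate the local WRITETIME to Universal Time.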
Attributes formatting
Some attributes must be formatted for display purposes: for example, floating point numbers that specify a certain number of precision digits to be printed to the right of the decimal point. These display formatting considerations are specified in product attribute files.
The Candle Warehouse Database displays the correct attribute formatting only for those attributes that use integers with floating point number formats.
Logging successful exports of historical data
Every successful export of historical data is logged in the Candle Data Warehouse in a table called WAREHOUSELOG. The WAREHOUSELOG table contains information such as the origin node, the table to which the export occurred, the number of rows exported, and the time the export took place. You can query this table to learn the status of your exported history data.
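For example, a query against the export log could be built as follows. WAREHOUSELOG is the documented table name, but the filter column name below is a guess for illustration; inspect the table to confirm the actual column names before using it:

```python
def warehouselog_query(table=None):
    """Build a parameterized query against the WAREHOUSELOG status table.
    The TABLENAME column used for filtering is hypothetical; check the
    actual schema in your warehouse database."""
    sql = "SELECT * FROM WAREHOUSELOG"
    params = ()
    if table is not None:
        sql += " WHERE TABLENAME = ?"  # hypothetical column name
        params = (table,)
    return sql, params

# Run the query through any ODBC client connected to the warehouse.
print(warehouselog_query("NT_System"))
```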
Preventing Historical Data File Corruption
Overview
Because history data storage on non-MVS platforms uses flat files that are not indexed, corruption of historical data can occur. If history data is stored at either the agent or the CMS, it is important to roll off the existing history data files and meta files into text files. You then delete the history data files and meta files at the agent or the CMS for the selected tables to avoid corruption of the warehoused database tables. See "Converting History Files to Delimited Flat Files (Windows and OS/400)" on page 83.
Note: This situation does not apply to MVS history data as this data is stored in the Persistent Data Store (CT/PDS) facility.
To avoid the corruption of historical data files, you must roll off and delete existing data files prior to:
n modifying the Advanced History Configuration options when storing history data at the CMS. See “Using the Advanced History Configuration Options Dialog” on page 71.
n upgrading an existing Candle monitoring agent to a new release when storing history data at the agent. See Installing Candle Products on Windows for installation instructions.
Preventing corruption when storing data at the CMS
If you store historical data at the CMS, perform the following procedure before using the Advanced History Configuration options:
1. Save, roll off, or export the existing history data that is stored at the CMS for the selected table.
2. Delete the CMS history data files and meta files for the selected table only.
3. If you are warehousing the data, save or rename the existing database table, in case you want to retain the data for later use.
4. Using the SQL DROP command, delete the database table.
You may now make modifications to the Advanced History Configuration options.
Preventing corruption when storing data at the agent
If you store historical data at the Candle monitoring agent, perform the following procedure before upgrading the agent to a new release. Perform this procedure when you can identify which, if any, product tables have added new attributes. If you are unsure about newly added attributes, perform the procedure for all existing product history tables.
1. Save, roll off, or export the existing history data files that are stored at the agent.
2. Delete the agent history data and the meta files.
3. If you are warehousing the data, save or rename the existing database table, in case you want to retain the data for later use.
4. Using the SQL DROP command, remove the database table.
You may now proceed with the agent upgrade.
If your database is corrupted
If your database is corrupted, you can repair it using this procedure:
1. Stop the Candle Warehouse Proxy agent.
2. Stop collecting historical data.
3. Delete the history data files and the meta files.
4. If you are warehousing the data, save or rename the existing database table, in case you want to retain the data for later use.
5. Using the SQL DROP command, delete the database table.
6. Return to the Historical Data Collection program, open the Advanced History Configuration option, and select your attributes. If you think you might want to add attributes to the table later, select all of the attributes now; you can always go back and remove the ones you do not want. Once you remove attributes, the table remains large enough for attributes you might want to add later.
7. Start collecting data.
8. Restart the Candle Warehouse Proxy agent.
Result: SQL Server recreates the database tables.
Error Logging for Warehoused Data
Viewing errors in the Event Log
Should an error occur during data rolloff, one or more entries are inserted into the Windows Application Event Log on the system where the Warehouse Proxy is running. To view the Application Event Log, start the Event Viewer by clicking Start > Programs > Administrative Tools > Event Viewer, then select Application from the Log pull-down menu.
Setting a trace option
You can turn error tracing on to capture additional error messages that can be helpful in detecting problems.
Activating the trace option
To activate the trace option:
1. Click Start > Programs > Candle OMEGAMON XE > Manage Candle Services
2. Right-click Warehouse Proxy and select Advanced > Edit Trace Parms. The Trace Parameters for Warehouse Proxy dialog displays.
3. Select the RAS1 filters. The default setting is ERROR.
4. Enter the path and file name of the RAS1.log file that will contain the error messages for the warehouse proxy. For example:
c:\Candle\CMA\LOGS\khdRas1.log
where khd indicates the product code for the warehouse proxy.
5. Enter the KDC_DEBUG setting. None is the default.
Viewing the Trace Log
To view the trace log containing the error messages:
1. Select Start > Programs > Candle OMEGAMON XE > Manage Candle Services.
2. Right-click Warehouse Proxy and select Advanced > View Trace Log. The Log Viewer window displays the log file for the warehouse proxy agent.
Converting History Files to Delimited Flat Files (Windows and OS/400) 83
Converting History Files to Delimited Flat Files (Windows and OS/400)
Introduction
If you selected the option to warehouse data to an ODBC database, that option is mutually exclusive with running the file conversion programs described in this chapter. To use these conversion procedures, you must have specified Off for the Warehouse option on the CCC History Configuration panel for the CMW and on the History Collection Configuration dialog for CandleNet Portal.
The history files collected using the rules established in the historical data collection configuration program can be converted to delimited flat files for use in a variety of popular applications, making it easy to manipulate the data and create reports and graphs. Use the LOGSPIN program or the Windows AT command to schedule file conversion automatically, or use the krarloff program to invoke file conversion manually. The LOGSPIN program invokes krarloff when file conversion is scheduled automatically. For best results, schedule conversion to run every day; this is especially important on OS/400.
Chapter Contents
Conversion Process . . . . . 84
Archiving Procedure using LOGSPIN . . . . . 85
Logfile parameters . . . . . 86
Archiving Procedure using the Windows AT Command . . . . . 87
Converting Files Using krarloff . . . . . 88
AS/400 Considerations . . . . . 91
Location of the Windows Executables and Historical Data Collection Table Files . . . . . 92
Conversion Process
Overview
When setting up the process that converts the history files you have collected to delimited flat files, you can schedule the process automatically using the LOGSPIN program or the Windows AT command, or run it manually with the krarloff program. The LOGSPIN program invokes krarloff. Before deciding which method to use, see the Microsoft Windows library for full details on the security implications of running a program such as LOGSPIN versus entering the Windows AT command.
Important: Candle recommends running history file conversion every 24 hours.
Archiving Procedure using LOGSPIN
Overview
To convert historical data files on Windows Candle Management Servers and remote managed systems, follow these steps. Parameters for the logfile program are described in "Logfile parameters" on page 86.
1. Create a text file with each entry corresponding to the history table file to be converted. The text file must be located on each managed system on which data conversion is performed. The format of each line of the text file is:
logfile {SIZE=nnn | TIME=hh:mm} [HEADER=(Y/N) DELIM=c OUTPUT=filestem RFILE=tempname KEEP=(Y/N)]
The parameters in brackets are optional and the parameters in braces are required.
2. To start archiving historical data on the remote managed system, enter the following at the command prompt:
LOGSPIN filename [archpathname]
or
start LOGSPIN filename [archpathname]
where:
n filename is the name of the text file described above and is required.
n archpathname is the name of the path where the archive program is located. This is optional, and the default is to use the Windows search sequence.
Note: Entering the start LOGSPIN command automatically opens an additional window and runs the command in the background.
3. To stop archiving historical data on the remote managed system, enter the following at the command prompt:
LOGSPIN STOP
Logfile parameters
The table below describes the parameters; the defaults correspond to the krarloff program defaults.
Table 3. Logfile parameter values

logfile: Name of the historical table to be converted/archived.
SIZE: Archive the file at six-hour intervals if it exceeds nnnK bytes. The SIZE and TIME parameters are mutually exclusive.
TIME: Archive the file once a day at the time specified in the format hh:mm. The SIZE and TIME parameters are mutually exclusive.
HEADER: Specify Y to include a descriptive header in the archived file. The default is N.
DELIM: Character to be used as a column delimiter. The default is a TAB character.
OUTPUT: Output filename for archived files. The suffix BK0-BK6 is appended to each file, with BK0 representing the latest archive and BK6 the earliest. If no output filename is specified, the default is the first part of the log filename for an (8.3) filename, or the first 32 characters for a long filename.
RFILE: Intermediate filename used by the LOGSPIN program. The default is the first part of the log filename for an (8.3) filename followed by .TMP, or the first 32 characters for a long filename followed by .TMP.
KEEP: Specify Y to keep the intermediate file. The default is N.
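The entry format above lends itself to a small parser. The sketch below follows the documented syntax only; it is not the actual LOGSPIN parser and does not handle a space used as the DELIM character:

```python
def parse_logfile_entry(line: str) -> dict:
    """Split one LOGSPIN text-file entry into its parameters.
    Enforces the documented rule that exactly one of SIZE or TIME
    is required and that they are mutually exclusive."""
    tokens = line.split()
    entry = {"logfile": tokens[0]}
    for token in tokens[1:]:
        key, _, value = token.partition("=")
        entry[key.upper()] = value
    if ("SIZE" in entry) == ("TIME" in entry):
        raise ValueError("specify exactly one of SIZE or TIME")
    return entry

print(parse_logfile_entry("wtsystem TIME=23:30 HEADER=Y OUTPUT=system"))
```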
Archiving Procedure using the Windows AT Command
Overview
To archive historical data files on Windows Candle Management Servers and on remote managed systems using the AT command, use the procedure that follows. To see the format of the command, enter AT /? at the MS-DOS command prompt.
1. In order for the AT command to function, you must start the Task Scheduler service. To start the Task Scheduler service, select Settings > Control Panel > Administrative Tools > Services. Result: The Services window displays.
2. At the Services window, select Task Scheduler. Change the service Start Type to Automatic. Click Start. Result: The Task Scheduler service is started.
An example of using the AT command to archive the history files is as follows:
AT 23:30 /every:M,T,W,Th,F,S,Su c:\sentinel\cms\archive.bat
In this example, Windows executes the archive.bat file located in c:\sentinel\cms every day at 11:30 pm. An example of the contents of archive.bat is:
krarloff -o memory.txt wtmemory
krarloff -o physdsk.txt wtphysdsk
krarloff -o process.txt wtprocess
krarloff -o system.txt wtsystem
Converting Files Using krarloff
Overview
When initiated by LOGSPIN, the krarloff program makes an intermediate copy of the captured history binary file. This copy is processed while history data continues to be collected in the emptied original file. History file conversion can occur whether or not the CMS or the agent is running. You can also initiate krarloff manually, as described below.
The krarloff program can be run either at the CMS or at the agent, from the directory in which the history files are stored. See "Location of the Windows Executables and Historical Data Collection Table Files" on page 92.
Parameters for the krarloff program are described in “krarloff Parameters” on page 90.
Attributes formatting
Some attributes must be formatted for display purposes: for example, floating point numbers that specify a certain number of precision digits to be printed to the right of the decimal point. These display formatting considerations are specified in product attribute files.
When you use krarloff to roll off historical data into a text file, any attributes that require format specifiers as indicated in the attribute file are ignored. Only the raw number is seen in the rolled off history text file. Thus, instead of displaying 45.99% or 45.99, the number 4599 appears.
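If you need formatted values, the scaling can be re-applied after the roll-off. The following sketch assumes an attribute with two precision digits (so a scale factor of 100); real attribute files may specify other precisions, and the column layout here is illustrative only.

```shell
# Sketch: re-applying a precision-2 format to a raw rolled-off value.
# The scale factor of 100 is an assumption for a two-precision-digit attribute.
raw=4599
formatted=$(echo "$raw" | awk '{printf "%.2f", $1/100}')
echo "$formatted"   # -> 45.99
```

The same awk expression can be applied per column when post-processing an entire rolled-off file.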
The Candle Warehouse Proxy agent does use the product attribute files to display the correct attribute formatting. However, the Candle Warehouse Database displays the correct attribute formatting only for those attributes that use integers with floating point number formats. See “Warehousing Your Historical Data” on page 75.
Converting History Files to Delimited Flat Files (Windows and OS/400) 89
Using krarloff on Windows
Run the krarloff command from the directory in which the CMS or the agent runs by entering the following at the command prompt:
krarloff [-h] [-d delimiter] [-g] [-m meta-file] [-r rename-to-file] [-o output-file] {-s source | source-filename}
where the square brackets denote the optional parameters, and the curly braces denote a required parameter.
Note: The command is on a single line when typed.
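For example, a hypothetical invocation that writes a comma-delimited file with a header line, renames the source file after conversion, and names the output explicitly might look like this (the file names are illustrative, reusing the wtmemory table from the archive.bat example earlier):

```
krarloff -h -d , -o wtmemory.csv -r wtmemory.old -s wtmemory
```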
Using krarloff on OS/400
Run the krarloff command on OS/400 from the directory in which the CMS runs by entering the following at the command prompt:
call qautomon/krarloff parm (['-h'] ['-g'] ['-d' 'delimiter'] ['-m' meta-file] ['-r' rename-source-file-to] ['-o' output-file] {'-s' source-file | source-file})
where the square brackets denote the optional parameters, and the curly braces denote a required parameter.
If you run krarloff from an OS/400 in the directory in which the agent is running, replace qautomon with the name of the executable for your agent. For example, the MQ agent would use kmqlib in the command string.
Note: The command is on a single line when typed.
krarloff Parameters
Table 4. krarloff Parameters

-h
Default: off. Controls the presence or absence of the header in the output file. If present, the header is printed as the first line and identifies the attribute column names.

-d
Default: tab. Delimiter used to separate fields in the output text file. Valid values are any single character (for example, a comma).

-g
Default: off. Controls the presence or absence of the product “group_name” in the header of the output file. Add -g to the krarloff invocation line to include group_name.attribute_name in the header.

-m
Default: source-file.hdr. Meta-file that describes the format of the data in the source file. If no meta-file is specified on the command line, the default filename is used.

-r
Default: source-file.old. Rename-to filename used to rename the source file. If the renaming operation fails, the script waits two seconds and retries the operation.

-o
Default: source-file.nnn, where nnn is the Julian day. Output filename; the name of the file that receives the output text.

-s
No default. Required parameter. Source binary history file that contains the data to be read. Within the curly braces, the vertical bar (|) denotes that you can either use the “-s source” option or specify a name with no option, which is treated as the source filename.
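Once krarloff has produced a delimited text file, standard tools can consume it. This sketch fabricates a two-line tab-delimited file of the kind produced with the -h (header) option and the default tab delimiter, then pulls one column with awk; the column names and values are hypothetical.

```shell
# Sketch: reading a tab-delimited roll-off file (contents fabricated).
file=$(mktemp)
printf 'Server_Name\tMemory_Usage\n' > "$file"
printf 'PRIMARY\t4599\n' >> "$file"
# Skip the header line (row 1) and read the second column of the data row.
second_col=$(awk -F'\t' 'NR==2 {print $2}' "$file")
echo "$second_col"   # -> 4599
rm -f "$file"
```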
AS/400 Considerations
Where is the historical data stored on the AS/400?
User data is stored in QUSRSYS. For each table, two files associated with historical data collection are stored on OS/400. For example, if you are collecting data for the system status attributes, these two files are KA4SYSTS and KA4SYSTSM. The former contains the binary data output by the OMA; the latter is the metafile, a file with a single row that contains the names of the columns. The contents of both files can be displayed using DSPPFM.
What happens after krarloff is run?
Using the system status example above, after krarloff runs, file KA4SYSTS becomes KA4SYSTSO. A new KA4SYSTS file is generated when another row of data is available.
KA4SYSTSM remains untouched.
KA4SYSTSH is the file output by krarloff; it contains the data in delimited flat file format. This file can be transferred from the AS/400 to the workstation by means of FTP or another file transfer program.
Location of the Windows Executables and Historical Data Collection Table Files
Location of Windows executables
Executables are located as follows:
- \candle\cms directory on the CMS, where candle is the directory in which the CMS was installed
- \candle\cma directory on the remote managed systems, where candle is the directory in which the agents were installed
Note: The krarloff conversion program must be located in the same directory as the LOGSPIN.EXE program.
Location of Windows historical data table files
If you run the CMS and agents as processes or as services, the historical data table files are located in the:
- \candle\cms directory on the CMS, where candle is the directory in which the CMS was installed
- \candle\cma\logs directory on the remote managed systems, where candle is the directory in which the agents were installed
Location of history configuration files on Windows
The history configuration files are located in \candle\cms\sqllib.
Converting History Files to Delimited Flat Files (MVS) 93
Converting History Files to Delimited Flat Files (MVS)
Introduction
The history files collected by the rules established in the HDC Configuration program, or by your definitions related to historical data collection during product installation, can be converted to delimited flat files automatically as part of your persistent data store maintenance procedures (see “Maintaining the Persistent Data Store (CT/PDS)” on page 109), or manually using a MODIFY command. You can use the delimited flat file as input to a variety of popular applications to easily manipulate the data and create reports and graphs.
Data that has been warehoused cannot be extracted since the warehoused data is deleted from the persistent data store. To use these conversion procedures, you must have specified Off for the Warehouse option on the CCC History Configuration panel for the CMW and on the History Collection Configuration dialog for CandleNet Portal.
Chapter Contents
- Automatic Conversion and Archiving Process (page 94)
- Location of the MVS Executables and Historical Data Table Files (page 98)
- Manual Archiving Procedure (page 99)
Automatic Conversion and Archiving Process
Overview
When you customized your Candle environment, you were given the opportunity to specify the EXTRACT option for maintenance. Specification of the EXTRACT option ensures that scheduling of the process to convert and archive information stored in your history data tables is automatic. No further action on your part is required. As applications write historical data to the history data tables, the persistent data store detects when a given data set is full, launches the KPDXTRA process to copy the data set, and notifies the CMS that the data set can once again be used to receive historical information. Additional information about the persistent data store can be found in “Maintaining the Persistent Data Store (CT/PDS)” on page 109.
An alternative to the automatic scheduling of conversion is to manually issue the command that converts the historical data files. Information about manually converting your files is found in “Manual Archiving Procedure” on page 99.
Converting Files Using KPDXTRA
The conversion program, KPDXTRA, is called by the persistent data store maintenance procedures when the EXTRACT option is specified for maintenance. This program reads a dataset containing the collected historical data and writes out two files for every table that has data collected for it. The processing of this data does not interfere with the continuous collection being performed. Because the process is automatic, a brief overview of the use of KPDXTRA is provided here. For full information about KPDXTRA, review the sample JCL distributed with your Candle product. The sample JCL is found as part of the sample job KPDXTRA contained in the sample libraries RKANSAM and TKANSAM.
Attributes formatting
Some attributes need to be formatted for display purposes. For example, a floating-point number may specify a certain number of precision digits to be printed to the right of the decimal point. These display formatting considerations are specified in product attribute files.
When you use KPDXTRA to roll off historical data into a text file, any attributes that require format specifiers, as indicated in the attribute file, are ignored. Only the raw number appears in the rolled-off history text file. Thus, instead of 45.99% or 45.99, the number 4599 appears.
About KPDXTRA
KPDXTRA runs in the batch environment as part of the maintenance procedures. It can take a parameter that changes the default column separator. The MVS JCL syntax for executing this program is:
// EXEC PGM=KPDXTRA,PARM='PREF=dsn-prefix [DELIM=xx] [NOFF]'
Several files must be allocated for this job to run.
In V300 and later, all datasets are kept in read/write state even when they are not active. This makes the datasets unavailable to other jobs while the CMS is running: jobs cannot be run against the active datasets, and the inactive datasets must first be taken offline. You can dynamically remove a dataset from the CMS by issuing the modify command:
F stcname,KPDCMD QUIESCE FILE=DSN:dataset
If you must run a utility program against an active data store, issue a SWITCH command prior to issuing this QUIESCE command.
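Putting the two commands together, a hedged example sequence might look like the following, where CMSPROC is a placeholder started-task name, GENHIST is the standard group name described under the manual archiving procedure, and the dataset name is the illustrative one used in the sample KPDXTRA messages:

```
F CMSPROC,KPDCMD SWITCH GROUP=GENHIST EXTRACT
F CMSPROC,KPDCMD QUIESCE FILE=DSN:CCCHIST.PDSGROUP.PDS#1
```

Substitute your own started-task, group, and dataset names before use.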
DDnames required to be allocated for KPDXTRA
The following is a summary of the DDnames that must be allocated for KPDXTRA. Refer to the sample JCL in the sample libraries distributed with the product for additional information.

Table 5. DD Names Required

RKPDOUT
KPDXTRA log messages

RKPDLOG
Persistent data store (CT/PDS) messages

RKPDIN
Table definition commands file (input to the CT/PDS subtask) as set up by CICAT

RKPDIN1
CT/PDS file from which data is to be extracted

RKPDIN2
Optional control file, defined as a DUMMY DD statement

KPDXTRA parameters
The table that follows specifies the KPDXTRA parameters.

Table 6. KPDXTRA Parameters

PREF=
No default. Required parameter. Identifies the high-level qualifier under which the output files are written.

DELIM=
Default: tab (X'05'). Specifies the separator character to use between columns in the output file. To specify some other character, give the 2-byte hexadecimal representation of that character. For example, to use a comma, specify DELIM=6B.

QUOTE
Default: NQUOTE. Optional parameter that puts double quotes around all character-type fields and removes trailing blanks from the output. Makes the output of the KPDXTRA program identical in format to the output generated by the distributed krarloff program.

NOFF
Default: off. When NOFF is omitted, a separate file (the header file) is created that describes the format of the extracted tables, and no header appears in the output data file. When NOFF is specified, the header file is not created; instead, the header information, which shows the format of the extracted data, is included as the first line of the data file.
KPDXTRA messages
These messages can be found in the RKPDOUT sysout logs created by the execution of the maintenance procedures:
Persistent datastore Extract program KPDXTRA - Version V130.00
Using output file name prefix: CCCHIST.PDSGROUP
The following characters will be used to delimit output file tokens:
  Column values in data file.............: 0x05
  Parenthesized list items in format file: 0x6b
Note: Input control file not found; all persistent data will be extracted.
Table(s) defined in persistent datastore file CCCHIST.PDSGROUP.PDS#1:
  Appl. Name   Table Name   Extract Status
  -----------  -----------  --------------
  PDSSTATS     PDSCOMM      Excluded
  PDSSTATS     PDSDEMO      Included
  PDSSTATS     PDSLOG       Included
  PDSSTATS     TABSTATS     Included
Checking availability of data in data store file:
  No data found for Appl: PDSSTATS Table: PDSDEMO. Table excluded.
  No data found for Appl: PDSSTATS Table: TABSTATS. Table excluded.
The following 1 table(s) will be extracted:
  Appl. Name   Table Name   No. Rows   Oldest Row            Newest Row
  -----------  -----------  ---------  --------------------  --------------------
  PDSSTATS     PDSLOG       431        1997/01/10 05:51:20   1997/02/04 02:17:54
Starting extract operation.
Starting extract of PDSSTATS.PDSLOG.
  The output data file, CCCHIST.PDSGROUP.D70204.PDSLOG, does not exist; it will be created.
  The output format file, CCCHIST.PDSGROUP.F70204.PDSLOG, does not exist; it will be created.
Extract completed for PDSSTATS.PDSLOG. 431 data rows retrieved, 431 written.
Extract operation completed.
Location of the MVS Executables and Historical Data Table Files
Location of MVS executables
Executables are located in the &hilev.&midlev.RKANMOD or &hilev.&midlev.TKANMOD library, where:
- &hilev is the library in which the CMS was installed
- &midlev is the name you provided at installation time
Location of MVS historical data table files
The historical data files created by the extraction program are located in the following library structure:
- &hilev.&midlev.&dsnlolev.tablename.D
- &hilev.&midlev.&dsnlolev.tablename.H
where:
- the &hilev qualifier is the library in which the CMS was installed
- &midlev is the name you provided at installation time
- &dsnlolev is the low-level qualifier of the dataset names, as set by the CICAT
- tablename can be up to 10 characters. When the tablename is greater than 8 characters, the tablename portion of the dataset name contains the first 8 characters, followed by a period and the remaining characters of the name.
Datasets with a name ending in “D” contain the data output. Datasets with a name ending in “H” contain the header (format) output.
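The naming rule for table names longer than 8 characters can be sketched as follows; KOSTABLE12 is a made-up 10-character table name, and the split is the plain first-8-characters rule described above.

```shell
# Sketch: building the tablename portion of the dataset name.
t=KOSTABLE12
if [ "${#t}" -gt 8 ]; then
  # First 8 characters, a period, then the remainder of the name.
  part="$(printf '%.8s' "$t").${t#????????}"
else
  part="$t"
fi
echo "$part"   # -> KOSTABLE.12
```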
Manual Archiving Procedure
Converting historical files manually
To manually convert historical data files on the MVS CMS and on the remote managed systems, issue the following MODIFY command:
F stcname,KPDCMD SWITCH GROUP=cccccccc EXTRACT
where:
- stcname identifies the name of the started task that is running either the CMS or the MVS agents.
- cccccccc identifies the group name associated with the persistent data store allocations. The values for cccccccc may vary based on which products are installed. The standard group name is GENHIST.
When this command is executed, only the tables associated with the group identifier are extracted. If multiple products are installed, each can be controlled by separate SWITCH commands.
This switching can be automated by using either an installation scheduling facility or an automation product.
You can also use the CandleNet Command Center’s advanced automation features to execute the SWITCH command. To do so, define a situation that, when it becomes true, executes the SWITCH command as the action.
Converting History Files to Delimited Flat Files (UNIX Systems) 101
Converting History Files to Delimited Flat Files (UNIX Systems)
Introduction
If you selected the option to warehouse data to an ODBC database, that option is mutually exclusive with running the file conversion programs described in this chapter. To use these conversion procedures, you must have specified Off for the Warehouse option on the CCC History Configuration panel for the CMW and on the History Collection Configuration dialog for CandleNet Portal.
This chapter explains how the UNIX CandleHistory script is used to convert the saved historical data contained in the history data files to delimited flat files. You can use the delimited flat files in a variety of popular applications to easily manipulate the data to create reports and graphs.
The procedure described in this chapter empties the history accumulation files, and must be performed periodically so that the history files do not consume excessive disk space.
Chapter Contents
- Understanding History Data Conversion (page 102)
- Performing the History Data Conversion (page 103)
Understanding History Data Conversion
Overview
In the UNIX environment, you use the CandleHistory script to activate and customize the conversion procedure used to turn selected Candle binary historical data tables into a form usable by other software products. The historical data that is collected is in a binary format and must be converted to ASCII in order to be used by third-party products. Each binary file is converted independently. The historical data collected by the Candle Management Server may be at the host location of the CMS or at the location of the reporting agent. Conversion can be run at any time, whether or not the CMS or agent(s) are active.
Conversion applies to all history data collected under the current CANDLEHOME associated with a single CMS server, whether the data was written by the CMS or by a remote agent.
Additional information about CandleHistory can be found in the online help. When you enter CandleHistory -h at the command line, this output displays:
CandleHistory [ -h CANDLEHOME ] -C [ -L nnn[Kb|Mb] ] [ -t masks*,etc ] [ -D delim ] [ -H|+H ] [ -N n ] [ -p cms_name ] prod_code
CandleHistory -A?
CandleHistory [ -h CANDLEHOME ] -A perday|0 [ -W days ] [ -L nnn[Kb|Mb] ] [ -t masks*,etc ] [ -D delim ] [ -H|+H ] [ -N n ][ -i instance|-p cms_name ] prod_code
Note: Certain parameters are required. Items separated by the pipe symbol are mutually exclusive (for example, Kb|Mb means enter either Kb or Mb, not both). The command is typically entered as a single line at the UNIX command prompt.
The parameters used with the script are documented in “History conversion parameters” on page 104.
Performing the History Data Conversion
Overview
The CandleHistory script schedules the conversion of historical data to delimited flat files. Both the manual process to perform a one-time conversion and the conversion script that permits you to schedule automatic conversions are documented below.
Important: The CandleHistory script must be executed from CANDLEHOME/bin.
After the conversion has taken place, the resulting delimited flat file has the same name as the input history file with an extension that is a single numerical digit. For example, if the input history file table name is KOSTABLE, the converted file will be named KOSTABLE.0. The next conversion will be named KOSTABLE.1, and so on.
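The generation numbering described above can be sketched like this; the table name and working directory are fabricated for the illustration, and the real script's bookkeeping may differ.

```shell
# Sketch: finding the next generation suffix for a converted history file.
dir=$(mktemp -d)
cd "$dir" || exit 1
table=KOSTABLE
touch "$table.0" "$table.1"   # pretend two conversions have already run
# Find the lowest unused generation number for the next conversion.
n=0
while [ -e "$table.$n" ]; do n=$((n+1)); done
echo "next output file: $table.$n"
```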
Performing a one-time conversion
To perform a one-time conversion process, type the following at the command prompt:
./CandleHistory -C prod_code
Scheduling basic automatic history conversions
Use CandleHistory to schedule automatic conversions via the UNIX cron facility. To schedule a basic automatic conversion, type the following at the command prompt:
./CandleHistory -A n prod_code
where n is a number from 1 to 24 that specifies the number of times per day the data conversion program runs, rounded up to the nearest divisor of 24. The product code is required as well.
For example,
CandleHistory -A 7 ux
means run history conversion for the product with code ux every three hours (7 is rounded up to 8 runs per day).
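The rounding behavior can be sketched as follows; the list of divisors of 24 is fixed, and the interpretation of “rounded up to the nearest divisor” is taken from the text above.

```shell
# Sketch: rounding a requested runs-per-day value up to a divisor of 24.
n=7
runs=24
for d in 1 2 3 4 6 8 12 24; do
  if [ "$d" -ge "$n" ]; then runs=$d; break; fi
done
echo "requested $n, actual runs per day: $runs"   # -> requested 7, actual runs per day: 8
```

With 8 runs per day, the conversion runs every 24/8 = 3 hours.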
Customizing your history conversion
You can use the CandleHistory script to further customize your history collection by specifying additional options. For example, you can choose to convert files that are above a particular size limit that you have set. You can also choose to perform the history conversion on particular days of the week.
The table that follows describes all of the history conversion parameters.
Table 7. History conversion parameters

-C
Identifies this as an immediate, one-time conversion call. Required.

-A n
Identifies this as a scheduled history conversion call; absence of -A means run the conversion now. The conversion runs automatically the specified number of times per day, where n is 1-24, rounded up to the nearest divisor of 24. For example, -A 7 means run every three hours.

-A 0
Cancels all automatic runs for the table(s) specified.

-A ?
Lists the automatic collection status for all tables.

-W
Day of the week (0=Sunday, 1=Monday, and so on). Can be a comma-delimited list of numbers or ranges. For example, -W 1,3-5 means Monday, Wednesday, Thursday, and Friday. The default is Monday through Saturday (1-6).

-H
Excludes column headers. The default is to use the attribute name as the column header.

+H
Includes group (long table) names in column headers, in the format “Group_desc.Attribute”. The default is the attribute name only.

-L
Converts only files whose size exceeds the specified number of Kb/Mb (the suffix can be none, K, Kb, M, or Mb, with none defaulting to Kb).

-h
Overrides the value of $CANDLEHOME.

-t
List of tables or mask patterns delimited by commas, colons, or blanks. If a pattern has embedded blanks, it must be surrounded with quotes.

-D
Output delimiter to use. The default is the tab character. Quote or escape a blank: -D ' '

-N
Keeps generations 0 through n of the output (default 9).
-i instance
For agent instances (those not using the default queue manager). Directs the program to process historical data collected by the specified agent instance. For example, -i qm1 specifies the instance named “qm1”.

-p cms_name
Directs the program to process historical data collected by the specified CMS instead of the agent. Note: A product code of ms must be used with this option. The default action is to process data collected by the prod_code agent.

prod_code
Two-character product code of the product whose historical data is to be converted. Refer to Installing Candle Products on UNIX, Version CT350, for product codes.
Converting History Files to Delimited Flat Files (HP NonStop Kernel Systems) 107
Converting History Files to Delimited Flat Files (HP NonStop Kernel Systems)
Introduction
If you selected the option to warehouse data to an ODBC database, that option is mutually exclusive with running the file conversion programs described in this chapter. To use these conversion procedures, you must have specified Off for the Warehouse option on the CCC History Configuration panel for the CMW and on the History Collection Configuration dialog for CandleNet Portal.
The history files collected using the rules established in the HDC Configuration program can be converted to delimited flat files for use in a variety of popular applications to easily manipulate the data and create reports and graphs. Use the krarloff program to manually invoke file conversion. For best results, you should schedule conversion to run every day.
Support is provided for OMEGAMON XE for WebSphere MQ Configuration and for OMEGAMON XE for WebSphere MQ Monitoring running on the HP NonStop™ Kernel operating system (formerly Tandem). For information specific to OMEGAMON XE for WebSphere MQ Monitoring relating to historical data collection, see the Customizing Monitoring Options topic found in your version of the product documentation.
Chapter Contents
- Conversion Process (page 108)
Conversion Process
Overview
To convert the history files you have collected to delimited flat files, run the krarloff program manually or schedule it to run at regular intervals. Parameters for the krarloff program are described in “krarloff Parameters” on page 90.
Important: Candle recommends running history file conversion every 24 hours.
Using krarloff on HP NonStop Kernel
The history files are kept on the DATA subvolume, under the default <$VOL>.CCMQDAT. However, the location of the history files depends on where you start the monitoring agent. If you started the monitoring agent using STRMQA from the CCMQDAT subvolume, the files are stored on CCMQDAT.
You can run krarloff from the DATA subvolume by entering the following:
RUN <$VOL>.CCMQEXE.KRARLOFF <parameters>
Note that CCMQDAT and CCMQEXE are defaults. During the installation process, you can assign your own names for these subvolumes.
For a table listing the krarloff parameters, see “krarloff Parameters” on page 90.
Attributes formatting
Some attributes need to be formatted for display purposes. For example, a floating-point number may specify a certain number of precision digits to be printed to the right of the decimal point. These display formatting considerations are specified in product attribute files.
When you use krarloff to roll off historical data into a text file, any attributes that require format specifiers, as indicated in the attribute file, are ignored. Only the raw number appears in the rolled-off history text file. Thus, instead of 45.99% or 45.99, the number 4599 appears.
Maintaining the Persistent Data Store (CT/PDS) 109
Maintaining the Persistent Data Store (CT/PDS)
Introduction
The persistent data store (CT/PDS) runs on MVS in the same address space as the Candle Management Server (CMS). It provides the ability to record and retrieve tabular relational data on a 24x7 basis while maintaining indexes on the recorded data. This appendix describes the procedures you use to maintain the CT/PDS.
See the Installation and Configuration of Candle Products on OS/390 and z/OS guide for instructions on configuring the persistent datastore.
Note: For applications configured to run in the MVS CMS address space, the Configure persistent data store step within the CMS product configuration is required. This step applies both to MVS-based products and to non-MVS-based products that enable historical data collection in this MVS CMS. Additionally, any started task associated with a product (including the CMS address space itself) that is running before you configure the CT/PDS must be stopped.
Chapter Contents
- About the Persistent Data Store (page 111)
- Components of the CT/PDS (page 112)
- Overview of the Automatic Maintenance Process (page 115)
- Making Archived Data Available (page 119)
- Exporting and Restoring Persistent Data (page 123)
- Data Record Format of Exported Data (page 125)
- Extracting CT/PDS Data to Flat Files (page 131)
- Command Interface (page 134)
About the Persistent Data Store
Overview
The Persistent Data Store (CT/PDS) is used for writing and retrieving historical data. The program is the server portion of a client/server application. The client code either provides data to be inserted into relational tables or makes requests to retrieve the data. The CT/PDS acts as a subset of a database management system, concerned only with the physical level of recording and retrieving data.
The data being written to the persistent data store is organized by tables, groups, and datasets. Each table is assigned to a group. A group can have one or more datasets assigned to it; normally, three datasets are assigned to each group. Groups can have multiple tables assigned to them, so it is not necessary to have a dataset for each table defined to the system. The assignment of tables, groups, and datasets is defined during CICAT configuration of your product. See the CICAT documentation for details.
The CMS provides automatic maintenance for the datasets in the CT/PDS. Two procedures and one CLIST provide automatic maintenance; they are located in &rhilev.&midlev.RKANSAM. Their default names are:
- KPDPROC1
- KPDPROCC
- KPDPROC2
If you changed the prefix KPDPROC during the configuration process in the CICAT, the suffixes remain 1, C, and 2, respectively. See “Overview of the Automatic Maintenance Process” on page 115.
User ID when running the CT/PDS procedures
The CT/PDS procedures run with the user ID of the person who installed the product.
Components of the CT/PDS
Overview
The components described below make up the CT/PDS.
- KPDMANE
This is the primary executable program. It is a server for other applications running in the same address space. This program is designed to run inside the Engine address space as a separate subtask. Although it is capable of running inside the Engine, it does not make any use of Engine services, because the KPDMANE program is also used in other utility programs that are intended to run in batch mode. This is the program that eventually starts the maintenance task when it performs a switch and determines that no empty datasets are available.
- KPDUTIL
This program is used primarily to initialize one or more datasets for CT/PDS use. The program simply attaches a subtask and starts the KPDMANE program in it. The DD statements used when this program is run dictate which control files are executed by the KPDMANE program.
- KPDARCH
This program acts as a client CT/PDS program that pulls data from the specified dataset and writes it out to a flat file. The program attaches a subtask and starts the KPDMANE program in it. The output data is still in an internal format, with all the index information excluded.
- KPDREST
This program acts as a client CT/PDS program that reads data created by the KPDARCH program and inserts it back into a dataset in the proper format so that the CT/PDS can use it. This includes the rebuilding of index information. The program attaches a subtask and starts the KPDMANE program in it.
- KPDXTRA
This is a client CT/PDS program that pulls data from a dataset and writes it to one or more flat files, with all column data converted to EBCDIC and separated by tabs. This extracted data can easily be loaded into a DBMS or into spreadsheet programs such as Excel. As with the other client programs, a subtask is attached and the KPDMANE program is loaded and executed in that environment. See “Extracting CT/PDS Data to Flat Files” on page 131.
n KPDDSCO
This program communicates with the started task that is running the CT/PDS and send it commands to be executed. The typical command executed is the RESUME command to tell the CT/PDS that it can once again use a dataset. This program is capable of using two forms of communication. The older version acts as a client application to the CMS. This mode uses SNA to connect to the server and submit the command requests. The later version uses an SVC 34 to execute a modify command to the proper started task. A secondary function of this program is to log information in a general log maintained in the CT/PDS tables.
Operation of the CT/PDS
The KPDMANE program invokes maintenance automatically in two places. The first is on startup, when it reads and processes every dataset it knows about: it examines internal data to determine whether the dataset is in a known and stable state, and if not, it issues a RECOVER command. The second is when it is recording information from applications onto an active dataset for a group: if it detects that it is running out of room on a write operation, it executes the SWITCH command internally.
- RECOVER Logic
This code puts the dataset into a quiesce state and closes the file. Information is set up to request an ARCHIVE, INIT, and RESTORE operation to be performed by the maintenance procedures. An SVC 34 is issued for a START command on KPDPROC1 (or its overridden name). The command returns to the caller with the dataset unusable until a RESUME command is executed.
114 Historical Data Collection Guide for OMEGAMON XE and CCC
- SWITCH Logic
The SWITCH command looks at all of the datasets assigned to the group and finds an empty one. Note that if no empty datasets are available, future attempts to write data to any dataset in the group will fail. Normally, an empty dataset is found and marked as the active dataset.
A test is made on the dataset being deactivated (because it is full) to see if the EXTRACT option was specified. If so, the EXTRACT command for the dataset is executed.
The next test checks whether any empty datasets remain in the current group. If not, the code finds the dataset with the oldest data and marks it for maintenance. With the latest release of the CT/PDS, the code checks whether any of the maintenance options BACKUP, EXPORT, or EXTRACT were specified for this dataset. If not, the INITDS command is executed; otherwise, the BACKUP command is executed.
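The switch-time decision flow described above can be modeled in a short sketch. This is purely illustrative Python, not actual CT/PDS code: the Dataset class, the option-dictionary keys, and the command list are inventions of this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    is_empty: bool
    oldest_data: int = 0                      # pseudo-timestamp of oldest row
    options: dict = field(default_factory=dict)

def switch(datasets, active, commands):
    """Model of the SWITCH logic: pick a new active dataset, optionally
    extract the old one, and start maintenance if no empties remain."""
    empties = [ds for ds in datasets if ds.is_empty and ds is not active]
    if not empties:
        return active                          # no switch possible; writes will fail
    old, new = active, empties[0]
    new.is_empty = False                       # the new active dataset begins filling

    if old.options.get("EXTRACT"):
        commands.append(("EXTRACT", old.name)) # extract the deactivated dataset

    if not any(ds.is_empty for ds in datasets):
        # No empty datasets left: maintain the dataset holding the oldest data.
        oldest = min((ds for ds in datasets if ds is not new),
                     key=lambda d: d.oldest_data)
        if any(oldest.options.get(o) for o in ("BACKUP", "EXPORT", "EXTRACT")):
            commands.append(("BACKUP", oldest.name))   # starts KPDPROC1 maintenance
        else:
            commands.append(("INITDS", oldest.name))   # just re-initialize in place
    return new
```

With two datasets, one full (with BACKUP specified) and one empty, a switch activates the empty one and queues BACKUP maintenance for the full one.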
- BACKUP Logic
This code puts the dataset in a quiesce state and closes it. A test is made to see if the user specified either BACKUP or EXPORT for the dataset, and appropriate options are set for the started task. The options always include a request to initialize the dataset. An SVC 34 is issued to start the KPDPROC1 procedure. The code returns to the caller with the dataset unavailable until the RESUME command is executed.
- EXTRACT Logic
This is similar to the BACKUP logic, except that the only option specified is for an EXTRACT run, with no initialization performed on the dataset.
- RESUME Logic
This code opens the specified dataset name and verifies that it is valid. The dataset is taken out of the quiesce state and made available once again for activation during the next SWITCH operation.
Overview of the Automatic Maintenance Process
Overview
When a dataset becomes full, the CT/PDS selects an empty dataset to make it active. Once active, the CT/PDS checks to see if there are any more empty datasets. If there are no more empty datasets, maintenance is started on the oldest dataset, and data recording is suspended.
Prior to launching the KPDPROC1 process, the CT/PDS checks to see if either the BACKUP function or the EXPORT function has been specified. If neither function has been specified, then the dataset is initialized within the CT/PDS started task and KPDPROC1 is not executed.
The maintenance process consists of three files that are generated and tailored by CICAT and invoked by the Persistent Data Store. The files are:
KPDPROC1
KPDPROC1 is a procedure that is started with an MVS START command. Limited information is passed to this started task which it uses to drive a CLIST in a TSO environment. CICAT creates this file and puts it into the RKANSAM library for each runtime environment (RTE) that has a CT/PDS component. This procedure must be copied to a system level procedure library so the command issued to start it can be found.
The parameters passed to KPDPROC1 vary based on the version of CICAT and the CT/PDS. This document assumes the latest version is installed. There are three parameters passed to the started task. They are:
- HILEV
This is the high-level qualifier for the RTE that configured this version of the CT/PDS. It is obtained by extracting information from the DD statement that points to the CT/PDS control files.
- LOWLEV
This is the low-level qualifier for the sample library. It currently contains the value RKANSAM.
- DATASET
The fully qualified name of the dataset being maintained. It is possible to have a dataset name that does not match the high-level qualifier specified in the first parameter.
KPDPROCC
KPDPROCC is the CLIST that is executed by the KPDPROC1 procedure. The CLIST has the task of obtaining all of the information needed to perform the maintenance and of submitting a job to execute the desired maintenance.
KPDPROC2
KPDPROC2 is the actual JOB that gets executed to save the data and to initialize the dataset so it can be once again used by the CT/PDS. This procedure:
– backs up the data
– deletes the dataset
– allocates a new dataset with the same parameters as before
– makes the new dataset available for reading and writing
CICAT allows the user to pick the first seven characters of the maintenance procedure names; KPDPROC is the default if the user does not modify it.
What part of maintenance do you control?
Most of the CT/PDS maintenance procedure is automatic and does not require your attention. Through the CICAT, you have already specified the EXTRACT, BACKUP, and EXPORT options by indicating a Y or N for each dataset group. See “Command Interface” on page 134 for descriptions of additional commands that are used primarily for maintenance.
- BACKUP - makes an exact copy of the dataset being maintained.
- EXPORT - writes the data to a flat file in an internal format that can be used by external programs to post-process the data. This format is also used for recovery purposes when the CT/PDS detects potential problems with the data.
- EXTRACT - writes the data to a flat file in human-readable form, suitable for loading into other DBMSs.
Note: If none of the maintenance options are specified, the data within the dataset being maintained is erased.
You can indicate whether to:
- back up the data for each dataset group
- back up the data to tape or to DASD for all dataset groups
Indicating dataset backup to tape or to DASD
For all dataset groups that you selected to back up, you must indicate whether you want to back up the data to tape or to DASD. This decision applies to all datasets.

Table 8. Determining the medium for dataset backup

If you are backing up datasets to...  THEN...
tape                                  use KPDPROC2 as shipped
DASD                                  follow the procedure below

Backing up datasets to DASD
Use this procedure to modify KPDPROC2:
1. Access the procedure in &rhilev.&midlev.RKANSAM(KPDPROC2) with any editor.
2. Remove the comment characters from the step that backs up datasets to DASD and insert comment characters in the step that backs up datasets to tape.
3. Save the procedure.
4. Copy procedure KPDPROC2 to your system procedure library, usually SYS1.PROCLIB.
Naming the export datasets
When you choose to export data, you are requesting to write data to a sequential dataset. The names of all exported datasets follow the format
&rhilev.&midlev.&dsnlolev.A#######
where:
- &rhilev is the high-level qualifier of all datasets in the CT/PDS, as you specified in the CICAT
- &midlev is the mid-level qualifier of all datasets in the CT/PDS, as you specified in the CICAT
- &dsnlolev is the low-level qualifier of the dataset names as set by the CICAT
- A is a required character
- ####### is a sequential number
Making Archived Data Available
Overview
This topic shows you how to make data available to those products that use the CT/PDS after the data has been backed up to DASD or to tape.
To make the data available you will dynamically restore a connection between an archived dataset and the CMS.
When the automatic maintenance facility backs up a dataset in the persistent data store, it performs the following activities:
- disconnects the dataset from the CMS
- copies the dataset to tape or DASD in a format readable by the CMS
- deletes and reallocates the dataset
- reconnects the empty dataset to the CMS
To view archived data from the product, you must ensure that the dataset is stored on an accessible DASD volume and reconnect the dataset to the CMS.
Dataset naming conventions
When the maintenance facility backs up a dataset, it uses the following format to name the dataset:
&rhilev.&midlev.&dsnlolev.B#######
where:
- &rhilev is the high-level qualifier of all datasets in the CT/PDS, as you specified in the CICAT
- &midlev is the mid-level qualifier of all datasets in the CT/PDS, as you specified in the CICAT
- &dsnlolev is the low-level qualifier of the dataset names as set by the CICAT
- B is a required character
- ####### is a sequential number
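Both the backup names (B####### suffix) and the export names (A####### suffix) can be matched programmatically, for example to locate archived datasets on DASD. A sketch, assuming each qualifier is a single level (all dataset names below are illustrative):

```python
import re

# Matches &rhilev.&midlev.&dsnlolev followed by A####### (export)
# or B####### (backup), per the naming conventions above.
PDS_NAME = re.compile(
    r"^(?P<rhilev>[A-Z0-9@#$]{1,8})\."
    r"(?P<midlev>[A-Z0-9@#$]{1,8})\."
    r"(?P<dsnlolev>[A-Z0-9@#$]{1,8})\."
    r"(?P<kind>[AB])(?P<seq>\d{7})$"
)

def parse_pds_name(dsname):
    """Split a CT/PDS export (A) or backup (B) dataset name into its
    qualifiers, or return None if the name does not match the convention."""
    m = PDS_NAME.match(dsname)
    return m.groupdict() if m else None
```

For example, `parse_pds_name("CANDLE.RTE1.GRP1.B0000012")` identifies the name as backup number 0000012 for low-level qualifier GRP1.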
Prerequisites
Before you begin to restore the connection between the archived dataset and the CMS, you will need the following information:
- the name of the archived dataset that contains the data you want to view (your systems programmer can help you locate the name of the dataset)
- the name of the CT/PDS group that corresponds to the data you want to view
Finding background information
You can use the CICAT to find the name of the CT/PDS group to which the archived dataset belongs by following this procedure:
1. Stop the CMS if it is running.
2. Log onto a TSO session and invoke ISPF.
3. At the ISPF Primary Option menu, enter 6 in the Option field to access the TSO command mode.
4. At the TSO command prompt, type:
EX 'shilev.INSTLIB'
where shilev is the high-level qualifier of the CICAT installation library at your site.
Result: CICAT first displays the copyright panel and then the CICAT Main Menu.
5. From the CICAT Main Menu, select Configure products and then Select product to configure.
6. From the Product Selection Menu, select the product.
7. On the Runtime Environments (RTEs) panel, specify C to select the RTE where the product you configured resides.
8. On the Configure product panel, select Configure persistent data store and then Modify and review data store specifications.
9. Locate the low-level qualifier of the dataset you want to reconnect and note the corresponding group name.
10. Press F3 until you exit CICAT.
Connecting the dataset to the CMSTo reconnect the archived dataset to the CMS so you can view the data from the product, follow this procedure:
1. If the dataset resides on tape, use a utility such as IEBGENER to copy the dataset to a DASD volume that is accessible by the CMS.
2. Copy job KPDCOMMJ from &hilev.TKANSAM to &rhilev.&midlev.RKANSAM.
3. Access job &rhilev.&midlev.RKANSAM(KPDCOMMJ) with any editor.
4. Substitute site-specific values for the variables in the job, as described in the comments at the beginning of the job. In addition to the comments in the job, you may find the following information helpful:
- Variable &GROUP on the COMM ADDFILE statement is the group name that you identified in “Finding background information” on page 120.
- Variable &PDSN on the COMM ADDFILE statement is the name of the dataset you want to reconnect.
5. Locate the COMM ADDFILE statement near the bottom of the job and remove the comment character (*).
6. Submit KPDCOMMJ to restore the connection between the dataset you specified and the CMS.
7. To verify that the job ran successfully, you can view a report in RKPDLOG that lists all the persistent data store datasets that are connected to the CMS. RKPDLOG is the ddname of a SYSOUT file allocated to the CMS. Locate the last ADDFILE statement in the log and examine the list of datasets that follows the statement. If the job ran successfully, the name of the dataset you reconnected will appear in the list.
Disconnecting the dataset
The dataset that you connected to the CMS is not permanently connected; the connection is removed automatically the next time the CMS terminates. If you wish to remove the dataset from the CT/PDS immediately after you view the data, follow this procedure:
1. Access job &rhilev.&midlev.RKANSAM(KPDCOMMJ) with any editor.
2. Retain all site-specific values that you entered when you modified the job to reconnect the dataset in the previous procedure.
3. Locate the COMM ADDFILE statement near the bottom of the job and perform the following steps, if needed:
A. Remove the comment character from the statement, if one exists.
B. Overtype the word ADDFILE with the word DELFILE.
C. Remove the Group parameter together with its value.
D. Remove the RO parameter if it exists.
4. Submit KPDCOMMJ to remove the connection between the dataset and the CMS.
To verify that the job ran successfully, you can view a report in RKPDLOG that lists all datasets connected to the CMS. Locate the last DELFILE statement in the log and examine the list of datasets that follows the statement. If the job ran successfully, the name of the dataset you disconnected will not appear in the list.
5. If the dataset resides on tape, you may want to conserve space by deleting the dataset from DASD.
Exporting and Restoring Persistent Data
Overview
In addition to the standard maintenance jobs used by the persistent data store, sample jobs are distributed with the CMS that you can use to export data to a sequential file and then restore the data to its original indexed format.
These jobs are not tailored by the CICAT at installation time and must be modified to add pertinent information.
Exporting persistent data
Follow this procedure to export persistent data to a sequential file:
1. Stop the CMS if it is running.
2. Copy &thilev.&midlev.RKANSAM(KPDEXPTJ).
3. Update the jobcard with the following values:

&rhilev   high-level qualifier of the runtime environment where the CT/PDS resides
&pdsn     fully qualified name of the CT/PDS dataset to be exported
&expdsn   fully qualified name of the export file you are creating
&unit2    DASD unit identifier for &expdsn
&ssz      record length of the output file (you can use the same record length as defined for &pdsn)
&sct      count of blocks to allocate (you can use the same size as the blocks allocated for &pdsn)
&bsz      the &ssz value plus eight

With the exception of &pdsn, these values can be found in the PDSLOG SYSOUT of the CMS started task.
4. Submit the job.
Restoring exported data
Follow this procedure to restore a previously exported CT/PDS dataset:
1. Copy &thilev.&midlev.RKANSAM(KPDRESTJ).
2. Update the jobcard with the following values:

&rhilev   high-level qualifier of the runtime environment where the CT/PDS resides
&pdsn     fully qualified name of the CT/PDS dataset to be restored
&expdsn   fully qualified name of the file you are creating
&unit2    DASD unit identifier for &expdsn
&group    identifier for the group that the dataset will belong to
&siz      size of the dataset to be allocated, in megabytes

With the exception of &pdsn, these values can be found in the PDSLOG SYSOUT of the CMS started task.
3. Submit the job.
Data Record Format of Exported Data
Overview
This section describes the format of the dictionary entries but not their contents. The actual meaning of the tables and columns is product-specific.
Due to the nature of the data being recorded, the format of a dataset is complex. A single dataset contains descriptions for every table that was recorded in the original dataset, so mapping information in the form of a data dictionary is provided for every table. In many cases, tables can have variable-length columns as well as rows of data in which some of the columns are not available. The information about missing columns and the lengths of variable columns is embedded in the data records. Some tables have columns that physically overlay each other; this must be taken into account when trying to obtain data for these overlays.
Data in the exported file is kept in internal format, which means that many of the fields are binary. The output file is made up of three sections, with one or more data rows within each.
- Section 1 describes general information about the data source used to create the exported data.
- Section 2 contains a dictionary needed to map out the data.
- Section 3 contains the actual data rows.
The historical data is maintained in relational tables, therefore the dictionary mappings provide table and column information for every table that had data recorded for it in the CT/PDS.
Section 1
The Section 1 record is not needed to map out the data within the exported file. However, it is useful for determining how to reallocate a dataset when a CT/PDS file needs to be reconstructed.
Section 1 contains a single data row used to describe information about the source of the data recorded in the export file. The data layout for the record is:
Table 9. Section 1 Data Record Format

Field            Offset  Length  Type    Description
RecID            0       4       Char    Record ID. Contains AA10 for header record 1.
Length           4       4       Binary  Record length of the header record.
Timestamp        8       16      Char    Timestamp of export. Format: CYYMMDDHHMMSSMMM
Group            24      8       Char    Group name to which the data belongs.
Data Store Ver   32      8       Char    Version of KPDMANE used to record original data.
Export Version   40      8       Char    Version of KPDARCH used to create exported file.
Total Slots      48      4       Binary  Number of blocks allocated in original dataset.
Used Slots       52      4       Binary  Number of used blocks at time of export.
Slot Size        56      4       Binary  Block size of original dataset.
Expansion Area   60      20      ---     Unused area.
Data Store Path  80      256     Char    Name of originating dataset.
Export Path      336     256     Char    Name of exported dataset.

Section 2 Records
Section 2 provides information about the tables and columns that are represented in Section 3. This section has a header record followed by a number of table and column description records.
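The fixed offsets in Table 9 map naturally onto a record parser. A sketch in Python, with assumptions stated up front: binary fields are treated as big-endian 4-byte integers (typical on z/OS), character fields as EBCDIC code page 500, and the dictionary keys are names chosen here, not defined by the manual.

```python
import struct

def parse_section1(rec: bytes) -> dict:
    """Parse a Section 1 header record using the offsets in Table 9.
    Assumes big-endian binary fields and EBCDIC (cp500) text fields."""
    def text(off, ln):
        return rec[off:off + ln].decode("cp500").rstrip()
    def u32(off):
        return struct.unpack(">I", rec[off:off + 4])[0]
    return {
        "rec_id": text(0, 4),            # should be "AA10"
        "length": u32(4),
        "timestamp": text(8, 16),        # CYYMMDDHHMMSSMMM
        "group": text(24, 8),
        "data_store_ver": text(32, 8),
        "export_ver": text(40, 8),
        "total_slots": u32(48),
        "used_slots": u32(52),
        "slot_size": u32(56),
        # bytes 60-79: unused expansion area
        "data_store_path": text(80, 256),
        "export_path": text(336, 256),
    }
```

The record is 592 bytes long in total (the Export Path field ends at offset 336 + 256).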
Dictionary Header Record
This is the first Section 2 record (and therefore the second record in the dataset). It provides general information about the format of the dictionary records that follow. It is used to describe how many tables are defined in the dictionary section. The data layout for the dictionary header record is:
Table 10. Section 2 Data Record Format

Field           Offset  Length  Type    Description
RecID           0       4       Char    Record ID. Contains DD10 for header record 2.
Dictionary Len  4       4       Binary  Length of the entire dictionary.
Header Len      8       4       Binary  Length of the header record.
Table Count     12      4       Binary  Number of tables in dictionary (1 record per table).
Column Count    16      4       Binary  Total number of columns described.
Table Row Len   20      4       Binary  Size of table row.
Col Row Len     24      4       Binary  Size of column row.
Expansion       28      28      ---     Unused area.

Table description record
Each table within the exported dataset has a table record that provides its name, identifier, and additional information about the columns. All table records are provided before the first column record. The column records and all of the data records in Section 3 use the identifier number to associate them with the appropriate table.
The map length and variable column count fields can be used to determine exactly where the data for each column starts and to determine properly whether the column exists in a record. The format of the table description record is:

Table 11. Section 2 Table Description Record

Field           Offset  Length  Type    Description
RecID           0       4       Char    Record ID. Contains DD20 for table record.
Identifier Num  4       4       Binary  Unique number for this table.
Application     8       8       Char    Application name the table belongs to.
Table Name      16      10      Char    Table name.
Table Version   26      8       Char    Table version.
Map Length      34      2       Binary  Length of the mapping area.
Column Count    16      4       Binary  Count of columns in the table.
Variable Cols   36      4       Binary  Count of variable-length columns.
Row Count       40      4       Binary  Number of rows in exported file for this table.
Oldest Row      44      16      Char    Timestamp for oldest row written for this table.
Newest Row      64      16      Char    Timestamp for newest row written for this table.
Expansion       80      16      ---     Unused area.

Column description record
One record exists for every column in the associated table record. Each record provides the column name, type, and other characteristics. The order of the column rows is the same order in which the columns appear in the output row. However, some columns may be missing on any given row; the mapping structure defined under Section 3 must be used to determine whether a column is present.
The format of the column records is:

Table 12. Section 2 Column Description Record

Field            Offset  Length  Type    Description
RecID            0       4       Char    Record ID. Contains DD30 for column record.
Table Ident      4       4       Binary  Identifier for the table this column belongs to.
Column Name      8       10      Char    Column name.
SQL Type         18      2       Char    SQL type for column.
Column Length    20      4       Binary  Maximum length of this column's data.
Flag             24      1       Binary  Flag byte.
Spare            25      1       ---     Unused.
Overlay Col ID   26      2       Char    Column number if this is an overlay.
Overlay Col Off  28      2       Char    Offset into row for start of overlay column.
Alignment        30      2       ---     Unused.
Spare 1          32      8       ---     Unused.

Section 3 records
Section 3 has one record for every row of every table that was in the original CT/PDS dataset being exported. Each row starts with a fixed portion followed by the actual data associated with the row. The length of the column map can be obtained from the table record (DD20). Each bit in the map represents one column: a 0 in a bit position indicates that the column data is not present, while a 1 indicates that data exists in this row for the column. Immediately following the column map field is an unaligned set of 2-byte length fields; one of these length fields exists for every variable-length column in the table. This mapping information must be used to determine the starting location of any given column within the data structure. The actual data starts immediately after the last length field.
If dealing with overlay columns, use the column offset defined in the DD30 records to determine the starting location for this type of column. Typically, overlaid columns are not a concern with extracted data. If you have a real need to look at the actual content of an overlaid column, you will need to expand the data by re-inserting any missing columns and expanding all variable-length columns to their maximum length before doing the mapping.
The table that follows maps the fixed portion of the data.
Table 13. Section 3 Record Format

Field        Offset  Length  Type    Description
RecID        0       4       Char    Record ID. Contains ROW1 for data record.
Table Ident  4       4       Binary  Identifier for the table this record belongs to.
Row Length   8       4       Binary  Total length of this row.
Data Offset  12      4       Binary  Offset to start of data.
Data Length  16      4       Binary  Length of data portion of row.
Column Map   20      Varies  Binary  Column available map plus variable length fields.
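The bitmap-plus-lengths scheme described for Section 3 can be decoded with two small helpers. This is an illustrative sketch: it assumes the most significant bit of the first map byte corresponds to column 0, and that the 2-byte length fields are big-endian halfwords; neither assumption is stated explicitly in the record layout above.

```python
def present_columns(column_map: bytes, ncols: int) -> list:
    """Indices of columns present in a Section 3 row.
    Bit = 1 means data exists for that column; bit 0 is assumed to be
    the most significant bit of the first byte."""
    present = []
    for i in range(ncols):
        byte, bit = divmod(i, 8)
        if column_map[byte] & (0x80 >> bit):
            present.append(i)
    return present

def variable_lengths(row: bytes, offset: int, nvar: int) -> list:
    """Read the unaligned 2-byte length fields that follow the column map,
    one per variable-length column (big-endian halfwords assumed)."""
    return [int.from_bytes(row[offset + 2 * i:offset + 2 * i + 2], "big")
            for i in range(nvar)]
```

For example, a one-byte map of B'10100000' over five columns marks columns 0 and 2 as present; the actual column data then starts immediately after the last length field.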
Extracting CT/PDS Data to Flat Files
Overview
This topic explains how to extract data from a CT/PDS dataset into a flat file in EBCDIC format. This information can be loaded into spreadsheets or databases.
The data is converted to tab-delimited columns and written to separate files for each table, so the data format for all rows in each file is consistent. The program also generates a separate format file containing a single row that provides the column names in the order in which the data is organized; this file is also delimited for ease of use. An option (NOFF) on the KPDXTRA program bypasses creating the separate file and places the column information as the first record of the data file.
This job is not tailored by the CICAT at installation time and must be modified to add pertinent information.
The output from this job is written to files with the following naming standard:
&pref.xymmdd.tablename
where:
- &pref is the high-level qualifier that you designate for the output files
- x is D for data output or F for format output
- ymmdd is the year (y), month (mm), and day (dd) on which the KPDXTRA job is run
- tablename is the identifier for the table being extracted. It is recommended that this name be no more than eight characters.
If this job is run more than once on a given day, data is appended to any data previously extracted for that day.
In Version 300 and later, all datasets are kept in read/write state even if they are not active. This makes the datasets unavailable to other jobs while the CMS is running: jobs cannot be run against the active datasets, and the inactive datasets must be taken offline first.
You can dynamically remove a dataset from the CMS by issuing the modify command:
F stcname,KPDCMD QUIESCE FILE=DSN:dataset
If you must run a utility program against an active data store, issue a SWITCH command prior to issuing this QUIESCE command.
Extracting CT/PDS data to EBCDIC files
Use this job to extract CT/PDS data to EBCDIC files.
1. Copy &thilev.&midlev.RKANSAM(KPDXTRAJ).
2. Update the jobcard with the following values:

&rhilev   high-level qualifier of the runtime environment where the CT/PDS resides
&pdsn     fully qualified name of the CT/PDS dataset to be extracted
&pref     high-level qualifier for the extracted data

3. Add the parameters you want to use for this job:

PREF=     identifies the high-level qualifier for the output file. This field is required.
DELIM=nn  identifies the separator character to be placed between columns. The default is 05.
NOFF=     if used, causes the format file not to be generated. The column names are placed into the data file as the first record.
QUOTES    use to place quotes around character-type data.

4. Submit the job.
Extracted data format
Header Record
The following is a sample extract header file record:
TMZDIFF(int,0,4) WRITETIME(char,1,16) ORIGINNODE(char,2,128) QMNAME(char,3,48) APPLID(char,4,12) APPLTYPE(int,5,4) SDATE_TIME(char,6,16) HOST_NAME(char,7,48) CNTTRANPGM(int,8,4) MSGSPUT(int,9,4) MSGSREAD(int,10,4) MSGSBROWSD(int,11,4) INSIZEAVG(int,12,4) OUTSIZEAVG(int,13,4) AVGMQTIME(int,14,4) AVGAPPTIME(int,15,4) COUNTOFQS(int,16,4) AVGMQGTIME(int,17,4) AVGMQPTIME(int,18,4) DEFSTATE(int,19,4) INT_TIME(int,20,4) INT_TIMEC(char,21,8) CNTTASKID(int,22,4) SAMPLES(int,23,4) INTERVAL(int,24,4)
Each field is separated by a tab character (by default). The data consists of the column name with a type, column number, and column length field within the parentheses for each column. The information within the parentheses is used primarily to describe internal formatting and can therefore be ignored.
Data Record
Each record in the data file for the above header contains data that looks like the following:
0 "1000104003057000" "MQM7:SYSG:MQESA" "MQM7" "XCXS2DPL" 2 "1000104003057434" "SYSG" 1 0 0 0 0 0 2 90007 0 2 0 1 96056 "016: 01" 1 1 900
Using the header file, the fields in the data record match up as follows:

TMZDIFF      0                    Integer
WRITETIME    "1000104003057000"   Character
ORIGINNODE   "MQM7:SYSG:MQESA"    Character
QMNAME       "MQM7"               Character
…            …                    …
SAMPLES      1                    Integer
INTERVAL     900                  Integer
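Because the extract output is delimited and the format file is a single row of name(type,colnum,len) entries such as TMZDIFF(int,0,4), the two files can be paired with ordinary tooling. A hypothetical sketch (the function name is an invention of this example; it assumes the default tab delimiter and leaves quoted character values exactly as written):

```python
import re

FIELD = re.compile(r"(\w+)\((\w+),(\d+),(\d+)\)")  # e.g. TMZDIFF(int,0,4)

def load_extract(format_path: str, data_path: str, delim: str = "\t"):
    """Pair the column names from a KPDXTRA format file with each
    delimited record in the corresponding data file."""
    with open(format_path) as f:
        names = [m.group(1) for m in FIELD.finditer(f.read())]
    rows = []
    with open(data_path) as f:
        for line in f:
            values = line.rstrip("\n").split(delim)
            rows.append(dict(zip(names, values)))
    return rows
```

Each returned row is a dictionary keyed by column name, so individual fields such as QMNAME or INTERVAL can be looked up directly.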
Command Interface
Overview
The CT/PDS uses a command interface to perform many of the tasks needed to maintain the datasets used for historical data. Most of these commands can be invoked externally through a command interface supported in the Engine environment. These commands can be executed using the standard MVS MODIFY interface with the following format:
F stcname,KPDCMD command arguments
where
stcname Started task name of the address space where the CT/PDS is running.
command One of the supported dynamic commands.
arguments Valid arguments to the specified command.
Commands
Many commands are supported by the CT/PDS. The commands described below are used primarily for maintenance.
SWITCH command
This dynamic command causes a data store file switch for a specific file group. At any given time, update-type operations against tables in a particular group are directed to one and only one of the files in the group. That one file is called the "active" file. A file switch changes the active file for a group. In other words, the switch causes a file other than the currently active one to become the new active file.
If the group specified by this command has only one file, or the group currently has no inactive file that is eligible for output, the switch is not performed.
At the conclusion of a switch, CT/PDS starts the maintenance process for a file in the group if no empty files remain in the group.
The [NO]EXTRACT keyword may be used to force or suppress an extract job for the data store file deactivated by the switch.
Syntax:
SWITCH GROUP=groupid [ EXTRACT | NOEXTRACT ]
where
groupid Specifies the id of the file group that is to be switched. The group must have multiple files assigned to it.
EXTRACT: Specifies that the deactivated data store file should be extracted, even if the file’s GROUP statement did not request extraction.
NOEXTRACT: Specifies that extraction should not be performed for the deactivated data store file. This option overrides the EXTRACT keyword of the GROUP statement.
Note that if neither EXTRACT nor NOEXTRACT is specified, the presence or absence of the EXTRACT keyword on the file’s GROUP statement determines whether extraction is performed as part of the switch.
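For example, a hypothetical operator invocation from the MVS console, assuming a started task named CANSDSST and a file group named OMMQ (both names are placeholders for site-specific values):

```
F CANSDSST,KPDCMD SWITCH GROUP=OMMQ EXTRACT
```

This switches the active file for group OMMQ and forces an extract of the file being deactivated, regardless of the EXTRACT keyword on the file's GROUP statement.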
BACKUP
This command causes a maintenance task to be started for the data store file named on the command. The maintenance task typically deletes, allocates and initializes a data store file, optionally backing up or exporting the file before deleting it. (The optional export and backup steps are requested via parameters on the data store file’s GROUP command in the RKPDIN file.)
Syntax:
BACKUP FILE=DSN:dsname
where
dsname: Specifies the physical dataset name of the file that is to be maintained.
ADDFILE command
This command is used to dynamically assign a new physical data store file to an existing file group. The command can be issued any time after the CT/PDS initialization has completed in the CMS. It can be used to increase the number of files assigned to a group or to bring old data back online. It cannot, however, be used to define a new file group id. It may be used to add files only to groups that already exist as the result of GROUP commands in the RKPDIN input file.
Syntax:
ADDFILE GROUP=groupid FILE=DSN:dsname [ RO ] [ BACKUP ] [ ARCHIVE ]
where
groupid: Specifies the unique group id of the file group to which a file is to be added.
dsname: Specifies the fully-qualified name (no quotes) of the physical dataset that is to be added to the group specified by groupid.
RO: Specifies that the file is to be read-only (that is, that no new data may be recorded to it). By default, files are not read-only (that is, they are modifiable). This parameter may also be specified as READONLY.
BACKUP: Specifies that the file is to be copied to disk or tape before being reallocated by the automatic maintenance task. (Whether the copy is to disk or tape is a maintenance process customization option.) By default, files are not backed up during maintenance.
ARCHIVE: Specifies that the file is to be exported before being reallocated by the automatic maintenance task. By default, files are not exported during maintenance.
DELFILE command
This command is used to drop one physical data store file from a file group’s queue of files. It can be issued any time after CT/PDS initialization has completed in the CMS.
The file to be dropped must be full, partially full, or empty; it cannot be the "active" (output) file for its group (if it is, the DELFILE command will be rejected as invalid).
The DELFILE command is conceptually the opposite of the ADDFILE command, and is intended to be used to manually drop a file that was originally introduced by a GROUP or ADDFILE command. Once a file has been dropped by DELFILE, it is no longer allocated to the CMS task and may be allocated by other tasks. Note that DELFILE does not physically delete a file or alter it in any way. To physically delete and un-catalog a file, use the REMOVE command.
Syntax:
DELFILE FILE=DSN:dsname
where
dsname: Specifies the fully-qualified (without quotes) name of the file that is to be dropped.
EXTRACT command
This command causes an extract job to be started for the data store file named on the command. The job converts the table data in the data store file to delimited text format in new files, then signals the originating CMS to resume use of the data store file.
For each table extracted from the data store file, two new files are created. One file contains the converted data and one file contains a record describing the format of each row in the first file.
Syntax:
EXTRACT FILE=DSN:dsname
where
dsname: Specifies the physical dataset name of the file to have its data extracted.
INITDS command
This command forces a data store file to be initialized within the address space where the CT/PDS is running.
Syntax:
INITDS FILE=DSN:dsname
where
dsname: Identifies the data set name of the data store file to be initialized.
RECOVER command
This command causes a recovery task to be started for the data store file named on the command. The recovery task attempts to repair a corrupted data store file by exporting it, reallocating and initializing it, and restoring it. The restore operation rebuilds the index information, the data most likely to be corrupted in a damaged file. The recovery is not guaranteed to be successful, however; some severe forms of data corruption are unrecoverable.
Syntax:
RECOVER FILE=DSN:dsname
where
dsname: Specifies the physical name of the dataset to be recovered.
RESUME command
The RESUME command notifies the CT/PDS that it can once again make use of the dataset specified in the arguments. The file identified must be one that was taken offline by the BACKUP, RECOVER, or EXTRACT commands.
Syntax:
RESUME FILE=DSN:dsname
where
dsname: Specifies the physical name of the dataset to be brought online.
Other Useful Commands
QUERY CONNECT command
The QUERY CONNECT command displays a list of applications and tables that are currently defined in the CT/PDS. The output of this command shows the application names, table names, total number of rows recorded for each table, the group the table belongs to, and the current dataset that the data is being written to.
Syntax:
QUERY CONNECT [ ACTIVE ]
where
ACTIVE - Optional parameter that only displays those tables that are active. An active table is one that has been defined and assigned to an existing group, and the group has datasets assigned to it.
QUERY DATASTORE command
The QUERY DATASTORE command displays a list of datasets known to the CT/PDS. For each dataset, the total number of allocated blocks, the number of used blocks, the number of tables that have data recorded, the block size, and status are displayed.
Syntax:
QUERY DATASTORE [ FILE=DSN:datasetname ]
where
FILE - Optional parameter that allows you to specify that you are only interested in the details for a single dataset. When this option is used, the resulting display is changed to show information that is specific to the tables being recorded in the dataset.
COMMIT command
This dynamic command flushes to disk all pending buffered data. For performance reasons, CT/PDS does not immediately write to disk every update to a persistent table. Updates are buffered in virtual storage. Eventually the buffered updates are "flushed" (that is, written to disk) at an optimal time. However, this architecture makes it possible for persistent data store files to become "corrupted" (invalid) if the files are closed prematurely, before pending buffered updates have been flushed. Such premature closings may leave inconsistent information in the files.
The known circumstances that may cause corruption are:
n Severe abnormal CMS terminations that prevent the CT/PDS recovery routines from executing
n IPLs performed without first stopping the CMS
The COMMIT command is intended to limit the exposure to data store file corruption. Some applications automatically issue this command after inserting data.
Syntax:
COMMIT
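The buffering behavior described above can be pictured with a generic sketch. This is not CT/PDS code; the class and method names are purely illustrative. Updates accumulate in memory and reach disk only when a flush occurs, so an abnormal end before the flush loses the unflushed portion.

```python
class BufferedStore:
    """Generic write-buffered store: updates sit in memory until flushed."""

    def __init__(self):
        self.on_disk = []   # records safely written out
        self.pending = []   # buffered updates not yet on disk

    def insert(self, record):
        self.pending.append(record)   # fast: memory only

    def commit(self):
        """Analogous to the COMMIT command: flush all pending updates."""
        self.on_disk.extend(self.pending)
        self.pending.clear()

store = BufferedStore()
store.insert("row-1")
store.insert("row-2")
# Before the commit, nothing is on disk; a crash here would lose both rows.
store.commit()
# After the commit, both rows are safely on disk and the buffer is empty.
```

The COMMIT command plays the role of the explicit flush: it narrows the window during which buffered-but-unwritten updates exist.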
Disk Space Requirements for Historical Data Tables 141
Disk Space Requirements for Historical Data Tables

Introduction

The installation manual for your product(s) provides the basic disk space requirements for the CMS, the Candle Management Workstation (CMW), and CandleNet Portal. These basic requirements do not include the additional space needed for maintaining historical data files. Because client distributed systems vary in system size, number of managed systems, and so on, it is difficult to state the exact additional disk space needed for historical data collection. This chapter provides the system administrator with basic record sizes for each of the tables from which historical data is collected.
Chapter Contents
Historical Data Tables . . . 143
OMEGAMON XE for CICS . . . 144
OMEGAMON XE for DB2 . . . 162
OMEGAMON XE for DB2 Universal Database . . . 170
OMEGAMON XE for NetWare . . . 175
OMEGAMON XE for ORACLE . . . 179
OMEGAMON XE for Sybase . . . 192
OMEGAMON XE for MS SQL Server . . . 207
OMEGAMON XE for OS/390 . . . 216
OMEGAMON XE for OS/390 UNIX System Services . . . 226
OMEGAMON XE for OS/400 . . . 232
OMEGAMON XE for R/3™ . . . 242
B
OMEGAMON XE for Sysplex . . . 246
OMEGAMON XE for Tuxedo . . . 258
OMEGAMON XE for UNIX Systems . . . 269
OMEGAMON XE for WebSphere Application Server . . . 274
OMEGAMON XE for WebSphere Application Server for OS/390 . . . 285
OMEGAMON XE for WebSphere Integration Brokers . . . 301
OMEGAMON XE for WebSphere MQ Configuration . . . 314
OMEGAMON XE for WebSphere MQ Monitoring . . . 316
OMEGAMON XE for Windows Servers . . . 329
Historical Data Tables
Why collect historical data and what are the space requirements?

Historical data collection works with current Candle monitoring agents, such as those for UNIX, Windows, NetWare, and OS/400, and for database systems such as ORACLE, Sybase, and MS SQL Server. You can collect historical data for the most common performance attribute tables for each Candle product. In this chapter, the following information is provided for each performance attribute table:
Worksheets are provided to permit you to estimate space requirements for each of the Performance Attribute Tables. Summary sheets permit you to estimate the total historical data space requirements for your installed product(s).
Table 14. Contents of the performance attribute tables
Column Use
Attribute History Table Name of the table in which historical data is stored
Filename For ... Name of the files corresponding to the named Attribute History Table
Default HDC Table Specifies whether or not the table is a default historical data table
Estimated Space The estimated space required per managed system per 24-hour period for the default file collection option
OMEGAMON XE for CICS
OMEGAMON XE for CICS historical data tables

In the table that follows, the estimated space is zero bytes per 24-hour period because history collection is switched off by default for these tables.

Note: The history tables that follow also apply to the OMEGAMON XE for CICSplex product.
Table 15. OMEGAMON XE for CICS historical data tables
Attribute History Table
Filename for Historical Data
Default HDC Table
Estimated Space Required per managed system per 24 hours
Bottleneck Analysis CICSBNA No
Connection Analysis CON No
DBCTL Summary CICSDLS No
DB2 Summary CICSD2S No
DB2 Task Activity CICSD2T No
Dump Analysis CICSDAT No
Enqueue Analysis CICSNQA No
File Control Analysis CICSFCA No
Intercommunication Summary CICSICO No
Internet Status CICSIST No
Journal Analysis CICSJAT No
Log Stream Analysis CICSLSA No
LSR Pool Status CICSLPS No
MQ Connection Details MQCONN No
Region Overview CICSROV No
Response Time Elements CICSRTE No
Response Time Analysis CICSRTS No
RLS Lock Analysis RLS No
Storage Analysis CICSSTOR No
System Initialization Table CICSSIA No
Task Class Analysis CICSTCA No
Temporary Storage Detail CICSTSD No
Temporary Storage Summary CICSTSS No
Terminal Storage Violations CICSTSV No
Transaction Analysis TRAN No
Transaction Storage Violations CICSXSV No
Transient Data Queues CICSTDQ No
Transient Data Summary CICSTDS No
UOW Analysis CICSUWA No
UOW Enqueue Analysis CICSUWE No
VSAM Analysis VSAM No
Total Default Space 0 Kilobytes

OMEGAMON XE for CICS table record sizes

Table 16. OMEGAMON XE for CICS table record sizes
History Table Record Size Frequency
Bottleneck Analysis 205 bytes 1 record per detected wait reason per interval
Connection Analysis 247 bytes 1 record per connected system per interval
DBCTL Summary 83 bytes 1 record per interval
DB2 Summary 78 bytes 1 record per interval
DB2 Task Activity 86 bytes 1 record per DB2 transaction per interval
Dump Analysis 93 bytes 1 record per interval
Enqueue Analysis 873 bytes 1 record per enqueue resource per interval
File Control Analysis 81 bytes 1 record per interval
Intercommunication Summary 126 bytes 1 record per interval
Internet Status 76 bytes 1 record per interval
Journal Analysis 110 bytes 1 record per journal per interval
Log Stream Analysis 182 bytes 1 record per log stream per interval
LSR Pool Status 88 bytes 8 records per interval
MQ Connection Details 206 bytes 1 record per interval
Region Overview 96 bytes 1 record per interval
Response Time Elements 125 bytes 1 record per active element per interval
Response Time Analysis 128 bytes 1 record per active group per interval
RLS Lock Analysis 408 bytes 1 record per queued task per interval
Storage Analysis 92 bytes 10 records per interval
System Initialization Table 147 bytes 1 record per keyword per interval
Task Class Analysis 108 bytes 1 record per class name per interval
Temporary Storage Detail 167 bytes 1 record per TS queue per interval
Temporary Storage Summary 106 bytes 1 record per interval
Terminal Storage Violations 80 bytes 1 record per terminal identifier having storage violations per interval
Transaction Analysis 175 bytes 1 record per transaction per interval
Transaction Storage Violations 80 bytes 1 record per transaction identifier having storage violations per interval
Transient Data Queues 86 bytes 1 record per TD queue per interval
Transient Data Summary 90 bytes 1 record per interval
UOW Analysis 84 bytes 1 record per interval
UOW Enqueue Analysis 102 bytes 1 record per unit-of-work per interval
VSAM Analysis 220 bytes 1 record per VSAM file per interval
OMEGAMON XE for CICS space requirement worksheets

Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each historical data collection table, using the minimum collection interval unit of 15 minutes.

Note: The space requirements assume that only one CICS region is being monitored. If historical data collection is started against, for example, 10 CICS regions, multiply the expected file size by 10.
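The worksheet formula, (60 / interval in minutes) x 24 hours x record size x number of records per interval, divided by 1024, can be sketched as a small helper. The function name and default values are our own, introduced here for illustration; they are not part of the product.

```python
def expected_kb_per_day(record_size, records_per_interval=1,
                        interval_min=15, regions=1):
    """Expected historical file size in kilobytes per 24 hours."""
    intervals_per_day = (60 // interval_min) * 24   # 96 at 15 minutes
    daily_bytes = intervals_per_day * record_size * records_per_interval
    return daily_bytes * regions / 1024

# Bottleneck Analysis example: 205-byte records, 20 wait reasons
print(round(expected_kb_per_day(205, 20)))  # 384, matching Table 17
```

Note that the printed worksheets round the result to the nearest kilobyte, and some entries round up; treat small differences as rounding, not error.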
Table 17. Bottleneck Analysis (CICSBNA) worksheet
Interval Record Size
No. of Wait Reasons
Formula Expected File Size per 24 Hours
15 min. 205 bytes 20 (60/15 x 24 x 205 x 20) / 1024
384 kilobytes
Table 18. Connection Analysis (CON) worksheet
Interval Record Size
No. of Connected Systems
Formula Expected File Size per 24 Hours
15 min. 201 bytes 20 (60/15 x 24 x 201 x 20) / 1024
377 kilobytes
Table 19. DBCTL Summary (CICSDLS) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 86 bytes (60/15 x 24 x 86) / 1024 8 kilobytes
Table 20. DB2 Summary (CICSD2S) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 78 bytes (60/15 x 24 x 78) / 1024 8 kilobytes
Table 21. DB2 Task Activity (CICSD2T) worksheet
Interval Record Size
No. of DB2 Transactions
Formula Expected File Size per 24 Hours
15 min. 88 bytes 10 (60/15 x 24 x 88 x 10) / 1024
83 kilobytes
Table 22. Dump Analysis (CICSDAT) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 93 bytes (60/15 x 24 x 93) / 1024 9 kilobytes
Table 23. Enqueue Analysis (CICSNQA) worksheet
Interval Record Size
No. of Enqueue
Resources
Formula Expected File Size per 24 Hours
15 min. 873 bytes 10 (60/15 x 24 x 873 x 10) / 1024
819 kilobytes
Table 24. File Control Analysis (CICSFCA) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 81 bytes (60/15 x 24 x 81) / 1024 8 kilobytes
Table 25. Intercommunication Summary (CICSICO) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 126 bytes (60/15 x 24 x 126) / 1024 12 kilobytes
Table 26. Internet Status (CICSIST) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 76 bytes (60/15 x 24 x 76) / 1024 8 kilobytes
Table 27. Journal Analysis (CICSJAT) worksheet
Interval Record Size
No. of Journals
Formula Expected File Size per 24 Hours
15 min. 131 bytes 3 (60/15 x 24 x 131 x 3) / 1024 37 kilobytes
Table 28. Log Stream Analysis (CICSLSA) worksheet
Interval Record Size
No. of Log Streams
Formula Expected File Size per 24 Hours
15 min. 182 bytes 3 (60/15 x 24 x 182 x 3) / 1024 51 kilobytes
Table 29. LSR Pool Status (CICSLPS) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 88 bytes (60/15 x 24 x 88 x 8) / 1024 66 kilobytes
Table 30. MQ Connection Details (MQCONN) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 135 bytes (60/15 x 24 x 135) / 1024 13 kilobytes
Table 31. Region Overview (CICSROV) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 148 bytes (60/15 x 24 x 148) / 1024 14 kilobytes
Table 32. Response Time Elements (CICSRTE) worksheet
Interval Record Size
No. of Elements
Formula Expected File Size per 24 Hours
15 min. 125 bytes 10 (60/15 x 24 x 125 x 10) / 1024 117 kilobytes
Table 33. Response Time Analysis (CICSRTS) worksheet
Interval Record Size
No. of Groups
Formula Expected File Size per 24 Hours
15 min. 128 bytes 5 (60/15 x 24 x 128 x 5) / 1024 60 kilobytes
Table 34. RLS Lock Analysis (RLS) worksheet
Interval Record Size
No. of Queued Tasks
Formula Expected File Size per 24 Hours
15 min. 408 bytes 5 (60/15 x 24 x 408 x 5) / 1024 191 kilobytes
Table 35. Storage Analysis (CICSSTOR) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 92 bytes (60/15 x 24 x 92 x 10) / 1024 87 kilobytes
Table 36. System Initialization Table (CICSSIA) worksheet
Interval Record Size
No. of Keywords
Formula Expected File Size per 24 Hours
15 min. 147 bytes 75 (60/15 x 24 x 147 x 75) / 1024 1,034 kilobytes
Table 37. Task Class Analysis (CICSTCA) worksheet
Interval Record Size
No. of Class
Names
Formula Expected File Size per 24 Hours
15 min. 108 bytes 15 (60/15 x 24 x 108 x 15) / 1024
152 kilobytes
Table 38. Temporary Storage Detail (CICSTSD) worksheet
Interval Record Size
No. of TS Queues
Formula Expected File Size per 24 Hours
15 min. 167 bytes 20 (60/15 x 24 x 167 x 20) / 1024 313 kilobytes
Table 39. Temporary Storage Summary (CICSTSS) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 114 bytes (60/15 x 24 x 114) / 1024 11 kilobytes
Table 40. Terminal Storage Violations (CICSTSV) worksheet
Interval Record Size
No. of Terminal IDs with
Violations
Formula Expected File Size per 24 Hours
15 min. 80 bytes 3 (60/15 x 24 x 80 x 3) / 1024 23 kilobytes
Table 41. Transaction Analysis (TRAN) worksheet
Interval Record Size
No. of Transactions
Formula Expected File Size per 24 Hours
15 min. 213 bytes 50 (60/15 x 24 x 213 x 50) / 1024
998 kilobytes
Table 42. Transaction Storage Violations (CICSXSV) worksheet
Interval Record Size
No. of Transaction
IDs with Violations
Formula Expected File Size per 24 Hours
15 min. 80 bytes 3 (60/15 x 24 x 80 x 3) / 1024 23 kilobytes
Table 43. Transient Data Queues (CICSTDQ) worksheet
Interval Record Size
No. of TD Queues
Formula Expected File Size per 24 Hours
15 min. 86 bytes 10 (60/15 x 24 x 86 x 10) / 1024 81 kilobytes
Table 44. Transient Data Summary (CICSTDS) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 90 bytes (60/15 x 24 x 90) / 1024 9 kilobytes
Table 45. UOW Analysis (CICSUWA) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 84 bytes (60/15 x 24 x 84) / 1024 8 kilobytes
Table 46. UOW Enqueue Analysis (CICSUWE) worksheet
Interval Record Size
No. of Units of
Work
Formula Expected File Size per 24 Hours
15 min. 102 bytes 50 (60/15 x 24 x 102 x 50) / 1024
479 kilobytes
Table 47. VSAM Analysis (VSAM) worksheet
Interval Record Size
No. of VSAM Files
Formula Expected File Size per 24 Hours
15 min. 220 bytes 50 (60/15 x 24 x 220 x 50) / 1024
1,032 kilobytes
OMEGAMON XE for CICS disk space summary

The worksheet examples use the minimum collection interval unit of 15 minutes. Creating a summary table provides a representative disk space figure for all of the history files and archived files over a one-week period, if all collection is done on the CMS. The table below summarizes the OMEGAMON XE for CICS sample data above.
Table 48. OMEGAMON XE for CICS disk space summary
History Table Historical Data Table Size in
kilobytes (24 hours)
No. of Archives
Subtotal Space Required (kilobytes)
Bottleneck Analysis 384 7 2,688
Connection Analysis 377 7 2,639
DBCTL Summary 8 7 56
DB2 Summary 8 7 56
DB2 Task Activity 83 7 581
Dump Analysis 9 7 63
Enqueue Analysis 819 7 5,733
File Control Analysis 8 7 56
Intercommunication Summary 12 7 84
Internet Status 8 7 56
Journal Analysis 37 7 259
Log Stream Analysis 51 7 357
LSR Pool Status 66 7 462
MQ Connection Details 13 7 91
Region Overview 14 7 98
Response Time Elements 117 7 819
Response Time Analysis 60 7 420
RLS Lock Analysis 191 7 1,337
Storage Analysis 87 7 609
System Initialization Table 1,034 7 7,238
Task Class Analysis 152 7 1,064
Temporary Storage Detail 313 7 2,191
Temporary Storage Summary 11 7 77
Terminal Storage Violations 23 7 161
Transaction Analysis 998 7 6,986
Transaction Storage Violations 23 7 161
Transient Data Queues 81 7 567
Transient Data Summary 9 7 63
UOW Analysis 8 7 56
UOW Enqueue Analysis 479 7 3,353
VSAM Analysis 1,032 7 7,224
Total Default Space 45,605 Kilobytes
OMEGAMON XE for CICS disk space summary worksheet

You can create a summary table that provides a representative disk space figure for all of the history files and archived files over a one-week period, if all collection is done on the CMS. To do so, multiply the expected file size per 24 hours by seven.

The space calculation assumes that only one CICS region is being monitored. If historical data collection is started against, for example, 10 CICS regions, multiply the space required by 10.

We recommend that you spread the disk space requirements among the systems where data collection is performed. For example, three historical tables might be collected on the CMS and others at the remote managed system. A disk space summary worksheet for OMEGAMON XE for CICS follows.
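The summary computation (daily kilobytes x number of archives, here seven, x number of monitored regions) can be sketched as follows; the helper name and parameters are illustrative only.

```python
def weekly_space_kb(daily_kb, archives=7, regions=1):
    """Subtotal space for one history table over a week of archives."""
    return daily_kb * archives * regions

# Transaction Analysis from the summary table: 998 KB/day, 7 archives
print(weekly_space_kb(998))              # 6986, as in Table 48
# Monitoring 10 CICS regions multiplies the requirement by 10
print(weekly_space_kb(998, regions=10))  # 69860
```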
Table 49. OMEGAMON XE for CICS disk space summary worksheet
History Table Historical DataTable Size (kilobytes)
(24 hours)
No. of Archives
Subtotal Space Required
(kilobytes)
Bottleneck Analysis
Connection Analysis
DBCTL Summary
DB2 Summary
DB2 Task Activity
Dump Analysis
Enqueue Analysis
File Control Analysis
Intercommunication Summary
Internet Status
Journal Analysis
Log Stream Analysis
LSR Pool Status
MQ Connection Details
Region Overview
Response Time Elements
Response Time Analysis
RLS Lock Analysis
Storage Analysis
System Initialization Table
Task Class Analysis
Temporary Storage Detail
Temporary Storage Summary
Terminal Storage Violations
Transaction Analysis
Transaction Storage Violations
UOW Analysis
UOW Enqueue Analysis
VSAM Analysis
Total Disk Space Required
OMEGAMON XE for DB2
OMEGAMON XE for DB2 historical data tables

Important: This is an MVS product; therefore, the space requirements for the tables are valid only when the Candle monitoring agent is connected to a non-MVS CMS and collection is being performed at the CMS. In all other cases, the data is written to the Persistent Data Store files that CICAT allocates. The current default allocation by CICAT is 50 cylinders.
Table 50. OMEGAMON XE for DB2 historical data tables
Attribute History Table Filename for Historical Data
Default HDC Table
Estimated Space Required per managed system per 24 hours (bytes)
DB2_Thread_Exceptions DP_TH_EXC Yes 3,840,000
DB2_System_States DP_SY_EXC Yes 17,088
DB2_CICS_Exceptions DP_CI_EXCS Yes 33,792
DB2_CICS_Threads DP_CI_THDS Yes 176,640
DB2_SRM_Subsystem DP_SRM_SUB Yes 25,728
DB2_SRM_Log_Manager DP_SRM_LOG Yes 82,176
DB2_SRM_EDM DP_SRM_EDM Yes 25,728
DB2_SRM_UTL DP_SRM_UTL Yes 24,576
DB2_SRM_BPM DP_SRM_BPM Yes 35,328
DB2_SRM_BPD DP_SRM_BPD Yes 70,656
DB2_DDF_STAT DP_DDF_STA Yes 280,320
DB2_DDF_CONV DP_DDF_CON Yes 40,320
DB2_IMS_Connections DP_IM_CONN Yes 13,824
DB2_IMS_Regions DP_IM_REG Yes 115,200
DB2_Volume_Activity DP_VOL_ACT Yes 522,240
Total Default Space 5,195,712 bytes
OMEGAMON XE for DB2 table record sizes
Table 51. OMEGAMON XE for DB2 table record sizes
History Table Record Size Frequency
DP_TH_EXC 400 1 per DB2 thread
DP_SY_EXC 178 1 per monitored DB2 subsystem
DP_CI_EXCS 88 1 per CICS connection
DP_CI_THDS 92 1 per CICS thread
DP_SRM_SUB 268 1 per monitored DB2 subsystem
DP_SRM_LOG 428 1 per log
DP_SRM_EDM 268 1 per monitored DB2 subsystem
DP_SRM_UTL 128 1 per stopped utility
DP_SRM_BPM 92 1 per buffer pool
DP_SRM_BPD 184 1 per buffer pool
DP_DDF_STA 292 1 per DDF thread
DP_DDF_CON 84 1 per DDF conversation
DP_IM_CONN 72 1 per IMS connection
DP_IM_REG 120 1 per IMS region
DP_VOL_ACT 136 1 per active volume
OMEGAMON XE for DB2 space requirements worksheets

Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each historical data collection table. The minimum collection interval unit of 15 minutes is used in the calculation.
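The DB2 worksheets all follow the same pattern: number of entities x record size x number of DB2 systems x intervals per day, with the result in bytes (no division by 1024 appears in the formulas). A small sketch of that pattern; the helper name and defaults are our own, not product code.

```python
def db2_bytes_per_day(entities, record_size, db2_systems=2,
                      interval_min=15):
    """Expected space in bytes per 24 hours for one DB2 history table."""
    intervals_per_day = (60 // interval_min) * 24   # 96 at 15 minutes
    return entities * record_size * db2_systems * intervals_per_day

# DB2_Thread_Exceptions example: 100 threads, 400-byte records
print(db2_bytes_per_day(100, 400))  # 7680000
```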
Table 52. DB2_Thread_Exceptions worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 400 bytes 100 threads x 400 bytes x 2 DB2 systems x 96 intervals
7,680,000 bytes
Table 53. DB2_System_States worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 178 bytes 178 bytes x 2 DB2 systems x 96 intervals
34,176 bytes
Table 54. DB2_CICS_Exceptions worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 88 bytes 4 connections x 88 bytes x 2 DB2 systems x 96 intervals
67,584 bytes
Table 55. DB2_CICS_Threads worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 92 bytes 20 threads x 92 bytes x 2 DB2 systems x 96 intervals
353,280 bytes
Table 56. DB2_SRM_Subsystem worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 268 bytes 268 bytes x 2 DB2 systems x 96 intervals
51,456 bytes
Table 57. DB2_SRM_Log_Manager worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 428 bytes 2 logs x 428 bytes x 2 DB2 systems x 96 intervals
164,352 bytes
Table 58. DB2_SRM_EDM worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 268 bytes 268 bytes x 2 DB2 systems x 96 intervals
51,456 bytes
Table 59. DB2_SRM_UTL worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 128 bytes 2 utilities x 128 bytes x 2 DB2 systems x 96 intervals
49,152 bytes
Table 60. DB2_SRM_BPM worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 92 bytes 4 buffer pools x 92 bytes x 2 DB2 systems x 96 intervals
70,656 bytes
Table 61. DB2_SRM_BPD worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 184 bytes 4 buffer pools x 184 bytes x 2 DB2 systems x 96 intervals
141,312 bytes
Table 62. DB2_DDF_STAT worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 292 bytes 10 distributed threads x 292 bytes x 2 DB2 systems x 96 intervals
560,640 bytes
Table 63. DB2_DDF_CONV worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 84 bytes 5 conversations x 84 bytes x 2 DB2 systems x 96 intervals
80,640 bytes
Table 64. DB2_IMS_Connections worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 72 bytes 2 IMS connections x 72 bytes x 2 DB2 systems x 96 intervals
27,648 bytes
Table 65. DB2_IMS_Regions worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 120 bytes 10 IMS regions x 120 bytes x 2 DB2 systems x 96 intervals
230,400 bytes
Table 66. DB2_Volume_Activity worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 136 bytes 40 volumes x 136 bytes x 2 DB2 systems x 96 intervals
1,044,480 bytes
OMEGAMON XE for DB2 disk space summary worksheet

We recommend that you spread the disk space requirements among the systems where data collection is performed. For example, three historical tables might be collected on the CMS and others at the remote managed system. A disk space summary worksheet for OMEGAMON XE for DB2 follows.
Table 67. OMEGAMON XE for DB2 disk space summary worksheet
History Table Historical DataTable Size (kilobytes)
(24 hours)
No. of Archives
Subtotal Space Required
(kilobytes)
DB2_Thread_Exceptions
DB2_System_States
DB2_CICS_Exceptions
DB2_CICS_Threads
DB2_SRM_Subsystem
DB2_SRM_Log_Manager
DB2_SRM_EDM
DB2_SRM_UTL
DB2_SRM_BPM
DB2_SRM_BPD
DB2_DDF_STAT
DB2_DDF_CONV
DB2_IMS_Connections
DB2_IMS_Regions
DB2_Volume_Activity
Total Disk Space Required
OMEGAMON XE for DB2 Universal Database
OMEGAMON XE for DB2 Universal Database historical data tables

Important: This is an MVS product; therefore, the space requirements for the tables are valid only when the Candle monitoring agent is connected to a non-MVS CMS and collection is being performed at the CMS. In all other cases, the data is written to the Persistent Data Store files that CICAT allocates. The current default allocation by CICAT is 50 cylinders.
Table 68. OMEGAMON XE for DB2 Universal Database historical data tables
Attribute History Table | Filename for Historical Data | Default HDC Table | Estimated Space Required per managed system per 24 hours
Application (KUDDB2APPLGROUP00) | KUD2649700 | Yes | 347 kilobytes
Database (KUDDBASEGROUP00) | KUD3437500 | Yes | 269 kilobytes
System Overview (KUD4238000) | KUDINFO00 | Yes | 88 kilobytes
Locking Conflict (KUD5214100) | KUDLOCKCONFLICT00 | Yes | 117 kilobytes
Buffer Pool (KUD4177600) | KUDBUFFERPOOL00 | Yes | 153 kilobytes
Total Default Space | | | 974 kilobytes
OMEGAMON XE for DB2 Universal Database table record sizes

Table 69. OMEGAMON XE for DB2 Universal Database table record sizes
History Table | Record Size | Frequency
Application | 1204 bytes | 1 record per interval
Database | 934 bytes | 1 record per interval
System Overview | 304 bytes | 1 record per interval
Locking Conflict | 404 bytes | 1 record per interval
Buffer Pool | 528 bytes | 1 record per interval

OMEGAMON XE for DB2 Universal Database space requirements worksheets

Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each historical data collection table, and a combined calculation is provided for all tables. The minimum collection interval unit of 15 minutes is used in the calculations.

In the table that follows, the calculation applies to all tables collected every five minutes, 24x7 for one year, on two instances of the Universal Database.

Table 70. All tables worksheet
Interval | Record Size | Formula | Expected File Size (1 year)
12 samplings per hour | 1204 bytes + 934 bytes + 304 bytes + 404 bytes + 528 bytes | (1204 bytes + 934 bytes + 304 bytes + 404 bytes + 528 bytes) x ((12 samplings/hour) x (24 hours/day) x (365 days/year)) / (1024 bytes/kilobyte) x (2 instances) | 692,724.4 kilobytes
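The Table 70 estimate can be reproduced with a short script. This is a sketch of the stated formula only; the table names and record sizes come from Table 69, and the result follows directly from the arithmetic of that formula.

```python
# Yearly "all tables" estimate from Table 70: every table sampled every
# five minutes (12 samplings/hour), 24 hours/day, 365 days/year, on two
# instances of the Universal Database.

RECORD_SIZES = {  # bytes per record, from Table 69
    "Application": 1204,
    "Database": 934,
    "System Overview": 304,
    "Locking Conflict": 404,
    "Buffer Pool": 528,
}

samples_per_year = 12 * 24 * 365  # 105,120 samplings
bytes_per_year = sum(RECORD_SIZES.values()) * samples_per_year * 2  # 2 instances
kilobytes_per_year = bytes_per_year / 1024
print(round(kilobytes_per_year, 1))
```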
Table 71. Application worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 178 bytes | 178 bytes x 2 DB2 systems x 96 intervals | 34,176 bytes

Table 72. Database worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 88 bytes | 4 connections x 88 bytes x 2 DB2 systems x 96 intervals | 67,584 bytes

Table 73. System Overview worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 92 bytes | 20 threads x 92 bytes x 2 DB2 systems x 96 intervals | 353,280 bytes
Table 74. Locking Conflict worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 268 bytes | 268 bytes x 2 DB2 systems x 96 intervals | 51,456 bytes

Table 75. Buffer Pool worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 428 bytes | 2 logs x 428 bytes x 2 DB2 systems x 96 intervals | 164,352 bytes
OMEGAMON XE for DB2 Universal Database disk space summary worksheet
We recommend that you spread the disk space requirements among the systems where data collection is performed. For example, three historical tables might be collected on the CMS and others at the remote managed system. A disk space summary worksheet for OMEGAMON XE for DB2 Universal Database follows.
Table 76. OMEGAMON XE for DB2 Universal Database disk space summary worksheet
History Table | Historical Data Table Size (kilobytes) (24 hours) | No. of Archives | Subtotal Space Required (kilobytes)
Application | | |
Database | | |
System Overview | | |
Locking Conflict | | |
Buffer Pool | | |
Total Disk Space Required | | |
OMEGAMON XE for NetWare
OMEGAMON XE for NetWare historical data tables
Note: Historical data collection for NetWare is only supported at the CMS. You cannot collect data at remote managed systems.
Table 77. OMEGAMON XE for NetWare historical data tables
Attribute History Table | Filename for Historical Data | Default HDC Table | Estimated Space Required per managed system per 24 Hours
Server | SERVER | Yes | 37 kilobytes
Volume | VOLUME | Yes | 20 kilobytes
Volume Usage | VOLUSAGE | Yes | 17 kilobytes
Queue Jobs | QJOB | Yes | 46 kilobytes
Connections | CONNECT | Yes | 21 kilobytes
Total Default Space | | | 141 kilobytes

OMEGAMON XE for NetWare table record sizes

Table 78. OMEGAMON XE for NetWare table record sizes
History Table | Record Size | Frequency
Server | 388 bytes | 1 record per interval
Volume | 212 bytes | 1 record per interval per mounted volume
Volume Usage | 176 bytes | 1 record per interval per mounted volume per user
Queue Jobs | 488 bytes | 1 record per interval
Connections | 216 bytes | 1 record per interval per connection
OMEGAMON XE for NetWare space requirement worksheets

Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each historical data collection table.
Table 79. Server (SERVER) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 388 bytes | (60/15 x 24 x 388) / 1024 | 37 kilobytes

Table 80. Volume (VOLUME) worksheet
Interval | Record Size | No. of Volumes | Formula | Expected File Size per 24 Hours
15 min. | 212 bytes | 1 | (60/15 x 24 x 212) x 1 / 1024 | 20 kilobytes

Table 81. Volume Usage (VOLUSAGE) worksheet
Interval | Record Size | No. of Volumes | No. of Users | Formula | Expected File Size per 24 Hours
15 min. | 176 bytes | 1 | 1 | (60/15 x 24 x 176) x 1 x 1 / 1024 | 17 kilobytes
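Each NetWare worksheet applies the same formula: samplings per day, times the record size, times the number of monitored entities, divided by 1024. A small helper illustrates the pattern; this is a sketch under the assumption (consistent with the published results) that fractional kilobytes are rounded up.

```python
import math

def netware_daily_kb(record_size, count=1, interval_min=15):
    """Expected file size per 24 hours in kilobytes, rounded up."""
    samples_per_day = (60 // interval_min) * 24  # 96 at 15 minutes
    return math.ceil(samples_per_day * record_size * count / 1024)

print(netware_daily_kb(388))           # Server (Table 79) -> 37
print(netware_daily_kb(212, count=1))  # Volume (Table 80), 1 volume -> 20
print(netware_daily_kb(176, count=1))  # Volume Usage (Table 81) -> 17
```

The same helper reproduces the Queue Jobs (46 kilobytes) and Connections (21 kilobytes) figures with record sizes 488 and 216.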
Table 82. Queue Jobs (QJOB) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 488 bytes | (60/15 x 24 x 488) / 1024 | 46 kilobytes

Table 83. Connections (CONNECT) worksheet
Interval | Record Size | No. of Connections | Formula | Expected File Size per 24 Hours
15 min. | 216 bytes | 1 | (60/15 x 24 x 216) x 1 / 1024 | 21 kilobytes

OMEGAMON XE for NetWare disk space summary worksheet

We recommend that you spread the disk space requirements among the systems where data collection is performed. For example, three historical tables might be collected on the CMS and others at the remote agent managed system. A disk space summary worksheet for OMEGAMON XE for NetWare follows.

Table 84. OMEGAMON XE for NetWare disk space summary worksheet
History Table | Historical Data Table Size (kilobytes) (24 hours) | No. of Archives | Subtotal Space Required (kilobytes)
Server | | |
Volume | | |
Volume Usage | | |
Queue Jobs | | |
Connections | | |
Total Disk Space Required | | |
OMEGAMON XE for ORACLE
OMEGAMON XE for ORACLE historical data tables
Table 85. OMEGAMON XE for ORACLE historical data tables
Attribute History Table | Filename for Historical Data | Default HDC Table | Estimated Space Required per DB Instance per 24 Hours
Alert Log Details | KORALRTD | No |
Alert Log Summary | KORALRTS | Yes | 23 kilobytes
Cache Totals | KORCACHE | Yes | 27 kilobytes
Configuration | KORCONFS | No |
Contention Summary | KORLOCKS | Yes | 22 kilobytes
Databases Summary | KORDB | Yes | 27 kilobytes
Files | KORFILES | No |
Library Cache Usage | KORLIBCU | No |
Lock Conflicts | KORLCONF | No |
Logging Summary | KORLOGS | Yes | 23 kilobytes
Process Detail | KORPROCD | No |
Process Summary | KORPROCS | Yes | 21 kilobytes
Rollback Segments | KORRBST | No |
Segments | KORSEGS | No |
Server Summary | KORSRVR | Yes | 27 kilobytes
Server Options Detail | KORSRVRD | No |
Session Detail | KORSESSD | No |
Session Summary | KORSESSS | Yes | 23 kilobytes
SGA Memory Summary | KORSGA | Yes | 23 kilobytes
SQL Text Full | KORSQLF | No |
Statistics Details | KORSTATD | No |
Statistics Summary | KORSTATS | Yes | 26 kilobytes
Tablespaces | KORTS | No |
Trans Blocking Rollback Segment Wrap | KORTBRSW | No |
Total Default Space | | | 242 kilobytes

Note: The number of kilobytes for values varies significantly. Space requirements depend upon conditions in your distributed database enterprise.

Note: A DB instance is each monitored database server.

OMEGAMON XE for ORACLE table record sizes

The tables created in OMEGAMON XE for ORACLE differ from those created in OMEGAMON XE for UNIX. Some ORACLE tables create multiple records (rows) per interval based on a variable. Additionally, multiple instances of the ORACLE Database Manager can be running on any individual managed system. The chart below identifies the tables in which multiple rows are collected per interval per instance of the database manager and the variables associated with the table collection.
Values shown below for Files, Process Detail, Segments, Session Detail, and Tablespaces are minimums. Actual values vary depending on the number of files, processes, segments, active sessions, and tablespaces in each managed system.
Table 86. OMEGAMON XE for ORACLE table record sizes
History Table | Record Size (bytes) | Average Number of Records per Interval per Instance (Minimum) | Varies by | Interval Size per Instance (bytes)
Alert Log Details | 332 | * | | *
Alert Log Summary | 236 | 1 | | 236
Cache Totals | 288 | 1 | | 288
Configuration | 260 | 1 | | 260
Contention Summary | 226 | 1 | | 226
Databases Summary | 281 | 1 | | 281
Files | 339 | 5 | Number of files | 1,695
Library Cache Usage | 231 | * | | *
Lock Conflicts | 254 | * | | *
Logging Summary | 244 | 1 | | 244
Process Detail | 235 | 15 | Number of processes | 3,525
Process Summary | 244 | 1 | | 244
Rollback Segments | 297 | * | | *
Segments | 309 | 200 | Number of segments | 61,800
Server | 286 | 1 | | 286
Server Options | 246 | 1 | | 246
Session Detail | 426 | 12 | Number of sessions | 5,112
Session Summary | 236 | 1 | | 236
SGA Memory Summary | 236 | 1 | | 236
Statistics Details | 263 | 135 | | 35,505
Statistics Summary | 276 | 1 | | 276
Tablespaces | 287 | 5 | Number of tablespaces | 1,435
Trans Blocking Rollback Segment Wrap | 232 | * | | *
* Undetermined

OMEGAMON XE for ORACLE space requirement worksheets

Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each historical data collection table. Note that the term "instance" is substituted for the term "managed system" in the worksheets. Multiple instances of the database manager can be running on any individual managed system; therefore, the number of instances is included in the calculation to determine expected file size for a 24-hour period.

Table 87. Alert Log Details (KORALRTD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | * bytes | (60/15 x 24 x *) x instances / 1024 | * kilobytes
* Undetermined
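The ORACLE worksheet formula can be sketched as a small function. The interval size for variable-row tables is the record size multiplied by the average row count from Table 86; this sketch assumes (consistent with the published results) that fractional kilobytes are rounded up.

```python
import math

SAMPLES_PER_DAY = (60 // 15) * 24  # 96 intervals at 15 minutes

def daily_kb(interval_bytes, instances=1):
    """(samplings/day x interval size x instances) / 1024, rounded up."""
    return math.ceil(SAMPLES_PER_DAY * interval_bytes * instances / 1024)

# Fixed one-row tables use the record size directly:
print(daily_kb(236))        # Alert Log Summary -> 23
# Variable-row tables first scale the record size by the row count:
print(daily_kb(339 * 5))    # Files, 5 files minimum -> 159
print(daily_kb(309 * 200))  # Segments, 200 segments minimum -> 5794
```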
Table 88. Alert Log Summary (KORALRTS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 236 bytes | (60/15 x 24 x 236) x instances / 1024 | 23 kilobytes

Table 89. Cache Totals (KORCACHE) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 288 bytes | (60/15 x 24 x 288) x instances / 1024 | 27 kilobytes

Table 90. Configuration (KORCONFS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 260 bytes | (60/15 x 24 x 260) x instances / 1024 | 25 kilobytes
Table 91. Contention Summary (KORLOCKS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 226 bytes | (60/15 x 24 x 226) x instances / 1024 | 22 kilobytes

Table 92. Databases Summary (KORDB) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 281 bytes | (60/15 x 24 x 281) x instances / 1024 | 27 kilobytes

Table 93. Files (KORFILES) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 1,695 bytes | (60/15 x 24 x 1695) x instances / 1024 | 159 kilobytes
Table 94. Library Cache Usage (KORLIBCU) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | * bytes | (60/15 x 24 x *) x instances / 1024 | * kilobytes
* Undetermined

Table 95. Lock Conflicts (KORLCONF) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | * bytes | (60/15 x 24 x *) x instances / 1024 | * kilobytes
* Undetermined

Table 96. Logging Summary (KORLOGS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 244 bytes | (60/15 x 24 x 244) x instances / 1024 | 23 kilobytes
Table 97. Process Detail (KORPROCD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 3,525 bytes | (60/15 x 24 x 3525) x instances / 1024 | 331 kilobytes

Table 98. Process Summary (KORPROCS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 244 bytes | (60/15 x 24 x 244) x instances / 1024 | 23 kilobytes

Table 99. Rollback Segments (KORRBST) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | * bytes | (60/15 x 24 x *) x instances / 1024 | * kilobytes
* Undetermined
Table 100. Segments (KORSEGS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 61,800 bytes | (60/15 x 24 x 61800) x instances / 1024 | 5,794 kilobytes

Table 101. Server (KORSRVR) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 286 bytes | (60/15 x 24 x 286) x instances / 1024 | 27 kilobytes

Table 102. Server Options (KORSRVRD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 246 bytes | (60/15 x 24 x 246) x instances / 1024 | 24 kilobytes
Table 103. Session Detail (KORSESSD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 5,112 bytes | (60/15 x 24 x 5112) x instances / 1024 | 480 kilobytes

Table 104. Session Summary (KORSESSS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 236 bytes | (60/15 x 24 x 236) x instances / 1024 | 23 kilobytes

Table 105. SGA Memory Summary (KORSGA) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 236 bytes | (60/15 x 24 x 236) x instances / 1024 | 23 kilobytes
Table 106. SQL Text Full (KORSQLF) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | * bytes | (60/15 x 24 x *) x instances / 1024 | * kilobytes
* Undetermined

Table 107. Statistics Detail (KORSTATD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 35,505 bytes | (60/15 x 24 x 35505) x instances / 1024 | 3,329 kilobytes

Table 108. Statistics Summary (KORSTATS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 276 bytes | (60/15 x 24 x 276) x instances / 1024 | 26 kilobytes
Table 109. Tablespaces (KORTS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 1,435 bytes | (60/15 x 24 x 1435) x instances / 1024 | 135 kilobytes

Table 110. Trans Blocking Rollback Segment Wrap (KORTBRSW) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | * bytes | (60/15 x 24 x *) x instances / 1024 | * kilobytes
* Undetermined

OMEGAMON XE for ORACLE disk space summary worksheet

We recommend that you spread the disk space requirements among the systems where data collection is performed. For example, three historical tables might be collected on the CMS and others at the remote agent managed system. A disk space summary worksheet for OMEGAMON XE for ORACLE follows.

Table 111. OMEGAMON XE for ORACLE disk space summary worksheet
History Table | Historical Data Table Size (kilobytes) (24 hours) | No. of Archives | Subtotal Space Required (kilobytes)
Alert Log Details | undetermined | |
Alert Log Summary | 23 | |
Cache Totals | 27 | |
Configuration | 25 | |
Contention Summary | 22 | |
Databases Summary | 27 | |
Files | 159 | |
Library Cache Usage | undetermined | |
Lock Conflicts | undetermined | |
Logging Summary | 23 | |
Process Detail | 331 | |
Process Summary | 23 | |
Rollback Segments | undetermined | |
Segments | 5,794 | |
Server | 27 | |
Server Options | 24 | |
Session Detail | 480 | |
Session Summary | 23 | |
SGA Memory Summary | 23 | |
SQL Text Full | undetermined | |
Statistics Details | 3,329 | |
Statistics Summary | 26 | |
Tablespaces | 135 | |
Trans Blocking Rollback Segment Wrap | undetermined | |
Total Disk Space Required | | |
OMEGAMON XE for Sybase
OMEGAMON XE for Sybase historical data tables
Table 112. OMEGAMON XE for Sybase historical data tables
Attribute History Table | Filename for Historical Data | Default HDC Table | Estimated Space Required per DB Instance per 24 Hours
Cache Detail | KOYCACD | No |
Cache Summary | KOYCACS | Yes | 32 kilobytes
Configuration | KOYSCFG | No |
Databases Detail | KOYDBD | No |
Databases Summary | KOYDBS | Yes | 117 kilobytes
Devices Detail | KOYDEVD | No |
Engine Detail | KOYENGD | No |
Engine Summary | KOYENGS | Yes | 24 kilobytes
Lock Conflict Detail | KOYLOCK | No |
Lock Conflict Summary | KOYLOCKS | Yes | 26 kilobytes
Lock Detail | KOYLOCKD | No |
Locks Summary | KOYLOCKS | Yes | 27 kilobytes
Log Detail | KOYLOGD | No |
Log Summary | KOYLOGS | Yes | 22 kilobytes
Physical Device Detail | KOYSDEVD | No |
Problem Detail | KOYPROBD | No |
Problem Summary | KOYPROBS | Yes | 24 kilobytes
Process Detail | KOYPRCD | No |
Process Summary | KOYPRCS | Yes | 25 kilobytes
Remote Servers | KOYSRVR | Yes | 24 kilobytes
Server Detail | KOYSRVRD | No |
Server Summary | KOYSRVS | Yes | 26 kilobytes
SQL Detail | KOYSQLD | No |
Statistics Detail | KOYSTATD | No |
Statistics Summary | KOYSTATS | Yes | 24 kilobytes
Task Detail | KOYTSKD | No |
Total Default Space | | | 371 kilobytes

Note: The number of kilobytes for values varies significantly. Space requirements depend upon conditions in your distributed database enterprise.

Note: A DB instance is each monitored database server.

OMEGAMON XE for Sybase table record sizes

The tables created in OMEGAMON XE for Sybase differ from those created in OMEGAMON XE for UNIX. Some Sybase tables create multiple records (rows) per interval based on a variable. Additionally, multiple instances of the Sybase Database Manager can be running on any individual managed system. The chart below identifies the tables in which multiple rows are collected per interval per instance of the database manager and the variables associated with the table collection.
Values shown below for Database Detail, Device Detail, Lock Conflict Detail, and Process Detail are minimums. Actual values vary, depending on the number of databases, devices, lock conflicts, and processes in each managed system.
Table 113. OMEGAMON XE for Sybase table record sizes
History Table | Record Size (bytes) | Average Records per Interval per Instance (Minimum) | Varies by | Record Size per Instance (bytes)
Cache Detail | 404 | * | | *
Cache Summary | 334 | 1 | | 334
Configuration | 283 | 43 | | 12,169
Databases Detail | 311 | 4 | Number of databases | 1,244
Databases Summary | 234 | 1 | | 234
Devices Detail | 418 | 4 | Number of devices (1+/DB) | 1,672
Engine Detail | 328 | * | | *
Engine Summary | 256 | 1 | | 256
Lock Conflict Detail | 367 | * | Number of lock conflicts | *
Lock Conflict Summary | 274 | 1 | | 274
Lock Detail | 287 | * | | *
Locks Summary | 286 | 1 | | 286
Log Detail | 287 | * | | *
Log Summary | 226 | 1 | | 226
Physical Device Detail | 316 | * | | *
Problem Detail | 368 | 1 | | 368
Problem Summary | 246 | 1 | | 246
Process Detail | 418 | 6 | Number of processes | 2,508
Process Summary | 260 | 1 | | 260
Remote Servers | 254 | 1 | | 254
Server Detail | 354 | 1 | | 354
Server Summary | 270 | 1 | | 270
Statistics Detail | 250 | 10 | | 2,500
Statistics Summary | 250 | 1 | | 250
Task Detail | 246 | * | | *
* Undetermined

OMEGAMON XE for Sybase space requirement worksheets

Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each historical data collection table. Note that the term "instance" is substituted for the term "managed system" in the worksheets. Multiple instances of the database manager can be running on any individual managed system; therefore, the number of instances is included in the calculation to determine expected file size for a 24-hour period.

Table 114. Cache Detail (KOYCACD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | * bytes | (60/15 x 24 x *) x instances / 1024 | * kilobytes
* Undetermined
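The Sybase worksheets follow the same pattern as the ORACLE ones; for variable-row tables, the record size is first multiplied by the average number of rows per interval from Table 113. A sketch of that two-step calculation, again assuming (consistent with the published results) that fractional kilobytes are rounded up:

```python
import math

SAMPLES_PER_DAY = (60 // 15) * 24  # 96 intervals at 15 minutes

def daily_kb(record_size, rows_per_interval=1, instances=1):
    """Sybase worksheet formula: kilobytes per 24 hours, rounded up."""
    interval_bytes = record_size * rows_per_interval
    return math.ceil(SAMPLES_PER_DAY * interval_bytes * instances / 1024)

print(daily_kb(283, rows_per_interval=43))  # Configuration (Table 116) -> 1141
print(daily_kb(311, rows_per_interval=4))   # Databases Detail (Table 117) -> 117
print(daily_kb(260))                        # Process Summary (Table 132) -> 25
```

Passing instances greater than 1 scales the result linearly, matching the "x instances" factor in each worksheet formula.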
Table 115. Cache Summary (KOYCACS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 334 bytes | (60/15 x 24 x 334) x instances / 1024 | 32 kilobytes

Table 116. Configuration (KOYSCFG) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 12,169 bytes | (60/15 x 24 x 12,169) x instances / 1024 | 1,141 kilobytes

Table 117. Databases Detail (KOYDBD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 1,244 bytes | (60/15 x 24 x 1,244) x instances / 1024 | 117 kilobytes
Table 118. Databases Summary (KOYDBS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 234 bytes | (60/15 x 24 x 234) x instances / 1024 | 22 kilobytes

Table 119. Devices Detail (KOYDEVD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 1,384 bytes | (60/15 x 24 x 1,384) x instances / 1024 | 130 kilobytes

Table 120. Engine Detail (KOYENGD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | * bytes | (60/15 x 24 x *) x instances / 1024 | * kilobytes
* Undetermined

Table 121. Engine Summary (KOYENGS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 256 bytes | (60/15 x 24 x 256) x instances / 1024 | 24 kilobytes

Table 122. Lock Conflict Detail (KOYLOCK) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 376 bytes | (60/15 x 24 x 376) x instances / 1024 | 36 kilobytes

Table 123. Lock Conflict Summary (KOYLOCKS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 274 bytes | (60/15 x 24 x 274) x instances / 1024 | 26 kilobytes
Table 124. Lock Detail (KOYLCK) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | * bytes | (60/15 x 24 x *) x instances / 1024 | * kilobytes
* Undetermined

Table 125. Lock Summary (KOYLCKS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 286 bytes | (60/15 x 24 x 286) x instances / 1024 | 27 kilobytes

Table 126. Log Detail (KOYLOGD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | * bytes | (60/15 x 24 x *) x instances / 1024 | * kilobytes
* Undetermined
Table 127. Log Summary (KOYLOGS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 226 bytes | (60/15 x 24 x 226) x instances / 1024 | 22 kilobytes

Table 128. Physical Device Detail (KOYSDEVD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | * bytes | (60/15 x 24 x *) x instances / 1024 | * kilobytes
* Undetermined

Table 129. Problem Detail (KOYPROBD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 366 bytes | (60/15 x 24 x 366) x instances / 1024 | 35 kilobytes

Table 130. Problem Summary (KOYPROBS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 246 bytes | (60/15 x 24 x 246) x instances / 1024 | 24 kilobytes

Table 131. Process Detail (KOYPRCD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 2,328 bytes | (60/15 x 24 x 2,328) x instances / 1024 | 219 kilobytes

Table 132. Process Summary (KOYPRCS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 260 bytes | (60/15 x 24 x 260) x instances / 1024 | 25 kilobytes
Table 133. Remote Servers (KOYSRVR) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 254 bytes | (60/15 x 24 x 254) x instances / 1024 | 24 kilobytes

Table 134. Server Detail (KOYSRVD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 354 bytes | (60/15 x 24 x 354) x instances / 1024 | 34 kilobytes

Table 135. Server Summary (KOYSRVS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 270 bytes | (60/15 x 24 x 270) x instances / 1024 | 26 kilobytes
Table 136. SQL Detail (KOYSQLD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | * bytes | (60/15 x 24 x *) x instances / 1024 | * kilobytes
* Undetermined

Table 137. Statistics Detail (KOYSTATD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 2,500 bytes | (60/15 x 24 x 2,500) x instances / 1024 | 235 kilobytes

Table 138. Statistics Summary (KOYSTATS) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | 250 bytes | (60/15 x 24 x 250) x instances / 1024 | 24 kilobytes
Table 139. Task Detail (KOYTSKD) worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
15 min. | * bytes | (60/15 x 24 x *) x instances / 1024 | * kilobytes
* Undetermined
OMEGAMON XE for Sybase disk space summary worksheet

We recommend that you spread the disk space requirements among the systems where data collection is performed. For example, three historical tables might be collected on the CMS and others at the remote agent managed system. A disk space summary worksheet for OMEGAMON XE for Sybase follows.
Table 140. OMEGAMON XE for Sybase disk space summary worksheet
History Table | Historical Data Table Size (kilobytes) (24 hours) | No. of Archives | Subtotal Space Required (kilobytes)
Cache Detail | undetermined | |
Cache Summary | 32 | |
Configuration | 1,141 | |
Databases Detail | 117 | |
Databases Summary | 22 | |
Devices Detail | 130 | |
Engine Detail | undetermined | |
Engine Summary | 24 | |
Lock Conflict Detail | 36 | |
Lock Conflict Summary | 26 | |
Lock Detail | undetermined | |
Locks Summary | 27 | |
Log Detail | undetermined | |
Log Summary | 22 | |
Physical Device Detail | undetermined | |
Problem Detail | 35 | |
Problem Summary | 24 | |
Process Detail | 219 | |
Process Summary | 25 | |
Remote Servers | 24 | |
Server Detail | 34 | |
Server Summary | 26 | |
SQL Detail | undetermined | |
Statistics Detail | 235 | |
Statistics Summary | 24 | |
Task Detail | undetermined | |
Total Disk Space Required | | |
OMEGAMON XE for MS SQL Server
OMEGAMON XE for MS SQL Server historical data tables
Note: The number of kilobytes for values varies significantly. Space requirements depend upon conditions in your distributed database enterprise.
Table 141. OMEGAMON XE for MS SQL Server historical data tables
Attribute History Table | Filename for Historical Data | Default HDC Table | Estimated Space Required per DB Instance per 24 Hours
Configuration | KOQSCFG | No | 19 kilobytes
Database Detail | KOQDBD | No | 22 kilobytes
Database Summary | KOQDBS | Yes | 15 kilobytes
Device Detail | KOQDEVD | No |
Lock Conflict Detail | KOQLOCK | No |
Lock Detail | KOQLOCKS | No |
Problem Detail | KOQPROBD | No |
Problem Summary | KOQPROBS | Yes | 16 kilobytes
Process Detail | KOQPRCD | No |
Process Summary | KOQPRCS | Yes | 18 kilobytes
Remote Servers | KOQSRVR | Yes | 17 kilobytes
Server Detail | KOQSRVD | No |
Server Summary | KOQSRVS | Yes | 18 kilobytes
Statistics Detail | KOQSTATD | No |
Statistics Summary | KOQSTATS | Yes | 16 kilobytes
Text | KOQSQL | No |
Total Default Space | | | 141 kilobytes
OMEGAMON XE for MS SQL Server table record sizes

Some MS SQL Server tables create multiple records (rows) per interval based on a variable. Additionally, multiple instances of the MS SQL Server Database Manager can be running on any individual managed system. The chart below identifies the tables in which multiple rows are collected per interval per instance of the database manager and the variables associated with the table collection.

Values shown below for tables marked by an asterisk (*) are minimums. Actual values vary, depending on the number of devices, locks, and processes in each managed system.
Table 142. OMEGAMON XE for MS SQL Server table record sizes
History Table          Record Size (bytes)   Average Records per Interval per Instance   Interval Size per Instance (bytes)
Configuration          197                   1                        197
Database Detail        225                   1                        225
Database Summary       148                   1                        148
Device Detail*         324                   Number of devices        *
Lock Conflict Detail*  178                   Number of conflicts      *
Lock Detail*           186                   Number of locks          *
Problem Detail*        282                   Number of messages       *
Problem Summary        160                   1                        160
Process Detail*        334                   Number of processes      *
Process Summary        184                   1                        184
Remote Servers         168                   1                        168
Server Detail          342                   1                        342
Server Summary         184                   1                        184
Statistics Detail*     164                   Number of statistics     *
Statistics Summary     164                   1                        164
Text*                  449                   Number of statements     *
* Undetermined
Note: An instance is each monitored database server.

OMEGAMON XE for MS SQL Server space requirement worksheets
Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each historical data collection table. Note that the term "instance" is substituted for the term "managed system" in the worksheets. Multiple instances of the database manager can be running on any individual managed system; the number of instances is therefore included in the calculation that determines expected file size for a 24-hour period.
The expected file size for a single instance that is designated by an asterisk (*) is undetermined. The value depends upon variables, such as the number of sessions, that are specific to your database enterprise. Refer to the formula to calculate your size requirements.
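The worksheets that follow all apply the same basic calculation: collections per day (60 divided by the interval in minutes, times 24) times record size, times records per interval, times instances, divided by 1024 to express kilobytes. A minimal sketch of that calculation (the helper name is illustrative, and rounding up to whole kilobytes is an assumption that matches most, though not all, of the printed results):

```python
import math

def daily_kilobytes(record_bytes, rows_per_interval=1, instances=1, interval_min=15):
    """Estimate the 24-hour history file size, in KB, for one attribute table."""
    samples_per_day = (60 // interval_min) * 24          # 96 samples at 15-minute intervals
    total_bytes = samples_per_day * record_bytes * rows_per_interval * instances
    return math.ceil(total_bytes / 1024)                 # round up to whole kilobytes

# Configuration (KOQSCFG): 197-byte record, 1 row, 1 instance -> 19 KB (Table 143)
print(daily_kilobytes(197))
```

For the asterisked tables, substitute your own device, lock, or process counts for `rows_per_interval`.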
Table 143. Configuration (KOQSCFG) worksheet
Interval   Record Size   Formula                                 Expected File Size per 24 Hours
15 min.    197 bytes     (60/15 x 24 x 197) x instances / 1024   19 kilobytes

Table 144. Database Detail (KOQDBD) worksheet
Interval   Record Size   Formula                                 Expected File Size per 24 Hours
15 min.    225 bytes     (60/15 x 24 x 225) x instances / 1024   22 kilobytes

Table 145. Database Summary (KOQDBS) worksheet
Interval   Record Size   Formula                                 Expected File Size per 24 Hours
15 min.    148 bytes     (60/15 x 24 x 148) x instances / 1024   15 kilobytes

Table 146. Device Detail (KOQDEVD) worksheet
Interval   Record Size   Formula                                             Expected File Size per 24 Hours
15 min.    324 bytes     (60/15 x 24 x (324 x devices)) x instances / 1024   * kilobytes (undetermined)

Table 147. Lock Conflict Detail (KOQLOCK) worksheet
Interval   Record Size   Formula                                               Expected File Size per 24 Hours
15 min.    178 bytes     (60/15 x 24 x (178 x conflicts)) x instances / 1024   * kilobytes (undetermined)

Table 148. Lock Detail (KOQLOCKS) worksheet
Interval   Record Size   Formula                                           Expected File Size per 24 Hours
15 min.    186 bytes     (60/15 x 24 x (186 x locks)) x instances / 1024   * kilobytes (undetermined)

Table 149. Problem Detail (KOQPROBD) worksheet
Interval   Record Size   Formula                                              Expected File Size per 24 Hours
15 min.    282 bytes     (60/15 x 24 x (282 x messages)) x instances / 1024   * kilobytes (undetermined)

Table 150. Problem Summary (KOQPROBS) worksheet
Interval   Record Size   Formula                                 Expected File Size per 24 Hours
15 min.    160 bytes     (60/15 x 24 x 160) x instances / 1024   16 kilobytes

Table 151. Process Detail (KOQPRCD) worksheet
Interval   Record Size   Formula                                               Expected File Size per 24 Hours
15 min.    334 bytes     (60/15 x 24 x (334 x processes)) x instances / 1024   * kilobytes (undetermined)

Table 152. Process Summary (KOQPRCS) worksheet
Interval   Record Size   Formula                                 Expected File Size per 24 Hours
15 min.    184 bytes     (60/15 x 24 x 184) x instances / 1024   18 kilobytes

Table 153. Remote Servers (KOQSRVR) worksheet
Interval   Record Size   Formula                                 Expected File Size per 24 Hours
15 min.    168 bytes     (60/15 x 24 x 168) x instances / 1024   17 kilobytes

Table 154. Server Detail (KOQSRVD) worksheet
Interval   Record Size   Formula                                 Expected File Size per 24 Hours
15 min.    342 bytes     (60/15 x 24 x 342) x instances / 1024   33 kilobytes

Table 155. Server Summary (KOQSRVS) worksheet
Interval   Record Size   Formula                                 Expected File Size per 24 Hours
15 min.    184 bytes     (60/15 x 24 x 184) x instances / 1024   18 kilobytes

Table 156. Statistics Detail (KOQSTATD) worksheet
Interval   Record Size   Formula                                                Expected File Size per 24 Hours
15 min.    164 bytes     (60/15 x 24 x (164 x statistics)) x instances / 1024   * kilobytes (undetermined)

Table 157. Statistics Summary (KOQSTATS) worksheet
Interval   Record Size   Formula                                 Expected File Size per 24 Hours
15 min.    164 bytes     (60/15 x 24 x 164) x instances / 1024   16 kilobytes

Table 158. Text (KOQSQL) worksheet
Interval   Record Size   Formula                                                Expected File Size per 24 Hours
15 min.    449 bytes     (60/15 x 24 x (449 x statements)) x instances / 1024   * kilobytes (undetermined)
OMEGAMON XE for MS SQL Server disk space summary worksheet
We recommend that you spread the disk space requirements among the systems where data collection is performed. For example, three historical tables might be collected on the CMS and others at the remote agent managed system. A disk space summary worksheet for OMEGAMON XE for MS SQL Server follows.
Note: * This value varies significantly. It depends upon conditions in your distributed database enterprise.

Table 159. OMEGAMON XE for MS SQL Server disk space summary worksheet
History Table           Historical Data Table Size (kilobytes) (24 hours)   No. of Archives   Subtotal Space Required (kilobytes)
Configuration           19
Database Detail         22
Database Summary        15
Device Detail           31
Lock Conflict Detail*   undetermined
Lock Detail*            undetermined
Problem Detail*         undetermined
Problem Summary         16
Process Detail*         undetermined
Process Summary         18
Remote Servers          17
Server Detail           33
Server Summary          18
Statistics Detail*      undetermined
Statistics Summary      16
Text*                   undetermined
Total Disk Space Required
OMEGAMON XE for OS/390
OMEGAMON XE for OS/390 historical data tables
The amount of default space required for a 24-hour period on a monitored system varies greatly and depends upon your specific operating environment.
Important: Requests for historical data from tables that collect a large amount of data will have a negative impact on the performance of the Candle components involved. To reduce the performance impact on your system, we recommend setting a longer collection interval for tables that collect a large amount of data. For this product, the Address Space tables, the DASD MVS Devices table, and the Enqueue table (for sites that are active with WebSphere) collect a large amount of data. For additional information, see “Performance Impact of Historical Data Requests” on page 42.
Table 160. OMEGAMON XE for OS/390 historical data tables
Attribute History Table         Filename for Historical Data   Default HDDC Table   Estimated Space Required per Managed System per 24 Hours (kilobytes)
Address Space CPU Utilization   ASCPUUTIL    Yes   9188 (assumes 500 address spaces)
Address Space Real Storage      ASREALSTOR   Yes   4594 (assumes 500 active address spaces)
Address Space Virtual Storage   ASVIRTSTOR   Yes   6094 (assumes 500 address spaces)
Channel Paths                   CHNPATHS     Yes   1182 (assumes 100 channel paths)
Common Storage                  COMSTOR      Yes   38
DASD_MVS                        DASDMVS      Yes   8
DASD MVS Devices                DASDMVSDEV   Yes   37687 (assumes 2,000 devices)
Enclave Table                   ENCTABLE     Yes   7669 (assumes an average count of 100 created enclaves)
Enqueues                        ENQUEUE      Yes   344 (assumes 10 active enqueues)
LPAR Clusters                   LPCLUST      Yes   184 (assumes 5 LPARs and 1 cluster)
Operator Alerts                 OPERALRT     Yes   8
Page Dataset Activity           PAGEDS       Yes   114 (assumes 10 local page datasets, 1 common page dataset, 1 PLPA page dataset, and 0 swap datasets)
Real Storage                    REALSTOR     Yes   22
System CPU Utilization          SYSCPUUTIL   Yes   18
System Paging Activity          PAGING       Yes   9
Tape Drives                     TAPEDRVS     Yes   197 (assumes 20 tape devices)
User Response Time              URESPTM      Yes   259 (assumes 30 active TSO users)
WLM Service Class Resources     MWLMPR       Yes   2813 (assumes 50 service classes and 100 report classes)
OMEGAMON XE for OS/390 table record sizes
The following table contains record sizes for each OMEGAMON XE for OS/390 attribute table.
Table 161. OMEGAMON XE for OS/390 table record sizes
History Table                            Record Size (bytes)   Frequency
Address Space CPU Utilization            168   1 row per started address space
Address Space Real Storage Utilization   70    1 row per started address space
Address Space Virtual Storage            102   1 row per started address space
Channel Path Activity                    98    1 row per channel path defined
Common Storage Utilization               72    4 rows
DASD_MVS                                 52    1 row
DASD MVS Devices                         173   1 row per DASD volume
Enclave Table                            790   1 row per created enclave
Enqueues                                 339   1 row per active enqueue conflict
LPAR Clusters                            251   1 row per LPAR, plus 1 row per cluster, plus 1
Operator Alerts                          53    1 row
Page Dataset Activity                    73    1 row per paging volume
Real Storage Utilization                 85    2 rows
System CPU Utilization                   162   1 row
System Paging Activity                   68    1 row
Tape Drives                              77    1 row per tape device
User Response Time                       64    1 row per active TSO user
WLM Service Class Resources              172   1 row per service class period active, plus 1 row per report class
Table 162. Address Space (ASCPUUTIL) worksheet
Interval   Record Size                 Formula                           Expected File Size per 24 Hours
15 min.    168 + 28 (overhead) bytes   (60/15 x 24 x 196 x 500) / 1024   9188 kilobytes

Table 163. Address Space Real Storage (ASREALSTOR) worksheet
Interval   Record Size                Formula                          Expected File Size per 24 Hours
15 min.    70 + 28 (overhead) bytes   (60/15 x 24 x 98 x 500) / 1024   4594 kilobytes

Table 164. Address Space Virtual Storage (ASVIRTSTOR) worksheet
Interval   Record Size                 Formula                           Expected File Size per 24 Hours
15 min.    102 + 28 (overhead) bytes   (60/15 x 24 x 130 x 500) / 1024   6094 kilobytes

Table 165. Channel Paths (CHNPATHS) worksheet
Interval   Record Size                Formula                           Expected File Size per 24 Hours
15 min.    98 + 28 (overhead) bytes   (60/15 x 24 x 126 x 100) / 1024   1182 kilobytes

Table 166. Common Storage (COMSTOR) worksheet
Interval   Record Size                Formula                         Expected File Size per 24 Hours
15 min.    72 + 28 (overhead) bytes   (60/15 x 24 x 100 x 4) / 1024   38 kilobytes

Table 167. DASD MVS Devices (DASDMVSDEV) worksheet
Interval   Record Size                 Formula                            Expected File Size per 24 Hours
15 min.    173 + 28 (overhead) bytes   (60/15 x 24 x 201 x 2000) / 1024   37688 kilobytes

Table 168. DASD MVS (DASDMVS) worksheet
Interval   Record Size                Formula                        Expected File Size per 24 Hours
15 min.    52 + 28 (overhead) bytes   (60/15 x 24 x 80 x 1) / 1024   8 kilobytes

Table 169. Enclave Table (ENCTABLE) worksheet
Interval   Record Size                 Formula                           Expected File Size per 24 Hours
15 min.    790 + 28 (overhead) bytes   (60/15 x 24 x 818 x 100) / 1024   7669 kilobytes

Table 170. Enqueues (ENQUEUE) worksheet
Interval   Record Size                 Formula                          Expected File Size per 24 Hours
15 min.    339 + 28 (overhead) bytes   (60/15 x 24 x 367 x 10) / 1024   344 kilobytes

Table 171. LPAR Clusters (LPCLUST) worksheet
Interval   Record Size                 Formula                         Expected File Size per 24 Hours
15 min.    251 + 28 (overhead) bytes   (60/15 x 24 x 279 x 7) / 1024   183 kilobytes

Table 172. Operator Alerts (OPERALRT) worksheet
Interval   Record Size                Formula                        Expected File Size per 24 Hours
15 min.    54 + 28 (overhead) bytes   (60/15 x 24 x 82 x 1) / 1024   8 kilobytes

Table 173. Page Dataset Activity (PAGEDS) worksheet
Interval   Record Size                Formula                          Expected File Size per 24 Hours
15 min.    73 + 28 (overhead) bytes   (60/15 x 24 x 101 x 12) / 1024   114 kilobytes

Table 174. Real Storage (REALSTOR) worksheet
Interval   Record Size                Formula                         Expected File Size per 24 Hours
15 min.    85 + 28 (overhead) bytes   (60/15 x 24 x 113 x 2) / 1024   22 kilobytes

Table 175. System Paging Activity (PAGING) worksheet
Interval   Record Size                Formula                        Expected File Size per 24 Hours
15 min.    68 + 28 (overhead) bytes   (60/15 x 24 x 96 x 1) / 1024   9 kilobytes

Table 176. System CPU Utilization (SYSCPUUTIL) worksheet
Interval   Record Size                 Formula                         Expected File Size per 24 Hours
15 min.    162 + 28 (overhead) bytes   (60/15 x 24 x 190 x 1) / 1024   18 kilobytes

Table 177. Tape Drives (TAPEDRVS) worksheet
Interval   Record Size                Formula                          Expected File Size per 24 Hours
15 min.    77 + 28 (overhead) bytes   (60/15 x 24 x 105 x 20) / 1024   197 kilobytes

Table 178. User Response Time (URESPTM) worksheet
Interval   Record Size                Formula                         Expected File Size per 24 Hours
15 min.    64 + 28 (overhead) bytes   (60/15 x 24 x 92 x 30) / 1024   259 kilobytes

Table 179. WLM Service Class Resources (MWLMPR) worksheet
Interval   Record Size                 Formula                           Expected File Size per 24 Hours
15 min.    172 + 28 (overhead) bytes   (60/15 x 24 x 200 x 150) / 1024   2813 kilobytes
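The OS/390 worksheets above differ from the other products' worksheets in one respect: each record carries a 28-byte overhead that is added to the attribute-table record size before the multiplication. A sketch under that assumption (the helper name is illustrative; rounding up to whole kilobytes matches the printed values):

```python
import math

OVERHEAD = 28  # bytes of per-record overhead added in every OS/390 worksheet

def os390_daily_kb(record_bytes, rows, interval_min=15):
    """24-hour history size in KB for one OS/390 table, including record overhead."""
    samples_per_day = (60 // interval_min) * 24     # 96 samples at 15-minute intervals
    total = samples_per_day * (record_bytes + OVERHEAD) * rows
    return math.ceil(total / 1024)

# Address Space CPU Utilization: (168 + 28) bytes x 500 address spaces -> 9188 KB
print(os390_daily_kb(168, 500))
```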
OMEGAMON XE for OS/390 disk space summary worksheet
We recommend that you spread the disk space requirements among the systems where data collection is performed. For example, three historical tables might be collected on the CMS and others at the remote agent managed system. A disk space summary worksheet for OMEGAMON XE for OS/390 follows.

Table 180. OMEGAMON XE for OS/390 disk space summary worksheet
History Table                   Historical Data Table Size (kilobytes) (24 hours)   No. of Archives   Subtotal Space Required (kilobytes)
Address Space CPU Utilization
Address Space Real Storage
Address Space Virtual Storage
Channel Paths
Common Storage
DASD MVS Devices
DASD MVS
Enclave Table
Enqueues
LPAR Clusters
WLM Service Class Resources
Operator Alerts
Page Dataset Activity
System Paging Activity
Real Storage
System CPU Utilization
Tape Drives
User Response Time
Total Disk Space Required
OMEGAMON XE for OS/390 UNIX System Services
OMEGAMON XE for OS/390 UNIX System Services historical data tables
The amount of default space required for a 24-hour period on a monitored system varies greatly and depends upon your specific operating environment.
Table 181. OMEGAMON XE for OS/390 UNIX System Services historical data tables
Attribute History Table    Filename for Historical Data   Default HDDC Table   Estimated Space Required per Managed System per 24 Hours (kilobytes)
USS Address Spaces         ASRESRC2   Yes   2466
USS Kernel                 OEKERNL2   Yes   28
USS Processes              OPS2       Yes   49238
USS Logged on Users        OUSERS2    Yes   360
USS Mounted File Systems   MOUNTS2    Yes   17599
USS BPXPRMxx Values        BPXPRM2    Yes   5625
USS Threads                THREAD2    Yes   6300
USS HFS ENQ Contention     HFSENQC2   No
Total Default Space                         81616
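The Total Default Space row above can be cross-checked by summing the per-table estimates for the tables collected by default; a quick sketch:

```python
# Estimated KB per 24 hours for the USS tables collected by default (Table 181)
defaults = {
    "ASRESRC2": 2466, "OEKERNL2": 28, "OPS2": 49238, "OUSERS2": 360,
    "MOUNTS2": 17599, "BPXPRM2": 5625, "THREAD2": 6300,
}
print(sum(defaults.values()))  # 81616, matching the Total Default Space row
```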
OMEGAMON XE for OS/390 UNIX System Services table record sizes
The following table contains record sizes for each OMEGAMON XE for OS/390 UNIX System Services attribute table.

Table 182. OMEGAMON XE for OS/390 UNIX System Services table record sizes
History Table              Record Size (bytes)   Frequency
USS Address Spaces         263    1 row per dubbed address space
USS Kernel                 294    1 row
USS Processes              2626   1 row per active process
USS Logged on Users        192    1 row per user
USS Mounted File Systems   1444   1 row per file system
USS BPXPRMxx Values        1200   1 row per BPXPRM keyword
USS Threads                168    1 row per thread
USS HFS ENQ Contention     190    1 row per HFS enqueue

OMEGAMON XE for OS/390 UNIX System Services space requirement worksheets
Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each historical data collection table. The minimum collection interval unit of 15 minutes is used in the calculation.
Table 183. USS Address Spaces (ASRESRC2) worksheet
Interval   Record Size   No. of Dubbed Address Spaces   Formula                           Expected File Size per 24 Hours
15 min.    263 bytes     100                            (60/15 x 24 x 263 x 100) / 1024   2466 kilobytes
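The remaining USS worksheets repeat the same calculation with an assumed row count per table. Recomputing a few of them from the record sizes in Table 182 (the helper name is illustrative; rounding up to whole kilobytes matches the printed values):

```python
import math

def daily_kb(record_bytes, rows, interval_min=15):
    """24-hour history size in KB: samples per day x record size x rows."""
    return math.ceil((60 // interval_min) * 24 * record_bytes * rows / 1024)

# (record size, assumed rows) taken from the USS worksheets
print(daily_kb(263, 100))   # USS Address Spaces -> 2466 KB
print(daily_kb(2626, 200))  # USS Processes      -> 49238 KB
print(daily_kb(168, 400))   # USS Threads        -> 6300 KB
```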
Table 184. USS Kernel (OEKERNL2) worksheet
Interval   Record Size   Formula                     Expected File Size per 24 Hours
15 min.    294 bytes     (60/15 x 24 x 294) / 1024   28 kilobytes
Table 185. USS Processes (OPS2) worksheet
Interval   Record Size   No. of Processes   Formula                            Expected File Size per 24 Hours
15 min.    2626 bytes    200                (60/15 x 24 x 2626 x 200) / 1024   49238 kilobytes

Table 186. USS Logged on Users (OUSERS2) worksheet
Interval   Record Size   No. of Users   Formula                          Expected File Size per 24 Hours
15 min.    192 bytes     20             (60/15 x 24 x 192 x 20) / 1024   360 kilobytes
Table 187. USS Mounted File Systems (MOUNTS2) worksheet
Interval   Record Size   No. of Mounted File Systems   Formula                            Expected File Size per 24 Hours
15 min.    1444 bytes    130                           (60/15 x 24 x 1444 x 130) / 1024   17599 kilobytes

Table 188. USS BPXPRMxx Values (BPXPRM2) worksheet
Interval   Record Size   No. of BPXPRMxx Keywords   Formula                           Expected File Size per 24 Hours
15 min.    1200 bytes    50                         (60/15 x 24 x 1200 x 50) / 1024   5625 kilobytes

Table 189. USS Threads (THREAD2) worksheet
Interval   Record Size   No. of Threads   Formula                           Expected File Size per 24 Hours
15 min.    168 bytes     400              (60/15 x 24 x 168 x 400) / 1024   6300 kilobytes
Table 190. USS HFS ENQ Contention (HFSENQC2) worksheet
Interval   Record Size   No. of Enqueues   Formula                         Expected File Size per 24 Hours
15 min.    190 bytes     1                 (60/15 x 24 x 190 x 1) / 1024   18 kilobytes

OMEGAMON XE for OS/390 UNIX System Services disk space summary worksheet
We recommend that you spread the disk space requirements among the systems where data collection is performed. For example, three historical tables might be collected on the CMS and other tables at the remote agent managed system. A disk space summary worksheet for OMEGAMON XE for OS/390 UNIX System Services follows.
Table 191. OMEGAMON XE for OS/390 UNIX System Services disk space summary worksheet
History Table              Historical Data Table Size (kilobytes) (24 hours)   No. of Archives   Subtotal Space Required (kilobytes)
USS Address Spaces
USS Kernel
USS Processes
USS Logged on Users
USS Mounted File Systems
USS BPXPRMxx Values
USS Threads
USS HFS ENQ Contention
Total Disk Space Required
OMEGAMON XE for OS/400
OMEGAMON XE for OS/400 historical data tables

Table 192. OMEGAMON XE for OS/400 historical data tables
Attribute History Table      Filename for Historical Data   Default HDC Table   Estimated Space Required per Managed System per 24 Hours
Async Performance            QUSRSYS/KA4ASYNC   No
Bsync Performance            QUSRSYS/KA4BSYNC   No
Controller Description       QUSRSYS/KA4CTLD    No
Device Description           QUSRSYS/KA4DEVD    No
Disk Performance             QUSRSYS/KA4DISK    No
Ethernet Performance         QUSRSYS/KA4ENET    No
IOP Performance              QUSRSYS/KA4PFIOP   No
Job Performance              QUSRSYS/KA4PFJOB   No
Line Description             QUSRSYS/KA4LIND    No
Network Attributes           QUSRSYS/KA4NETA    Yes   45 kilobytes
Pool Activity                QUSRSYS/KA4POOL    No
SDLC Performance             QUSRSYS/KA4SDLC    No
System Status                QUSRSYS/KA4SYSTS   Yes   11 kilobytes
System Values                QUSRSYS/KA4SVAL    Yes   19 kilobytes
System Values: Activity      QUSRSYS/KA4SVACT   Yes   40 kilobytes
System Values: Device        QUSRSYS/KA4SVDEV   Yes   12 kilobytes
System Values: IPL           QUSRSYS/KA4SVIPL   Yes   11 kilobytes
System Values: Performance   QUSRSYS/KA4SVPRF   No
System Values: Problems      QUSRSYS/KA4SVPRB   No
System Values: Users         QUSRSYS/KA4SVUSR   Yes   60 kilobytes
Token-Ring Performance       QUSRSYS/KA4TKRNG   No
X.25 Performance             QUSRSYS/KA4X25     No
Total Default Space                                   198 kilobytes

OMEGAMON XE for OS/400 table record sizes

Table 193. OMEGAMON XE for OS/400 table record sizes
History Table                Record Size   Frequency
Async Performance            120 bytes     1 record per async line per interval
Bsync Performance            124 bytes     1 record per bsync line per interval
Controller Description       116 bytes     1 record per controller per interval
Device Description           152 bytes     1 record per device per interval
Disk Performance             190 bytes     1 record per disk unit per interval
Ethernet Performance         128 bytes     1 record per Ethernet line per interval
IOP Performance              195 bytes     1 record per IOP per interval
Job Performance              327 bytes     1 record per job per interval
Line Description             116 bytes     1 record per line per interval
Network Attributes           466 bytes     1 record per interval
Pool Activity                172 bytes     1 record per pool per interval
SDLC Performance             136 bytes     1 record per SDLC line per interval
System Status                112 bytes     1 record per interval
System Values                193 bytes     1 record per interval
System Values: Activity      413 bytes     1 record per interval
System Values: Device        122 bytes     1 record per interval
System Values: IPL           108 bytes     1 record per interval
System Values: Performance   187 bytes     1 record per interval
System Values: Problems      116 bytes     1 record per interval
System Values: Users         616 bytes     1 record per interval
Token-Ring Performance       128 bytes     1 record per Token-Ring line per interval
X.25 Performance             140 bytes     1 record per X.25 line per interval

OMEGAMON XE for OS/400 space requirement worksheets
Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each historical data collection table.
Table 194. Async Performance (KA4ASYNC) worksheet
Interval   Record Size   No. of Lines   Formula                         Expected File Size per 24 Hours
15 min.    120 bytes     2              (60/15 x 24 x 120 x 2) / 1024   23 kilobytes
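Applying the same calculation to the OS/400 record sizes shows how strongly the per-interval row count drives the total: Job Performance at 250 jobs is more than 300 times the size of a two-line Async Performance file. A sketch (the helper name is illustrative; rounding up to whole kilobytes matches the printed values):

```python
import math

def daily_kb(record_bytes, rows, interval_min=15):
    """24-hour history size in KB: samples per day x record size x rows."""
    return math.ceil((60 // interval_min) * 24 * record_bytes * rows / 1024)

print(daily_kb(120, 2))    # Async Performance, 2 lines -> 23 KB
print(daily_kb(327, 250))  # Job Performance, 250 jobs  -> 7665 KB
```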
Table 195. Bsync Performance (KA4BSYNC) worksheet
Interval   Record Size   No. of Lines   Formula                         Expected File Size per 24 Hours
15 min.    124 bytes     2              (60/15 x 24 x 124 x 2) / 1024   24 kilobytes

Table 196. Controller Description (KA4CTLD) worksheet
Interval   Record Size   No. of Controllers   Formula                          Expected File Size per 24 Hours
15 min.    116 bytes     30                   (60/15 x 24 x 116 x 30) / 1024   327 kilobytes

Table 197. Device Description (KA4DEVD) worksheet
Interval   Record Size   No. of Devices   Formula                           Expected File Size per 24 Hours
15 min.    152 bytes     100              (60/15 x 24 x 152 x 100) / 1024   1425 kilobytes

Table 198. Disk Performance (KA4DISK) worksheet
Interval   Record Size   No. of Units   Formula                          Expected File Size per 24 Hours
15 min.    190 bytes     10             (60/15 x 24 x 190 x 10) / 1024   179 kilobytes

Table 199. Ethernet Performance (KA4ENET) worksheet
Interval   Record Size   No. of Lines   Formula                         Expected File Size per 24 Hours
15 min.    128 bytes     2              (60/15 x 24 x 128 x 2) / 1024   24 kilobytes

Table 200. IOP Performance (KA4PFIOP) worksheet
Interval   Record Size   No. of IOPs   Formula                          Expected File Size per 24 Hours
15 min.    195 bytes     10            (60/15 x 24 x 195 x 10) / 1024   183 kilobytes

Table 201. Job Performance (KA4PFJOB) worksheet
Interval   Record Size   No. of Jobs   Formula                           Expected File Size per 24 Hours
15 min.    327 bytes     250           (60/15 x 24 x 327 x 250) / 1024   7665 kilobytes

Table 202. Line Description (KA4LIND) worksheet
Interval   Record Size   No. of Lines   Formula                          Expected File Size per 24 Hours
15 min.    116 bytes     10             (60/15 x 24 x 116 x 10) / 1024   109 kilobytes

Table 203. Network Attributes (KA4NETA) worksheet
Interval   Record Size   Formula                     Expected File Size per 24 Hours
15 min.    466 bytes     (60/15 x 24 x 466) / 1024   44 kilobytes
Table 204. Pool Activity (KA4POOL) worksheet
Interval   Record Size   No. of Pools   Formula                         Expected File Size per 24 Hours
15 min.    172 bytes     4              (60/15 x 24 x 172 x 4) / 1024   65 kilobytes

Table 205. SDLC Performance (KA4SDLC) worksheet
Interval   Record Size   No. of Lines   Formula                         Expected File Size per 24 Hours
15 min.    136 bytes     2              (60/15 x 24 x 136 x 2) / 1024   26 kilobytes

Table 206. System Status (KA4SYSTS) worksheet
Interval   Record Size   Formula                     Expected File Size per 24 Hours
15 min.    112 bytes     (60/15 x 24 x 112) / 1024   11 kilobytes

Table 207. System Values (KA4SVAL) worksheet
Interval   Record Size   Formula                     Expected File Size per 24 Hours
15 min.    193 bytes     (60/15 x 24 x 193) / 1024   19 kilobytes

Table 208. System Values: Activity (KA4SVACT) worksheet
Interval   Record Size   Formula                     Expected File Size per 24 Hours
15 min.    413 bytes     (60/15 x 24 x 413) / 1024   39 kilobytes

Table 209. System Values: Device (KA4SVDEV) worksheet
Interval   Record Size   Formula                     Expected File Size per 24 Hours
15 min.    122 bytes     (60/15 x 24 x 122) / 1024   12 kilobytes

Table 210. System Values: IPL (KA4SVIPL) worksheet
Interval   Record Size   Formula                     Expected File Size per 24 Hours
15 min.    108 bytes     (60/15 x 24 x 108) / 1024   11 kilobytes

Table 211. System Values: Performance (KA4SVPRF) worksheet
Interval   Record Size   Formula                     Expected File Size per 24 Hours
15 min.    187 bytes     (60/15 x 24 x 187) / 1024   18 kilobytes

Table 212. System Values: Problems (KA4SVPRB) worksheet
Interval   Record Size   Formula                     Expected File Size per 24 Hours
15 min.    116 bytes     (60/15 x 24 x 116) / 1024   11 kilobytes

Table 213. System Values: Users (KA4SVUSR) worksheet
Interval   Record Size   Formula                     Expected File Size per 24 Hours
15 min.    616 bytes     (60/15 x 24 x 616) / 1024   58 kilobytes

Table 214. Token-Ring Performance (KA4TKRNG) worksheet
Interval   Record Size   No. of Lines   Formula                         Expected File Size per 24 Hours
15 min.    128 bytes     2              (60/15 x 24 x 128 x 2) / 1024   24 kilobytes

Table 215. X.25 Performance (KA4X25) worksheet
Interval   Record Size   No. of Lines   Formula                         Expected File Size per 24 Hours
15 min.    140 bytes     2              (60/15 x 24 x 140 x 2) / 1024   27 kilobytes
OMEGAMON XE for R/3™
OMEGAMON XE for R/3™ historical data tables

Table 216. OMEGAMON XE for R/3™ historical data tables
Attribute History Table    Filename for Historical Data   Default HDC Table   Estimated Space Required per Managed System per 24 Hours
Instance Configuration     KSASYS
Service Response           KSAPERF
Alerts                     KSAALERTS
Operating System and LAN   KSAOSP
File Systems               KSAFSYSTEM
Buffer Performance         KSABUFFER

OMEGAMON XE for R/3™ table record sizes

Table 217. OMEGAMON XE for R/3™ table record sizes
History Table              Record Size   Frequency
Instance Configuration     416 bytes     1 record per R/3™ instance per interval
Service Response           176 bytes     1 record per R/3™ service used per R/3™ instance per interval
Alerts                     252 bytes     0 to many per R/3™ instance per interval
Operating System and LAN   148 bytes     1 record per R/3™ instance per interval
File Systems               188 bytes     1 record per file system per R/3™ instance per interval
Buffer Performance         220 bytes     10 or 11 records per R/3™ instance per interval (10 records prior to Release 3.1H; 11 records from Release 3.1H)
OMEGAMON XE for R/3™ space requirement worksheets
Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each historical data collection table.
Table 218. Instance Configuration (KSASYS) worksheet
Interval   Record Size   Instances   Formula                         Expected File Size per 24 Hours
15 min.    416 bytes     3           (60/15 x 24 x 416) x 3 / 1024   117 kilobytes

Table 219. Service Response (KSAPERF) worksheet
Interval   Record Size   Instances   Formula                                              Expected File Size per 24 Hours
15 min.    176 bytes     3           (60/15 x 24 x 176) x 3 x 5 / 1024 (for 5 services)   248 kilobytes
Table 220. Alerts (KSAALERTS) worksheet
Interval   Record Size   Instances   Formula                                            Expected File Size per 24 Hours
15 min.    252 bytes     3           (60/15 x 24 x 252) x 3 x 4 / 1024 (for 4 alerts)   284 kilobytes
Table 221. Operating System and LAN (KSAOSP) worksheet
Interval   Record Size   Instances   Formula                         Expected File Size per 24 Hours
15 min.    148 bytes     3           (60/15 x 24 x 148) x 3 / 1024   42 kilobytes
Table 222. File Systems (KSAFSYSTEM) worksheet
Interval   Record Size   Instances   Formula                                                  Expected File Size per 24 Hours
15 min.    188 bytes     3           (60/15 x 24 x 188) x 3 x 8 / 1024 (for 8 file systems)   423 kilobytes
Table 223. Buffer Performance (KSABUFFER) worksheet
Interval   Record Size   Instances   Formula                                               Expected File Size per 24 Hours
15 min.    220 bytes     3           (60/15 x 24 x 220) x 3 x 11 / 1024 (for 11 buffers)   681 kilobytes
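The R/3 worksheets extend the basic calculation with both an instance count and a records-per-instance count (alerts, file systems, buffers). A sketch using the sample values from the Alerts and Buffer Performance worksheets (the helper name is illustrative; rounding up to whole kilobytes matches the printed values):

```python
import math

def r3_daily_kb(record_bytes, instances, records_per_instance=1, interval_min=15):
    """24-hour history size in KB across all monitored R/3 instances."""
    samples_per_day = (60 // interval_min) * 24    # 96 samples at 15-minute intervals
    total = samples_per_day * record_bytes * instances * records_per_instance
    return math.ceil(total / 1024)

print(r3_daily_kb(252, 3, 4))   # Alerts: 3 instances x 4 alerts   -> 284 KB
print(r3_daily_kb(220, 3, 11))  # Buffers: 3 instances x 11 buffers -> 681 KB
```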
OMEGAMON XE for R/3™ disk space summary worksheet
We recommend that you spread the disk space requirements among the systems where data collection is performed. For example, three historical tables might be collected on the CMS and others at the remote managed system. A disk space summary worksheet for OMEGAMON XE for R/3™ follows.

Table 224. OMEGAMON XE for R/3™ disk space summary worksheet
History Table              Historical Data Table Size (kilobytes) (24 hours)   No. of Archives   Subtotal Space Required (kilobytes)
Instance Configuration
Service Response
Alerts
Operating System and LAN
File Systems
Buffer Performance
Total Disk Space Required
OMEGAMON XE for Sysplex
OMEGAMON XE for Sysplex historical data tables
Table 225. OMEGAMON XE for Sysplex historical data tables
Attribute History Table                   Filename for Historical Data   Default HDC Table   Estimated Space Required per Managed System per 24 Hours
Service Class Address Spaces              MADDSPC     No
CF Path                                   MCFPATH     No
CF Structure to MVS System                MCFSMVS     No
CF Structures                             MCFSTRCT    No
Sysplex DASD Device                       MDASD_DEV   No
Sysplex DASD Group                        MDASD_GRP   No
Sysplex DASD                              MDASD_SYS   No
Global Enqueues                           MGLBLENQ    No
Resource Groups                           MRESGRP     No
Report Classes                            MRPTCLS     No
Sysplex WLM Service Class Period          MSRVCLS     No
Service Definition                        MSRVDEF     No
Service Class Subsys Workflow Analysis    MSSWFA      Yes
Service Class Enqueue Workflow Analysis   MWFAENQ     Yes
Service Class I/O Workflow Analysis       MWFAIO      Yes
XCF Paths                                 MXCFPATH    No
XCF System Statistics                     MXCFSSTA    No
XCF System                                MXCFSYS     No

OMEGAMON XE for Sysplex table record sizes
Table 226. OMEGAMON XE for Sysplex table record sizes
History Table                             Record Size (bytes)   Frequency (per interval)
Service Class Address Spaces              178   1 per Address Space per Managed System
CF Path                                   120   1 per Coupling Facility per Managed System
CF Structure to MVS System                152   1 per Coupling Facility Structure per Managed System
CF Structures                             378   1 per Coupling Facility Structure per Managed System
Sysplex DASD Device                       166   1 per DASD Device per Managed System
Sysplex DASD Group                        215   1 per DASD Device per Group
Sysplex DASD                              157   1 per DASD Group per Managed System
Global Enqueues                           371   1 per Major name/Minor name per Owning Task/Waiting Task
Resource Groups                           124   1 per Resource Group
Report Classes                            116   1 per Report Class
Sysplex WLM Service Class Period          293   1 per Service Class per Period Number per Managed System
Service Definition                        188   1 per Service Definition
Service Class Subsys Workflow Analysis    195   1 per Service Class per Managed System
Service Class Enqueue Workflow Analysis   130   1 per Enqueue per Service Class per Managed System
Service Class I/O Workflow Analysis       138   1 per Service Class per Period Number per Managed System
XCF Paths                                 176   3 per Transport Class per System From/To per Origin/Destination
XCF System Statistics                     180   1 per Transport Class per System From per System To
XCF System                                172   1 per Managed System

OMEGAMON XE for Sysplex space requirement worksheets
Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each historical data collection table.
Table 227. Service Class Address Spaces (MADDSPC) worksheet
Interval Record Size Number of Address Spaces Number of Managed Systems Formula Expected File Size per 24 Hours
15 min. 178 bytes 100 5 (60/15 x 24 x 178 x 100 x 5) / (1024 x 1024) 9 Megabytes
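Every worksheet in this chapter applies the same arithmetic: samples per day, times record size, times the per-table instance counts, divided by 1024 for kilobytes or by 1024 x 1024 for megabytes. A minimal sketch of that calculation in Python (the helper name and the round-up choice are ours, not part of the product):

```python
import math

def expected_kb_per_day(record_bytes, counts=(), interval_min=15):
    """Worksheet arithmetic: samples/day x record size x instance counts.

    counts holds the per-table multipliers (address spaces, managed
    systems, and so on). Returns kilobytes; divide by 1024 again for MB.
    """
    samples_per_day = (60 // interval_min) * 24   # 96 for 15-minute intervals
    total_bytes = samples_per_day * record_bytes
    for n in counts:
        total_bytes *= n
    return total_bytes / 1024

# Table 227 sample: 178-byte records, 100 address spaces, 5 managed systems.
# 8,544,000 bytes per day, about 8.15 MB; the worksheet rounds up to 9 MB.
megabytes = expected_kb_per_day(178, counts=(100, 5)) / 1024
```

Substitute your own interval, record size, and counts to reproduce any worksheet in this chapter.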
Table 228. CF Path (MCFPATH) worksheet
Interval Record Size Number of Coupling Facilities Number of Managed Systems Formula Expected File Size per 24 Hours
15 min. 120 bytes 10 5 (60/15 x 24 x 120 x 10 x 5) / 1024 563 kilobytes
Table 229. CF Structure to MVS System (MCFSMVS) worksheet
Interval Record Size Number of CF Structures Number of Managed Systems Formula Expected File Size per 24 Hours
15 min. 152 bytes 50 5 (60/15 x 24 x 152 x 50 x 5) / (1024 x 1024) 4 Megabytes
Table 230. CF Structures (MCFSTRCT) worksheet
Interval Record Size Number of CF Structures Number of Managed Systems Formula Expected File Size per 24 Hours
15 min. 378 bytes 50 5 (60/15 x 24 x 378 x 50 x 5) / (1024 x 1024) 9 Megabytes
Table 231. Sysplex DASD Device (MDASD_DEV) worksheet
Interval Record Size Number of DASD Devices Number of Managed Systems Formula Expected File Size per 24 Hours
15 min. 166 bytes 300 5 (60/15 x 24 x 166 x 300 x 5) / (1024 x 1024) 23 Megabytes
Table 232. Sysplex DASD Group (MDASD_GRP) worksheet
Interval Record Size Number of DASD Devices Number of DASD Groups Formula Expected File Size per 24 Hours
15 min. 215 bytes 300 30 (60/15 x 24 x 215 x 300 x 30) / (1024 x 1024) 178 Megabytes
Table 233. Sysplex DASD (MDASD_SYS) worksheet
Interval Record Size Number of DASD Groups Formula Expected File Size per 24 Hours
15 min. 157 bytes 30 (60/15 x 24 x 157 x 30) / 1024 442 Kilobytes
Table 234. Global Enqueues (MGLBLENQ) worksheet
Interval Record Size Number of Major name/Minor name per Owning Task/Waiting Task Formula Expected File Size per 24 Hours
15 min. 371 bytes 20 (60/15 x 24 x 371 x 20) / 1024 696 Kilobytes
Table 235. Resource Groups (MRESGRP) worksheet
Interval Record Size Number of Resource Groups Formula Expected File Size per 24 Hours
15 min. 124 bytes 10 (60/15 x 24 x 124 x 10) / 1024 117 Kilobytes
Table 236. Report Classes (MRPTCLS) worksheet
Interval Record Size Number of Report Classes Formula Expected File Size per 24 Hours
15 min. 116 bytes 10 (60/15 x 24 x 116 x 10) / 1024 109 Kilobytes
Table 237. Sysplex WLM Service Class Period (MSRVCLS) worksheet
Interval Record Size Number of Service Classes Number of Periods Number of Managed Systems Formula Expected File Size per 24 Hours
15 min. 293 bytes 10 4 5 (60/15 x 24 x 293 x 10 x 4 x 5) / (1024 x 1024) 6 Megabytes
Table 238. Service Definition (MSRVDEF) worksheet
Interval Record Size Number of Service Definitions Formula Expected File Size per 24 Hours
15 min. 188 bytes 8 (60/15 x 24 x 188 x 8) / 1024 141 Kilobytes
Table 239. Service Class Subsys Workflow Analysis (MSSWFA) worksheet
Interval Record Size Number of Service Classes Number of Managed Systems Formula Expected File Size per 24 Hours
15 min. 195 bytes 10 5 (60/15 x 24 x 195 x 10 x 5) / 1024 915 Kilobytes
Table 240. Service Class Enqueue Workflow Analysis (MWFAENQ) worksheet
Interval Record Size Number of Enqueues Number of Service Classes Number of Managed Systems Formula Expected File Size per 24 Hours
15 min. 130 bytes 4 10 5 (60/15 x 24 x 130 x 4 x 10 x 5) / (1024 x 1024) 3 Megabytes
Table 241. Service Class I/O Workflow Analysis (MWFAIO) worksheet
Interval Record Size Number of Service Classes Number of Periods Number of Managed Systems Formula Expected File Size per 24 Hours
15 min. 138 bytes 10 4 5 (60/15 x 24 x 138 x 10 x 4 x 5) / (1024 x 1024) 3 Megabytes
Table 242. XCF Paths (MXCFPATH) worksheet
Interval Record Size Number of Transport Classes Number of System From/To Number of Origin/Destination Formula Expected File Size per 24 Hours
15 min. 176 bytes 20 16 18 (60/15 x 24 x 176 x 20 x 16 x 18 x 3) / (1024 x 1024) 279 Megabytes
Table 243. XCF System Statistics (MXCFSSTA) worksheet
Interval Record Size Number of Transport Classes Number of System Froms Number of System Tos Formula Expected File Size per 24 Hours
15 min. 180 bytes 20 4 4 (60/15 x 24 x 180 x 20 x 4 x 4) / (1024 x 1024) 6 Megabytes
Table 244. XCF System (MXCFSYS) worksheet
Interval Record Size Number of Managed Systems Formula Expected File Size per 24 Hours
15 min. 172 bytes 5 (60/15 x 24 x 172 x 5) / 1024 81 Kilobytes
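The sample results in the kilobyte-sized Sysplex worksheets can be checked mechanically in one pass. A sketch, assuming our own table of (record size, multipliers, documented result) entries; the names and the round-up convention are ours, not part of the product:

```python
import math

def daily_kb(record_bytes, counts, interval_min=15):
    samples = (60 // interval_min) * 24      # 96 samples in 24 hours
    total = samples * record_bytes
    for n in counts:
        total *= n
    return math.ceil(total / 1024)           # the worksheets round up

# (record bytes, multipliers, expected kilobytes) taken from the
# worksheet tables above; the dictionary itself is illustrative.
WORKSHEETS = {
    "CF Path": (120, (10, 5), 563),
    "Sysplex DASD": (157, (30,), 442),
    "Global Enqueues": (371, (20,), 696),
    "Resource Groups": (124, (10,), 117),
    "Report Classes": (116, (10,), 109),
    "Service Definition": (188, (8,), 141),
    "XCF System": (172, (5,), 81),
}
for name, (size, counts, expected) in WORKSHEETS.items():
    assert daily_kb(size, counts) == expected, name
```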
OMEGAMON XE for Sysplex disk space summary worksheet
We recommend that you spread the disk space requirements among the systems where data collection is performed. For example, three historical tables might be collected on the CMS and others at the remote managed system. A disk space summary worksheet for OMEGAMON XE for Sysplex follows.
Table 245. OMEGAMON XE for Sysplex disk space summary worksheet
History Table Historical Data Table Size (kilobytes) (24 hours) No. of Archives Subtotal Space Required (kilobytes)
Service Class Address Spaces
CF Path
CF Structure to MVS System
CF Structures
Sysplex DASD Device
Sysplex DASD Group
Sysplex DASD
Global Enqueues
Resource Groups
Report Classes
Sysplex WLM Service Class Period
Service Definition
Service Class Subsys Workflow Analysis
Service Class Enqueue Workflow Analysis
Service Class I/O Workflow Analysis
XCF Paths
XCF System Statistics
XCF System
Total Disk Space Required
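The summary worksheet's arithmetic is a multiply-and-sum: each table's 24-hour size times the number of archives you keep, totalled across tables. A sketch with illustrative numbers (the sizes and archive counts below are placeholders, not product defaults):

```python
def subtotal_kb(table_size_kb, num_archives):
    # Subtotal Space Required = 24-hour table size x number of archives kept
    return table_size_kb * num_archives

# Placeholder entries for three of the tables above; fill in your own
# worksheet values and archive counts.
rows = {
    "Service Class Address Spaces": (9 * 1024, 3),   # 9 MB/day, 3 archives
    "CF Path": (563, 3),
    "XCF System": (81, 3),
}
total_kb = sum(subtotal_kb(size, n) for size, n in rows.values())
# Total Disk Space Required for these three tables: 29,580 kilobytes
```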
OMEGAMON XE for Tuxedo
OMEGAMON XE for Tuxedo historical data tables
The amount of default space required for a 24-hour period on a monitored system varies greatly and depends upon your specific operating environment. Note that history tables for Tuxedo User Logs can become very large, depending on the number of entries in the log. We recommend that you set the history collection interval for this table to once a day only; otherwise, duplicate records may appear in the database. If you decide to collect history for this table, you should have procedures in place to archive its data on a regular basis.
Table 246. OMEGAMON XE for Tuxedo historical data tables
Attribute History Table Filename for Historical Data Default HDC Table Estimated Space Required per Managed System per 24 Hours (kilobytes)
Application Queues TXAPPQ No 38
Application Server Queues TXSVRQ Yes 31
Machine Configuration TXMCONF No 50
Machine Stats TXMSTATS Yes 31
Queue Load TXQLOAD Yes 24
Server Groups TXSVRGP No 23
Service Group TXSVCGRP Yes 49
System Message Queues TXSYSMQ Yes 17
Tuxedo App Queue Msgs TXQMSG No 32
Tuxedo App Queue Spcs TXQSPCS No 34
Tuxedo App Queue Trans TXQTRAN No 29
Tuxedo BBL Statistics TXBBLCFG No 29
Tuxedo Client Conversations TXCONV No 29
Tuxedo Client Statistics TXSTATS Yes 30
Tuxedo Clients TXCLIENTS Yes 34
Tuxedo Domain Configuration TXDOMCFG No 29
Tuxedo Servers TXSERVER Yes 63
Tuxedo Server Statistics TXSRVSTAT Yes 29
Tuxedo Transactions TXTRANSACT No 31
Tuxedo User Logs TXULOGS Yes 65
Total Default Space 697 kilobytes

OMEGAMON XE for Tuxedo table record sizes
The following table contains record sizes for each OMEGAMON XE for Tuxedo attribute table.

Table 247. OMEGAMON XE for Tuxedo table record sizes
History Table Record Size in bytes Frequency
Application Queues 402 1 record per application queue per interval
Application Server Queues 326 1 record per server queue per interval
Machine Configuration 522 1 record per machine in this domain per interval
Machine Stats 328 1 record per machine in this domain per interval
Queue Load 248 1 record per server queue per interval
Server Groups 236 1 record per server group per interval
Service Group 517 1 record per service per interval
System Message Queues 176 1 record per system message queue per interval
Tuxedo App Queue Msgs 336 1 record per message in the application queues per interval
Tuxedo App Queue Spcs 360 1 record per application queue space per interval
Tuxedo App Queue Trans 306 1 record per application queue transaction per interval
Tuxedo BBL Statistics 304 1 record per interval
Tuxedo Client Conversations 306 1 record per client per interval
Tuxedo Client Statistics 310 1 record per client per interval
Tuxedo Clients 356 1 record per client per interval
Tuxedo Domain Configuration 300 1 record per interval
Tuxedo Server Statistics 308 1 record per server per interval
Tuxedo Servers 670 1 record per server within the application per interval
Tuxedo Transactions 324 1 record per transaction per interval
Tuxedo User Logs 664 undetermined

OMEGAMON XE for Tuxedo space requirement worksheets
Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each historical data collection table. Use calculations that reflect conditions in your environment.

Table 248. Application Queues (TXAPPQ) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 402 bytes (60/15 x 24 x 402 bytes x application queues) / 1024 38 kilobytes*
*Calculated on 1 application queue

Table 249. Application Server Queues (TXSVRQ) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 326 bytes (60/15 x 24 x 326 bytes x application server queues) / 1024 31 kilobytes*
*Calculated on 1 application server queue
Table 250. Machine Configuration (TXMCONF) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 522 bytes (60/15 x 24 x 522 bytes x machines) / 1024 50 kilobytes*
*Calculated on 1 machine

Table 251. Machine Stats (TXMSTATS) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 328 bytes (60/15 x 24 x 328 bytes x machines) / 1024 31 kilobytes*
*Calculated on 1 machine

Table 252. Queue Load (TXQLOAD) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 248 bytes (60/15 x 24 x 248 bytes x application server queues) / 1024 24 kilobytes*
*Calculated on 1 application server queue
Table 253. Server Groups (TXSVRGP) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 236 bytes (60/15 x 24 x 236 bytes x server groups) / 1024 23 kilobytes*
*Calculated on 1 server group

Table 254. Service Group (TXSVCGRP) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 517 bytes (60/15 x 24 x 517 bytes x service groups) / 1024 49 kilobytes*
*Calculated on 1 application service group

Table 255. System Message Queues (TXSYSMQ) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 176 bytes (60/15 x 24 x 176 bytes x system message queues) / 1024 17 kilobytes*
*Calculated on 1 message queue
Table 256. Tuxedo App Queue Msgs (TXQMSG) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 336 bytes (60/15 x 24 x 336 bytes x application queue messages) / 1024 32 kilobytes*
*Calculated on 1 application queue message

Table 257. Tuxedo App Queue Spcs (TXQSPCS) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 360 bytes (60/15 x 24 x 360 bytes x application queue spaces) / 1024 34 kilobytes*
*Calculated on 1 application queue space

Table 258. Tuxedo App Queue Trans (TXQTRAN) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 306 bytes (60/15 x 24 x 306 bytes x application queue transactions) / 1024 29 kilobytes*
*Calculated on 1 application queue transaction
Table 259. Tuxedo BBL Statistics (TXBBLCFG) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 304 bytes (60/15 x 24 x 304 bytes x 1) / 1024 29 kilobytes

Table 260. Tuxedo Client Conversations (TXCONV) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 306 bytes (60/15 x 24 x 306 bytes x clients*) / 1024 29 kilobytes**
*One row per client
**Calculated on 1 client

Table 261. Tuxedo Client Statistics (TXSTATS) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 310 bytes (60/15 x 24 x 310 bytes x clients) / 1024 30 kilobytes*
*Calculated on 1 client
Table 262. Tuxedo Clients (TXCLIENTS) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 356 bytes (60/15 x 24 x 356 bytes x clients) / 1024 34 kilobytes*
*Calculated on 1 client

Table 263. Tuxedo Domain Configuration (TXDOMCFG) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 300 bytes (60/15 x 24 x 300 bytes x 1) / 1024 29 kilobytes*
*Only one domain is monitored by an agent.

Table 264. Tuxedo Servers (TXSERVER) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 670 bytes (60/15 x 24 x 670 bytes x application servers) / 1024 63 kilobytes*
*Calculated on 1 application server
Table 265. Tuxedo Server Statistics (TXSRVSTAT) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 308 bytes (60/15 x 24 x 308 bytes x application servers) / 1024 29 kilobytes*
*Calculated on 1 application server

Table 266. Tuxedo Transactions (TXTRANSACT) worksheet
Interval Record size Formula Expected File Size per 24 Hours
15 min. 324 bytes (60/15 x 24 x 324 bytes x transactions) / 1024 31 kilobytes*
*Calculated on 1 application transaction

Table 267. Tuxedo User Logs (TXULOGS) worksheet
Interval Record size Formula Expected File Size per 24 Hours
1 day* 664 bytes (1 x 664 bytes x entries in user log) / 1024 65 kilobytes**
*An interval of once a day is recommended to avoid duplication of records in the database.
**Calculated on 100 entries in the user log
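The Tuxedo worksheets follow the same pattern as the others, with Tuxedo User Logs as the one exception: it is sampled once per day rather than 96 times. A sketch in Python (the helper name is ours, not part of the product):

```python
import math

def tuxedo_table_kb(record_bytes, instances, samples_per_day=96):
    """96 = four 15-minute samples per hour x 24 hours."""
    return math.ceil(samples_per_day * record_bytes * instances / 1024)

# Table 248: one application queue at the default 15-minute interval.
assert tuxedo_table_kb(402, 1) == 38
# Table 267: User Logs collected once per day, 100 log entries assumed.
assert tuxedo_table_kb(664, 100, samples_per_day=1) == 65
```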
OMEGAMON XE for Tuxedo disk space summary worksheet
We recommend that you spread the disk space requirements among the systems where data collection is performed. For example, three historical tables might be collected on the CMS and others at the remote agent managed system. A disk space summary worksheet for OMEGAMON XE for Tuxedo follows.
*Use kilobyte values for 24-hour data collection that reflect operations of your environment.
Table 268. OMEGAMON XE for Tuxedo disk space summary worksheet
History Table Historical Data Table Size (kilobytes) (24 hours) No. of Archives Subtotal Space Required (kilobytes)
Application Queues
Application Server Queues
Machine Configuration
Machine Stats
Queue Load
Server Groups
Service Group
System Message Queues
Tuxedo App Queue Msgs
Tuxedo App Queue Spcs
Tuxedo App Queue Trans
Tuxedo BBL Statistics
Tuxedo Client Conversations
Tuxedo Client Statistics
Tuxedo Clients
Tuxedo Domain Configuration
Tuxedo Server Statistics
Tuxedo Transactions
Tuxedo User Logs
Total Disk Space Required
OMEGAMON XE for UNIX Systems
OMEGAMON XE for UNIX default historical data tables
Table 269. OMEGAMON XE for UNIX historical data tables
Attribute History Table Filename for Historical Data Default HDC Table Estimated Space Required per managed system per 24 Hours
System UNIXOS Yes 490 kilobytes
Filesystem Space UNIXDISK Yes 11,158 kilobytes
Disk Performance UNIXDPERF No
Network Interface UNIXNET No
Online Users UNIXUSER No
Running Processes UNIXPS No
Network Filesystem UNIXNFS No
Processor/CPU UNIXCPU No
Total Default Space 11,648 kilobytes
OMEGAMON XE for UNIX systems table record sizes
Table 270. OMEGAMON XE for UNIX systems table record sizes
History Table Record Size Frequency
System 748 bytes 1 record per interval
Filesystem Space 340 bytes 1 record per mounted filesystem per interval
Disk Performance 208 bytes 1 record per disk drive per interval
Network Interface 256 bytes 1 record per network interface per interval
Online Users 208 bytes 1 record per logged-in user per interval
Running Processes 752 bytes 1 record per running process per interval
Network Filesystem 640 bytes 1 record per interval
Processor/CPU 352 bytes
OMEGAMON XE for UNIX systems space requirement worksheets
Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each historical data collection table.
Table 271. System (UNIXOS) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 748 bytes (60/15 x 24 x 748) / 1024 70 kilobytes
Table 272. Filesystem Space (UNIXDISK) worksheet
Interval Record Size No. of Filesystems Formula Expected File Size per 24 Hours
15 min. 340 bytes 50 (60/15 x 24 x 340 x 50) / 1024 1,594 kilobytes
Table 273. Disk Performance space (UNIXDPERF) worksheet
Interval Record Size No. of Phys. Disks Formula Expected File Size per 24 Hours
15 min. 208 bytes 20 (60/15 x 24 x 208 x 20) / 1024 390 kilobytes
Table 274. Network Interface space (UNIXNET) worksheet
Interval Record Size No. of Network Interfaces Formula Expected File Size per 24 Hours
15 min. 256 bytes 4 (60/15 x 24 x 256 x 4) / 1024 96 kilobytes
Table 275. Online Users space (UNIXUSER) worksheet
Interval Record Size No. of Online Users Formula Expected File Size per 24 Hours
15 min. 208 bytes 50 (60/15 x 24 x 208 x 50) / 1024 975 kilobytes
Table 276. Running Processes (UNIXPS) worksheet
Interval Record Size No. of Running Processes Formula Expected File Size per 24 Hours
15 min. 752 bytes 300 (60/15 x 24 x 752 x 300) / 1024 21,150 kilobytes
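Running Processes dominates the UNIX estimates because its multiplier is the number of processes sampled; the formula makes that sensitivity easy to see. A sketch (the function name is ours, not part of the product):

```python
def unix_daily_kb(record_bytes, instances=1, interval_min=15):
    samples = (60 // interval_min) * 24          # 96 samples per day
    return samples * record_bytes * instances / 1024

# Table 276: 752-byte records x 300 processes -> 21,150 KB in 24 hours.
# Halving the number of retained processes halves the history space.
processes_kb = unix_daily_kb(752, 300)
system_kb = unix_daily_kb(748)                   # Table 271: about 70 KB
```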
Table 277. Network Filesystem (UNIXNFS) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 640 bytes (60/15 x 24 x 640) / 1024 60 kilobytes

Table 278. UNIX Processor/CPU (UNIXCPU) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 352 bytes (60/15 x 24 x 352) / 1024 33 kilobytes
OMEGAMON XE for UNIX disk space summary worksheet
We recommend that you spread the disk space requirements among the systems where data collection is performed. For example, three historical tables might be collected on the CMS and other tables at the remote agent managed system. A disk space summary worksheet for OMEGAMON XE for UNIX follows.
Table 279. OMEGAMON XE for UNIX disk space summary worksheet
History Table Historical Data Table Size (kilobytes) (24 hours) No. of Archives Subtotal Space Required (kilobytes)
System
Filesystem Space
Disk Performance
Network Interface
Online Users
Running Processes
NFS
Processor/CPU
Total Disk Space Required
OMEGAMON XE for WebSphere Application Server
OMEGAMON XE for WebSphere Application Server historical data tables
Note: These tables apply to the distributed product.
The amount of default space required for a 24-hour period on a monitored system varies greatly and depends upon your specific operating environment. These calculations are based on a 15-minute sampling interval.
Table 280. OMEGAMON XE for WebSphere Application Server historical data tables
Attribute History Table Filename for Historical Data Default HDC Table Estimated Space Required per Managed System per 24 Hours (kilobytes)
All Workloads KWEWKLDS Yes 107 kilobytes
Application Server KWEAPPSRV Yes 95 kilobytes
Application Server Errors KWEASERR No 1 kilobyte for every message written to the application server log
Application Server Status KWEAPSST No None (not used for historical purposes)
Container Object Pools KWEEBOP Yes 78 kilobytes (assumes 1 container per application)
Container Transactions KWETRANS Yes 86 kilobytes (assumes 1 container per application server)
DB Connection Pools KWEDBCONP Yes 98 kilobytes per data source
EJB Containers KWECONTNR Yes 96 kilobytes (assumes 1 container per application server)
Enterprise Java Bean Methods KWEEJBMTD No 100 kilobytes per EJB method
Enterprise Java Beans KWEEJB No 98 kilobytes per EJB
JVM Garbage Collector Activity KWEGC No 69 kilobytes
Longest Running Workloads KWEWKLEX Yes 132 kilobytes
Product Events KWEPREV Yes incalculable (estimate 0.5 kilobytes per product event)
Selected Workload Delays KWEWKLDD Yes 118 kilobytes
Servlets/JSPs KWESERVLT No 114 kilobytes per servlet
Web Applications KWEAPP Yes 99 kilobytes per web application
Total Default Space 1,290 kilobytes*
*Modify this total to reflect approximate conditions in your operating environment.
OMEGAMON XE for WebSphere Application Server table record sizes
The following table contains record sizes for each OMEGAMON XE for WebSphere Application Server attribute table.
Table 281. OMEGAMON XE for WebSphere Application Server table record sizes
History Table Record Size in bytes Frequency
All Workloads 1136 1 record per interval for each workload in each application server
Application Server 1020 1 record per interval per application server
Application Server Errors 968 1 record for every record written into the application server log file
Container Object Pools 832 1 record per interval per application server, plus 1 record per interval per EJB container
Container Transactions 920 1 record per interval per application server, plus 1 record per interval per EJB container
DB Connection Pools 1052 1 record per interval per application server, plus 1 record per interval per data source
EJB Containers 1024 1 record per interval per application server, plus 1 record per interval per EJB container
Enterprise Java Bean Methods 1180 1 record per interval for each EJB method
Enterprise Java Beans 1048 1 record per interval for each EJB
JVM Garbage Collector Activity 736 1 record per interval per application server
Longest Running Workloads 1404 1 record per interval for each exceptional workload in each application server
Product Events 484 1 record for each product event (These records are written when problems occur. It is impossible to say how often this occurs.)
Selected Workload Delays 1264 1 record per interval for each workload degradation in each application server
Servlets/JSPs 1212 1 record per interval per servlet
Web Applications 1052 1 record per interval per web application

OMEGAMON XE for WebSphere Application Server space requirement worksheets
Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each historical data collection table. Use calculations that would reflect conditions in your environment.
Table 282. All Workloads (KWEWKLDS) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min 1136 bytes (60/15 x 24 x 1136) x workloads x application servers/1024 107 kilobytes*
*Calculated on 1 workload.
To calculate the size for your site, replace the workloads and application server variables in the formula with your estimate of the number of workloads in each application server and the number of application servers in your WebSphere Application Server environment.
You can control the number of workload degradations by modifying the Workload Analysis Control files. Refer to the configuration chapter in the OMEGAMON XE for WebSphere Application Server User's Guide for information about configuring workload analysis.

Table 283. Application Server (KWEAPPSRV) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min 1020 bytes (60/15 x 24 x 1020) x application servers/1024 96 kilobytes*
*Calculated on 1 application server

Table 284. Application Server Errors (KWEASERR) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 968 bytes (60/15 x 24 x 968) x log records/1024 91 kilobytes*
*Calculated on 1 log record
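The WebSphere worksheets all scale a fixed per-day base (96 samples x record size) by whatever environment counts the table depends on. A sketch using keyword arguments as the multipliers (the function and argument names are ours, not part of the product):

```python
import math

def was_daily_kb(record_bytes, **counts):
    base = (60 // 15) * 24 * record_bytes        # 96 samples per day
    for n in counts.values():
        base *= n
    return math.ceil(base / 1024)

# Table 283 sample: one application server -> 96 KB per day.
assert was_daily_kb(1020, application_servers=1) == 96
# Scaling up: four application servers roughly quadruples the space.
assert was_daily_kb(1020, application_servers=4) == 383
```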
Table 285. Container Object Pools (KWEEBOP) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 832 bytes (60/15 x 24 x 832) x application servers x EJB containers/1024 78 kilobytes*
*Calculated on 1 application server and 1 EJB container

Table 286. Container Transactions (KWETRANS) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 920 bytes (60/15 x 24 x 920) x application servers x EJB containers/1024 87 kilobytes*
*Calculated on 1 application server and 1 EJB container

Table 287. DB Connection Pools (KWEDBCONP) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1052 bytes (60/15 x 24 x 1052) x application servers x data sources/1024 99 kilobytes*
*Calculated on 1 application server and 1 data source

Table 288. EJB Containers (KWECONTNR) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1024 bytes (60/15 x 24 x 1024) x application servers x EJB containers/1024 96 kilobytes*
*Calculated on 1 application server and 1 EJB container

Table 289. Enterprise Java Bean Methods (KWEEJBMTD) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1180 bytes (60/15 x 24 x 1180) x EJB methods/1024 111 kilobytes*
*Calculated on 1 EJB method

Table 290. Enterprise Java Beans (KWEEJB) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1048 bytes (60/15 x 24 x 1048) x EJBs/1024 99 kilobytes*
*Calculated on 1 EJB
Table 291. JVM Garbage Collector Activity (KWEGC) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 736 bytes (60/15 x 24 x 736) x application servers/1024 69 kilobytes*
*Calculated on 1 application server

Table 292. Longest Running Workloads (KWEWKLEX) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min 1404 bytes (60/15 x 24 x 1404) x exceptional workloads x application servers/1024 132 kilobytes*
*Calculated on 1 exceptional workload.
To calculate the size for your site, replace the exceptional workloads and application server variables in the formula with your estimate of the number of exceptional workloads in each application server and the number of application servers in your WebSphere Application Server environment.
You can control the number of exceptional workloads in each application server by using the ExceptionWorkloadName, ExceptionWorkloadMax, or ExceptionWorkloadMinResponseTime tags in the agent configuration file, kwe.xml. Refer to the configuration chapter in the OMEGAMON XE for WebSphere Application Server User's Guide for information about these parameters.
Table 293. Product Events (KWEPREV) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 484 bytes (60/15 x 24 x 484) x product events/1024 46 kilobytes*
*Calculated on 1 product event, but it is not possible to predict how often problems will occur and the resulting records written to the log.

Table 294. Selected Workload Delays (KWEWKLDD) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min 1264 bytes (60/15 x 24 x 1264) x workload degradations x application servers/1024 118 kilobytes*
*Calculated on 1 workload degradation.
To calculate the size for your site, replace the workload degradations and application server variables in the formula with your estimate of the number of workload degradations in each application server and the number of application servers in your WebSphere Application Server environment.
You can control the number of workload degradations by modifying the Workload Analysis Control files. Refer to the configuration chapter in the OMEGAMON XE for WebSphere Application Server User's Guide for information about configuring workload analysis.
Table 295. Servlets/JSPs (KWESERVLT) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1212 bytes (60/15 x 24 x 1212) x servlets/1024 114 kilobytes*
*Calculated on 1 servlet

Table 296. Web Applications (KWEAPP) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1052 bytes (60/15 x 24 x 1052) x web applications/1024 99 kilobytes*
*Calculated on 1 web application
OMEGAMON XE for WebSphere Application Server disk space summary worksheet
We recommend that you spread the disk space requirements among the systems where data collection is performed. For example, three historical tables might be collected on the CMS and others at the remote agent managed system. A disk space summary worksheet for OMEGAMON XE for WebSphere Application Server follows.
Table 297. OMEGAMON XE for WebSphere Application Server disk space summary worksheet
History Table Historical Data Table Size (kilobytes) (24 hours) No. of Archives Subtotal Space Required (kilobytes)
All Workloads
Application Server
Application Server Errors
Application Server Status
Container Object Pools
Container Transactions
DB Connection Pools
EJB Containers
Enterprise Java Bean Methods
Enterprise Java Beans
JVM Garbage Collector Activity
Longest Running Workloads
Product Events
Selected Workload Delays
Servlets/JSPs
Web Applications
Total Disk Space Required
OMEGAMON XE for WebSphere Application Server for OS/390
OMEGAMON XE for WebSphere Application Server for OS/390 historical data tables
The amount of default space required for a 24-hour period on a monitored system varies greatly and depends upon your specific operating environment. These calculations are based on a 15-minute sampling interval.
Table 298. OMEGAMON XE for WebSphere Application Server for OS/390 historical data tables
Attribute History Table Filename for Historical Data Default HDC Table Estimated Space Required per Managed System per 24 Hours (kilobytes)
Application Server Error Log KWWERRLG Yes 54
Application Server Instance KWWAPPSV Yes 159
Application Server Instance SMF Interval Statistics KWWAPPSM Yes 32
Application Trace for WAS OS/390 KWWAPPTR This table does not create a history file.
Application Trace File for WAS OS/390 KWWAPPTF This table does not create a history file.
Datasource Detail for WAS OS/390 KWWDATAS Yes 107
Environmental Variables KWWENVAR This table does not create a history file.
HTTP Session Detail for WAS OS/390 KWWHTTPS Yes 119
J2EE Server Bean Methods KWWJBMTH No 152
J2EE Server Beans KWWJBEAN No 77
J2EE Server Containers KWWJCONT Yes 59
JVM Garbage Collector Activity for WAS OS/390 KWWGC No 67
MOFW Server Classes KWWMCLAS No 76
MOFW Server Containers KWWMCONT Yes 53
MOFW Server Methods KWWMMETH No 98
MQSeries Access for WAS OS/390 KWWMQSAC Yes 77
Product Events for WAS OS/390 KWWPREV Yes 45
Server Instance Status KWWAPSST No 52
Workload Exception for WAS OS/390 KWWWKLEX Yes 120
Workload Degradation Detail for WAS OS/390 KWWWKLDD Yes 119
Workload Degradation Summary for WAS OS/390 KWWWKLDS Yes 108
Total Default Space 874 kilobytes*
*Modify this total to reflect approximate conditions in your operating environment.
OMEGAMON XE for WebSphere Application Server for OS/390 table record sizes
The following table contains record sizes for each OMEGAMON XE for WebSphere Application Server for OS/390 attribute table.
Table 299. OMEGAMON XE for WebSphere Application Server for OS/390 table record sizes
History Table Record Size (in bytes) Frequency
Application Server Error Log 580 1 record per interval for each entry written into the application server logstream
Application Server Instance 1700 1 record per interval per server instance
Application Server Instance SMF Interval Statistics 336 1 record per interval per server instance
Application Trace for WAS OS/390 This history table does not exist
Application Trace File for WAS OS/390 This history table does not exist
Datasource Detail for WAS OS/390 1140 1 record per interval per data source in each application server
Environmental Variables This history table does not exist
HTTP Session Detail for WAS OS/390 1272 1 record per interval for each active HTTP session in each application server
J2EE Server Bean Methods 1620 1 record per interval for each active EJB method in each server instance
J2EE Server Beans 820 1 record per interval for each active EJB in each server
J2EE Server Containers 632 1 record per interval per server instance
JVM Garbage Collector Activity for WAS OS/390 712 1 record per interval per application server
MOFW Server Classes 812 1 record per interval for each active class in each server instance
MOFW Server Containers 564 1 record per interval per server instance
MOFW Server Methods 1044 1 record per interval for each active class method in each server instance
MQSeries Access for WAS OS/390 820 1 record per interval per MQSeries queue in each application server
Product Events for WAS OS/390 484 1 record per interval for each product event (These records are written when problems occur.)
Server Instance Status 552 1 record per interval per server instance
Workload Exception for WAS OS/390 1280 1 record per interval for each exceptional workload in each application server
Workload Degradation Detail for WAS OS/390 1272 1 record per interval for each workload degradation in each application server
Workload Degradation Summary for WAS OS/390 1156 1 record per interval for each workload in each application server
OMEGAMON XE for WebSphere Application Server for OS/390 space requirement worksheets
Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each historical data collection table. Use calculations that would reflect conditions in your environment.
*Calculated on 1 log record.
The size of the history file for an application server’s logstream varies because the number of entries written to the logstream changes over time. To calculate the size for your site, replace the log records variable in the formula with your estimate of the number of entries written to the logstream during the sampling interval.
*Calculated on 1 server instance.
To calculate the size for your site, replace the server instances variable in the formula with the exact number of server instances in your WebSphere environment.
Table 300. Application Server Error Log (KWWERRLG) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min 580 bytes (60/15 x 24 x 580) x log records/1024 54 kilobytes*
Table 301. Application Server Instance (KWWAPPSV) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1700 bytes (60/15 x 24 x 1700) x server instances/1024 159 kilobytes*
*Calculated on 1 server instance.
To calculate the size for your site, replace the server instances variable in the formula with the exact number of server instances in your WebSphere environment.
*Calculated on 1 data source.
To calculate the size for your site, replace the data sources and J2EE application servers variables in the formula with the number of configured data sources and the number of J2EE application servers in your WebSphere environment.
Table 302. Application Server Instance SMF Interval Statistics (KWWAPPSM) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 336 bytes (60/15 x 24 x 336) x server instances/1024 32 kilobytes*
Table 303. Datasource Detail (KWWDATAS) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1140 bytes (60/15 x 24 x 1140) x data sources x J2EE application servers/1024 107 kilobytes*
Table 304. HTTP Session Detail for WAS OS/390 (KWWHTTPS) worksheet
*Calculated on 1 HTTP session in 1 J2EE application server.
To calculate the size for your site, replace the HTTP sessions and J2EE application server variables in the formula with your estimate of the number of active HTTP sessions in each J2EE application server and the number of J2EE application servers in your WebSphere environment.
*Calculated on 1 EJB method.
To calculate the size for your site, replace the EJB methods variable in the formula with the number of active EJB methods in all J2EE server instances in your WebSphere environment.
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1272 bytes (60/15 x 24 x 1272) x HTTP sessions x J2EE application servers/1024 119 kilobytes*
Table 305. J2EE Server Bean Methods (KWWJBMTH) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1620 bytes (60/15 x 24 x 1620) x EJB methods/1024 152 kilobytes*
*Calculated on 1 EJB.
To calculate the size for your site, replace the EJBs variable in the formula with the exact number of active EJBs in all J2EE server instances in your WebSphere environment.
*Calculated on 1 server instance.
To calculate the size for your site, replace the server instances variable in the formula with the exact number of server instances in your WebSphere environment.
Table 306. J2EE Server Beans (KWWJBEAN) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 820 bytes (60/15 x 24 x 820) x EJBs/1024 77 kilobytes*
Table 307. J2EE Server Containers (KWWJCONT) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 632 bytes (60/15 x 24 x 632) x server instances/1024 59 kilobytes*
*Calculated on 1 application server.
To calculate the size for your site, replace the application servers variable in the formula with the number of application servers in your WebSphere environment.
*Calculated on 1 class.
To calculate the size for your site, replace the classes variable in the formula with your estimate of the number of active classes in all MOFW server instances in your WebSphere environment.
Table 308. JVM Garbage Collector Activity for WAS OS/390 (KWWGC) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 712 bytes (60/15 x 24 x 712) x application servers/1024 67 kilobytes*
Table 309. MOFW Server Classes (KWWMCLAS) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 812 bytes (60/15 x 24 x 812) x classes/1024 76 kilobytes*
*Calculated on 1 server instance.
To calculate the size for your site, replace the server instances variable in the formula with the exact number of server instances in your WebSphere environment.
*Calculated on 1 method.
To calculate the size for your site, replace the class methods variable in the formula with the number of active class methods in all MOFW server instances in your WebSphere environment.
Table 310. MOFW Server Containers (KWWMCONT) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 564 bytes (60/15 x 24 x 564) x server instances/1024 53 kilobytes*
Table 311. MOFW Server Methods (KWWMMETH) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1044 bytes (60/15 x 24 x 1044) x class methods/1024 98 kilobytes*
*Calculated on 1 MQSeries queue.
To calculate the size for your site, replace the queues and application servers variables in the formula with the number of configured MQSeries queues in each J2EE application server and the number of J2EE application servers in your WebSphere environment.
*Calculated on 1 product event.
The size of the history file for product events for WAS OS/390 varies because the number of events generated by the agent changes over time. To calculate the size for your site, replace the product events variable in the formula with your estimate of the number of events issued by the agent during the specified sampling interval.
The RetainProductsEvents tag in the agent configuration file, KWWXML, determines the maximum number of events retained in the history file. (Refer to the configuration chapter in the OMEGAMON XE for WebSphere Application Server for OS/390 User’s Guide for information about this parameter.)
Table 312. MQSeries Access for WAS OS/390 (KWWMQSAC) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 820 bytes (60/15 x 24 x 820) x queues x application servers/1024 77 kilobytes*
Table 313. Product Events for WAS OS/390 (KWWPREV) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 484 bytes (60/15 x 24 x 484) x product events/1024 45 kilobytes*
*Calculated on 1 server instance.
To calculate the size for your site, replace the server instances variable in the formula with the exact number of server instances in your WebSphere environment.
*Calculated on 1 exceptional workload.
To calculate the size for your site, replace the exceptional workloads and J2EE application servers variables in the formula with your estimate of the number of exceptional workloads in each J2EE application server and the number of J2EE application servers in your WebSphere environment.
You can control the number of exceptional workloads in each J2EE application server by using the ExceptionWorkloadName, ExceptionWorkloadMax, and ExceptionWorkloadMinResponseTime tags in the agent configuration file, KWWXML. (Refer to the configuration chapter in the OMEGAMON XE for WebSphere Application Server for OS/390 User’s Guide for information about these parameters.)
Table 314. Server Instance Status (KWWAPSST) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 552 bytes (60/15 x 24 x 552) x server instances/1024 52 kilobytes*
Table 315. Workload Exception for WAS OS/390 (KWWWKLEX) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1280 bytes (60/15 x 24 x 1280) x exceptional workloads x J2EE application servers/1024 120 kilobytes*
*Calculated on 1 workload degradation.
To calculate the size for your site, replace the workload degradations and J2EE application servers variables in the formula with your estimate of the number of workload degradations in each J2EE application server and the number of J2EE application servers in your WebSphere environment.
You can control the number of workload degradations by modifying the Workload Analysis Control files. (Refer to the configuration chapter in the OMEGAMON XE for WebSphere Application Server for OS/390 User’s Guide for information about configuring workload analysis.)
Table 316. Workload Degradation Detail for WAS OS/390 (KWWWKLDD) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1272 bytes (60/15 x 24 x 1272) x workload degradations x J2EE application servers/1024 119 kilobytes*
*Calculated on 1 workload.
To calculate the size for your site, replace the workloads and J2EE application server variables in the formula with your estimate of the number of workloads in each J2EE application server and the number of J2EE application servers in your WebSphere environment.
You can control the number of workload degradations by modifying the workload analysis control file. (Refer to the configuration chapter in the OMEGAMON XE for WebSphere Application Server for OS/390 User’s Guide for information about configuring workload analysis.)
Table 317. Workload Degradation Summary for WAS OS/390 (KWWWKLDS) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1156 bytes (60/15 x 24 x 1156) x workloads x J2EE application servers/1024 108 kilobytes*
OMEGAMON XE for WebSphere Application Server for OS/390 disk space summary worksheet
We recommend that you spread the disk space requirements among the systems where data collection is performed. For example, three historical tables might be collected on the CMS and others at the remote agent managed system. A disk space summary worksheet for OMEGAMON XE for WebSphere Application Server for OS/390 follows.
Table 318. OMEGAMON XE for WebSphere Application Server for OS/390 disk space summary worksheet
History Table Historical Data Table Size (kilobytes) (24 hours) No. of Archives Subtotal Space Required (kilobytes)
Application Server Error Log
Application Server Instance
Application Server Instance SMF Interval Statistics
Application Trace for WAS OS/390
Application Trace File for WAS OS/390
Datasource Detail for WAS OS/390
Environmental Variables
HTTP Sessions for WAS OS/390
J2EE Server Bean Methods
J2EE Server Beans
J2EE Server Containers
MOFW Server Classes
MOFW Server Containers
MOFW Server Methods
MQSeries Access for WAS OS/390
Product Events for WAS OS/390
Server Instance Status
Workload Exception for WAS OS/390
Workload Degradation Detail for WAS OS/390
Workload Degradation Summary for WAS OS/390
Total Disk Space Required
OMEGAMON XE for WebSphere Integration Brokers
OMEGAMON XE for WebSphere Integration Brokers historical data tables
The amount of default space required for a 24-hour period on a monitored system varies greatly depending on customer configuration. The estimates below are taken from an example Windows system with the following WebSphere MQ Integrator components installed: configuration manager, user name server, Control Center, and a single message broker. The broker has 2 monitored execution groups, 4 monitored message flows configured with 2 sub-flows, 12 CandleMonitor nodes, a total of 50 nodes with 150 terminals, and 2 threads per message flow. IBM accounting statistics have been turned on to collect all possible Archive data for the 4 message flows, with the default IBM interval of 60 minutes in place and the default agent parameter setting that collects only Archive data (not Snapshot data) for history.
Note: The historical collection interval must be set to the same value for each of the five statistics tables (Broker Statistics, Execution Group Statistics, Message Flow Statistics, Sub-Flow Statistics, and CandleMonitor Node Statistics). Also, the historical collection interval must be set to the same value for each of the four accounting tables (Message Flow Accounting, Thread Accounting, Node Accounting, and Terminal Accounting). The historical collection interval may be set to be a different value for the two groups of tables (statistics and accounting). The default for all collection is 15 minutes. OMEGAMON XE for WebSphere Integration Brokers does not support multiple collection intervals for either the statistics group of tables or the accounting group of tables.
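Under the example configuration above, the per-day figures for the five statistics tables can be reproduced with a short sketch. This is an illustration only: the record sizes come from the record-size table later in this section, the row counts come from the example configuration, and the guide rounds the resulting values.

```python
# 15-minute interval gives 4 samples/hour x 24 hours = 96 samples per day
SAMPLES_PER_DAY = (60 // 15) * 24

# table name: (record size in bytes, rows per sample under the example config)
STATS_TABLES = {
    "Broker Statistics": (688, 1),
    "Execution Group Statistics": (940, 2),
    "Message Flow Statistics": (1096, 4),
    "Sub-Flow Statistics": (1344, 2),
    "CandleMonitor Node Statistics": (1584, 12),
}

for name, (size, rows) in STATS_TABLES.items():
    kb = SAMPLES_PER_DAY * size * rows / 1024
    print(f"{name}: {kb:.1f} kilobytes per 24 hours")
```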
Table 319. OMEGAMON XE for WebSphere Integration Brokers historical data tables
Attribute History Table Filename for Historical Data Default HDC Table Estimated Space Required per managed system per 24 hours
Components Kqitcomp Yes 234 kilobytes
Product Events Kqitprev Yes 17 kilobytes
Broker Information Kqitbrkr Yes 72 kilobytes
Broker Events Kqitbrev Yes 28 kilobytes
Message Flow Events Kqitflev Yes 24 kilobytes
Broker Statistics Kqitstbr Yes 65 kilobytes
Execution Group Statistics Kqitsteg Yes 176 kilobytes
Message Flow Statistics Kqitstmf Yes 411 kilobytes
Sub-Flow Statistics Kqitstsf Yes 252 kilobytes
CandleMonitor Node Statistics Kqitstfn Yes 1,782 kilobytes
Message Flow Accounting Kqitasmf Yes 82 kilobytes
Thread Accounting Kqitasth Yes 115 kilobytes
Node Accounting Kqitasnd Yes 741 kilobytes
Terminal Accounting Kqitastr Yes 1547 kilobytes
Execution Group Information Kqitdfeg No
Message Flow Information Kqitdfmf No
Message Processing Node Information Kqitdffn No
Neighbors Kqitdsen No
Subscriptions Kqitdses No
Retained Publications Kqitdser No
ACL Entries Kqitdsea No
Total Default Space 5,546 kilobytes
OMEGAMON XE for WebSphere Integration Brokers table record sizes
Table 320. OMEGAMON XE for WebSphere Integration Brokers Table Record Sizes
History Table Record Size Frequency
Components 624 bytes 1 row per WebSphere broker component installed on system monitored by agent per interval
Product Events 692 bytes 1 row per product monitoring event noted by agent (pure event table, so not affected by interval)
Broker Information 772 bytes 1 row per interval
Broker Events 972 bytes 1 row per broker event publication (pure event table, so not affected by interval)
Message Flow Events 1,604 bytes 1 row per message flow event detected (pure event table, so not affected by interval)
Broker Statistics 688 bytes 1 row per interval
Execution Group Statistics 940 bytes 1 row per monitored execution group per interval
Message Flow Statistics 1,096 bytes 1 row per monitored message flow per interval
Sub-Flow Statistics 1344 bytes 1 row per monitored sub-flow per interval
CandleMonitor Node Statistics 1,584 bytes 1 row per CandleMonitor node per interval
Message Flow Accounting 872 bytes 1 row per message flow with IBM’s accounting feature turned on per IBM’s accounting interval for Archive data and, if selected for history, per 20 seconds for Snapshot data
Thread Accounting 612 bytes 1 row per thread per message flow with IBM’s accounting feature turned on per IBM’s accounting interval for Archive data and, if selected for history, per 20 seconds for Snapshot data
Node Accounting 632 bytes 1 row per node per message flow with IBM’s accounting feature turned on per IBM’s accounting interval for Archive data and, if selected for history, per 20 seconds for Snapshot data
Terminal Accounting 440 bytes 1 row per terminal per node per message flow with IBM’s accounting feature turned on per IBM’s accounting interval for Archive data and, if selected for history, per 20 seconds for Snapshot data
Execution Group Information 784 bytes 1 row per execution group per interval
Message Flow Information 1,040 bytes 1 row per message flow per interval
Message Processing Node Information 1,996 bytes 1 row per message processing node per interval
Neighbors 580 bytes 1 row per neighbor to the broker per interval
Subscriptions 1,784 bytes 1 row per subscription per interval
Retained Publications 1,180 bytes 1 row per retained publication per interval
ACL Entries 1,012 bytes 1 row per ACL entry per interval
OMEGAMON XE for WebSphere Integration Brokers space requirement worksheets
Table 321. Components (kqitcomp) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 624 bytes (60/15 x 24 x 624 x 4) / 1024 for 4 installed components 234 kilobytes
Table 322. Product Events (kqitprev) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
N/A 692 bytes (692 x 25) / 1024 for 25 product monitoring events occurring 17 kilobytes
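Pure event tables such as Product Events are sized by event count rather than by sampling interval. A minimal sketch of that calculation (the 25-event figure is the worksheet's example, not a prediction for your site):

```python
def event_table_kb(record_bytes, events_per_day):
    # Pure event tables write one record per event; no interval term
    return record_bytes * events_per_day / 1024

# Worksheet example: 25 product monitoring events at 692 bytes each
print(round(event_table_kb(692, 25)))  # about 17 kilobytes
```

The same function applies to the Broker Events and Message Flow Events worksheets that follow.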
Table 323. Broker Information (kqitbrkr) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 772 bytes (60/15 x 24 x 772 x 1) / 1024 for 1 broker 72 kilobytes
Table 324. Broker Events (kqitbrev) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
N/A 972 bytes (972 x 30) / 1024 for 30 broker events occurring 28 kilobytes
Table 325. Message Flow Events (kqitflev) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
N/A 1,604 bytes (1604 x 15) / 1024 for 15 message flow events occurring 24 kilobytes
Table 326. Broker Statistics (kqitstbr) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 688 bytes (60/15 x 24 x 688 x 1) / 1024 for 1 broker 65 kilobytes
Table 327. Execution Group Statistics (kqitsteg) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 940 bytes (60/15 x 24 x 940 x 2) / 1024 for 2 monitored execution groups 176 kilobytes
Table 328. Message Flow Statistics (kqitstmf) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1,096 bytes (60/15 x 24 x 1096 x 4) / 1024 for 4 monitored message flows 411 kilobytes
Table 329. Sub-Flow Statistics (kqitstsf) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1,344 bytes (60/15 x 24 x 1344 x 2) / 1024 for 2 monitored sub-flows 252 kilobytes
Table 330. CandleMonitor Node Statistics (kqitstfn) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1,584 bytes (60/15 x 24 x 1584 x 12) / 1024 for 12 CandleMonitor nodes in flows 1,782 kilobytes
Table 331. Message Flow Accounting (kqitasmf) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
1 hour* 872 bytes (60/60 x 24 x 872 x 4) / 1024 for 4 monitored message flows 82 kilobytes
*Note: This is the IBM default interval; even if you set the history interval to less, the data will only be produced as often as the IBM interval occurs.
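The note above means the effective interval for accounting tables is the larger of the history collection interval and the IBM accounting interval. A hedged sketch of that row-rate calculation (the function name is illustrative):

```python
def accounting_rows_per_day(history_interval_min, ibm_interval_min=60):
    # Rows are produced no faster than the IBM accounting interval,
    # no matter how short the history collection interval is set.
    effective_min = max(history_interval_min, ibm_interval_min)
    return (24 * 60) // effective_min

# A 15-minute history interval still yields only 24 rows/day while the
# IBM accounting interval remains at its 60-minute default.
print(accounting_rows_per_day(15))  # 24
```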
Table 332. Thread Accounting (kqitasth) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
1 hour* 612 bytes (60/60 x 24 x 612 x 8) / 1024 for 4 monitored message flows with 2 threads each 115 kilobytes
*Note: This is the IBM default interval; even if you set the history interval to less, the data will only be produced as often as the IBM interval occurs.
Table 333. Node Accounting (kqitasnd) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
1 hour* 632 bytes (60/60 x 24 x 632 x 50) / 1024 for a total of 50 nodes in monitored message flows 741 kilobytes
*Note: This is the IBM default interval; even if you set the history interval to less, the data will only be produced as often as the IBM interval occurs.
Table 334. Terminal Accounting (kqitastr) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
1 hour* 440 bytes (60/60 x 24 x 440 x 150) / 1024 for a total of 150 terminals in monitored message flows 1,547 kilobytes
*Note: This is the IBM default interval; even if you set the history interval to less, the data will only be produced as often as the IBM interval occurs.
Table 335. Execution Group Information (kqitdfeg) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 784 bytes (60/15 x 24 x 784 x 2) / 1024 for 2 execution groups 147 kilobytes
Table 336. Message Flow Information (kqitdfmf) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1,040 bytes (60/15 x 24 x 1040 x 12) / 1024 for 12 message flows 1,170 kilobytes
Table 337. Message Processing Node Information (kqitdffn) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1,996 bytes (60/15 x 24 x 1996 x 60) / 1024 for 60 message processing nodes 11,228 kilobytes
Table 338. Neighbors (kqitdsen) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 580 bytes (60/15 x 24 x 580 x 2) / 1024 for 2 neighbors to the broker 109 kilobytes
Table 339. Subscriptions (kqitdses) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1,784 bytes (60/15 x 24 x 1784 x 20) / 1024 for 20 subscriptions 3,345 kilobytes
Table 340. Retained Publications (kqitdser) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1,180 bytes (60/15 x 24 x 1180 x 3) / 1024 for 3 retained publications 332 kilobytes
Table 341. ACL Entries (kqitdsea) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 min. 1,012 bytes (60/15 x 24 x 1012 x 8) / 1024 for 8 ACL entries 759 kilobytes
OMEGAMON XE for WebSphere Integration Brokers disk space summary worksheet
In the worksheet examples, we used the minimum collection interval of 15 minutes. You can create a summary table that gives a representative disk storage figure for all history files and archived files over a one-week period, if all collection is done at the remote agent managed system: multiply the total expected file size per 24 hours by seven. We do not recommend turning on historical collection for the tables that are not collected by default; if you do want historical data for those tables, use a much longer collection interval than the default 15 minutes, since the data is not expected to change often. We recommend that you spread the disk space requirements among the systems where data collection is performed. A disk space summary worksheet for OMEGAMON XE for WebSphere Integration Brokers follows.
Table 342. OMEGAMON XE for WebSphere Integration Brokers disk space summary worksheet
History Table Historical Data Table Size (kilobytes) (24 hours) No. of Archives Subtotal Space Required (kilobytes)
Components
Product Events
Broker Information
Broker Events
Message Flow Events
Broker Statistics
Execution Group Statistics
Message Flow Statistics
Sub-Flow Statistics
CandleMonitor Node Statistics
Message Flow Accounting
Thread Accounting
Node Accounting
Terminal Accounting
Execution Group Information
Message Flow Information
Message Processing Node Information
Neighbors
Subscriptions
Retained Publications
ACL Entries
Total Disk Space Required
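The one-week projection described above (the daily total multiplied by seven) can be sketched as follows; the daily figures are placeholders standing in for values from your own completed worksheet:

```python
def weekly_total_kb(daily_sizes_kb, days=7):
    # Multiply the combined 24-hour figure by the retention period in days
    return sum(daily_sizes_kb) * days

# Hypothetical 24-hour sizes (kilobytes) for three collected tables
print(weekly_total_kb([234, 17, 72]))  # 2261
```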
OMEGAMON XE for WebSphere MQ Configuration
OMEGAMON XE for WebSphere MQ Configuration data tables

This data table is used for logging changes made to your configuration. You can use this table if you are using archiving and conversion facilities. This log data is written only on the CMS node and cannot be configured by the historical configuration program.
Table 343. OMEGAMON XE for WebSphere MQ Configuration historical data table
Attribute History Table | Filename for Historical Data | Default HDC Table | Estimated Space Required per managed system per 24 hours
Audit Log KCFAUDIT No 59 kilobytes*
Note: *Based on making 100 configuration updates per 24-hour period.

OMEGAMON XE for WebSphere MQ Configuration table record size
History Table | Record Size | Frequency
Audit Log 600 bytes 1 record per change to the configuration

OMEGAMON XE for WebSphere MQ Configuration space requirement worksheet
Use the following worksheet to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for the historical data collection table.
Table 344. OMEGAMON XE for WebSphere MQ Configuration disk space summary worksheet
Interval | Record Size | Formula | Expected File Size per 24 Hours
N/A 600 bytes (600 x 100) / 1024 for 100 configuration changes
59 kilobytes
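The sample calculation in Table 344 works out as follows, assuming the worksheet's 100 changes per day:

```python
# Audit Log sizing from Table 344: 100 configuration changes per day
# at 600 bytes per record, converted to kilobytes.
record_bytes = 600
changes_per_day = 100
size_kb = round(record_bytes * changes_per_day / 1024)
print(size_kb)  # 59, matching the worksheet's 59 kilobytes
```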
OMEGAMON XE for WebSphere MQ Monitoring
OMEGAMON XE for WebSphere MQ Monitoring historical data tables

We recommend that you always collect OMEGAMON XE for WebSphere MQ Monitoring historical tables at the remote managed system for MQSeries, for two reasons:
- (Version 350 of this product) Historical reports are available on the CMW only if the data is kept at the remote managed system. That is, if you are collecting MQSeries historical data, you must collect the data at the agent running on MQSeries; you cannot access historical data on the CMW if you collect it at the CMS. If you choose to collect data at the CMS, you can view the historical reports on CandleNet Portal.
- Product performance is improved by keeping the data at the remote managed system, especially since much of the data applies only to MVS/ESA, which can deal with large volumes of data more efficiently.
Important: To reduce the performance impact on your system, we recommend setting a longer collection interval for tables that collect a large amount of data. For this product, the Queue Statistics table collects a large amount of data. For additional information, see “Performance Impact of Historical Data Requests” on page 42.
The attribute history tables, default filenames, default tables collected, and the estimated disk space required per 24-hour period for the historical data collected for OMEGAMON XE for WebSphere MQ Monitoring are listed in the table that follows. Total default space is the estimated space required per managed system per 24-hour period for the default file collection option on all MQSeries platforms except OS/390 or z/OS, and is based on monitoring 100 queues, 10 channels, and 500 events.
For information specific to OMEGAMON XE for WebSphere MQ Monitoring relating to historical data collection, see the Customizing Monitoring Options topic found in your version of the product documentation.
Table 345. OMEGAMON XE for WebSphere MQ Monitoring historical data tables
Attribute History Table | Filename for Historical Data | Default HDC Table | Estimated Space Required per managed system per 24 hours
Application Statistics* QM_APAL Yes
Application Queue Statistics* QM_APQL Yes
Application Transaction/Program Statistics* QM_APTL Yes
Channel Initiator* QMCHIN_LH
Channel Statistics QMCH_LH Yes 630 kilobytes
Buffer Pools* QMLHBM Yes
Error Log QMERRLOG
Event Log** QMEVENTH No** 1,229 kilobytes
Log Manager* QMLHLM Yes
Message Manager* QMLHMM Yes
Message Statistics QMSG_STAT
Page Sets* QMPS_LH Yes
Queue Statistics QMQ_LH Yes 4,088 kilobytes
Queue Sharing Group CF Structure Backups* QSG_CFBKUP Yes
Queue Sharing Group CF Structure Statistics* QSG_CFSTR Yes
Queue Sharing Group Channel Statistics* QSG_CHANS Yes
Queue Sharing Group Queue Statistics* QSG_QUEUES Yes
Queue Sharing Group Queue Managers* QSG_QMGR Yes
Queue Sharing Group CF Structure Connection Statistics* QSG_CFCONN Yes
Total Default Space 5,947 kilobytes

Note: *These tables are not available on platforms other than OS/390 or z/OS. They are not included for determining default space estimates.
Note: **The Event Log is created for all platforms but cannot be configured via option 3, Customize Historical Collection, on the HDC Main menu. It is included here since the data is available for use in the same way as history data. By default, QMEVENTH is automatically archived into CTIRA_HIST_DIR when it reaches 10 MB. The name of the archive is QMEVENTH.arc.

OMEGAMON XE for WebSphere MQ Monitoring table record sizes

Table 346. OMEGAMON XE for WebSphere MQ Monitoring table record sizes
History Table Record Size Frequency
Application Statistics 348 bytes 1 record per application monitored per interval
Application Queue Statistics 440 bytes 1 record per queue per transaction/program per application monitored per interval
Application Transaction/Program Statistics 360 bytes 1 record per transaction/program per application monitored per interval
Channel Initiator 196 bytes One record for each OS/390 queue manager
Channel Statistics 812 bytes 1 record per active channel monitored per interval
Buffer Pools 352 bytes 1 record per buffer pool in use per interval
Error Log 1,496 bytes One record for each message written to the error log
Event Log 2,516 bytes 1 record per event
Log Manager 424 bytes 1 record per queue manager per interval
Message Manager 312 bytes 1 record per queue manager per interval
Message Statistics 556 bytes One record for every row returned by active situations associated with Message Statistics attribute group
Page Sets 328 bytes 1 record per active page set per interval
Queue Statistics 484 bytes 1 record per queue monitored per interval
Queue Sharing Group CF Structure Backups 240 bytes 1 record per backup of CF Structure per QSG per interval
Queue Sharing Group CF Structure Statistics 304 bytes 1 record per CF Structure per QSG per interval
Queue Sharing Group Channel Statistics 268 bytes 1 record per shared channel in QSG per interval
Queue Sharing Group Queue Statistics 236 bytes 1 record per shared queue in QSG per interval
Queue Sharing Group Queue Managers 332 bytes 1 record per queue manager per interval
Queue Sharing Group CF Structure Connection Statistics 332 bytes 1 record per connection to CF Structure per QSG per interval
OMEGAMON XE for WebSphere MQ Monitoring space requirement worksheets
Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each historical data collection table.
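The worksheet formula is the same in every table that follows, so it can be expressed once as a small helper. The function name is ours, not from the product:

```python
# Worksheet formula: (60/interval x 24 x record size x instances) / 1024.
# interval_min is the collection interval in minutes; result is kilobytes.
def expected_file_size_kb(interval_min, record_bytes, instances=1):
    samples_per_day = (60 // interval_min) * 24
    return round(samples_per_day * record_bytes * instances / 1024)

# Table 347's sample: 15-minute interval, 348-byte records, 5 applications.
print(expected_file_size_kb(15, 348, 5))  # 163 kilobytes
```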
Table 347. Application Statistics (QM_APAL) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 348 bytes (60/15 x 24 x 348 x 5) / 1024 for 5 monitored applications
163 kilobytes
Table 348. Application Queue Statistics (QM_APQL) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 440 bytes (60/15 x 24 x 440 x 20) / 1024 for 20 queues used by monitored applications
825 kilobytes
Table 349. Application Transaction/Program Statistics (QM_APTL) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 360 bytes (60/15 x 24 x 360 x 10) / 1024 for 10 transaction/programs for monitored applications
338 kilobytes
Table 350. Buffer Pools (QMLHBM) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 352 bytes (60/15 x 24 x 352 x 4) / 1024 for 4 buffer pools in use
132 kilobytes
Table 351. Channel Initiator (QMCHIN_LH) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 196 bytes (60/15 x 24 x 196 x 1) / 1024 for each OS/390 queue manager
19 kilobytes
Table 352. Channel Statistics (QMCH_LH) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 812 bytes (60/15 x 24 x 812 x 10) / 1024 for 10 active monitored channels
761 kilobytes
Table 353. Error Log (QMERRLOG) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 1496 bytes (60/15 x 24 x 1496 x 1) / 1024 for 1 monitored queue manager
140 kilobytes
Table 354. Event Log (QMEVENTH) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
Events are written as they happen.
2516 bytes (2516 x 500) / 1024 for 500 events
1,229 kilobytes
Table 355. Log Manager (QMLHLM) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 424 bytes (60/15 x 24 x 424 x 1) / 1024 for 1 monitored queue manager
40 kilobytes
Table 356. Message Manager (QMLHMM) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 312 bytes (60/15 x 24 x 312 x 1) / 1024 for 1 monitored queue manager
29 kilobytes
Table 357. Message Statistics (QMSG_STAT) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 556 bytes (60/15 x 24 x 556 x 30 rows*) / 1024
*Calculated as follows for 10 active situations at a 5-minute situation interval, written for 10 queues, all using Queue as the grouping mechanism. A 5-minute situation interval divided into the 15-minute historical collection interval (15/5) gives 3 collection intervals. 10 rows (10 situations per queue) x 3 collection intervals = 30 rows per 15-minute historical interval.
1.56 megabytes
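The row-count arithmetic in the footnote can be checked in code. This is a sketch of the footnote's math, not a product utility:

```python
# Rows per historical interval when situations fire more often than
# history is collected: rows per firing x firings per historical interval.
def rows_per_hist_interval(hist_min, situation_min, rows_per_firing):
    return rows_per_firing * (hist_min // situation_min)

# 10 rows per 5-minute situation firing, 15-minute historical interval:
print(rows_per_hist_interval(15, 5, 10))  # 30 rows, as in Table 357
```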
Table 358. Page Sets (QMPS_LH) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 328 bytes (60/15 x 24 x 328 x 10) / 1024 for 10 active page sets
308 kilobytes
Table 359. Queue Statistics (QMQ_LH) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 484 bytes (60/15 x 24 x 484 x 10) / 1024 for 10 monitored queues
454 kilobytes
Table 360. Queue Sharing Group CF Structure Backups (QSG_CFBKUP) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 240 bytes (60/15 x 24 x 240 x 5) / 1024 for 5 connected queue managers
113 kilobytes
Table 361. Queue Sharing Group CF Structure Statistics (QSG_CFSTR) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 304 bytes (60/15 x 24 x 304 x 3) / 1024 for 3 monitored structures
86 kilobytes
Table 362. Queue Sharing Group Channel Statistics (QSG_CHANS) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 268 bytes (60/15 x 24 x 268 x 10) / 1024 for 10 monitored channels
251 kilobytes
Table 363. Queue Sharing Group Queue Statistics (QSG_QUEUES) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 236 bytes (60/15 x 24 x 236 x 20) / 1024 for 20 monitored queues
443 kilobytes
Table 364. Queue Sharing Group Queue Managers (QSG_QMGR) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 332 bytes (60/15 x 24 x 332 x 2) / 1024 for 2 monitored queue managers
62 kilobytes
Table 365. Queue Sharing Group CF Structure Connection Statistics (QSG_CFCONN) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 332 bytes (60/15 x 24 x 332 x 5) / 1024 for 5 monitored queue manager connections to DB2
156 kilobytes
OMEGAMON XE for WebSphere MQ Monitoring disk space summary worksheet
Table 366. OMEGAMON XE for WebSphere MQ Monitoring disk space summary worksheet
History Table | Historical Data Table Size (kilobytes, 24 hours) | No. of Archives | Subtotal Space Required (kilobytes)
Application Statistics
Application Queue Statistics
Application Transaction/Program Statistics
Buffer Pools
Channel Initiator
Channel Statistics
Error Log
Event Log
Log Manager
Message Manager
Message Statistics
Page Sets
Queue Statistics
Queue Sharing Group CF Structure Backups
Queue Sharing Group CF Structure Statistics
Queue Sharing Group Channel Statistics
Queue Sharing Group Queue Statistics
Queue Sharing Group Queue Managers
Queue Sharing Group CF Structure Connection Statistics
Total Disk Space Required
OMEGAMON XE for Windows Servers
Caution: Event Log does not wrap

If you plan to collect historical data for Windows, be aware that the Event Log does not wrap. If many events are being logged, this log can become quite large very rapidly. Ensure that you have procedures in place to archive data on a regular basis.
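As a sketch of such an archiving procedure, the following moves a history file aside once it passes a size threshold. The filename and threshold are placeholders for your site's values, not product defaults:

```python
import os
import shutil
import time

# Rotate a history file to a timestamped archive once it exceeds max_bytes.
# The 10 MB default below is illustrative only; pick a limit for your site.
def rotate_if_large(path, max_bytes=10 * 1024 * 1024):
    if os.path.exists(path) and os.path.getsize(path) > max_bytes:
        stamp = time.strftime("%Y%m%d%H%M%S")
        shutil.move(path, f"{path}.{stamp}.arc")  # archived copy
        return True
    return False
```

A scheduler (for example, the Windows AT command discussed elsewhere in this guide) could run such a script against the Event Log history file on a regular basis.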
OMEGAMON XE for Windows Servers default historical data tables
Table 367. OMEGAMON XE for Windows Servers historical data tables
Attribute History Table | Filename for Historical Data | Default HDC Table | Estimated Space Required per managed system per 24 Hours
Logical Disk WTLOGCLDSK Yes 173 kilobytes
System WTSYSTEM Yes 39 kilobytes
Physical Disk WTPHYSDSK Yes 42 kilobytes
Memory WTMEMORY Yes 33 kilobytes
Process WTPROCESS No 1,671 kilobytes
Processor NTPROCSSR No 37 kilobytes
Page File NTPAGEFILE No 35 kilobytes
Objects WTOBJECTS No 25 kilobytes
Monitored Logs NTLOGINFO No 134 kilobytes
Event Log NTEVTLOG No 41,091 kilobytes
Active Server Pages ACTSRVPG No 38 kilobytes
HTTP Content Index HTTPCNDX No 26 kilobytes
HTTP Server HTTPSRVC No 33 kilobytes
FTP Server FTPSTATS No 29 kilobytes
Internet Information Server IISSTATS No 28 kilobytes
UDP UDPSTATS No 25 kilobytes
TCP TCPSTATS No 26 kilobytes
IP IPSTATS No 29 kilobytes
ICMP ICMPSTAT No 33 kilobytes
Network Interface NETWRKIN No 47 kilobytes
Network Segment NETSEGMT No 39 kilobytes
Gopher Services GOPHRSVC No 30 kilobytes
MSMQ Information Store MSMQIS No 26 kilobytes
MSMQ Queue MSMQQUE No 37 kilobytes
MSMQ Service MSMQSVC No 26 kilobytes
MSMQ Sessions MSMQSESS No 53 kilobytes
RAS Port KNTRASPT No 233 kilobytes
RAS Total KNTRASTOT No 30 kilobytes
Cache NTCACHE No 33 kilobytes
Printer NTPRINTER No 227 kilobytes
Print Job NTPRTJOB No 117 kilobytes
Services NTSERVICE No 3,527 kilobytes
Service Dependencies NTSVCDEP No 2,888 kilobytes
Devices NTDEVICE No 7,054 kilobytes
Device Dependencies NTDEVDEP No 462 kilobytes
Indexing Service INDEXSVC No 42 kilobytes
Indexing Service Filter INDEXSVCF No 36 kilobytes
DHCP DHCPSRV No 28 kilobytes
DNS Memory DNSMEMORY No 25 kilobytes
DNS Zone Transfer DNSZONET No 30 kilobytes
DNS Dynamic Update DNSDYNUPD No 27 kilobytes
DNS Query DNSQUERY No 30 kilobytes
DNS WINS DNSWINS No 26 kilobytes
FTP Service FTPSVC No 46 kilobytes
Job Object JOBOBJ No 44 kilobytes
Job Object Details JOBOBJD No 87 kilobytes
NNTP Commands NNTPCMD No 67 kilobytes
NNTP Server NNTPSRV No 64 kilobytes
Print Queue PRINTQ No 44 kilobytes
SMTP SMTPSRV No 74 kilobytes
Web Service WEBSVC No 111 kilobytes
Total Default Space 59,152 kilobytes

OMEGAMON XE for Windows Servers default table record sizes
Table 368. OMEGAMON XE for Windows Servers table record sizes
History Table Record Size Frequency
Logical Disk 368 bytes 1 record per logical disk per interval
System 420 bytes 1 record per interval
Physical Disk 224 bytes 1 record per physical disk per interval
Memory 352 bytes 1 record per interval
Process 792 bytes 1 record per process per interval
Processor 196 1 record per processor per interval
Page File 184 1 record per page file per interval
Objects 268 1 record per interval
Monitored Logs 476 1 record per monitored log per interval
Event Log 1461 1 record per event log entry per interval
Active Server Pages 404 1 record per interval
HTTP Content Index 276 1 record per interval
HTTP Server 356 1 record per interval
FTP Server 308 1 record per interval
Internet Information Server 300 1 record per interval
UDP 264 1 record per interval
TCP 280 1 record per interval
IP 312 1 record per interval
ICMP 352 1 record per interval
Network Interface 252 1 record per network instance per interval
Network Segment 208 1 record per network segment per interval
Gopher Services 320 1 record per interval
MSMQ Information Store 272 1 record per interval
MSMQ Queue 196 1 record per MSMQ queue per interval
MSMQ Service 280 1 record per interval
MSMQ Sessions 280 1 record per MSMQ queue per interval
RAS Port 248 1 record per RAS port per interval
RAS Total 316 1 record per interval
Cache 352 1 record per interval
Printer 808 1 record per printer per interval
Print Job 624 1 record per print job per interval
Services 396 1 record per service per interval
Service Dependencies 308 1 record per service dependency per interval
Devices 396 1 record per device per interval
Device Dependencies 308 1 record per device dependency per interval
Indexing Service 224 1 record per indexing service per interval
Indexing Service Filter 192 1 record per interval
DHCP 300 1 record per interval
DNS Memory 268 1 record per interval
DNS Zone Transfer 316 1 record per interval
DNS Dynamic Update 292 1 record per interval
DNS Query 316 1 record per interval
DNS WINS 276 1 record per interval
FTP Service 244 1 record per FTP service per interval
Job Object 232 1 record per job object per interval
Job Object Details 308 1 record per job object detail per interval
NNTP Commands 356 1 record per NNTP server per interval
NNTP Server 340 1 record per NNTP server per interval
Print Queue 232 1 record per print queue per interval
SMTP 396 1 record per SMTP server per interval
Web Service 396 1 record per web service per interval
OMEGAMON XE for Windows Servers space requirement worksheets

Use the following worksheets to estimate expected file sizes and the additional disk space requirements for your site. A sample calculation is provided for each default historical data collection table. You can use the same calculation for all other OMEGAMON XE for Windows Servers historical data collection tables.
Table 369. Logical Disk (WTLOGCLDSK) worksheet
Interval Record Size Formula Expected File Size per 24 Hours
15 minutes 368 bytes (60/15 x 24 x 368 x 5) / 1024 173 kilobytes
Table 370. System (WTSYSTEM) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 420 bytes (60/15 x 24 x 420) / 1024 39 kilobytes
Table 371. Physical Disk (WTPHYSDSK) worksheet
Interval Record Size
No. of Drives
Formula Expected File Size per 24 Hours
15 min. 224 bytes 2 (60/15 x 24 x 224 x 2) / 1024 42 kilobytes
Table 372. Memory (WTMEMORY) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 352 bytes (60/15 x 24 x 352) / 1024 33 kilobytes
Table 373. Process (WTPROCESS) worksheet
Interval Record Size
No. of Processes
Formula Expected File Size per 24 Hours
15 min. 396 bytes 45 (60/15 x 24 x 396 x 45) / 1024
1,671 kilobytes
Table 374. Processor (NTPROCSSR) worksheet
Interval Record Size
No. of Processors
Formula Expected File Size per 24 Hours
15 min. 196 2 (60/15 x 24 x 196 x 2) / 1024 37 kilobytes
Table 375. Page File (NTPAGEFILE) worksheet
Interval Record Size
No. of Page Files
Formula Expected File Size per 24 Hours
15 min. 184 2 (60/15 x 24 x 184 x 2) / 1024 35 kilobytes
Table 376. Objects (WTOBJECTS) worksheet
Interval Record Size
No. of Objects
Formula Expected File Size per 24 Hours
15 min. 268 1 (60/15 x 24 x 268 x 1) / 1024 25 kilobytes
Table 377. Monitored Logs (NTLOGINFO) worksheet
Interval Record Size
No. of Monitored Logs
Formula Expected File Size per 24 Hours
15 min. 476 3 (60/15 x 24 x 476 x 3) / 1024 134 kilobytes
Table 378. Event Log (NTEVTLOG) worksheet
Interval Record Size
No. of Log Entries
Formula Expected File Size per 24 Hours
15 min. 1,461 300 (60/15 x 24 x 1461 x 300) / 1024
41,091 kilobytes
Table 379. Active Server Pages (ACTSRVPG) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 404 (60/15 x 24 x 404) / 1024 38 kilobytes
Table 380. HTTP Content Index (HTTPCNDX) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 276 (60/15 x 24 x 276) / 1024 26 kilobytes
Table 381. HTTP Server (HTTPSRVC) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 356 (60/15 x 24 x 356) / 1024 33 kilobytes
Table 382. FTP Server (FTPSTATS) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 308 (60/15 x 24 x 308) / 1024 29 kilobytes
Table 383. Internet Information Server (IISSTATS) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 300 (60/15 x 24 x 300) / 1024 28 kilobytes
Table 384. UDP (UDPSTATS) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 264 (60/15 x 24 x 264) / 1024 25 kilobytes
Table 385. TCP (TCPSTATS) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 280 (60/15 x 24 x 280) / 1024 26 kilobytes
Table 386. ICMP (ICMPSTAT) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 352 (60/15 x 24 x 352) / 1024 33 kilobytes
Table 387. Network Interface (NETWRKIN) worksheet
Interval Record Size
No. of Network Instances
Formula Expected File Size per 24 Hours
15 min. 252 2 (60/15 x 24 x 252 x 2) / 1024 47 kilobytes
Table 388. Network Segment (NETSEGMT) worksheet
Interval Record Size
No. of Network Instances
Formula Expected File Size per 24 Hours
15 min. 208 2 (60/15 x 24 x 208 x 2) / 1024 39 kilobytes
Table 389. Gopher Services (GOPHRSVC) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 320 (60/15 x 24 x 320) / 1024 30 kilobytes
Table 390. MSMQ Information Store (MSMQIS) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 272 (60/15 x 24 x 272) / 1024 26 kilobytes
Table 391. MSMQ Queue (MSMQQUE) worksheet
Interval Record Size
No. of Queues
Formula Expected File Size per 24 Hours
15 min. 196 2 (60/15 x 24 x 196 x 2) / 1024 37 kilobytes
Table 392. MSMQ Service (MSMQSVC) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 280 (60/15 x 24 x 280) / 1024 26 kilobytes
Table 393. MSMQ Sessions (MSMQSESS) worksheet
Interval Record Size
No. of Queues
Formula Expected File Size per 24 Hours
15 min. 280 2 (60/15 x 24 x 280 x 2) / 1024 53 kilobytes
Table 394. RAS Port (KNTRASPT) worksheet
Interval Record Size
No. of Ports Formula Expected File Size per 24 Hours
15 min. 248 10 (60/15 x 24 x 248 x 10) / 1024
233 kilobytes
Table 395. RAS Total (KNTRASTOT) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 316 (60/15 x 24 x 316) / 1024 30 kilobytes
Table 396. Cache (NTCACHE) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 352 (60/15 x 24 x 352) / 1024 33 kilobytes
Table 397. Printer (NTPRINTER) worksheet
Interval Record Size
No. of Printers
Formula Expected File Size per 24 Hours
15 min. 808 3 (60/15 x 24 x 808 x 3) / 1024 227 kilobytes
Table 398. Print Job (NTPRTJOB) worksheet
Interval Record Size
No. of Jobs Formula Expected File Size per 24 Hours
15 min. 624 2 (60/15 x 24 x 624 x 2) / 1024 117 kilobytes
Table 399. Services (NTSERVICE) worksheet
Interval Record Size
No. of Services
Formula Expected File Size per 24 Hours
15 min. 396 95 (60/15 x 24 x 396 x 95) / 1024
3,527 kilobytes
Table 400. Service Dependencies (NTSVCDEP) worksheet
Interval Record Size
No. of Dependencies
Formula Expected File Size per 24 Hours
15 min. 308 100 (60/15 x 24 x 308 x 100) / 1024
2,888 kilobytes
Table 401. Devices (NTDEVICE) worksheet
Interval Record Size
No. of Devices
Formula Expected File Size per 24 Hours
15 min. 396 190 (60/15 x 24 x 396 x 190) / 1024
7,054 kilobytes
Table 402. Device Dependencies (NTDEVDEP) worksheet
Interval Record Size
No. of Device Dependencies
Formula Expected File Size per 24 Hours
15 min. 308 16 (60/15 x 24 x 308 x 16) / 1024
462 kilobytes
Table 403. Indexing Service (INDEXSVC) worksheet
Interval Record Size
No. of Services
Formula Expected File Size per 24 Hours
15 min. 224 2 (60/15 x 24 x 224 x 2) / 1024 42 kilobytes
Table 404. Indexing Service Filter (INDEXSVCF) worksheet
Interval Record Size
No. of Filters
Formula Expected File Size per 24 Hours
15 min. 192 2 (60/15 x 24 x 192 x 2) / 1024 36 kilobytes
Table 405. DHCP (DHCPSRV) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 300 (60/15 x 24 x 300) / 1024 28 kilobytes
Table 406. DNS Memory (DNSMEMORY) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 268 (60/15 x 24 x 268) / 1024 25 kilobytes
Table 407. DNS Zone Transfer (DNSZONET) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 316 (60/15 x 24 x 316) / 1024 30 kilobytes
Table 408. DNS Dynamic Update (DNSDYNUPD) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 292 (60/15 x 24 x 292) / 1024 27 kilobytes
Table 409. DNS Query (DNSQUERY) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 316 (60/15 x 24 x 316) / 1024 30 kilobytes
Table 410. DNS WINS (DNSWINS) worksheet
Interval Record Size
Formula Expected File Size per 24 Hours
15 min. 276 (60/15 x 24 x 276) / 1024 26 kilobytes
Table 411. FTP Service (FTPSVC) worksheet
Interval Record Size
No. of Services
Formula Expected File Size per 24 Hours
15 min. 244 2 (60/15 x 24 x 244 x 2) / 1024 46 kilobytes
Table 412. Job Object (JOBOBJ) worksheet
Interval Record Size
No. of Objects
Formula Expected File Size per 24 Hours
15 min. 232 2 (60/15 x 24 x 232 x 2) / 1024 44 kilobytes
Table 413. Job Object Details (JOBOBJD) worksheet
Interval Record Size
No. of Object Details
Formula Expected File Size per 24 Hours
15 min. 308 3 (60/15 x 24 x 308 x 3) / 1024 87 kilobytes
Table 414. NNTP Commands (NNTPCMD) worksheet
Interval Record Size
No. of Servers
Formula Expected File Size per 24 Hours
15 min. 356 2 (60/15 x 24 x 356 x 2) / 1024 67 kilobytes
Table 415. NNTP Server (NNTPSRV) worksheet
Interval Record Size
No. of Servers
Formula Expected File Size per 24 Hours
15 min. 340 2 (60/15 x 24 x 340 x 2) / 1024 64 kilobytes
Table 416. Print Queue (PRINTQ) worksheet
Interval Record Size
No. of Queues
Formula Expected File Size per 24 Hours
15 min. 232 2 (60/15 x 24 x 232 x 2) / 1024 44 kilobytes
OMEGAMON XE for Windows Servers disk space summary worksheet

We recommend that you spread the disk space requirements among the systems where data collection is performed. For example, three historical tables might be collected on the CMS and others at the remote managed system. A disk space summary worksheet for OMEGAMON XE for Windows Servers follows.
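Filling in the summary worksheet amounts to multiplying each table's 24-hour size by the number of archive copies retained and summing the subtotals. This sketch assumes that reading of the worksheet; the figures are illustrative:

```python
# Summary worksheet arithmetic: subtotal = 24-hour table size x number of
# archives kept; the total is the sum over all history tables collected.
def summary_total_kb(rows):
    """rows: iterable of (daily_size_kb, num_archives) pairs."""
    return sum(size_kb * archives for size_kb, archives in rows)

# e.g. Logical Disk, System, and Physical Disk, with two archives each:
print(summary_total_kb([(173, 2), (39, 2), (42, 2)]))  # 508
```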
Table 417. SMTP (SMTPSRV) worksheet
Interval Record Size
No. of Servers
Formula Expected File Size per 24 Hours
15 min. 396 2 (60/15 x 24 x 396 x 2) / 1024 74 kilobytes
Table 418. Web Service (WEBSVC) worksheet
Interval Record Size
No. of Services
Formula Expected File Size per 24 Hours
15 min. 396 3 (60/15 x 24 x 396 x 3) / 1024 111 kilobytes
Table 419. OMEGAMON XE for Windows Servers disk space summary worksheet
History Table | Historical Data Table Size (kilobytes, 24 hours) | No. of Archives | Subtotal Space Required (kilobytes)
Logical Disk
System
Physical Disk
Memory
Process
Processor
Page File
Objects
Monitored Logs
Event Log
Active Server Pages
HTTP Content Index
HTTP Server
FTP Server
Internet Information Server
UDP
TCP
IP
ICMP
Network Interface
Network Segment
Gopher Services
MSMQ Information Store
MSMQ Queue
MSMQ Service
MSMQ Sessions
RAS Port
RAS Total
Cache
Printer
Print Job
Services
Service Dependencies
Devices
Device Dependencies
Indexing Service
Indexing Service Filter
DHCP
DNS Memory
DNS Zone Transfer
DNS Dynamic Update
DNS Query
DNS WINS
FTP Service
Job Object
Job Object Details
NNTP Commands
NNTP Server
Print Queue
SMTP
Web Service
Total Disk Space Required
Table 419. OMEGAMON XE for Windows Servers disk space summary worksheet (continued)
Index
A
Adobe portable document format 29
advanced history configuration options 71
  CCC Logs product requirements for 72
archiving procedures
  using LOGSPIN 85
archiving procedures using Windows AT command 87
AS/400
  location of historical data 91
AS/400 considerations 91
AT command, Windows 84
attributes
  specifying for historical data collection 72
B
begin or end collection 65
C
Candle Data Warehouse
  configuring 76
  requirements for 76
Candle Warehouse Proxy Agent 77
Candle web site 28
CandleHistory, running on UNIX 103
caution
  NT event log does not wrap 329
CCC Logs
  advanced history configuration options 72
chapter contents 25
CMS
  rebuild CMS list 67
  requirement for 51, 61
  select target 67
CMS, target
  used to generate historical data collection rules 67
collecting historical data
  for HP NSK systems 107
collection interval, specifying 68
collection location, specifying 68
collection options
  historical data collection rules for 69
collection options, specifying 69
columns added to history data and to meta description files 48
configuration, custom
  for historical data collection 72
configure data collection
  CMW 61
Configure History icon 62
configuring data collection
  CandleNet Portal 54
configuring your warehouse 78
contents, chapter 25
Conversion 84
conversion process 108
  MVS 93
  OS/400 84, 108
  overview 84, 108
  Windows 84, 108
conversion, data
  automatic for MVS 94
  defining 47
  HP NSK Systems 107
  mutually exclusive with warehousing 70
  MVS 93
  OS/400 83
  programs to perform 47
  UNIX 102
  using a MODIFY command on MVS 93
  using KPDXTRA on MVS 94
  Windows 83
converting files using krarloff 88
  attributes formatting 88
  on OS/400 89
  parameters 90
  using krarloff on Windows 89
converting files using krarloff
  HP NSK 108
  OS/400 88
  Windows 88
converting historical data
  UNIX 101
CT/PDS 109
  commands 134
customizing your history conversion 104
D
data conversion
  automatic for MVS 94
  defining 47
  mutually exclusive with warehousing 70
  MVS 93
  OS/400 83
  programs to perform 47
  using a MODIFY command on MVS 93
  using KPDXTRA on MVS 94
  Windows 83
data conversion, performing
  UNIX 102
data conversion, UNIX
  automatic 103
  one time 103
data roll off 53
data warehouse
  Candle Warehouse Proxy Agent 77
  configuring 78
data warehousing
  mutually exclusive with data conversion 70
  prerequisites to 76
DDNAMES for KPDXTRA
  on MVS 95
disk space requirements for historical tables 141
display list of available Candle Management Servers 67
displaying collection status 65
documentation set information 27
E
end or begin collection 65
error logging for warehoused data 82
exported historical data
  logging of 79
exporting persistent data 123
F
file conversion
  HP NSK systems 107
file corruption 80
G
group
  selecting for Historical Data Collection 68
group, selecting
  historical data collection rules for 68
H
historical data
  components used to collect 45
  location on AS/400 91
  planning to collect 45
  selecting a strategy 45
  warehousing 47
historical data collection
  attribute specifications required for 72
  CCC Logs used with 72
  custom configuration for 72
  defining rules 67
  purpose and use 23
  rules 46
  selecting a product for which to collect data 68
  starting default collection 65
  strategy 46
  using CandleNet Portal 58
historical data collection configuration
  for Universal Agent 73
Historical Data Collection Configuration program
  invoking 53, 62
  prerequisites to running 51, 61
  requirements for invoking 53, 62
historical data collection rules
  selecting a group or table 68
  selecting a product 68
  selecting the target CMS 67
  specifying collection options 69
historical data conversion
  performing on UNIX 101
historical data table files
  location in MVS 98
historical data tables
  disk space requirements 141
historical reporting
  configuration 54
  file maintenance required 52
  long-term and short-term 53
  overview 52
history configuration
  Universal Agent 73
History Configuration dialog 54, 63
  used to display collection status 65
history configuration options, advanced 71
history tables
  naming of 78
history, short term 39
HP NSK
  file conversion for 107
  using krarloff on 108
I
icon, Configure History 62
invoking the HDC Configuration program
  requirements 53, 62
  steps 53, 62
K
KPDXTRA 95
  DDNAMES to be allocated 95
  messages 97
  parameters 95
krarloff
  converting files on HP NSK 108
  converting files on OS/400 88
  converting files on Windows 88
krarloff parameters
  OS/400 90
  Windows 90
L
location of MVS executables 98
logfile parameters
  OS/400 86
  Windows 86
LOGSPIN 84
LOGSPIN program 84
LOGSPIN, archiving procedures using 85
M
MQSeries historical data
  restriction on collecting 316
MVS
  data conversion using KPDXTRA 94
  location of historical data table files 98
  manual archiving procedure 99
MVS executables, location of 98
N
naming of history tables 78
O
ODBC 40
  requirement for using 38, 40
  SQL Server database on Windows/NT 38, 40
  used to warehouse historical data 47
ODBC data
  logging of successful exports 79
OMEGAMON XE for CICS 144
  historical data tables 144
  space requirement worksheets 148
  table record sizes 145
OMEGAMON XE for DB2
  disk space summary worksheet 169, 174
  historical data collection tables 162
  space requirements worksheets 164
  table record sizes 163
OMEGAMON XE for DB2 Universal Database
  historical data collection tables 170
  space requirements worksheets 171
  table record sizes 171
OMEGAMON XE for MS SQL Server
  disk space summary worksheet 215
  historical data tables 207
  space requirement worksheets 209
  table record sizes 208
OMEGAMON XE for NetWare
  disk space summary worksheet 177
  historical data tables 175
  space requirement worksheets 176
  table record sizes 175
OMEGAMON XE for ORACLE
  disk space summary worksheet 190
  historical data tables 179
  space requirement worksheets 182
  table record sizes 180
OMEGAMON XE for OS/390
  disk space summary worksheet 225
  historical data tables 216
  table record sizes 218
OMEGAMON XE for OS/390 UNIX System Services
  disk space summary worksheet 230
  historical data tables 226
  space requirement worksheets 227
  table record sizes 227
OMEGAMON XE for OS/400
  historical data tables 232
  space requirement worksheets 234
  table record sizes 233
OMEGAMON XE for R/3
  historical data tables 242
  space requirement worksheets 243
  table record sizes 242
OMEGAMON XE for Sybase
  disk space summary worksheet 205
  historical data tables 192
  space requirement worksheets 195
  table record sizes 193
OMEGAMON XE for Tuxedo
  disk space summary worksheet 268
  historical data tables 258
  space requirement worksheets 261
  table record sizes 259
OMEGAMON XE for UNIX
  default historical data tables 269
  disk space summary worksheet 273
  space requirement worksheets 270
  table record sizes 269
OMEGAMON XE for WebSphere Application Server 274
  historical data tables 274
  table record sizes 276
OMEGAMON XE for WebSphere Application Server for OS/390
  disk space summary 299
  disk space summary worksheet 299
  historical data tables 285
  space requirements worksheets 289–298
  table record sizes 287
OMEGAMON XE for WebSphere Integration Brokers 301
  historical data tables 301
OMEGAMON XE for WebSphere MQ Configuration
  data tables 314
  disk space summary worksheet 315
  table record size 314
OMEGAMON XE for WebSphere MQ Monitoring
  disk space summary worksheet 327
  historical data tables 316
  space requirement worksheets 320
  table record sizes 318
OMEGAMON XE for WebSphere MQ products
  running on HP NSK systems 107
OMEGAMON XE for Windows NT
  caution about event log size 329
  default table record sizes 331
  disk space summary worksheet 346
  space requirement worksheets 334
Open Database Connectivity 40
  used to warehouse data 47
OS/400
  data conversion 83
  krarloff parameters 86, 90
  logfile parameters 86
  overview of conversion process 84, 108
P
PDF files, adding annotations 30
Performance Attribute Tables, contents of 143
performance impact
  on the agent 42
  on the CMS or the agent 42
  requests for historical data from large tables 42
  warehousing 43
performance impact of large data requests 42
Persistent Data Store
  maintaining 109
  restoring exported data 124
persistent data store
  backing up datasets to DASD 117
  command interface 134
  commands 134
  connecting the dataset to the CMS 121
  data record format of exported data 125
  dataset naming conventions 119
  determining the medium for dataset backup 117
  disconnecting the dataset 121
  exporting and restoring persistent data 123
  exporting persistent data 123
  extracted data format 133
  extracting CT/PDS data to EBCDIC files 132
  extracting CT/PDS data to flat files 131
  extracting data to EBCDIC files 132
  finding background information 120
  introduction 111
  maintaining the persistent data store 109
  making archived data available 119
  naming the export datasets 118
  overview of maintenance process 115
  what part of maintenance do you control 116
planning to collect historical data 45
prerequisites to configuring your historical warehouse 75
prerequisites to running HDC Configuration program 51, 61
prerequisites to warehousing 76
preventing file corruption
  if your database is corrupted 81
  when storing data at the agent 81
  when storing data at the CMS 80
preventing historical data file corruption 80
printing problems 29
Proxy Agent, Candle Warehouse 77
R
reporting tool
  data conversion using 38
restoring exported persistent data 124
roll off, data 53
rules, defining for historical data collection 67
rules, historical data collection
  selecting a group or table 68
  selecting the target CMS 67
  specifying collection options 69
S
sample meta description file (.hdr) 49
select CMS targets for data collection 67
selecting a product for Historical Data Collection 68
selecting a table or group for Historical Data Collection 68
short term history 39
size of Windows NT event log
  caution 329
SQL Server database on Windows/NT
  access via ODBC 38, 40
starting default collection 65
starting historical data collection
  CandleNet Portal 58
stopping all historical data collection 65
strategy for historical data collection 46
T
table
  selecting for Historical Data Collection 68
table, selecting
  historical data collection rules for 68
Tandem see HP NSK
U
Universal Agent history configuration 73
W
warehouse
  configuring 78
  prerequisites to configuring 75
warehouse interval, specifying 68
warehousing
  error logging for 82
  logging of successful exports 79
  mutually exclusive with data conversion 70
  prerequisites to 76
warehousing historical data 47
warehousing, data
  mutually exclusive with data conversion 70
Windows
  krarloff parameters 86, 90
  location of executables 92
  location of historical data table files 92
  location of history configuration files 92
  logfile parameters 86
  overview of conversion process 84, 108
Windows AT command 84
Windows data conversion 83
Windows NT event log
  caution about size of 329