
Connect CDC SQData

DB2 Capture Reference

Version 4.0


© 2001, 2021 SQData. All rights reserved.

Version 4.0

Last Update: 1/8/2021


Contents

Introduction
    Db2 Data Capture Summary
    Organization
    Terminology
    Documentation Conventions
    Related Documentation

Db2 Log Reader Capture
    Implementation Checklist
    Prepare Environment
        Identify Source and Target System and Datastore
        Confirm/Install Replication Related APARS
        Modify z/OS PROCLIB Members
        Verify Product is Linked
        Bind the Db2 Package
        Verify APF Authorization of LOADLIB
        Create zFS Variable Directories
        Reserve TCP/IP Ports
        Identify/Authorize zFS User and Started Task IDs
        Prepare Db2 for Capture
        Generate z/OS Public / Private Keys and Authorized Key File
        Configure z/OS Master Controller
    Setup CDCStore Storage Agent
        Size Transient Storage Pool
        Apply Frequency
        Create zFS Transient Data Filesystem
        Create z/OS CDCStore CAB file
    Setup Log Reader Capture Agent
        Configure Db2 Tables for Capture
        Create Db2 Capture CAB File
        Encryption of Published Data
        Prepare Db2 Capture Runtime JCL
    Setup Capture Controller Daemon
        Create Access Control List
        Create Agent Configuration File
        Prepare z/OS Controller Daemon JCL
    Configure Apply Engine
    Component Verification
        Start z/OS Controller Daemon
        Start Db2 Log Reader Capture Agent
        Start Engine
        Db2 Test Transactions
    Operation
        Start / Reload Controller Daemon
        Setting the Capture Start Point
        Restart / Remine Db2
        Apply Capture CAB File Changes
        Displaying Capture Agent Status
        Displaying Storage Agent Statistics
        Interpreting Capture/Storage Status
        Modifying z/OS Transient Storage Pool
        Stopping the Db2 Capture Agent
    Operating Scenarios
        Capture New Db2 Data
        Send Existing Db2 Data to New Target
        Filter Captured Data
        Implement TLS Support
        Initial Target Load and Refresh
        Upgrading Db2 to 10 Byte LSN

Db2 Straight Replication
    Target Implementation Checklist
    Create Target Tables
    Generate Engine Public / Private Keys
    Create Straight Replication Script
    Prepare z/OS Engine JCL
    Verify Straight Replication

Db2 Active/Active Replication

Db2 Operational Issues
    Db2 Source Database Reorgs and Load Replace
    Data Sharing Environments
    Changes Not Being Captured
    Db2 Table Names
    Flush DB2 Log Buffer to Reduce Delays
    Upgrading Db2 to 10 Byte LSN
    Compensation Analysis and Elimination
    Adding Uncataloged Tables
    Signal Errors


Introduction

Precisely's Connect CDC SQData enterprise data integration platform includes Change Data Capture agents for the leading source data repositories, including:

· Db2 on z/OS

This document is a reference manual for the configuration and operation of this capture agent, including the transient storage and publishing of captured data to Engines running on z/OS and other platforms. Included in this reference is an example of Simple Replication of the source datastore. Apply Engines can also perform complex replication to nearly any form of structured target data repository, utilizing business rule driven filters, data transformation logic and code page translation.

The remainder of this section:

· Summarizes features and functions of the Db2 change data capture agent

· Describes how this document is organized

· Defines commonly used terms

· Defines documentation syntax conventions

· Identifies complementary documents


Db2 Data Capture Summary

The Connect CDC SQData Db2 Log Reader Capture provides for the following:

Attribute                     Db2 Log Reader
Data Capture Latency          Near-Real-Time or Asynchronous
Capture Method                Log Miner
Unit-of-Work Integrity        Committed Only
Output Datastore Options      TCP/IP
Runtime Parameter Method      SQDCONF
Auto-Disable Feature          Yes
Auto-Commit Feature           Yes
Multi-Target Assignment       Yes
Include/Exclude Filters       Correlation ID, Db2 Plan, Authorization ID
Transaction Include/Exclude   Yes


Organization

The following sections provide a detailed reference for the installation, configuration and operation of the Connect CDC SQData Capture Agent for Db2:

· Db2 Log Reader Capture

· Db2 Straight Replication

· Db2 Active/Active replication

· Db2 Operational Issues

See the Change Data Capture Guide for an overview of the role capture plays in Precisely's Connect CDC SQData enterprise data integration product, the common features of the capture agents and the transient storage and publishing of captured data to Engines running on all platforms.


Terminology

Terms commonly used when discussing Change Data Capture:

Term        Meaning

Agent       Individual components of the Connect CDC SQData product architecture.

CDC         Abbreviation for Changed Data Capture.

Datastore   An object that contains data such as a hierarchical or relational database, VSAM file, flat file, etc.

Exit        A classification for changed data capture components where the implementation utilizes a subsystem exit in IMS, CICS, etc.

File        Refers to a sequential (flat) file.

JCL         An abbreviation for Job Control Language that is used to execute z/OS processes.

Platform    Refers to an operating system instance.

Record      A basic data structure usually consisting of fields in a file, topic or message. A row consisting of columns in a Relational database table. Record may be used interchangeably with row or message.

Segment     A basic data structure consisting of fields in an IMS hierarchical database. Segments are records having parent and child relationships with other records defined by a Database Description (DBD).

Source      A datastore monitored for content changes by a Capture Agent.

SQDCONF     A Utility that manages configuration parameters used by some data capture components.

SQDXPARM    A Utility that manages a set of parameters used by some IMS and VSAM changed data capture components.

Table       Used interchangeably with relational datastore. A table represents a physical structure that contains data within a relational database management system.

Target      A datastore where information is being updated/written.


Documentation Conventions

The following conventions are used in command and configuration syntax and examples in this document.

Convention       Explanation and Examples

Regular type
    Items in regular type must be entered literally using either lowercase or uppercase letters. Items in Bold type are usually "commands" or "Actions". Note, uppercase is often used in "z/OS" objects for consistency just as lowercase is often used on other platforms where case may be either enforced or optional.
    Examples: create, CCSID, /directory, //SYSOUT DD *

<variable>
    Items between < and > symbols represent variables. You must substitute an appropriate numeric or text value for the variable.
    Example: <file_name>

| Bar
    A vertical Bar indicates that a choice must be made among items in a list separated by bars.
    Examples: 'yes' | 'no', JSON | AVRO

[ ] Brackets
    Brackets indicate that an item is optional. A choice may be made among multiple items contained in brackets.
    Examples: [alias] OR [+ | -]

-- Double dash
    Double dashes "--" identify an option keyword. Some keywords may be abbreviated and preceded by a single dash "-". A double dash in some contexts can be used to indicate the start of a single line comment.
    Examples: --service=<port> OR -s <port> OR --apply OR -- this is a comment

… Ellipsis
    An ellipsis indicates that the preceding argument or group of arguments may be repeated.
    Example: [expression…]

Sequence number
    A sequence number indicates that a series of arguments or values may be specified. The sequence number itself must never be specified.
    Example: field2

' ' Single quotes
    Single quotation marks that appear in the syntax must be specified literally.
    Example: IF CODE = 'a'


Related Documentation

Installation Guide - This publication describes the installation and maintenance procedures for the Connect CDC SQData for z/OS, Linux/AIX and Windows products.

Product Architecture - Describes the overall architecture of the Connect CDC SQData product and how its components deliver true Enterprise Data Integration.

Data Capture Guide - This publication provides an overview of the role capture plays in Precisely's Connect CDC SQData product, the common features of the Capture agents and the methods supported for store and forward transport of captured data to target Engines running on all platforms.

Apply and Replicator Engine References - These documents provide a detailed reference describing the operation and command language of the Connect CDC SQData Apply and Replicator Engine components, which support target datastores on z/OS, AIX, Linux and Windows.

Secure Communications Guide - This publication describes the Secure Communications architecture and the process used to authenticate client-server connections.

Utility Guides - These publications describe each of the Connect CDC SQData utilities including SQDCONF, SQDMON, SQDUTIL and the z/OS Master Controller.

Messages and Codes - This publication describes the messages and associated codes issued by the Capture, Publisher, Storage agents, Parser, Apply and Replicator Engines, and Utilities in all operating environments including z/OS, Linux, AIX, and Windows.

Quickstart Guides - Tutorial style walk-throughs for some common configuration scenarios including Capture and Replication. z/OS Quickstarts make use of the ISPF interface. While each Quickstart can be viewed in WebHelp, you may find it useful to print the PDF version of a Quickstart Guide to use as a checklist.

Quickstart for Db2 - Procedures and screen shots that illustrate a fast and simple method of creating, configuring and running the components required for Db2 changed data capture.


Db2 Log Reader Capture

The Db2 Log Reader Capture is multi-threaded and comprises three components within the SQDDb2C module: the Log Reader based Capture agent and the CDCStore multi-platform transient Storage Manager and Publisher. The Storage Manager and Publisher together maintain both transient storage and UOW integrity. Only Committed Units-of-Work are sent by the Publisher to Engines via TCP/IP.


Implementation Checklist

This checklist covers the tasks required to prepare the operating environment and configure the Db2 Log Reader Capture Data Capture Agent. Before beginning these tasks however, the base Connect CDC SQData product must be installed. Refer to the Installation Guide for an overview of the entire product and the z/OS installation instructions and prerequisites.

 #  Task                                                         Sample JCL   z/OS Control Center

Prepare Environment
 1  Identify Source and Target System and Datastores             N/A
 2  Confirm/Install Db2 Replication Related APARS                N/A
 3  Modify z/OS Procedure Lib (PROCLIB) Members                  N/A
 4  Verify Product is Linked                                     SQDLINK
 5  Bind the Db2 Package                                         BINDSQD
 6  Verify APF Authorization of LOADLIB                          N/A
 7  Create zFS Variable Directories                              ALLOCZDR
 8  Reserve TCP/IP Ports                                         N/A
 9  Authorize zFS User and Started Task IDs and specify
    MMAPAREAMAX                                                  RACFZFS
10  Prepare Db2 for Capture                                      DB2GRANT
11  Generate z/OS Public/Private Keys and Authorized Key Files   NACLKEYS
12  Configure z/OS Master Controller                             N/A          *
Environment Preparation Complete

Setup CDCStore Storage Agent
 1  Size the Transient Storage Pool                              N/A          *
 2  Create Db2 Capture zFS Transient Data File(s)                ALLOCZFS
 3  Create the CDCStore CAB file                                 SQDCONDS     *
CDCStore Storage Agent Setup Complete

Setup Db2 Log Reader Capture Agent
 1  Configure Db2 Tables for Capture (DB Server)                 N/A
 2  Create Db2 Log Reader Capture CAB File                       SQDCONDC     *
 3  Prepare Log Reader Capture Runtime JCL                       SQDDB2C      *
Capture Agent Setup Complete

Setup Controller Daemon
 1  Create Access Control List (acl.cfg)                         CRDAEMON     *
 2  Create Agent Configuration File (sqdagents.cfg)              CRDAEMON     *
 3  Prepare z/OS Controller Daemon JCL                           SQDAEMON     *
Controller Daemon Setup Complete

Configure Apply Engine
 1  Determine Requirements                                       N/A
 2  Configure Apply Engine Environment                           N/A
 3  Create Apply Engine Script                                   N/A
Apply Engine Configuration Complete


Component Verification
 1  Start the Controller Daemon                                  SQDAEMON     *
 2  Start the Capture Agent                                      SQDDB2C      *
 3  Start the Engine                                             SQDATA       *
 4  Execute Test Transactions                                    N/A
Verification Complete


Prepare Environment

Implementation of the Db2 Log Reader Capture agent requires a number of environment specific activities that often involve people and resources from different parts of an organization. This section describes those activities so that the internal procedures can be initiated to complete them prior to the actual setup and configuration of the Connect CDC SQData capture components.

· Identify Source and Target System and Datastores

· Confirm/Install Replication Related APARS

· Modify z/OS PROCLIB Members

· Verify Product is Linked

· Bind the Db2 Package

· Verify APF Authorization of LOADLIB

· Create ZFS Variable Directories

· Reserve TCP/IP Ports

· Identify/Authorize Operating User(s) and Started Task(s)

· Prepare Db2 for Capture

· Generate z/OS Public / Private Keys and Authorized Key File

· Configure z/OS Master Controller

Identify Source and Target System and Datastore

Configuration of the Capture Agents, Engines and their Controller Daemons requires identification of the system and type of datastore that will be the source of and target for the captured data. Once this information is available, requests for ports, accounts and the necessary file and database permissions for the Engines that will run on each system should be submitted to the responsible organizational units.

Confirm/Install Replication Related APARS

The Db2 Capture Agent utilizes Db2 logging and Log Reader IFI calls. That functionality evolves over time as customers and IBM identify problems. IBM initially creates a problem management report (PMR) when a problem is identified. Next an authorized program analysis report (APAR) is issued containing symptoms and workarounds to document and track its resolution. Eventually IBM may produce a program temporary fix (PTF) to replace the module in error, and the APAR is closed.

Precisely recommends requesting the list of replication related APARs associated with your installed version of Db2 to ensure that your system is up to date before beginning to use the Connect CDC SQData Db2 Log Reader Capture.

Db2 Version 12 also requires implementation of the 10 Byte Log Sequence Number (LSN). If that was not completed prior to implementing Db2 Change Data Capture, see the section Upgrading Db2 to 10 Byte LSN below.

Page 16: Connect CDC SQData - .NET Framework

16 Connect CDC SQData DB2 Capture Reference

Db2 Log Reader Capture

Modify z/OS PROCLIB Members

Modify the members of the SQDATA.V4nnnn.PROCLIB as required for your environment to set the supporting system dataset names (i.e. Language Environment, IMS, VSAM, Db2, etc.). Each member contains instructions on the necessary modifications. Refer to the SQDLINK procedure for the names of system level datasets, as it was updated during the base Connect CDC SQData Installation.

Verify Product is Linked

The Connect CDC SQData product should have been linked as part of the base product installation using JCL similar to sample member SQDLINKA. Verify that the return code from this job was 0.

Bind the Db2 Package

The Db2 Capture Agent and an Apply Engine that updates Db2 tables require a Db2 Plan in order to access the Db2 system catalogs to obtain information regarding the tables being processed. A common database request module (DBRM) SQDDDB2D is shipped as part of the Connect CDC SQData product distribution in SQDATA.V400.DBRMLIB. The sample JCL used to bind the Package and Plan can be found in member BINDSQD.

Verify APF Authorization of LOADLIB

Execution of Connect CDC SQData on z/OS requires APF authorization of the product's Load Library, which is normally made a permanent part of the IPL APF authorization procedure as part of the base product installation. Verify that Connect CDC SQData is on the list of currently APF authorized files using the z/OS ISPF/SDSF facility. First, enter "/D PROG,APF" at the SDSF command prompt to generate the list. Next, enter "LOG" at the SDSF command prompt. Scroll to the bottom of the log to display the results of the previous command and then back up and to the right to view the complete listing of the command.
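As a sketch only, the same check and a temporary dynamic authorization can be issued as MVS system commands from the console (the dataset name below is illustrative and should match your installed Connect CDC SQData load library; a dynamic SETPROG change does not persist across IPLs, which is why the permanent IPL procedure update described above is still required):

    D PROG,APF
    SETPROG APF,ADD,DSNAME=SQDATA.V400.LOADLIB,SMS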

Create zFS Variable Directories

The Controller Daemon, Capture, Storage and Publisher agents require a predefined zFS directory structure used to store a small number of files. While only the configuration directory is required and the location of the agent and daemon directories is optional, we recommend the structure described below, where <home> and a "user" named <sqdata> could be modified to conform to the operating environment and a third level created for the Controller Daemon (see note below):

/<home>/<sqdata> - The home directory used by the Connect CDC SQData

/<home>/<sqdata>/daemon - The working directory used by the Daemon that also contains two subdirectories.

/<home>/<sqdata>/daemon/cfg - A configuration directory that contains two configuration files.

/<home>/<sqdata>/daemon/logs - A logs directory, though not required, is suggested to store log files used by the controller daemon. Its suggested location below must match the file locations specified in the Global section of the sqdagents.cfg file created in the section "Setup Controller Daemon" later in this document.

Additional directories will be created for each Capture/Publisher. We recommend the structures described below:

/<home>/<sqdata>/db2cdc - The working directory for the Db2 Capture and CDCStore Storage agents. The Capture and CDCStore configuration (.cab) Files will be maintained in this directory along with small temporary files used to maintain connections to the active agents.

/<home>/<sqdata>/db2cdc/data - A data directory is required by the Db2 Capture. Files will be allocated in this directory as needed by the CDCStore Storage Agent when transient data exceeds allocated in-memory storage. The suggested location below must match the "data_path" specified in the Storage agent configuration (.cab file) described later in this chapter. A dedicated File System is required in production with this directory as the "mount point".


/<home>/<sqdata>/imscdc - The working directory for the IMS Capture and CDCzLOG Publisher agents. The Capture and Publisher (.cab) Files will be maintained in this directory along with small temporary files used to maintain connections to the active agents.

/<home>/<sqdata>/[vsampub | kfilepub] - The working directory for the VSAM and Keyed File Compare Capture's CDCzLOG Publisher agent. The Publisher configuration (.cab) File will be maintained in this directory along with small temporary files used to maintain connections to the active agents.

Notes:

1. Consider changing the default umask setting in the /etc/profile file, or in your .cshrc or .login file.

2. While many zFS File systems are configured with /u as the "home" directory, others use /home, the standard on Linux. References in the Connect CDC SQData JCL and documentation will use /home for consistency. Check with your Systems programmer regarding zFS on your systems.

3. The User-ID(s) and/or Started Task under which the Capture and the Controller Daemon will run must be authorized for Read/Write access to the zFS directories.

4. A more traditional "nix" style structure may also be used where "sqdata", the product, would be a sub-directory in the structure "/var/opt/sqdata/" with the daemon and data sub-directory structures inside sqdata.

5. The BPXPRMxx member used for IPLs should be updated to include the mount point(s) for this zFS directory structure, as sketched following these notes.
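For illustration only, a BPXPRMxx MOUNT statement for the transient data filesystem might look like the following; the aggregate name SQDATA.TESTZFS matches the sample ALLOCZFS JCL later in this chapter and should be adjusted to your environment:

    MOUNT FILESYSTEM('SQDATA.TESTZFS')
          MOUNTPOINT('/home/sqdata/db2cdc/data')
          TYPE(ZFS) MODE(RDWR)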

JCL similar to the sample member ALLOCZDR included in the distribution should be used to allocate the necessary directories. The JCL should be edited to conform to the operating environment.

//ALLOCZDR JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Allocate zFS Directories for Daemon and CAB Files
//*--------------------------------------------------------------------
//* Note: 1) These directories are used by the Controller Daemon,
//*          CDCStore and CDCzLog based capture agents
//*
//*       2) The 1st, 2nd and 3rd level directories can be changed but
//*          we recommend the 2nd Level be a User named sqdata.
//*
//*       3) Leave /daemon and /daemon/cfg as specified
//*
//*       4) Your UserID may need to be defined as SUPERUSER to
//*          successfully run this Job
//*
//*********************************************************************
//*
//*------------------------------------------------------------
//* Delete Existing Directories
//*------------------------------------------------------------
//*DELETDIR EXEC PGM=IKJEFT01,REGION=64M,DYNAMNBR=99,COND=(0,LT)
//*SYSEXEC  DD DISP=SHR,DSN=SYS1.SBPXEXEC
//*SYSTSPRT DD SYSOUT=*
//*OSHOUT1  DD SYSOUT=*
//*SYSTSIN  DD *
//*  OSHELL rm -r /home/sqdata/*
//*--------------------------------------------------------------------
//* Create New ZFS Directories for Controller Daemon & Captures


//*--------------------------------------------------------------------
//CREATDIR EXEC PGM=IKJEFT01,REGION=64M,DYNAMNBR=99,COND=(0,LT)
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  PROFILE MSGID WTPMSG
  MKDIR '/home/sqdata/' +
        MODE(7,7,5)
  MKDIR '/home/sqdata/daemon/' +
        MODE(7,7,5)
  MKDIR '/home/sqdata/daemon/cfg' +
        MODE(7,7,5)
  MKDIR '/home/sqdata/daemon/logs' +
        MODE(7,7,5)
  MKDIR '/home/sqdata/db2cdc/' +
        MODE(7,7,5)
  MKDIR '/home/sqdata/db2cdc/data/' +
        MODE(7,7,5)
  MKDIR '/home/sqdata/imscdc/' +
        MODE(7,7,5)
  MKDIR '/home/sqdata/vsampub/' +
        MODE(7,7,5)
  MKDIR '/home/sqdata/kfilepub' +
        MODE(7,7,5)
/*
//

Reserve TCP/IP Ports

TCP/IP ports are required by the Controller Daemons on source systems and are referenced by the Engines on the target system(s) where captured Change Data will be processed. Once the source systems are known, request port number assignments for use by Connect CDC SQData on those systems. Connect CDC SQData defaults to port 2626 if not otherwise specified.
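Port reservation itself is site specific and is often handled in the TCP/IP profile. A minimal sketch, assuming the Controller Daemon runs as a job or started task named SQDAEMON and uses the default port; adjust the port and job name to your environment:

    PORT
        2626 TCP SQDAEMON        ; Connect CDC SQData Controller Daemon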

Identify/Authorize zFS User and Started Task IDs

z/OS Capture and Publisher processes can operate as standalone batch Jobs or under a Started Task. Once the decision has been made as to which configuration will be employed, a User-ID and/or Name of the Started Task must be assigned. RACF must then be used to grant access to the OMVS zFS file system.

JCL similar to the sample member RACFZFS included in the distribution can be edited to conform to the operating environment, and be used to provide the appropriate authorizations:

//RACFZFS JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Sample RACF Commands to Setup zFS Authorization
//*--------------------------------------------------------------------
//* Note: 1) The Task/User Names are provided as an example and
//*          must be changed to fit your environment
//*
//* Started Tasks included:
//*   SQDAMAST - z/OS Master Controller
//*   SQDDB2C  - DB2 z/OS Capture Agent
//*   SQDZLOGC - IMS/VSAM Log Stream Publisher


//*   SQDAEMON - z/OS Listener Daemon
//*   <admin_user> - Administrative User
//*
//*       2) MMAPAREAMAX Parm required only for DB2 CDCStore Capture
//*
//*       3) The FSACCESS step may be needed if the RACF FSACCESS
//*          class is active. See comments in the step.
//*
//*--------------------------------------------------------------------
//*
//RACFZFS  EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUADS  DD DSN=SYS1.UADS,DISP=SHR
//SYSLBC   DD DSN=SYS1.BRODCAST,DISP=SHR
//SYSTSIN  DD *
ADDUSER SQDAMAST DFLTGRP(<stc_group>) OWNER(<owner_name>)
ALTUSER SQDAMAST NOPASSWORD NOOIDCARD
ALTUSER SQDAMAST NAME('STASK, SQDATA')
ALTUSER SQDAMAST DATA('FOR SQDATA CONTACT:<sqdata_contact_name>')
ALTUSER SQDAMAST WORKATTR(WAACCNT('**NOUID**'))
CONNECT SQDAMAST GROUP(<stc_group>) OWNER(<owner_name>)
PERMIT 'SQDATA.*' ID(SQDAMAST) ACCESS(READ) GEN

ADDUSER SQDDB2C DFLTGRP(<stc_group>) OWNER(<owner_name>)
ALTUSER SQDDB2C NOPASSWORD NOOIDCARD
ALTUSER SQDDB2C NAME('STASK, SQDATA')
ALTUSER SQDDB2C DATA('FOR SQDATA CONTACT:<sqdata_contact_name>')
ALTUSER SQDDB2C WORKATTR(WAACCNT('**NOUID**'))
CONNECT SQDDB2C GROUP(<stc_group>) OWNER(<owner_name>)
ALTUSER SQDDB2C OMVS(PROGRAM('/bin/sh'))
ALTUSER SQDDB2C OMVS(MMAPAREAMAX(262144))
PERMIT 'SQDATA.*' ID(SQDDB2C) ACCESS(READ) GEN

ADDUSER SQDZLOGC DFLTGRP(<stc_group>) OWNER(<owner_name>)
ALTUSER SQDZLOGC NOPASSWORD NOOIDCARD
ALTUSER SQDZLOGC NAME('STASK, SQDATA')
ALTUSER SQDZLOGC DATA('FOR SQDATA CONTACT:<sqdata_contact_name>')
ALTUSER SQDZLOGC WORKATTR(WAACCNT('**NOUID**'))
CONNECT SQDZLOGC GROUP(<stc_group>) OWNER(<owner_name>)
ALTUSER SQDZLOGC OMVS(PROGRAM('/bin/sh'))
PERMIT 'SQDATA.*' ID(SQDZLOGC) ACCESS(READ) GEN

ADDUSER SQDAEMON DFLTGRP(<stc_group>) OWNER(<owner_name>)
ALTUSER SQDAEMON NOPASSWORD NOOIDCARD
ALTUSER SQDAEMON NAME('STASK, SQDATA')
ALTUSER SQDAEMON DATA('FOR SQDATA CONTACT:<sqdata_contact_name>')
ALTUSER SQDAEMON WORKATTR(WAACCNT('**NOUID**'))
CONNECT SQDAEMON GROUP(<stc_group>) OWNER(<owner_name>)
ALTUSER SQDAEMON OMVS(PROGRAM('/bin/sh'))
PERMIT 'SQDATA.*' ID(SQDAEMON) ACCESS(READ) GEN

ADDUSER <admin_user> DFLTGRP(<stc_group>) OWNER(<owner_name>)
ALTUSER <admin_user> NOPASSWORD NOOIDCARD
ALTUSER <admin_user> NAME('STASK, SQDATA')
ALTUSER <admin_user> DATA('FOR SQDATA CONTACT:<contact_name>')
ALTUSER <admin_user> WORKATTR(WAACCNT('**NOUID**'))
CONNECT <admin_user> GROUP(<stc_group>) OWNER(<owner_name>)
ALTUSER <admin_user> OMVS(PROGRAM('/bin/sh'))
ALTUSER <admin_user> OMVS(MMAPAREAMAX(262144))
PERMIT 'SQDATA.*' ID(<admin_user>) ACCESS(READ) GEN


SETROPTS GENERIC (DATASET ) REFRESH
/*
//*--------------------------------------------------------------------
//* SETUP R/W ACCESS TO THE SQDATA ZFS FILE SYSTEM
//*
//* If the FSACCESS RACF class is not active, do not run this step.
//*
//* The FSACCESS class provides coarse-grained control to z/OS USS
//* file systems at the file system name level. It is inactive by
//* default and is not always used.
//*
//* If your RACF administrator has activated this class, and if any
//* protected file system will be accessed by a capture, publisher,
//* daemon, admin user, or other user or task, then you will need to
//* grant access to the relevant profile(s). Check with your RACF
//* administrator to determine if this is required.
//*
//* The example below shows the RACF commands to define a new profile
//* in the FSACCESS class for the DB2 CDCStore file system and grant
//* UPDATE permission to the users that will access it.
//*--------------------------------------------------------------------
//FSACCESS EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUADS  DD DISP=SHR,DSN=SYS1.UADS
//SYSLBC   DD DISP=SHR,DSN=SYS1.BRODCAST
//SYSTSIN  DD *
SETROPTS GENERIC(FSACCESS)
RDEFINE FSACCESS SQDATA.** UACC(NONE)
PERMIT SQDATA.** CLASS(FSACCESS) ID(SQDAMAST) ACCESS(UPDATE)
PERMIT SQDATA.** CLASS(FSACCESS) ID(SQDDB2C) ACCESS(UPDATE)
PERMIT SQDATA.** CLASS(FSACCESS) ID(SQDZLOGC) ACCESS(UPDATE)
PERMIT SQDATA.** CLASS(FSACCESS) ID(SQDAEMON) ACCESS(UPDATE)
PERMIT SQDATA.** CLASS(FSACCESS) ID(<admin_user>) ACCESS(UPDATE)
SETROPTS RACLIST(FSACCESS) REFRESH
/*
//

Notes:

· The RACFZFS sample JCL includes users SQDDB2C and SQDZLOGC. These sections are only required when using the Db2 CDCSTORE Capture or the IMS/VSAM CDCzLog Publisher agents respectively.

· The Db2 Log Reader Capture avoids "landing" captured data by using memory mapped storage. While Storage is not allocated until memory mapping is active, it is important to specify a value for MMAPAREAMAX using RACF that will accommodate the data space pages allocated for memory mapping of the z/OS UNIX (OMVS) files. Precisely recommends using a value of 262144 (256MB) because the default of 4096 (16MB) will likely cause the capture to fail as workload increases. The RACF ADDUSER or ALTUSER command, included in the sample RACFZFS JCL above, specifies the MMAPAREAMAX limit. You can read more about MMAPAREAMAX process limits and their relationship to the MAXMMAPAREA system limit here: https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.bpxb200/maxmm.htm.


Prepare Db2 for Capture

The Db2 Log Reader Capture requires special user privileges and preparation to access and read the Db2 Recovery Logs.

The following GRANTS are required:

· GRANT MONITOR2 TO <sqdata_user>;

· GRANT EXECUTE ON PLAN SQDV4000 TO <sqdata_user>;

· GRANT SELECT ON SYSIBM.SYSTABLES TO <sqdata_user>;

· GRANT SELECT ON SYSIBM.SYSCOLUMNS TO <sqdata_user>;

· GRANT SELECT ON SYSIBM.SYSINDEXES TO <sqdata_user>;

· GRANT SELECT ON SYSIBM.SYSKEYS TO <sqdata_user>;

· GRANT SELECT ON SYSIBM.SYSTABLESPACE TO <sqdata_user>;

Db2 Reorg and Load procedures may need to be updated:

· The KEEPDICTIONARY=YES parameter must be used by all Db2 REORG and LOAD Utilities. If the CDC process is run asynchronously, for some reason gets behind, or is configured to recapture older logs, the proper Compression Dictionary must be available (an illustrative utility statement is shown below).
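For illustration only, on the REORG utility control statement this is specified with the KEEPDICTIONARY keyword; the database and tablespace names below are hypothetical and should be replaced with your own:

REORG TABLESPACE DBSQD01.TSEMP KEEPDICTIONARY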

Notes:

· A common database request module (DBRM) sqdv4000.bnd package is shipped as part of the Connect CDC SQData product distribution and a Bind must be performed on the Package. Use the BINDSQD member in the CNTL Library to bind the Package and Plan to Db2.

· Db2 Monitor 2 privilege, specified above, is required by the Db2 Data Capture component in order to make Db2 Instrumentation Facility Interface (IFI) calls.

· Each table to be captured (see Configure Db2 Tables for Capture) also requires:

ALTER TABLE schema.tablename DATA CAPTURE CHANGES;

JCL similar to sample member DB2GRANT included in the distribution can be edited to conform to the operating environment, and be used to provide the appropriate Db2 user Authorizations.

//DB2GRANT JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Grant Db2 Authorizations for SQDATA Userid(s)
//*--------------------------------------------------------------------
//* Note: MONITOR2 for IFI Calls
//*       Execute on the SQDATA PLAN SQDV4000
//*       SELECT on Catalog Table SYSIBM.SYSTABLES
//*       SELECT on Catalog Table SYSIBM.SYSCOLUMNS
//*       SELECT on Catalog Table SYSIBM.SYSINDEXES
//*       SELECT on Catalog Table SYSIBM.SYSKEYS
//*       SELECT on Catalog Table SYSIBM.SYSTABLESPACE
//*--------------------------------------------------------------------
//*
//DB2GRANT EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB  DD DISP=SHR,DSN=DSNC10.SDSNLOAD
//SYSTSPRT DD SYSOUT=*


//SYSTSIN  DD *
  DSN SYSTEM(DBCG)
  RUN PROGRAM(DSNTIAD) PLAN(DSNTIA11) -
      LIB('DSNC10.RUNLIB.LOAD')
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN    DD *
  GRANT MONITOR2 TO <db2_user>;
  GRANT EXECUTE ON PLAN SQDV4000 TO <db2_user>;
  GRANT SELECT ON SYSIBM.SYSTABLES TO <db2_user>;
  GRANT SELECT ON SYSIBM.SYSCOLUMNS TO <db2_user>;
  GRANT SELECT ON SYSIBM.SYSINDEXES TO <db2_user>;
  GRANT SELECT ON SYSIBM.SYSKEYS TO <db2_user>;
  GRANT SELECT ON SYSIBM.SYSTABLESPACE TO <db2_user>;

Generate z/OS Public / Private Keys and Authorized Key File

The Controller Daemon uses a Public / Private key mechanism to ensure component communications are valid and secure. A key pair must be created for the SQDAEMON Job System User-ID and the User-IDs of all the Agent Jobs that interact with the Controller Daemon. On z/OS, by default, the private key is stored in SQDATA.NACL.PRIVATE and the public key in SQDATA.NACL.PUBLIC. These two files will be used by the daemon in association with a sequential file containing a concatenated list of the Public Keys of all the Agents allowed to interact with the Controller Daemon. The Authorized Key File must contain, at a minimum, the public key of the SQDAEMON job System User-ID and is usually created with a first node matching the user name running the SQDAEMON job, in our example SQDATA.NACL.AUTH.KEYS.

The file will also include the Public keys of Capture Agents running on the same platform as the Controller Daemon and Engines running on z/OS or other platforms. On the z/OS platform the Authorized Key List is usually maintained by an administrator using ISPF.

JCL similar to sample member NACLKEYS included in the distribution executes the SQDUTIL utility program using the keygen command and should be used to generate the necessary keys and create the Authorized Key File. The JCL should be edited to conform to the operating environment and the job must be run under the user-id that will be used when the Controller Daemon job is run.

//NACLKEYS JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Generate NACL Public/Private Keys and optionally AKL file
//*--------------------------------------------------------------------
//* Required DDNAME:
//*   SQDPUBL DD - File that will contain the generated Public Key
//*   SQDPKEY DD - File that will contain the generated private Key
//*                ** This file and its contents are not to be shared
//*
//* Required parameters:
//*   PARM - keygen  *** In lower case ***
//*   USER - The system USERID or high level qualifier of the
//*          SQDATA libraries IF all Jobs will share Private Key.
//*
//* Notes:
//*   1) This Job generates a new Public/Private Key pair, saves
//*      them to their respective files and adds the Public Key
//*      to an existing Authorized Key List, allocating a new
//*      file for that purpose if necessary.
//*
//*   2) An optional first step deletes the current set of files
//*
//*   3) Change the SET parms below for:


//*      HLQ  - high level qualifier of the CDC Libraries
//*      VER  - the 2nd level qualifier of the CDC OBJLIB & LOADLIB
//*      USER - the High Level Qualifier of the NACL Datasets
//*--------------------------------------------------------------------
//*
//        SET HLQ=SQDATA
//        SET VER=V400
//        SET USER=&SYSUID
//*
//JOBLIB   DD DISP=SHR,DSN=&HLQ..&VER..LOADLIB
//*
//*-------------------------------------------------------------------
//* Optional: Delete Old Instance of the NACL Files
//*-------------------------------------------------------------------
//*DELOLD   EXEC PGM=IEFBR14
//*SYSPRINT DD SYSOUT=*
//*OLDPUB   DD DISP=(OLD,DELETE,DELETE),DSN=&USER..NACL.PUBLIC
//*OLDPVT   DD DISP=(OLD,DELETE,DELETE),DSN=&USER..NACL.PRIVATE
//*OLDAUTH  DD DISP=(OLD,DELETE,DELETE),DSN=SQDATA.NACL.AUTH.KEYS
//*-------------------------------------------------------------------
//* Allocate Public/Private Key Files and Generate Public/Private Keys
//*-------------------------------------------------------------------
//SQDUTIL  EXEC PGM=SQDUTIL
//SQDPUBL  DD DSN=&USER..NACL.PUBLIC,
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=21200),
//            DISP=(,CATLG,DELETE),UNIT=SYSDA,
//            SPACE=(TRK,(1,1))
//SQDPKEY  DD DSN=&USER..NACL.PRIVATE,
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=21200),
//            DISP=(,CATLG,DELETE),UNIT=SYSDA,
//            SPACE=(TRK,(1,1))
//SQDPARMS DD *
 keygen
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDLOG   DD SYSOUT=*
//*SQDLOG8 DD DUMMY
//*-------------------------------------------------------------------
//* Allocate the Authorized Key List File --> Used only by the Daemon
//*-------------------------------------------------------------------
//COPYPUB  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DISP=SHR,DSN=&USER..NACL.PUBLIC
//SYSUT2   DD DSN=SQDATA.NACL.AUTH.KEYS,
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=21200),
//            DISP=(MOD,CATLG),UNIT=SYSDA,SPACE=(TRK,(5,5))

Notes:

1. Since the Daemon, Capture Agents and z/OS Apply Engines may be running in the same LPAR/system, they frequently run under the same System User-ID; in that case they would share the same public/private key pair.

2. Changes are not known to the daemon until the configuration file is reloaded, using the SQDMON Utility, or the sqdaemon process is stopped and started.


Configure z/OS Master Controller

The z/OS Master Controller utility (SQDAMAST) reduces the overall number of jobs and initiators required to run multiple Connect CDC SQData processes. Control commands can also be issued through the operator console, by replying to an optional outstanding WTOR message. This utility also provides for Notification JCL to be submitted in the event of an Engine, CDCStore/CDCzLog, Capture, Publisher or sqdaemon Controller failure.

The z/OS Master Controller is optional and not required for testing. It is more useful in larger Production implementations, particularly those utilizing many z/OS Apply Engines.

Review the z/OS Master Controller Guide for configuration and operation of this utility.


Setup CDCStore Storage Agent

The Db2 Log Reader Capture utilizes the CDCStore Storage Agent to manage the transient storage of both committed and in-flight or uncommitted units-of-work using auxiliary storage. The Storage Agent must be set up before configuring the Capture Agent.

Size Transient Storage Pool

The CDCStore Storage Agent utilizes a memory mapped storage pool to speed captured change data on its way to Engines. It is designed to do so without "landing" the data after it has been mined from a database log. Configuration of the Storage Agent requires the specification of both the memory used to cache changed data as well as the disk storage used if not enough memory can be allocated to hold large units-of-work and other concurrent workload.

Memory is allocated in 8MB blocks with a minimum of 4 blocks allocated, or 32MB of system memory. The disk storage pool is similarly allocated in files made up of 8MB blocks. While ideally the memory allocated would be large enough to maintain the log generated by the longest running transaction AND all other transactions running concurrently, that will most certainly be impractical if not impossible.

Ultimately, there are two situations to be avoided which govern the size of the disk storage pool:

Large Units of Work - While never advisable, some batch processes may update very large amounts of data before committing the updates. Often such large units of work may be unintentional or even accidental but must still be accommodated. The storage pool must be able to accommodate the entire unit of work or a DEADLOCK condition will be created.

Archived Logs - Depending on workload, database logs will eventually be archived, at which point the data remains accessible to the Capture Agent but at a higher cost in terms of CPU and I/O. Under normal circumstances, captured data should be consumed by Engines in a timely fashion, making the CDCStore FULL condition one to be aware of but not necessarily concerned about. If however the cause is a stopped Engine, the duration of the outage could result in un-captured data being archived.

The environment and workload may make it impossible to allocate enough memory to cache a worst case or even the average workload, therefore we recommend two methods for sizing the storage pool based on the availability of logging information.

If detailed statistics are available:

1. Gather information to estimate the worst case log space utilization (longest running Db2 transaction AND all other Db2 transactions running concurrently) - We will refer to this number as MAX.

2. Gather information to estimate the log space consumed by an "Average size" Db2 transaction and multiply by the number of average concurrent transactions - We will refer to this number as AVG.

3. Plan to allocate disk files in your storage pool as large as the Average (AVG) concurrent transaction Log space consumed. Divide the value of AVG by 8 (number of MB in each block) - This will give you the Number-of-Blocks in a single file.

4. Divide the value of MAX by 8 (number of MB in each block) and again by the Number-of-Blocks to calculate the number of files to allocate, which we will refer to as Number-of-Files. Note, dividing the value of MAX by AVG and rounding to the nearest whole number should result in the same value for Number-of-Files.

Example:

Number-of-Blocks = AVG / 8 (MB per block)

Number-of-Files = MAX / 8 / Number-of-Blocks (which is the same as Number-of-Files = MAX / AVG)
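For instance, with hypothetical measurements of MAX = 4096MB and AVG = 512MB:

Number-of-Blocks = 512 / 8 = 64

Number-of-Files = 4096 / 8 / 64 = 8 (equivalently, 4096 / 512 = 8)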


If detailed statistics are NOT available:

1. Precisely recommends using a larger number of small disk files in the storage pool and suggests beginning with 256MB files. Dividing 256MB by the 8MB block size gives the Number-of-Blocks in a single file, 32.

2. Precisely recommends allocating a total storage pool of 2GB (2048MB) as the starting point. Divide that number by 256MB to calculate the Number-of-Files required to hold 2GB of active LOG, which would be 8.

Example:

Number-of-Blocks = 256MB / 8MB = 32

Number-of-Files = 2048MB / 256 = 8

Use these values to configure the CDCStore Storage Agent in the next section.

Notes:

1. Remember that it is possible to adjust these values once experience has been gained and performance observed. See the section "Display Storage Agent Statistics" in the Operations section below.

2. Think of the value for Number-of-Files as a file Extent, in that another file will be allocated only if the MEMORY cache is full and all of the Blocks (Number-of-Blocks) in the first file have been used and none are released before additional 8MB Blocks are required to accommodate an existing incomplete unit of work or other concurrent units of work.

3. While Number-of-Blocks and Number-of-Files can be dynamically adjusted, they will apply only to new files allocated. It will be necessary to stop and restart the Storage Agent for changes to MEMORY.

4. Multiple Directories can also be allocated but this is only practical if the File system itself fills and a second directory becomes necessary.

Apply Frequency

The size of the transient storage area is also affected by the frequency with which changed data is applied to a target. For example, changes from a Db2 source to an Oracle target may only need to be applied once a day.

In this example the transient storage could be sized large enough to store all of the changed data accumulated during the one-day period. Often however, the estimated size will prove to be inadequate. When that happens the capture will eventually stop mining the Db2 Log and wait for an Engine to connect and Publishing to resume. When the Capture does finally request the next log record from Db2, the required Db2 Archive Logs may have become inaccessible. This would occur if the wait period was long enough or the volume of data changing large enough that the Archive Log retention period was too short.

Best practices for Db2 Archive Log retention will normally ensure that the Archive Logs are accessible. In some environments however this can become an issue. Precisely recommends analysis of the total Db2 workload in all cases because even though only a fraction of all the existing tables may be configured for capture, the Db2 Log Reader capture potentially requires access to every log Archived since the last Apply cycle.

Precisely recommends switching to a streaming Apply model for your target or raising the Apply frequency as high as practical when capturing rapidly changing tables in high volume Db2 environments, especially if space is an issue.


Create zFS Transient Data Filesystem

Once you have estimated the potential size of the transient data storage pool you must create a dedicated filesystem. That filesystem will then be assigned a mount point that will be referenced in the CDCStore configuration (.cab) file. As discussed previously this filesystem need not be large but it is critical that it be dedicated for this purpose because the Storage agent will create and remove files as needed based on the definition of the storage pool. It will expect the space it has been told it can use to be available when needed and will terminate the capture if it is not. Double the amount of space you have estimated will be required. That will allow you to adjust the number of blocks and files in your configuration without having to add an additional mount point and filesystem.

JCL similar to the following sample member ALLOCZFS included in the distribution should be used to create the Transient Data Filesystem for the Db2 z/OS Capture Agent and mount it at the directory created previously, /home/sqdata/db2cdc/data. The JCL should be edited to conform to the operating environment.

//ALLOCZFS JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Allocate Transient Storage Filesystem for DB2 z/OS Capture Agent
//*--------------------------------------------------------------------
//* Required parameters (set below):
//*   ZFS - the name of the VSAM cluster aggregate
//*   MB  - the number of megabytes to allocate to the cluster
//*   DIR - the directory name of the ZFS mountpoint
//*         which was previously created using member ALLOCZDR
//*   SMS - the SMS class to be assigned to the cluster
//*   VOL - the DASD volume(s) used for cluster allocation
//*
//* Notes:
//*   1) You must be a UID(0) User to Run this Job
//*
//*   2) This job contains six (6) steps as follows:
//*      - Unmount the existing file system - optional
//*      - Define the ZFS filesystem VSAM cluster aggregate
//*      - Format the ZFS filesystem aggregate
//*      - Format the ZFS filesystem aggregate
//*      - Create the mountpoint directory
//*      - Mount the ZFS filesystem
//*--------------------------------------------------------------------
//*
//        EXPORT SYMLIST=(ZFS,MB,DIR,SMS,VOL)
//        SET ZFS=SQDATA.TESTZFS
//        SET MB=2049
//        SET DIR='/home/sqdata/db2cdc/data'
//        SET SMS=DBCLASS
//        SET VOL=WRK101
//*
//*------------------------------------------------------------------
//* Optional - Unmount the Existing File System
//*------------------------------------------------------------------
//*UNMOUNT EXEC PGM=IKJEFT01,DYNAMNBR=75,REGION=8M
//*SYSPRINT DD SYSOUT=*
//*SYSTSPRT DD SYSOUT=*
//*SYSTERM  DD DUMMY
//*SYSUADS  DD DSN=SYS1.UADS,DISP=SHR
//*SYSLBC   DD DSN=SYS1.BRODCAST,DISP=SHR
//*SYSTSIN  DD *,SYMBOLS=JCLONLY
//*  UNMOUNT FILESYSTEM('&ZFS')
/*


//*------------------------------------------------------------------
//* Define the ZFS filesystem VSAM cluster aggregate
//*------------------------------------------------------------------
//DEFINE   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SYSIN    DD *,SYMBOLS=JCLONLY
  DELETE &ZFS CLUSTER
  SET MAXCC=0
  DEFINE CLUSTER (NAME(&ZFS) -
    VOLUME(&VOL) -
    STORCLAS(&SMS) -
    LINEAR -
    MB(&MB 0) -
    SHAREOPTIONS(3))
/*
//*------------------------------------------------------------------
//* Format the ZFS Filesystem Aggregate
//*------------------------------------------------------------------
//FORMAT   EXEC PGM=IOEAGFMT,REGION=0M,PARM=('-aggregate &ZFS -compat')
//SYSPRINT DD SYSOUT=*
//STDOUT   DD SYSOUT=*
//STDERR   DD SYSOUT=*
/*
//*--------------------------------------------------------------------
//* Create the Mountpoint Directory
//*--------------------------------------------------------------------
//CREATDIR EXEC PGM=IKJEFT01,REGION=64M,DYNAMNBR=99,COND=(0,LT)
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *,SYMBOLS=JCLONLY
  PROFILE MSGID WTPMSG
  MKDIR '&DIR' MODE(7,7,5)
/*
//*------------------------------------------------------------------
//* Mount the ZFS Filesystem
//*------------------------------------------------------------------
//MOUNT    EXEC PGM=IKJEFT01,DYNAMNBR=75,REGION=8M,COND=(0,LT)
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTERM  DD DUMMY
//SYSUADS  DD DSN=SYS1.UADS,DISP=SHR
//SYSLBC   DD DSN=SYS1.BRODCAST,DISP=SHR
//SYSTSIN  DD *,SYMBOLS=JCLONLY
  MOUNT FILESYSTEM('&ZFS') -
        MOUNTPOINT('&DIR') -
        TYPE(ZFS) -
        MODE(RDWR)
/*

Create z/OS CDCStore CAB file

The CDCStore Storage Agent configuration (.cab) file is a binary file created and maintained by the SQDCONF utility. While this section focuses primarily on the initial configuration of the Storage agent, sequences of SQDCONF commands to create and configure the storage agent should be saved in a shell script or a z/OS PARMLIB member, both for migration to other operating environments and for recovery.

The SQDCONF create command will be used to prepare the initial CDCStore configuration (.cab) file for the CDCStore Storage agent used by the Db2 z/OS, UDB (DB2/LUW) and Oracle Capture agents:

Syntax

sqdconf create <cab_file_name>


  --type=store
  [--number-of-blocks=<blocks_per_file>]
  [--number-of-logfiles=<number_of_files>]
  --data-path=<directory_name>

Keyword and Parameter Descriptions

<cab_file_name> - Path and name of the Storage Agent Configuration (.cab) file. The directory must exist and the user-id associated with the agent must have the right to create and delete files in that directory. There is a one to one relationship between the CDCStore Storage Agent and Capture Agent. Precisely recommends including the Capture Agent alias as the third node in the directory structure and first node of the file name, for example, /home/sqdata/db2cdc/db2cdc_store.cab. In a windows environment .cfg may be substituted for .cab since .cab files have special meaning in windows.

--type=store - Agent type must be "store" for the Storage Agent.

[--number-of-blocks=<blocks_per_file> | -b <blocks_per_file>] - The number of 8MB blocks allocated for each file defined for transient CDC storage. The default value is 32.

[--number-of-logfiles=<number_of_files> | -n <number_of_files>] - The number of files that can be allocated in a data-path. Files are allocated as needed, one full file at a time, during storage agent operation. CDCStore recycles blocks when possible before new storage is allocated. The default value is 8.

--data-path=<directory_name> - The path and directory name where the storage agent will create transient storage files. The directory must exist and the user-id associated with the agent must have the right to create and delete files in that directory. In our example, /home/sqdata/db2cdc/data

Example

Create the Connect CDC SQData CDCStore Storage Agent configuration for a Db2 z/OS capture using JCL similar to sample member SQDCONDS included in the distribution with the appropriate storage pool parameters. While those parameters are included in-line below, we recommend that they be maintained separately in a file referenced by the SQDPARMS DD:

//SQDCONDS JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Create CAB File for DB2 CDCStore Storage Agent
//*--------------------------------------------------------------------
//* Note:  1) Parameter Keywords must be entered in lower case
//*
//*        2) Parameters Values are Case Sensitive.
//*
//*        3) The transient storage directory(s) should be sized
//*           to hold the largest anticipated unit-of-work in
//*           addition to any concurrent inflight transactions
//*
//* Steps: 1) (optional) delete the existing Storage CAB File
//*        2) Create a new Storage CAB File
//*        3) Display the contents of the new CAB File
//*
//*********************************************************************
//*
//JOBLIB   DD DISP=SHR,DSN=SQDATA.V400.LOADLIB
//*
//*-------------------------------------------
//* Optional - Delete existing CAB File
//*-------------------------------------------
//*DELETE  EXEC PGM=IEFBR14

//*SYSPRINT DD SYSOUT=*
//*CDCSTORE DD PATHDISP=(DELETE,DELETE),
//*            PATH='/home/sqdata/db2cdc/db2cdc_store.cab'
//*
//*-------------------------------------------
//* Create a New Capture CDCStore CAB File
//*-------------------------------------------
//CRSTORE  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
  create /home/sqdata/db2cdc/db2cdc_store.cab
  --type=store
  --number-of-blocks=32
  --number-of-logfiles=8
  --data-path=/home/sqdata/db2cdc/data
//*
//*----------------------------------------------
//* Display the Contents of the CDCStore CAB File
//*----------------------------------------------
//DISPLAY  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
  display /home/sqdata/db2cdc/db2cdc_store.cab --details
/*

Notes:

1. See the SQDCONF Utility Reference for more details.

2. The SQDCONF create command defines the .cab file name and the location and size of the transient datastore. Once created, this command should never be run again unless the storage agent is being recreated.

3. Unlike the Capture/Publisher configuration files, changes to the CDCStore configuration file take effect immediately and do not require the usual stop/apply/start sequence.

4. The Directory path references, in the example /home/sqdata/<type>cdc/, can be modified to conform to the operating environment but must match the Connect CDC SQData Variable Directory created in the Prepare Environment Section for the Capture.

Setup Log Reader Capture Agent

The Db2 Log Reader Capture performs three functions: Capturing changed data by mining the Db2 Log, managing the captured data, and publishing committed data directly to Engines using TCP/IP. The Publishing function manages the captured data until it has been transmitted and consumed by Engines, ensuring that captured data is not lost until the Engines, which may operate on other platforms, signal that data has been applied to their target datastores.

Setup and configuration of the Capture Agent include:

· Configure Db2 tables for capture

· Create Db2 Capture Agent CAB file

· Encryption of Published Data

· Prepare Db2 Capture JCL

Configure Db2 Tables for Capture

In order for the Db2 Capture Agent to be able to extract the changed data for Db2 tables from the recovery log, the source Db2 table must be altered to allow for change data capture.

Syntax

ALTER TABLE <schema.tablename> DATA CAPTURE CHANGES;

Keyword and Parameter Descriptions

<schema.tablename> is the fully qualified name of the source table for which changes are to be captured.

Note, Enabling change data capture will increase the amount of data written to the Db2 recovery log for each update to the source data table. Depending on the size of the tables and the volume of updates made to the table, the size of the active Db2 logs may have to be adjusted to accommodate the increased data.
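For example, to enable capture for the two IVP tables used in the examples later in this chapter (SQDATA.EMP and SQDATA.DEPT), the following statements would be executed; this is simply an illustration of the syntax above:

ALTER TABLE SQDATA.EMP DATA CAPTURE CHANGES;
ALTER TABLE SQDATA.DEPT DATA CAPTURE CHANGES;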

Create Db2 Capture CAB File

The Db2 Log Reader Capture Agent configuration (.cab) file is created and maintained by the sqdconf utility using JCL similar to sample member SQDCONDC included in the distribution. While this section focuses primarily on the initial configuration of the Capture Agent, sequences of SQDCONF commands to create and configure the capture agent can and should be stored in parameter files in case they need to be recreated. See SQDCONF Utility Reference for a full explanation of each command, their respective parameters and the utility's operational considerations.

Syntax

sqdconf create <cab_file_name>
--type=db2
--ssid=<db2_system_id>
[--ccsid=<coded_character_set_identifier>]
[--plan=<sqdata_plan>]
[--exclude-plan=<value>]
[--auto-exclude-plan=<y or n>]
[--exclude-user=<user_id>]
[--exclude-correlation-id=<value>]
[--encryption]
[--auth-keys-list="<name>"]
[--exclude-program=<plan_name>]
--store=<store_cab_file_name>

Keyword and Parameter Descriptions

<cab_file_name> - Path and name of the Capture/Publisher Configuration (.cab) file. The directory must exist and the user-id associated with the agent must have the right to create and delete files in that directory. Precisely recommends including the Capture Agent alias as the third node in the directory structure and first node of the file name, for example, /home/sqdata/db2cdc/db2cdc.cab.

--type=db2 - Agent type, in the case of the Db2 Log Reader Capture, is db2.

--ssid=<db2_system_id> - The Db2 System ID or the Data Sharing Group Name. In our example, DBBG.

--ccsid=<coded_character_set_identifier> - The coded character set identifier or code page number of the Db2 Subsystem, default is 1047.

[--plan=<sqdata_plan>] - Plan name used to connect to the DB2 subsystem. The default Plan is named SQDV4000 and need not be explicitly specified. This is an optional parameter that can be used to specify another Plan as needed.

[--exclude-plan=<name>] - Exclude transactions associated with the given plan name from capture. This parameter can be repeated multiple times.

[--auto-exclude-plan=<y | n>] - Optionally exclude from capture data that has been updated by an Apply Engine running under Connect CDC SQData's default Db2 Plan SQDV4000. Default is No (n).

[--exclude-user=<user_id>] - Rarely used, will exclude database updates made by the specified User.

[--exclude-correlation-id=<value>] - Exclude transactions with the given correlation id value from capture. This parameter can be repeated multiple times.

[--encryption] - Enables NACL encryption of published CDC record payload. See Encryption of Published Data for more details. On z/OS you may also specify --ziip when the Capture is started for enhanced CPU cycle efficiency.

[--auth-keys-list="<name>"] - Required for encrypted CDC record payload. File name must be enclosed in quotes and must contain public key(s) of only the subscribing Engines requiring encryption of the CDC record payload. See --encryption option.

--store=<store_cab_file_name> - Path and name of the Storage Agent Configuration (.cab) file. In our example, /home/sqdata/db2cdc/db2cdc_store.cab

Once the configuration file has been created, it must be updated by adding an entry for each Table to be captured using the add command. Note, only one command can be executed at a time. Precisely highly recommends keeping a Recover/Recreate configuration file Job available should circumstances require recovery from a specific Start LSN.

Syntax

sqdconf add <cab_file_name>
--schema=<name>
--table=<name> | --key=<name>
--datastore=<url>
[--active | --inactive]
[--pending]

Keyword and Parameter Descriptions

<cab_file_name> - Must be specified and must match the name specified in a previous create command.

--schema=<name> Schema name, owner, or qualifier of a table. Different databases use different semantics, but a table is usually uniquely identified as S.T where S is referenced here as schema. This parameter cannot be specified with --key.

--table=<name> A qualified table name in the form of schema.name that identifies the source. This may be used in place of two parameters, --schema and --table. Both cannot be specified. In our example the first table is SQDATA.EMP.

--key=<name> Same as --table

--datastore=<url> | -d <url> - While most references to the term datastore describe physical entities, a datastore URL represents a target subscription and takes the form: cdc://[localhost]/[<agent_alias>]/<subscriber_name> where:

· <host_name> - Optional, typically omitted with only a / placeholder. If specified, must match the [<localhost_name> | <localhost_IP>] of the server side of the socket connection.

· <agent_alias> - Optional, typically omitted with only a / placeholder. If specified, must match the <capture_agent_alias> or <publisher_agent_alias> assigned to the Capture/Publisher agent in the Controller Daemon sqdagents.cfg configuration file.

· <subscriber_name> The name presented by a requesting target agent. Also referred to as the Engine name. Connection requests by Engines or the sqdutil utility must specify a valid <subscriber_name> in their cdc://<host_name>/<agent_alias>/<subscriber_name> connection url. In our example we have used DB2TODB2.

[--active | --inactive] This parameter marks the added source active for capture when the change is applied and the agent is (re)started. If this parameter is not specified the default is --inactive.

[--pending] This parameter allows a table to be added to the configuration before it exists in the database catalog.

Example

Create a Capture configuration for the Db2 IVP tables and display the current content of the configuration file:

//SQDCONDC JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Create CAB File for the Db2 Log Reader Capture Agent
//*--------------------------------------------------------------------
//* Note:  1) Parameter Keywords must be entered in lower case
//*
//*        2) Parameters Values are Case Sensitive.
//*
//*        3) Engine Name should be in Upper Case for z/OS JCL
//*
//* Steps: 1) (optional) delete the existing Capture CAB File
//*        2) Create a new Capture CAB File
//*        3) Add Tables to the new capture CAB File
//*        4) Display the contents of the new CAB File
//*
//*********************************************************************
//*
//JOBLIB   DD DISP=SHR,DSN=SQDATA.V400.LOADLIB
//*
//*-------------------------------------------
//* Optional - Delete existing CAB File
//*-------------------------------------------
//*DELETE  EXEC PGM=IEFBR14
//*SYSPRINT DD SYSOUT=*
//*CONFFILE DD PATHDISP=(DELETE,DELETE),
//*            PATH='/home/sqdata/db2cdc/db2cdc.cab'
//*

//*-----------------------------------------------
//* Create Db2 Capture Configuration CAB File
//*-----------------------------------------------
//CRCONF   EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
  create /home/sqdata/db2cdc/db2cdc.cab --type=db2 --ssid=DBBG
  --store=/home/sqdata/db2cdc/db2cdc_store.cab
//*
//*--------------------------------------------------------------------
//* Add Tables to the Capture CAB File
//*--------------------------------------------------------------------
//* Modify to specify the Table(s) to be Captured initially.
//* Tables can be added later using a modified version of this Job
//* or using the SQDATA ISPF panel interface
//*--------------------------------------------------------------------
//*
//*-----------------------------------------------------

//* Publish Table SQDATA.EMP to Subscription DB2TODB2
//*-----------------------------------------------------
//ADDEMP   EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
  add /home/sqdata/db2cdc/db2cdc.cab --table=SQDATA.EMP
  --datastore=cdc:////DB2TODB2 --active
//*
//*-----------------------------------------------------

//* Publish Table SQDATA.DEPT to Subscription DB2TODB2
//*-----------------------------------------------------
//ADDDEPT  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
  add /home/sqdata/db2cdc/db2cdc.cab --table=SQDATA.DEPT
  --datastore=cdc:////DB2TODB2 --active
//*
//*-------------------------------------------
//* Display configuration file
//*-------------------------------------------
//DISPLAY  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
  display /home/sqdata/db2cdc/db2cdc.cab
//*
//
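A table can also be staged after the initial configuration with another ADD step. The fragment below is a sketch only: the table name SQDATA.ORDERS is hypothetical, and --pending is included on the assumption that the table does not yet exist in the Db2 catalog. The change takes effect only after an apply, as described under Apply Capture CAB File Changes later in this chapter.

//ADDNEW   EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
  add /home/sqdata/db2cdc/db2cdc.cab --table=SQDATA.ORDERS
  --datastore=cdc:////DB2TODB2 --pending --active
//*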

Notes:

1. The sqdconf create command defines the location of the Capture agent's configuration file. Once created, this command should never be run again unless you want to destroy and recreate the Capture agent.

2. Destroying the Capture agent cab file means that the current position in the log and the relative location of each engine's position in the Log will be lost. When the Capture agent is brought back up it will start from the beginning of the oldest active log and will resend everything. After initial configuration, changes in the form of add and modify commands should be used instead of the create command. Note: You cannot delete a cab file if the Capture is mounted, and a create on an existing configuration file will fail.

3. There must be a separate ADD step in the Job for every source table to be captured.

4. The Job will fail if the same table is added more than one time for the same Target Datastore/Engine. See section below "Adding/Removing Output Datastores".

5. The <subscriber_name> is case sensitive in that all references should be either upper or lower case. Because references to the "Engine" in z/OS JCL must be upper case, references to the Engine in these examples are all in upper case for consistency.

6. The display step, when run against an active configuration (.cab) file, will include other information, including:

· The current status of the table (i.e. active, inactive)

· The starting and current point in the log where data has been captured

· The number of inserts, updates and deletes for the session (i.e. the duration of the capture agent run)

· The number of inserts, updates and deletes since the creation of the configuration file

Encryption of Published Data

Precisely highly recommends the use of VPN or SSH Tunnel connections between systems, both to simplify their administration and because the CPU-intensive encryption task can be performed by dedicated network hardware.

In the event that encryption is required and a VPN or SSH Tunnel cannot be used, Connect CDC SQData provides for encryption by the Publisher using the same NaCl Public / Private Key used for authentication and authorization. While Captures and Publishers are typically initiated by the same USER_ID as the Capture Controller Daemon, those jobs explicitly identify the public / private key pair files in JCL DD statements. Precisely recommends that a second NACL Key pair be generated for the Capture / Publisher. A second Authorized Key List will also be required by the Capture / Publisher, containing the public keys for only those Engines subscribing to that Capture / Publisher and whose payload will be encrypted. Once the Controller Daemon passes the connection request to the Capture / Publisher, a second handshake will be performed with the Engine and the CDC payload will be encrypted before being published and decrypted by the receiving Engine.

Syntax

sqdconf create <cab_file_name>
[--encryption | --no-encryption]
[--auth-keys-list="<name>"]

Keyword and Parameter Descriptions

<cab_file_name> - This is where the Capture Agent configuration file, including its path, is first created. There is only one CAB file per Capture Agent. In our example /home/sqdata/db2cdc/db2cdc.cab

[--encryption | --no-encryption] - Turns on and off encryption of the published CDC record payload.

[--auth-keys-list="<name>"] - Required for encrypted CDC record payload. File name must be enclosed in quotes and must contain public key(s) of only the subscribing Engines requiring encryption of the CDC record payload. See --encryption option.

Example 1

Turn on encryption

//*-----------------------------------------------
//* Turn on Encryption for DB2 Capture
//*-----------------------------------------------
//MODCONF  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
  modify /home/sqdata/db2cdc/db2cdc.cab --encryption --auth-keys-list="NACL.AUTH.KEYS"
//*

Next, stop and restart the DB2 Capture Agent.

Example 2

Turn off encryption

//*-----------------------------------------------
//* Turn off Encryption for DB2 Capture
//*-----------------------------------------------
//MODCONF  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
  modify /home/sqdata/db2cdc/db2cdc.cab --no-encryption
//*

Finally, stop and restart the DB2 Capture Agent.

Note, Customers utilizing NACL encryption for z/OS based Captures/Publishers are encouraged to utilize zIIP processors for enhanced CPU cycle efficiency and to reduce the CPU cost associated with software encryption. Enabling zIIP processing requires one additional parameter when starting the Capture / Publisher:

1. Stop the DB2 capture agent.

2. Restart the agent and include the --ziip option, as follows: --apply --start --ziip /home/sqdata/db2cdc/db2cdc.cab
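A minimal sketch of the two steps, assuming the capture was started with the runtime JCL shown in the next section: the stop is issued through SQDCONF, and the capture job is then resubmitted with --ziip added to its SQDPARMS.

//STOP     EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
  stop /home/sqdata/db2cdc/db2cdc.cab
//*

and, in the resubmitted capture job:

//SQDPARMS DD *
  --apply --start --ziip /home/sqdata/db2cdc/db2cdc.cab
//*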

Prepare Db2 Capture Runtime JCL

Once the DB2 Capture configuration (.cab) file has been created, JCL similar to sample member SQDDB2C included in the distribution is used to Mount and optionally Start the DB2 Capture Agent process.

Example

//SQDDB2C  JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Execute (Mount) DB2 CDCStore Capture Agent - SQDDB2C
//*--------------------------------------------------------------------
//* Required parameters (lower case):
//*   config_file - Specifies the fully qualified name of the
//*                 predefined DB2 capture agent configuration
//*                 file (see sample JCL SQDCONDC)
//*

//* Optional parameters (lower case):
//*   --apply - Specifies that ALL pending changes to the config (.cab)
//*             file should be Applied before the Capture Agent is
//*             Started
//*             ** NOTE - This is normally NOT used in this job
//*             ** REMOVE if the status of pending changes is NOT
//*             ** known. Instead use SQDCONF to apply changes
//*
//*   --start - Instructs the Capture Agent to Start processing
//*             ** NOTE - This is often used in this job but can
//*             ** be performed by a separate SQDCONF command
//*
//* Note:  1) The Relational CDCStore Capture Agents include
//*           a second Publisher thread that manages Engine
//*           subscriptions
//*--------------------------------------------------------------------
//*
//JOBLIB   DD DISP=SHR,DSN=SQDATA.V400.LOADLIB
//         DD DISP=SHR,DSN=CSQ901.SCSQAUTH
//         DD DISP=SHR,DSN=CSQ901.SCSQANLE
//         DD DISP=SHR,DSN=DSNC10.SDSNLOAD
//*
//SQDDB2C  EXEC PGM=SQDDB2C,REGION=0M
//*SQDDB2C EXEC PGM=XQDDB2C,REGION=0M
//SQDPUBL  DD DSN=SQDATA.NACL.PUBLIC,DISP=SHR
//SQDPKEY  DD DSN=SQDATA.NACL.PRIVATE,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//CEEDUMP  DD SYSOUT=*
//SQDLOG   DD SYSOUT=*
//*SQDLOG8 DD DUMMY
//*SQDPARMS DD DISP=SHR,DSN=SQDATA.V400.PARMLIB(DB2CDC)
//SQDPARMS DD *
  --apply --start --ziip /home/sqdata/db2cdc/db2cdc.cab
//*

Notes:

1. While the SQDCONF utility is used to create the Capture Agent configuration and perform most of the other management and control tasks associated with the capture agent, on z/OS it cannot perform the function of the MOUNT command. On platforms other than z/OS, the MOUNT command brings an Agent on-line. On z/OS that function must be performed with an agent specific JOB or Started Task. Once the Capture Agent has been "mounted" on z/OS, the sqdconf utility can and should be used to perform all other functions as documented.

2. The first time this Job is run you may choose to include a special form of the sqdconf apply and start commands. After the initial creation, --apply should not be used in this JCL, unless all changes made since the agent was last Stopped are intended to take effect immediately upon the Start. The purpose of apply is to make it possible to add/modify the configuration while preparing for an implementation of changes without affecting the current configuration. Note, apply and start can and frequently will be separated into different SQDCONF jobs.

3. The Controller Daemon uses a Public / Private key mechanism to ensure component communications are valid and secure. While it is critical to use unique key pairs when communicating between platforms, it is common to use the same key pair for components running together on the same platform. Consequently, the key pair used by a Log Reader Capture agent may be the same pair used by its Controller Daemon.

Setup Capture Controller Daemon

The Controller Daemon enables command and control features as well as use of the browser based Control Center. The Controller Daemon plays a special role on platforms running Data Capture Agents, managing secure communication between Capture, Storage, Publisher Agents and Engines usually running on other platforms.

The primary difference between Controller Daemons on Capture platforms and Engine only platforms is that the Authorized Key File of the Capture Controller Daemon must contain the Public keys of Engines that will be requesting connections to its Storage/Publisher agents. See the Secure Communications Guide for more details regarding the Controller Daemon's role in security.

Setup and configuration of the Capture Controller Daemon, SQDAEMON, includes the following steps:

· Create the Access Control List

· Create the Agent Configuration File

· Prepare Controller Daemon JCL

Create Access Control List

The Controller Daemon requires an Access Control List (ACL) that assigns privileges (admin, query) by user or group of users associated with a particular client / server on the platform. While the ACL file name is not fixed, it is typically named acl.cfg and it must match the name specified in the sqdagents.cfg file by acl=<location/file>. The file contains 3 sections. Each section consists of key-argument pairs. Lines starting with # and empty lines are interpreted as comments. Section names must be bracketed while keywords and arguments are case-sensitive:

Syntax

Global section - not identified by a section header and must be specified first.

allow_guest=no - Specify whether guest is allowed to connect. Guests are clients that can process a NaCl handshake, but whose public key is not in the server's authorized_keys_list file. If guests are allowed, they are granted at least the right to query. The default value is No.

guest_acl=none - Specify the ACL for a guest user. This should be specified after the allow_guest parameter, otherwise guests will be able to query, regardless of this parameter. The default value is "query".

default_acl=query - Specify the default ACL which is used for authenticated clients that do not have an ACL rule explicitly associated to them, either directly or via a group.

Group section - allows the definition of groups. The only purpose of the group section is to simplify the assignment of ACLs to groups of users.

[groups]

<group_name>=<sqdata_user> [, sqdata_user…] - Defining an ACL associated with the group_name in the ACLS section will propagate that ACL to all users in the group. Note, ACLs are cumulative; that is, if a user belongs to two or more groups or has an ACL of its own, the total ACL of the user is the union of the ACLs of itself and all the groups to which the user belongs.

ACLS section - assigns one or more "rights" to individual users or groups and has the following syntax:

[acls]

<sqdata_user> | <group_name>=acl_list - When an acl_list is assigned to a group_name, the list will propagate to all users in the group. The acl_list is a comma separated list composed of one or more of the following terms:

none | query | read | write | exec | sudo | admin

a. If none is present in the ACL list, then all other elements are ignored. The terms query, read, write and exec grant the user the right to respectively query, read from, write to, and execute agents.

b. If admin is used it grants admin rights to the user. Sudo will allow a user with Admin rights to execute a request in the name of some other user. In such a case the ACL of the assumed user is tested to determine if the requested action is allowable for the assumed user. Note, this functionality is present for users of the Connect CDC SQData Control Center. See the Control Center documentation for more information.

c. If sudo is granted to an admin user, then that user can execute a command in the name of another user, regardless of the ACL of the assumed user.

Example

allow_guest=yes
guest_acl=none
default_acl=query

[groups]
admin=<sqdata_user1>
cntl=<sqdata_user2>,<sqdata_user3>
status=<sqdata_user4>,<sqdata_user5>

[acls]
admin=admin,sudo
cntl=query,read,write
status=query,read

Note: Changes are not known to the daemon until the configuration file is reloaded, using the SQDMON Utility, or the sqdaemon process is stopped and started.

The acl.cfg file can be directly edited or the JCL can be edited and the files recreated using JCL similar to sample member CRDAEMON included in the distribution. That JCL includes steps to create both the Access Control List and the Agent Configuration file. The JCL should be edited to conform to the operating environment and in particular the zFS directory structure created by the ALLOCZDR Job run as part of the Prepare Environment Checklist.

//*-----------------------------------------------------------------
//* CREATE AND POPULATE THE ACL.CFG FILE
//*-----------------------------------------------------------------
//CRACL    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT2   DD PATH='//home/sqdata/daemon/cfg/acl.cfg',
//            PATHOPTS=(OWRONLY,OCREAT,OTRUNC),
//            PATHMODE=SIRWXU,
//            PATHDISP=(KEEP,DELETE),
//            FILEDATA=TEXT
//*
//SYSUT1   DD *
allow_guest=yes
guest_acl=none
default_acl=query

[groups]
admin=<sqdata_user>
cntl=<user_name1>,<user_name2>
status=<user_name3>,<user_name4>

[acls]
admin=admin,sudo
cntl=query,read,write
status=query,read
/*

Create Agent Configuration File

The Agent Configuration File lists alias names for agents and provides the name and location of agent configuration files. It can also define startup arguments and output file information for agents that are managed by the Controller Daemon. The sqdagents.cfg file begins with global parameters followed by sections for each agent controlled by the daemon.

Syntax

Global section - not identified by a section header and must be specified first.

acl= Location (fully qualified path or relative to the working directory) and name of the acl configuration file to be used by the Controller Daemon. While the actual name of this file is user defined, we strongly recommend using the file name acl.cfg.

authorized_keys= (Non-z/OS only) Location of the authorized_keys file to be used by the Controller Daemon. On z/OS platforms, specified at runtime by DD statement.

identity= (Non-z/OS only) Location of the NaCl private key to be used by the Controller Daemon. On z/OS platforms, specified at runtime by DD statement.

message_level= Level of verbosity for the Controller Daemon messages. This is a numeric value from 0 to 8. Default is 5.

message_file= Location of the file that will accumulate the Controller Daemon messages. If no file is specified, either in the config file or from the command line, then messages are sent to the syslog.

service= Port number or service name to be used for the Controller Daemon. Service can be defined using the SQDPARMS DD on z/OS, on the command line starting sqdaemon, in the config file described in this section or, on some platforms, as the environment variable SQDAEMON_SERVICE, in that order of priority. Absent any specification, the default is 2626. If for any reason a second Controller Daemon is run on the same platform, they must each have a unique port specified.

Agent sections - Each section represents an individual agent in square brackets and heads a block of properties for that agent. Section names must be alphanumeric and may also contain the underscore "_" character.

[<capture_agent_alias>] | [<publisher_agent_alias>] Must be unique in the configuration file of the daemon on the same machine as the Capture / Publisher process. Will be referenced by the Engine connect string. Must be associated with the cab=<*.cab> file name specified in the sqdconf create command for the capture or publisher Agent setup in the previous section.

[<engine_agent_alias>] Only present in the configuration file of the daemon on the same machine as the apply Engine process. Also known as the Engine name, the <engine_agent_alias> provided here does not need to match the one specified in the sqdconf add command --datastore parameter, however there is no reason to make them different. Will also be used by sqdmon agent management and display commands. In our example we have used DB2TODB2.

[<program_alias>] | [<process_alias>] Only present in the configuration file of the daemon on the same machine where the program or process associated with the alias will execute. Any string of characters may be used; examples include process names and table names.

type= Type of the agent. This can be engine, capture or publisher. It is not necessary to specify the type for Engines, programs, scripts or batch files.

program= The name of the program (or nix shell script / or Windows batch file) to invoke in order to start an agent. This can be a full path or a simple program name. In the latter case, the program must be accessible via the PATH of the sqdaemon context. The value must be "SQDATA" for Engines but may also be any other executable program, shell script (nix) or batch file (Windows).

args= Parameters passed on the command line to the program=<name> associated with the agent on startup. In the case of a Connect CDC SQData Engine, it must be the "parsed" Engine script name, i.e. <engine.prc>. This is valid for program= entries only.

working_directory= Specify the working directory used to execute the agent.

cab=<*.cab> Location and name of the configuration (.cab) file for capture and publisher agent entries. The file is created by sqdconf and required by sqdconf commands. In a Windows environment .cfg may be substituted for .cab since ".cab" files have special meaning.

stdout_file= File name used for stdout for the agent. If the value is not a full path, it is relative to the working directory of the agent. The default value is agent_name.stdout. Using the same file name for stdout_file and stderr_file is recommended and will result in a concatenation of the two results, for example <engine_name.rpt>. This is valid for engine entries only.

stderr_file= File name used as stderr for the agent. If the value is not a full path, it is relative to the working directory of the agent. The default value is agent_name.stderr. Using the same file name for stdout_file and stderr_file is recommended and will result in a concatenation of the two results, for example <engine_name.rpt>. This is valid for engine entries only.

report= Synonym of stderr_file. If both are specified, report takes precedence.

comment= User specified comment associated with the agent. This is only used for display purposes.

auto_start= A boolean value (yes/no/1/0), indicating if the associated agent should be automatically started when sqdaemon is started. This also has an impact in the return code reported by sqdmon when an agent stops with an error. If an agent is marked as auto_start and it stops unexpectedly, this will be reported as an Error in the sqdaemon log, otherwise it is reported as a Warning. This is valid for engine entries only.

Notes:

1. Directories and paths specified must exist before being referenced. Relative names may be included and are relative to the working directory of the sqdaemon "-d" parameter or as specified in the file itself.

2. While message_file is not a required parameter, we generally recommend its use; otherwise all messages, including authentication and connection errors, will go to the system log. On z/OS, however, the system log may be preferable since other management tools used to monitor the system use the log as their source of information.

3. All references to .cab file names must be fully qualified.

Example

A sample sqdagents.cfg file for the Controller Daemon running on z/OS is created using JCL similar to sample member CRDAEMON included in the distribution. The JCL should be edited to conform to the operating environment and the zFS directories previously allocated. Once created, the sqdagents.cfg file can be directly edited or the JCL can be edited and the files recreated. Changes are not known to the daemon until the configuration file is reloaded (see SQDMON Utility) or the daemon process is stopped and started.

//*--------------------------------------------------------------------
//* Create and populate the sqdagents.cfg file
//*--------------------------------------------------------------------
//CRAGENTS EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT2   DD PATH='//home/sqdata/daemon/cfg/sqdagents.cfg',
//            PATHOPTS=(OWRONLY,OCREAT,OTRUNC),
//            PATHMODE=SIRWXU,
//            PATHDISP=(KEEP,DELETE),
//            FILEDATA=TEXT
//*
//SYSUT1   DD *
acl=acl.cfg
message_file=../logs/acl.log

[db2cdc]
type=capture
cab=/home/sqdata/db2cdc/db2cdc.cab

/*
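The same file can also register an Apply Engine with the daemon using the agent keywords described above. The fragment below is a sketch only: the [DB2TODB2] engine entry, the working directory, the parsed script name and the report file name are illustrative assumptions and are not part of the sample member.

acl=acl.cfg
message_file=../logs/acl.log
service=2626

[db2cdc]
type=capture
cab=/home/sqdata/db2cdc/db2cdc.cab

[DB2TODB2]
type=engine
program=SQDATA
args=DB2TODB2.prc
working_directory=/home/sqdata/engine
stdout_file=DB2TODB2.rpt
stderr_file=DB2TODB2.rpt
auto_start=no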

Prepare z/OS Controller Daemon JCL

JCL similar to the sample member SQDAEMON included in the distribution can be used to start the Controller Daemon. The JCL must be edited to conform to the operating environment.

//SQDAEMON JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*-----------------------------------------------------------------
//* Execute the z/OS SQDAEMON Controller in Batch
//*-----------------------------------------------------------------
//* Parms Must be Entered in lower case
//*
//*   --service=port_number
//*     Where port_number is the number of a TCP/IP port that will be
//*     used to communicate to the Controller Daemon
//*     ** Note: If this parm is omitted, here and in the
//*              sqdagents.cfg file, the default port will be 2626 **
//*
//*   -d zfs_dir
//*     Where zfs_dir is the predefined working directory used by
//*     the controller
//*     EXAMPLE:
//*       /home/sqdata/daemon - the controller's working directory
//*       and its required cfg and optional logs sub-directories:
//*
//*       /home/sqdata/daemon/cfg - must contain 1 file:
//*         sqdagents.cfg - contains a list of
//*                         capture/publisher/engine agents to
//*                         be controlled by the daemon
//*
//*       - and optionally:
//*         acl.cfg - used for acl security

//*       /home/sqdata/daemon/logs - used to store log files used by the
//*                                  controller daemon
//*
//*********************************************************************
//*
//JOBLIB   DD DISP=SHR,DSN=SQDATA.V400.LOADLIB
//*
//SQDAEMON EXEC PGM=SQDAEMON
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDLOG   DD SYSOUT=*
//*SQDLOG8 DD DUMMY
//*
//SQDPUBL  DD DISP=SHR,DSN=SQDATA.NACL.PUBLIC
//SQDPKEY  DD DISP=SHR,DSN=SQDATA.NACL.PRIVATE
//SQDAUTH  DD DISP=SHR,DSN=SQDATA.NACL.AUTH.KEYS
//*SQDPARMS DD DISP=SHR,DSN=SQDATA.V400.PARMLIB(SQDAEMON)
//SQDPARMS DD *
  --service=2626 --tcp-buffer-size=262144 -d /home/sqdata/daemon
/*
//

Note: This JCL can be simplified by placing all optional parameters in the sqdagents.cfg file described above rather than specifying them in the JCL. The exception to this recommendation is when multiple Controller Daemons are running on the same machine. In that case --service=port_number must be specified for at least one of the Daemons.

Configure Apply Engine

The function of an Engine may be one of simple replication, data transformation, event processing or a more sophisticated active/active data replication scenario. The actions performed by an Engine are described by an Engine Script, the complexity of which depends entirely on the intended function and business rules required to describe that function. Engines may receive data from a variety of sources for application to target datastores.

The most common function performed by an Apply Engine is to process data from one of the Change Data Capture (CDC) agents, applying business rules to transform that data so that it can be applied or efficiently replicated to a Target datastore of any type on any operating platform.

The following steps should be followed to configure an Apply Engine:

1. Determine requirements

Identify the type of the target datastore; the platform the Apply Engine will run on; and finally the data transformations required, if any, to map the source data to the target data structures.

2. Prepare Apply Engine Environment

Once the platform and type of target datastore are known, the environment on that platform must be prepared, including the installation of Connect CDC SQData and any other components required by the target datastore. Connect CDC SQData will also utilize your existing native TCP/IP network for publishing data captured on one platform to Engines running on any other platform. Factors including performance requirements and network latency should be considered when selecting the location of the system on which the Apply Engine will execute.

3. Create Apply Engine Script

Similar to SQL, the scripting language is capable of a wide range of operations, from replication of identical source and target structures using a single command to complex business rule based transformations. Connect CDC SQData commands and functions provide full procedural control of data filtering, mapping and transformation including manipulation of data at its most elemental level if required.

The final step will be the end-to-end Component Verification from Change Data Capture through target datastore content validation.

Note, see Db2 Straight Replication for a sample engine script and the Engine Reference for a full explanation of the capabilities provided by Engine scripts.

Component Verification

This section describes the steps required to verify that the Connect CDC SQData Db2 Log Reader Data Capture Agent is working properly. If this is your first implementation of the Connect CDC SQData Db2 Log Reader Capture, we recommend a review with Precisely support before commencing operation.

Start z/OS Controller Daemon

The JCL configured previously in sample member SQDAEMON can be used to start the Controller Daemon.

Note, once the Controller Daemon has been started, implementing changes made to any of the Controller Daemon's configuration files (acl.cfg, sqdagents.cfg, nacl.auth.keys) can be accomplished using the SQDMON Utility reload command without killing and re-starting the Controller Daemon.

Start Db2 Log Reader Capture Agent

The JCL configured previously in sample member SQDDB2C can be used to Mount (execute) the DB2 Log Reader Capture Agent on z/OS.

It is important to realize that the return code and message from SQDDB2C indicating that the start command was executed successfully do not necessarily mean that the agent is still in started state. They only mean that the start command was accepted by the capture agent and that the initial setup necessary to launch a capture thread was successful. These preparation steps involve connecting to Db2 and setting up the necessary environment to start a log mining session.

The capture agent posts warnings and errors in the system log. The program name for the Db2 Log Reader Capture is SQDDB2C. If there is a mechanism in place to monitor the system log, it is a good idea to include the monitoring of sqdatalogm messages. This will allow you to detect when a capture agent is mounted, started, or stopped - normally or because of an error. It will also contain, for most usual production error conditions, some additional information to help diagnose the problem.

Start Engine

Starting an Engine on the target platform may require only the submission of JCL similar to sample member SQDATAD included in the distribution and specifying the parsed Engine script, in our example DB2TODB2.

//SQDATA   JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Execute the Connect CDC SQData Engine under Db2
//*--------------------------------------------------------------------
//* Note:  1) This Job may require specification of the Public/Private
//*           Key pair in order to connect to a Capture/Publisher
//*           running on another platform
//*
//*        2) To run the Connect CDC SQData Engine as a started task,
//*           refer to member SQDAMAST
//*
//* Required DDNAME:
//*   SQDFILE DD - File that contains the Parsed Engine Script
//*
//*********************************************************************
//*
//JOBLIB   DD DISP=SHR,DSN=SQDATA.V400.LOADLIB
//         DD DISP=SHR,DSN=DSNB10.SDSNLOAD
//*
//SQDATA   EXEC PGM=SQDATA,REGION=0M
//SQDPUBL  DD DISP=SHR,DSN=SQDATA.NACL.PUBLIC
//SQDPKEY  DD DISP=SHR,DSN=SQDATA.NACL.PRIVATE
//SYSPRINT DD SYSOUT=*

//SQDLOG   DD SYSOUT=*
//*SQDLOG8 DD DUMMY
//CEEDUMP  DD SYSOUT=*
//*
//*---- PARSED ENGINE SCRIPT FILE ----
//SQDFILE  DD DISP=SHR,DSN=SQDATA.V400.SQDOBJ(DB2TODB2)

See the Apply and Replicator Engine reference for other use cases involving the Connect CDC SQData z/OS Db2 Log Reader Capture.

Db2 Test Transactions

1. Execute an online transaction or Db2 SQL statement, updating the candidate tables that are to be captured.

2. Execute a batch program, updating the candidate tables that are to be captured.

3. Examine the results in the target datastore using the appropriate tools.
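As a minimal sketch of step 1 above, a simple SQL update against one of the IVP tables could be issued from any SQL client; the column name and key value shown here are hypothetical placeholders, not part of the IVP:

UPDATE SQDATA.EMP
   SET SALARY = SALARY * 1.05   -- hypothetical column
 WHERE EMPNO = '000010';        -- hypothetical key value
COMMIT;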

Operation

The sections above described the initial installation and configuration process. Once completed, Connect CDC SQData is operationally ready to become active; Db2 data that is changed will be logged by Db2, then Captured and Published to a subscribing Engine, after authentication by the Controller Daemon. CDC data will stream via TCP/IP to the Engine which will apply the data to the target datastore.

This section covers a few of the common day-to-day operational activities and other situations that you must be prepared to handle. See the Operating Scenarios section for other examples of real world situations and issues that you are sure to encounter, how they affect Change Data Capture and how to handle them. The first topic in that section, Capture New Db2 Data, reviews the steps required for Change Data Capture and discusses options for performing the initial propagation or load of the target Datastores.

There are three methods supported for interacting with Connect CDC SQData components on z/OS:

1. ISPF panel Interface - The DB2 Quickstart Guide provides detailed step by step instructions.

2. z/OS JCL - Traditional z/OS Jobs that execute the SQDCONF and SQDMON utilities and their full range of options for managing and controlling the SQDAEMON Controller Daemon and the z/OS Db2 Log Reader Capture and zLog Publisher.

3. z/OS Console Commands - Duplicate many of the commands associated with the SQDCONF and SQDMON utility programs. Note, in order to issue any z/OS commands from TSO (including SDSF) the user must have TSO CONSOLE authority and possibly SDSF command authority.

P <task_name> - Stops and unmounts the agent immediately

F <task_name>,PAUSE - Pauses the agent

F <task_name>,RESUME - Resumes the agent after a pause

F <task_name>,DISPLAY - Display of the agent cab file with the output being written to SYSPRINT in the running STC

F <task_name>,STOP - Stops the agent but leaves it mounted

F <task_name>,STOP,UNMOUNT - Stops and unmounts the agent (same as P command)

F <task_name>,STOP,FLUSH - Stops the agent after flushing out any UOWs that began before the command was issued, then unmounts.

F <task_name>,STOP,FLUSH,FAILOVER - Same as STOP,FLUSH, except the agents instruct downstream engines to try to reconnect for up to 10 minutes

F <task_name>,START - Starts an agent that was previously stopped, but still mounted

F <task_name>,APPLY - Applies pending cab file changes to the agent's cab file. Agent must be mounted and stopped in order to apply
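For example, assuming the Capture runs as a started task or job named SQDDB2C (the task name is an assumption; use the name under which your capture actually runs), the following console sequence would stop the agent, apply staged configuration changes and restart it:

F SQDDB2C,STOP
F SQDDB2C,APPLY
F SQDDB2C,START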

Start / Reload Controller Daemon

The JCL configured previously in sample member SQDAEMON can be used to start the Controller Daemon.

Note, once the Controller Daemon has been started, implementing changes made to any of the Controller Daemon's configuration files (acl.cfg, sqdagents.cfg, nacl.auth.keys) can be accomplished using the SQDMON Utility reload command without killing and re-starting the Controller Daemon.

z/OS Console commands that may also be issued, with the proper authority, include:

P <task_name> - Stops the daemon

F <task_name>,RELOAD - Refreshes the sqdagents.cfg file - required when you add new captures/publishers or delete/recreate a capture/publisher cab file

F <task_name>,INVENTORY - List the tasks registered in sqdagents.cfg and their current status (i.e. started, stopped, not mounted, etc.)

F <task_name>,SHUTDOWN - Stops the daemon (same as P command)

Setting the Capture Start Point

The first time the Db2 Capture agent is started, it uses the "current" Db2 LSN/RBA as its starting point by default. Capture can also be started for the first time at a specific point-in-time by explicitly specifying the start LSN. The current LSN can be determined using the Db2 -DISPLAY LOG command and then selecting a starting LSN based on the Begin Time of a logged transaction. Other transactions that started before that LSN but have not yet been committed are considered in-flight units-of-work and will be ignored, along with all other transactions that committed prior to that LSN.

The starting Db2 LSN/RBA is specified in the capture .cab configuration file. The log point LSN can be set at a global capture agent level (i.e. for all tables in the configuration file) or for individual tables. An LSN of 0 indicates that capture should start from the current point in the Db2 log. This is used when starting the capture agent for the first time or when adding a new table to the configuration file. Once the LSN is established, it would typically never be altered for normal operation. The capture agent continuously updates the global LSN and the individual table LSNs in the configuration file as changed data is being processed. Each time the capture agent starts, data capture is resumed from the last LSN processed.

Example

Set the LSN to 0 (zero means current time) at the capture agent level (for all tables) with the SQDCONF modify command.

//*----------------------------------------
//*- SET LSN AT GLOBAL LEVEL
//*----------------------------------------
//SETLSN   EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
  modify /home/sqdata/db2cdc/db2cdc.cab --lsn=0
//*

Restart / Remine Db2

Precisely's Connect CDC SQData Db2 Data Capture was designed to be both robust and forgiving in that it takes a simple and conservative approach when deciding where in the log to start capturing data when first started or after being restarted following either a scheduled interruption of the capture or an unplanned disruption in a production environment. The capture agent continuously updates the global LSN and the individual table LSNs in the configuration file as changed data is being processed.

Normal Restart

Each time the capture agent starts, data capture is resumed from the last LSN processed. It knows exactly how far back in time to re-mine to guarantee the re-capture of all in-flight transactions for all transactions where the "begin" transaction record was previously seen by the Capture. While re-mining, the Capture may encounter Log records belonging to a transaction for which the begin transaction record was not seen previously. These records are counted as orphan records unless a rollback for the transaction is subsequently seen in the log. If the transaction is rolled back, the orphaned records of that transaction are voided (that is, not counted as orphan records). It is possible that the statistics, immediately after a re-start, show some orphan records, but do not show them later.

Example

Resume capture using JCL similar to the following to Mount and Start the Capture Agent.

//SQDDB2C  EXEC PGM=SQDDB2C,REGION=0M
//*SQDDB2C EXEC PGM=XQDDB2C,REGION=0M
//SQDPUBL  DD DSN=SQDATA.NACL.PUBLIC,DISP=SHR
//SQDPKEY  DD DSN=SQDATA.NACL.PRIVATE,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//CEEDUMP  DD SYSOUT=*
//SQDLOG   DD SYSOUT=*
//*SQDLOG8 DD DUMMY
//*SQDPARMS DD DISP=SHR,DSN=SQDATA.V400.PARMLIB(DB2CDC)
//SQDPARMS DD *
  --start /home/sqdata/db2cdc/db2cdc.cab
//*

Point-in-time Recovery

There may be times when a point-in-time recovery is required, where changes made from a few hours or even days earlier must be recaptured, perhaps because an Engine script was modified. The appropriate LSN <value> can be determined by using the Db2 -DISPLAY LOG command and/or running the Db2 Log Print Utility DSNJ004. The first thing to determine is if there is more than one Engine subscribed to the capture and, if there is, whether recaptured changes should be published to one or all of the subscribed Engines.

Example 1

If only one Engine is subscribed, or changes should be re-published to all subscriptions, set the LSN to the appropriate value as follows:

//*----------------------------------------
//*- SET LSN AT GLOBAL LEVEL
//*----------------------------------------
//SETLSN   EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
  modify /home/sqdata/db2cdc/db2cdc.cab --lsn=<value>

Example 2

If changes are to be re-published to only a specific Engine, the datastore must be specified.

//*----------------------------------------
//*- SET LSN FOR SPECIFIC Engine
//*----------------------------------------
//SETLSN   EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
  modify /home/sqdata/db2cdc/db2cdc.cab --datastore=cdc:///DB2TODB2 --lsn=<value>

Notes:

1. A separate SQDCONF command must be run for each Engine subscription.

2. If the capture is running it must be Stopped before the changes can be Applied to the Capture configuration file.

3. Finally, after the modifications are complete the capture must be restarted with an additional --safe-restart=<value> parameter specifying the starting LSN for the re-capture.

//SQDDB2C  EXEC PGM=SQDDB2C,REGION=0M
//*SQDDB2C EXEC PGM=XQDDB2C,REGION=0M
//SQDPUBL  DD DSN=SQDATA.NACL.PUBLIC,DISP=SHR
//SQDPKEY  DD DSN=SQDATA.NACL.PRIVATE,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//CEEDUMP  DD SYSOUT=*
//SQDLOG   DD SYSOUT=*
//*SQDLOG8 DD DUMMY
//*SQDPARMS DD DISP=SHR,DSN=SQDATA.V400.PARMLIB(DB2CDC)
//SQDPARMS DD *
  --apply --start --safe-restart=<value> /home/sqdata/db2cdc/db2cdc.cab
//*

Note, if the capture is running it must be Stopped before the Engine subscription changes can be Applied to the Capture configuration file.

Apply Capture CAB File Changes

Changes made to a capture agent configuration are not effective until they are applied. The apply operation instructs the capture agent to process the SQDCONF commands that have been previously issued but are not yet actually in use by the capture agent itself. The reason for formally separating the apply step from the add/modify/remove steps is to prepare for the deployment of a production change in advance, without taking the risk of accidentally impacting production. For example, imagine that a new application will be rolled out next weekend:

· The application requires the capture of a couple of new tables and drops the capture of another table.

· If changes were effective immediately or automatically at the next start, the risk exists that the capture agent may go down for unrelated production issues (like an unexpected disconnection from Db2), and the new changes would be activated prematurely.

· If the changes could not be staged then they could not be prepared until the production capture is stopped for the weekend migration.

· Staging the changes allows capture agent maintenance to be done outside of the critical upgrade path.

· Requiring the explicit Apply step ensures that such changes can be planned and prepared in advance, without putting current production replication in jeopardy.

The current operating "State" of the capture agent determines what actions can be taken, including the application of changes. The following table illustrates the state combinations that restrict or permit changes:

Capture State Combinations   Description

Unmounted                    Not on-line (on z/OS, no active job), changes cannot be applied.

Mounted, Paused              Running, Capture is paused, BUT Publishing continues until all previously
                             captured data is consumed, changes cannot be applied.

Mounted, Stopped             Running, both Capture and Publishing are suspended, changes can be applied.

Mounted, Started             Running, both Capture and Publishing are active, changes cannot be applied.

Mounted, Started, Stalled    Running, both Capture and Publishing are active, no Engine is connected,
                             changes cannot be applied.

In summary, while changes to the capture agent configuration can and should be staged, the following rules must be followed:

1. The capture agent must be Mounted and Stopped to permit additions and/or modifications to the configuration to be Applied.

2. A create on an existing configuration file will fail.

3. After initial configuration, changes in the form of add and modify commands must be used instead of thecreate command.

4. The .cab file cannot be deleted if the Capture agent is mounted.

Notes:

· Once created, the Capture agent configuration .cab file should not be deleted because the current position in the log and the relative location of each engine's position in the Log will be lost. When the Capture agent is brought back up it would start from the current log point, skipping all database activity that occurred after the capture was Stopped and Unmounted or Canceled.

· There are a few exceptions:
  --retry is effective immediately.
  --mqs-max-uncommitted is automatically effective at the next start.

· Parameters controlling the Transient Storage Pool can be modified dynamically and are effective immediately.

See the section Modify z/OS Transient Storage Pool.

· JCL similar to the following can be used to Stop, Apply and Start the Capture Agent to fully implement the staged configuration changes:

//*----------------------------------------
//*- STOP THE CAPTURE AGENT
//*----------------------------------------
//STOP     EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
  stop /home/sqdata/db2cdc/db2cdc.cab
//*
//*----------------------------------------
//*- APPLY UPDATED CONFIGURATION FILE
//*----------------------------------------
//APPLY    EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
  apply /home/sqdata/db2cdc/db2cdc.cab
//*
//*----------------------------------------
//*- START UPDATED CONFIGURATION FILE

//*---------------------------------------- //START EXEC PGM=SQDCONF//SYSPRINT DD SYSOUT=*//SYSOUT DD SYSOUT=*//SQDPARMS DD * start /home/sqdata/db2cdc/db2cdc.cab //*

· If the Db2 Capture Agent task ended or was Canceled and therefore Unmounted, the Apply and Start steps can be combined, as is usually the case when starting the Capture Agent for the first time and as sketched below. See the section Starting the Db2 Capture Agent above.
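A minimal sketch of the combined form, mirroring the capture startup parameters shown later for the 10 Byte LSN upgrade (the .cab path is the example configuration used throughout this reference):

 --apply --start /home/sqdata/db2cdc/db2cdc.cab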

Displaying Capture Agent Status

Capture agents keep track of statistical information for the last session and for the lifetime of the configuration file. These statistics can be accessed with the display action of SQDCONF.

//*----------------------------------------
//*- DISPLAY CONFIG FILE
//*----------------------------------------
//DISPLAY  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
display /home/sqdata/db2cdc/db2cdc.cab --sysout=CAPOUT
//*

SQDF901I Configuration file : /home/sqdata/db2cdc/db2cdc.cab
SQDF902I Status             : MOUNTED,STARTED
SQDF903I Configuration key  : cab_9A4A9EB10ECCB679
SQDF904I Allocated entries  : 31
SQDF905I Used entries       : 3
SQDF906I Active Database    : DB9G
SQDF907I Start Log Point    : 0x0
SQDF908I Last Log Point     : 0x1aff2621f
SQDF940I Last Log Timestamp :
SQDF987I Last Commit Time   : 2013-01-07 21:03:12.617536 (1eb45bf000000000)
SQDF981I Safe Restart Point : 0x1aff2569b
SQDF986I Safe Remine Point  : 0x1aff1d000
SQDF910I Active User        :
SQDF913I Fix Flags          : RETRY
SQDF914I Retry Interval     : 30
SQDF919I Active Flags       : CDCSTORE,RAW LOG
SQDF915I Active store name  : /home/sqdata/db2cdc/db2cdc_store.cab
SQDF916I Active store id    : cab_964960E67A564AA3
SQDF920I Entry              : # 0
SQDF930I Key                : SQDATA.DEPT
SQDF923I Active Flags       : ACTIVE
SQDF928I Last Log Point     : 0x1af474000
SQDF950I session # insert   : 0
SQDF951I session # delete   : 0
SQDF952I session # update   : 0
SQDF960I cumul # of insert  : 0
SQDF961I cumul # of delete  : 0
SQDF962I cumul # of update  : 27
SQDF925I Active Datastore   : cdc:///db2cdc/DB2TODB2
SQDF920I Entry              : # 1
SQDF930I Key                : SQDATA.EMP
SQDF923I Active Flags       : ACTIVE
SQDF928I Last Log Point     : 0x1aff1f000
SQDF950I session # insert   : 544
SQDF951I session # delete   : 544
SQDF952I session # update   : 35811
SQDF960I cumul # of insert  : 544
SQDF961I cumul # of delete  : 544
SQDF962I cumul # of update  : 35919
SQDF925I Active Datastore   : cdc:///db2cdc/DB2TODB2
SQDF920I Entry              : # 2
SQDF930I Key                : cdc:///db2cdc/DB2TODB2
SQDF842I Is connected       : Yes
SQDF932I Ack Log Point      : 0x1aff2309e
SQDF843I Last Connection    : 2013-01-07 21:03:33
SQDF844I Last Disconnection : 2013-01-04 19:34:35
SQDF987I Last Commit Time   : 2013-01-07 21:03:12.482800 (cabc970f621f0000)
SQDF953I session # records  : 24936
SQDF954I session # txns     : 1086
SQDF963I cumul # records    : 25026
SQDF964I cumul # txns       : 1098
SQDF794I Storage Usage
SQDF795I Used               : 5 MB
SQDF796I Free               : 59 MB
SQDF797I Unallocated        : 0 MB
SQDF798I Memory Cache       : 8 MB
SQDC017I sqdconf(pid=0x30) terminated successfully

Notes:

1. An Entry exists for each source and target specified in the configuration.

2. The Last Log Point for each individual entry of the configuration indicates the LSN of the commit point of the most recent transaction that impacted this table. The rest of the statistics are fairly self-explanatory.

3. In the Storage Usage section: Used is the amount of the Transient Storage Pool currently in use; Free is the amount of currently unused Storage Pool; Unallocated is the portion of the currently defined Storage Pool that has not yet been allocated; Memory Cache is the amount of allocated MEMORY currently in use.

Displaying Storage Agent Statistics

The Storage Agent maintains statistics about the logs that have been mined as well as Transient Storage Pool utilization. These statistics can help to determine if the Storage Agent is sized correctly. These statistics are also accessed with the display action of SQDCONF and the Storage Agent configuration .cab file:

//*----------------------------------------
//*- DISPLAY CONFIG FILE
//*----------------------------------------
//DISPLAY  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//STOREOUT DD SYSOUT=*
//SQDPARMS DD *
display /home/sqdata/db2cdc_store.cab --stats --sysout=STOREOUT
//*

SQDF801I Configuration name    : cdcstore
SQDF802I Configuration key     : cab_964960E67A564AA3
SQDF850I Session Statistics -
SQDF851I Txn Max Record        : 44
SQDF852I Txn Max Size          : 42386124
SQDF853I Txn Max Log Range     : 8207
SQDF855I Max In-flight Txns    : 0
SQDF856I # Txns                : 608
SQDF857I # Effective Txns      : 104
SQDF861I # Commit Records      : 82
SQDF858I # Rollbacked Txns     : 21
SQDF859I # Data Records        : 2775
SQDF860I # Orphan Data Records : 0
SQDF862I # Rollbacked Records  : 924
SQDF863I # Compensated Records : 0
SQDF866I # Orphan Txns         : 0
SQDF867I # Mapped Blocks       : 0
SQDF868I # Accessed Blocks     : 2
SQDF869I # Logical Blocks      : 0
SQDF870I Life Statistics -
SQDF871I Max Txn Record        : 44
SQDF872I Max Txn Size          : 42386124
SQDF873I Max Txn Log Range     : 24582
SQDF875I Max In-flight Txns    : 0
SQDF876I # Txns                : 76576
SQDF877I # Effective Txns      : 27701
SQDF881I # Commit Records      : 22154
SQDF878I # Rollbacked Txns     : 5546
SQDF879I # Data Records        : 752586
SQDF880I # Orphan Data Records : 0
SQDF882I # Rollbacked Records  : 243513
SQDF883I # Compensated Records : 0
SQDF886I # Orphan Txns         : 0
SQDF887I # Mapped Blocks       : 48
SQDF888I # Accessed Blocks     : 2006
SQDF889I # Logical Blocks      : 48
SQDC017I sqdconf(pid=0x99) terminated successfully

Note:

The following fields give an indication of the storage need for the current workload.

· Txn Max Record: This indicates the maximum number of records contained in any given transaction. Here the biggest transaction had 44 records.

· Txn Max Size: This is the maximum size of the payload associated with any given transaction. Here the total amount of data carried by the biggest transaction we’ve seen was a little more than 40 MB.

· Txn Max Log Range: This indicates the largest difference in LSN from the start to the end of a transaction.
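A rough, illustrative calculation using the figures above: the largest transaction so far carried 42,386,124 bytes, so simply holding that one transaction requires at least 42,386,124 / (8 x 1,048,576) ≈ 5.05, i.e. 6 of the 8 MB transient storage blocks; the pool must of course be sized well beyond that to accommodate all concurrent in-flight transactions.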

Interpreting Capture/Storage Status

The status of the Capture/Storage Agents can be found in the second line of the output from both the SQDCONF and sqdmon display commands and provides information about the current operational status as well as an indication of the state of the Transient Storage Pool.

SQDF901I Configuration file : /home/sqdata/<type>cdc_store.cab
SQDF902I Status             : MOUNTED,STARTED

Possible values include combinations of the following:

Status       Console   Description

MOUNTED                On-line and Ready

NOTMOUNTED             Off-line, No active process (on z/OS, no active Job or Task)

STARTED                Capture and Publishing are active

PAUSED                 Capture Paused and waiting on a command, Publishing continues until all transient data is consumed

STOPPED                Capture and Publishing are suspended

STALLED                One or more target Engines not connected or not responding

FULL                   Transient Storage Pool Full. Normal, but high latency likely due to target side performance. Capture will stop reading logs long enough for space to be freed.

DEADLOCK     Y         CRITICAL CONDITION, Storage Pool TOO SMALL, Cannot hold existing Unit of Work (UOW), MUST expand Storage Pool before Capture and Publishing can continue, which can be done while Capture is running.

Notes:

1. Each line of the display is prefixed with a Message Number corresponding to the information on that line. In the case of Status, message number SQDF902I is displayed.

2. Certain messages, particularly those that require intervention, will also be directed to the Operator Console on z/OS and the system log on Linux and AIX, including the DEADLOCK condition:

SQDF207E DEADLOCK CONDITION DETECTED FOR <type>cdc_store.cab

3. The Deadlock state requires intervention. See the section Size the Transient Storage Pool and adjust the Number of Blocks or Files allocated to the Store Pool, or add another mount point/directory where additional files can be allocated, by modifying the CDCSTORE configuration file as described in Modify z/OS Transient Storage Pool.

Modifying z/OS Transient Storage Pool

The parameters controlling the Storage Pool can be modified dynamically, without stopping the Storage Agent, using the SQDCONF utility. Like the initial configuration of the Storage Agent, sequences of SQDCONF commands to modify the storage agent can/should be stored in parameter files and referenced by the SQDPARMS DD. See SQDCONF Utility for a full explanation of each command, their respective parameters and the utility's operational considerations.

Syntax

add | modify <cab_file_name> --data-path=<directory_name> --number-of-blocks=<blocks_per_file> --number-of-logfiles=<number_of_files>

Keyword and Parameter Descriptions

<cab_file_name> - This is where the Storage Agent configuration file is stored. There is only one CAB file per Storage Agent. In our example: /home/sqdata/db2cdc_store.cab

<directory_name> - The zFS directory(s) previously created for the transient storage files. In our example the original directory was: /home/sqdata/data

<blocks_per_file> - The number of 8MB blocks that will be allocated for each File defined for transient CDC storage. In our example we started with the default of 32.

<number_of_files> - The number of files that may be allocated in the <directory_name> for transient CDC storage. In our example we started with the default of 8.
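As a rough illustration of these defaults: a single Storage Pool directory can therefore grow to 8 files x 32 blocks x 8 MB = 2,048 MB (about 2 GB) of transient storage, allocated on demand as it is needed.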

Example 1

Expand the CDCStore by raising the number of files that may be allocated in the existing <directory_name>


Execute the SQDCONF modify command with syntax similar to the JCL SQDCONDS included in the distribution:

//*-------------------------------------------
//* STEP 1: MODIFY A CDCSTORE CAB FILE
//*-------------------------------------------
//MODIFY   EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
modify /home/sqdata/db2cdc_store.cab
--data-path=/home/sqdata/data
--number-of-blocks=40
--number-of-logfiles=16
//*
//*-------------------------------------------
//* STEP 2: DISPLAY THE CDCSTORE CAB FILE
//*-------------------------------------------
//DISPLAY  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
display /home/sqdata/db2cdc_store.cab --details
/*

Notes:

1. Modifying the value of --number-of-blocks=<blocks_per_file> will only affect new files allocated.

2. Changes to the value of --number-of-logfiles=<number_of_files> take effect immediately.

3. Storage Pool Directories added using --data-path=<directory_name> will be used only after all --number-of-logfiles=<number_of_files> have been created and filled (see the sketch following these notes).

4. No files or Directories, once allocated and used, will be freed or released by the Storage Agent while it is running.
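For instance, note 3 could be exercised with JCL similar to the following sketch; the second directory /home/sqdata/data2 is purely illustrative and would need to be created in zFS first:

//*-------------------------------------------
//* ADD A SECOND STORAGE POOL DIRECTORY (sketch; data2 is illustrative)
//*-------------------------------------------
//MODIFY2  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
modify /home/sqdata/db2cdc_store.cab
--data-path=/home/sqdata/data2
/*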

Stopping the Db2 Capture Agent

Capture Agents may be Stopped simply to apply changes to the configuration file as described earlier, or Unmounted, which will completely terminate the capture.

Example

Stop the agent and then, using the SQDCONF Unmount command, terminate the Capture Agent, which performs a complete shutdown of the address space. JCL similar to the following is included in the distribution and can be edited to conform to the operating environment and then used to execute the SQDCONF Configuration Manager.

//*----------------------------------------
//*- STOP CAPTURE AGENT
//*----------------------------------------
//STOP     EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
stop /home/sqdata/db2cdc/db2cdc.cab
//*----------------------------------------
//*- UNMOUNT CAPTURE AGENT
//*----------------------------------------
//UNMOUNT  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
unmount /home/sqdata/db2cdc/db2cdc.cab
//*


Operating Scenarios

This section covers several operational scenarios likely to be encountered after the initial Capture has been placed into operation, including changes in the scope of data capture and additional use or processing of the captured data by downstream Engines.

One factor to consider when contemplating or implementing changes in an operational configuration is the implementation sequence. In particular, processes that will consume captured data must be tested, installed and operational before initiating capture in a production environment. This is critical because the volume of captured data can overwhelm transient storage if the processes that will consume the captured data are not enabled in a timely fashion.

While the examples in this section will generally proceed from capture of changed data to the population of the target by an Engine, it is essential to fully understand the expected results before configuring the data capture, for example:

· A new column added to a target table is populated from an existing table by an existing engine; while the Engine would be changed to accommodate the new column, NO CHANGES would be required in the existing capture configuration.

· A new table, maintained by new transactions, that will be the source of data for a new data warehouse target table will require configuration changes from one end to the other.

The examples below include some common scenarios encountered after an initial implementation has proven successful. Review the examples in sequence when thinking about how to implement your own new scenario:

· Capture New Db2 Data

· Send Existing Data to New Target

· Filter Captured Data

· Straight Replication

· Active/Active Replication

· Implement TLS Support

· Upgrading Db2 to 10 Byte LSN

Capture New Db2 Data

Whether initiating Change Data Capture for the first time, expanding the original selection of data or including a new source of data from the implementation of a new application, the steps are very similar. The impact on new or existing Capture and Apply processes can be determined once the source of the data is known, precisely what data is required from the Source, whether business rules require filters or data transformations and finally where the Target of the captured data will reside.

Note, while this example assumes that an existing Capture and Apply configuration is being modified, it also applies to an entirely new implementation.

Example

The SQDATA.dept table has been added to an existing Db2 Database and will be a new Source for an existing Engine that will apply the captured changed data to a new Target table.

In order to capture changes made to a Db2 table, the source table must be ALTERED to allow for change data capture. This action is performed using the SQL ALTER TABLE statement as follows:


ALTER TABLE SQDATA.dept DATA CAPTURE CHANGES;

Next the Capture configuration must be updated to include the new table. The sample member SQDCONDC, used earlier, includes the following step for adding a table to the capture that will be published to the existing target Engine.

//*----------------------------------------
//*- ADD TABLES TO CONFIG FILE
//*----------------------------------------
//ADDTBL   EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
add /home/sqdata/db2cdc/db2cdc.cab --key=dept --datastore=cdc:////DB2TODB2 --active
//*

Finally, determine how the new data will affect the existing Engine and modify the Engine script accordingly, as sketched below. See the Engine Reference for all the options available through Engine Script commands and functions.
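For a straight replication Engine such as the DB2TODB2 sample shown later in this reference, the change may be as small as adding a description of the new table to the source group; a sketch only, assuming the DEPT DDL has been placed in the DB2DDL library referenced by the script:

BEGIN GROUP SOURCE_DDL;
DESCRIPTION DB2SQL DD:DB2DDL(EMP)  AS S_EMP;
DESCRIPTION DB2SQL DD:DB2DDL(DEPT) AS S_DEPT;  -- new source table added to the group
END GROUP;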

Note:

Whenever a new source is to be captured for the first time, consideration must be given to the existing state of the source datastore when capture is first initiated. The most common situation is that the source already contains data that would have qualified to be captured and applied to the target, if the CDC and Apply process had already been in place.

Depending on the type of source and target datastore, the following solutions can ensure source and target are in sync when Change Data Capture is implemented:

1. While utilities may be available to unload the source datastore and load the target datastore, they will generally be restricted to both the same type (RDBMS, IMS, etc) of source and target datastore.

2. Those utilities generally also require the source and target datastores to have identical structure (columns, fields, etc). Precisely recommends the use of utility programs if those two constraints are acceptable.

3. If however, the source and target are not identical, Precisely recommends that a special version of the already tested Engine script be used for the initial load of the target datastore. This approach has the additional benefit of providing a mechanism for "refreshing" target datastores if for some reason an "out of synchronization" situation occurs because of an operational problem or a business rule change affecting filters or transformations. Contact Precisely support to discuss both the benefits and techniques for implementing and, perhaps more importantly, maintaining a load/refresh Engine solution.

Send Existing Db2 Data to New Target

Our example began with the addition of a new Db2 Table to the data capture process and publishing the captured data to an existing Engine. Often however, a change results from recognition that a new downstream process or application can benefit from the ability to capture changes to existing data. Whether the scenario is event processing or some form of straight replication, the implementation process is essentially the same.

Our example continues with the addition of a new Engine (DB2TOORA) that will populate a new Oracle table with columns corresponding to a subset of the columns from the SQDATA.dept Db2 source table.

While no changes are required to the Storage agent to support the new Engine, the Capture agent will require configuration changes and the new subscribing Engine must be added.


Reconfigure Log Reader Capture

One or more output Datastores, also referred to as Subscriptions, may be specified for each Source table in the configuration file. Once the initial configuration file has been created, Datastores are added or removed using the SQDCONF modify command.

The following example adds a subscription for a second Target Engine DB2TOORA for changes to the SQDATA.dept table:

//*-----------------------------------------------
//*- ADD SECOND TARGET FOR A TABLE TO CONFIG FILE
//*-----------------------------------------------
//ADDTBL2  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
modify /home/sqdata/db2cdc/db2cdc.cab --key=<sqd.dept> --datastore=cdc:////DB2TOORA --active
//*

Note, the configuration file changes must be followed by an apply in order to have the capture agent recognize the updated configuration file.

Add New Engine

Adding a new Engine on a new target platform begins with the installation of the Connect CDC SQData product. See the Installation Section for the applicable platform Installation instructions. Once the product is installed, the steps to configure the new Engine parallel those required to configure the existing Engine. In our example the new Engine is named DB2TOORA. The Engine script will specify only simple mapping of columns in a DDL description to columns in the target relational table. See the Engine Reference for all the options available through Engine Script commands and functions.

While not as simple as straight replication due to potentially different names for corresponding columns, the most important aspect of the script will be the DATASTORE specification for the source CDC records:

Syntax

DATASTORE cdc://<host><:port>/<capture_agent_alias>/<engine_agent_alias>

Keyword and Parameter Descriptions

<host> Location of the Capture Controller Daemon.

<:port> Optional, required only if non-standard port is specified by the service parameter in the Controller Daemon configuration.

<capture_agent_alias> Must match the alias specified in the Controller Daemon agents configuration file. The engine will connect to the Controller Daemon on the specified host and request to be connected to that agent.

<engine_agent_alias> Must match the alias provided in the "sqdconf add" command "--datastore" parameter when the Publisher is configured to support the new target Engine.

Example:

DATASTORE cdc://<host><:port>/db2cdc/DB2TOORA OF UTSCDC AS CDCIN
          DESCRIBED BY <schema_name>.<table_name> ;

Add Engine Controller Daemon

The Controller Daemon manages secure communication between Connect CDC SQData components running on other platforms, enabling command and control features as well as use of the browser based Control Center. See the Secure Communications Guide for more details regarding the Controller Daemon's role in security and how it is accomplished.

The primary difference between a Controller Daemon on Capture platforms and Engine only platforms is that the Authorized Key File of the Engine Controller Daemon need only contain the Public keys of the Control Center and/or users of the SQDMON utility on other platforms. Setup and configuration of the Engine Controller Daemon, SQDAEMON, includes:

· Generate Public / Private keys

· Create the Authorized Key File

· Create the Access Control List

· Create the Engine Agent Configuration File

· Prepare the Controller Daemon JCL or shell script

Example

A sample sqdagent.cfg file for a Controller Daemon containing the Engine DB2TOORA follows. Changes are not known to the daemon until the configuration file is reloaded, using the SQDMON Utility, or the sqdaemon process is stopped and started.

acl=<SQDATA_VAR_DIR>/daemon/cfg/acl.cfg
authorized_keys=<SQDATA_VAR_DIR>/daemon/nacl_auth_keys
identity=<SQDATA_VAR_DIR>/id_nacl
message_file=../logs/daemon.log
service=2626

[DB2TOORA]
type=engine
program=SQDATA
args=DB2TOORA.prc
working_directory=<SQDATA_VAR_DIR>
message=<SQDATA_VAR_DIR>
stderr_file=<SQDATA_VAR_DIR>/DB2TOORA.rpt
stdout_file=<SQDATA_VAR_DIR>/DB2TOORA.rpt
auto_start=yes

See the Setup Controller Daemon section for a detailed description of these activities.

Update Capture Controller Daemon

In our example, the Data Capture Agent and its controlling structures existed prior to the addition of this new Engine. Consequently the only modification required to the Capture Controller Daemon is the addition of the new Engine's Public Key to the Authorized Key File.

On the z/OS platform this is usually done by an administrator using ISPF.


Applying the Configuration File Changes

Changes made to the Capture Agent Configuration (.cab) file are not effective until they are applied. For example, assume that a new set of Db2 tables will be rolled out next weekend:

If changes were effective immediately or automatically at the next start, then these changes could not be performed until the production capture is stopped for the migration during the weekend. Otherwise, the risk exists that the capture agent may go down for unrelated production issues, and the new change would be activated prematurely.

Forcing a distinct and explicit apply step ensures that such changes can be planned and prepared in advance, without putting the current production replication in jeopardy. This allows capture agent maintenance to be done outside of the critical upgrade path.

In order to apply changes, the agent must first be stopped. This operation in effect pauses the agent task and permits the additions and/or modifications to the configuration to be applied. Once the agent is restarted, the updated configuration will be active.

//*----------------------------------------
//*- STOP THE AGENT
//*----------------------------------------
//STOP     EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
stop /home/sqdata/db2cdc/db2cdc.cab
//*
//*----------------------------------------
//*- APPLY UPDATED CONFIGURATION FILE
//*----------------------------------------
//APPLY    EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
apply /home/sqdata/db2cdc/db2cdc.cab
//*
//*----------------------------------------
//*- START THE AGENT
//*----------------------------------------
//START    EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
start /home/sqdata/db2cdc/db2cdc.cab
//*

Filter Captured Data

The introduction of Data Capture necessarily adds some overhead to the processing of the originating transaction by the source database / file manager. For that reason it is customary to perform as little additional processing of the data during the actual capture operation as possible. Filtering data from the capture process is therefore broken into two types:

· Capture Side Filters

· Engine Filters


Capture Side Filters

In addition to controlling which tables are captured, it is also possible to add and remove items to be excluded from capture based on other parameters including: User, Program, Transaction, Correlation ID, Plan, etc. The basic syntax varies slightly based on the current state of the configuration and can be specified one or more times per command line.

Syntax

Create state: --exclude-<item>=<variable>

Modify state: --auto-exclude-plan=NO | --add-excluded-<item>=<variable> | --remove-excluded-<item>=<variable>

Where <item> and its <variable> can be any one of the following optional keywords:

Item             Variable         Description

user             User ID          The user-id associated with the transaction or program making Db2 data changes.

correlation-id   Correlation ID   The Correlation ID of the program / transaction making Db2 data changes.

plan             Plan Name        If non-blank, this is the Plan making Db2 data changes.

Note, the wild-card character "*" can be used as part of the variable associated with any of the exclusion keywords, for example -correlation-id=db2tran*
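In the Modify state, for example, the illustrative wild-card value above could be added to (or later removed from) an existing configuration with commands similar to this sketch:

modify /home/sqdata/db2cdc/db2cdc.cab --add-excluded-correlation-id=db2tran*
modify /home/sqdata/db2cdc/db2cdc.cab --remove-excluded-correlation-id=db2tran*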

Example

Disable the automatic exclusion of the Connect CDC SQData Default plan (SQDV4000) to enable capture of updates that are normally excluded to prevent cyclic updates in an Active/Active Replication configuration.

//*-----------------------------------------------
//*- ADD USER EXCLUSION TO CONFIG FILE
//*-----------------------------------------------
//ADDEXCL  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
modify /home/sqdata/db2cdc/db2cdc.cab --auto-exclude-plan=NO

Notes: As with all modifications made to a Capture configuration file, the following steps must be followed to implement the change:

1. Capture should be paused to allow subscribed Engines to consume all previously captured data

2. The Capture must then be Stopped

3. The changes must be Applied to the capture .CAB file to have the capture agent recognize the updated configuration

4. The Capture must be Started


Engine Filters

Engine filters support both record and field level evaluation of data content, including cross-reference to external files that were not part of the original data capture. See the Engine Reference for all the options available through Engine script commands and functions.

Implement TLS Support

Security concerns are increasingly leading to the implementation of security protocols between customer data centers and in some cases even within those data centers. Transport Layer Security (TLS) is supported between all components on zOS and, with this release, between Connect CDC SQData clients on Linux and Change Data Capture components on zOS only.

Connect CDC SQData already operates transparently on zOS under IBM's Application Transparent Transport Layer Security (AT-TLS). Under AT-TLS no changes were required to the base code and only port numbers in the configuration need to be changed, as described below. For more information regarding AT-TLS see your zOS Systems Programmer.

Once IBM's AT-TLS has been implemented on zOS, the following steps are all that are required for the Daemon, Capture and Publisher components and, on zOS only, the SQDATA Apply Engine and SQDUTIL, to be in compliance with TLS:

1. Request the new secure port to be used by the Daemon

2. Request Certificates for MASTER, Daemon and APPLY Engine Tasks

3. Stop all SQDATA tasks

4. Update APPLY Engine source scripts with the new Daemon port. Note, ports are typically implemented using a Parser parameter so script changes may not be required.

5. Update SQDUTIL JCL and/or SQDPARM lib members, if any, with the new Daemon port.

6. Run Parse Jobs to update the Parsed Apply Engines in the applicable library referenced by Apply Engine Jobs.

7. Update the Daemon tasks with new port

8. If using the zOS Master Controller, update the SQDPARM Lib members for the MASTER tasks with the new Daemon port

9. Start all the SQDATA tasks

Note, there are no changes to connection URLs for clients on zOS.

Initial Target Load and Refresh

One additional and significant activity is the initial load of the target datastores and the methods employed to achieve full synchronization of the source and target. The same methods can be used as needed to refresh all or a subset of the normal CDC/Apply targets. Various methods should be considered based on all applicable factors including performance, ease of configuration and operational impact:

1. The use of native database unload/reload utilities

2. Connect CDC SQData Capture Based Refresh

3. Third party remote disk mirroring, often the only practical solution when large scale disaster type replication systems are being implemented


4. A special Connect CDC SQData Unload engine that reads the source datastore locally and writes records to be loaded by a database utility, or one that reads the source datastore remotely and writes directly to another target like Kafka.

The method selected for the initial load of the target datastore must also consider concurrent source database activity. The source capture and target apply process will ensure that source and target synchronization is achieved, often with a "catch-up" phase during which Connect CDC SQData will perform compensation.

Capture Based Refresh

The implementation of Db2 change data capture frequently requires the initial load of the target datastores in order for replication to achieve full synchronization of the source and target.

Connect CDC SQData provides the ability to refresh an entire table during replication. This allows for loading small to medium sized tables within the replication flow vs having to perform a utility unload/load. Larger tables may still require a utility unload/load.

Refresh works automatically with no-map replication scripts. Other apply scripts that perform mapping, transformation, filtering, etc. are not supported automatically. If a refresh is initiated and the records are published to a non-replicate engine, the engine will fail unless you provide for the special refresh CDC records in the apply script. Please contact Connect CDC SQData Support directly for assistance with this.

If you have a table that is published to both a no-map replication engine and another type of apply engine, you can disable the refresh for the second apply engine as described below.

Notes:

1. Before you can refresh a table during replication, you must create the Connect CDC SQData refresh control table (SQDATA.REFRESH_REQUEST_LOG) on the source system. Job CRDREFR in the Connect CDC SQData CNTL library can be used to create this control table.

2. The refresh event is tracked in the refresh control table and you can browse this table to see your scheduled table refresh activity.

3. The capture must be running in order to perform a refresh.

4. The refresh is treated as a single transaction or unit-of-work (UOW), so consideration must be given to any restrictions that the target database may have with regard to large units-of-work.

5. A large refresh transaction will impact the throughput of other changes for the table that are occurring.

6. The current release of Connect CDC SQData only supports table refresh for no-map replication engines.

The table refresh can be accomplished with a batch sqdconf job or using the ISPF panels.

Syntax

--refresh --key=<TABLE_NAME> <cab_file_name>

Keyword and Parameter Descriptions

--key=<TABLE_NAME> - Specifies the Source object Db2 table name. In our example the table is SQDATA.EMP.

<cab_file_name> - Must be specified and must match the name specified in a previous create command.

Example 1


Refresh table SQDATA.EMP to all replication engines defined in the Capture CAB file db2cdc.cab using JCL similar to this:

//*----------------------------------------------
//* Request Refresh of a target Db2 Table
//*----------------------------------------------
//REFRESH  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
--refresh --key=SQDATA.EMP /home/sqdata/db2cdc/db2cdc.cab
/*

Example 2

To refresh the same table via the Connect CDC SQData ISPF panels, perform the following:

1. Select option 3 from the main menu to list your capture/publishers

2. Select the Db2 capture/publisher that is capturing the table of interest

3. Type an S next to Sources and press ENTER

4. Type an R next to the table that you want to refresh

Disabling a table refresh

As noted above, all engines configured to receive CDC data for a table will receive the published unit-of-work. To prevent a specific apply engine from receiving a table refresh unit-of-work, you can run an sqdconf batch job to block the refresh from that engine.

Syntax

modify <cab_file_name> --block-refresh --target=<engine_name>

Example 1

Block the SQDATA.EMP table refresh from being published to DB2TOORA using JCL similar to this:

//*-----------------------------------------------
//*- Block Table Refresh from an Apply Engine
//*-----------------------------------------------
//BLOCK    EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
modify /home/sqdata/db2cdc/db2cdc.cab --block-refresh --target=DB2TOORA
//*

Example 2

Un-Block all refreshes published to DB2TOORA using JCL similar to this:

//*-----------------------------------------------
//*- Un-Block Table Refresh from an Apply Engine
//*-----------------------------------------------
//UNBLOCK  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
modify /home/sqdata/db2cdc/db2cdc.cab --allow-refresh --target=DB2TOORA
//*

DB2 Unload Engines

There are two basic types of unload engines:

A special version of the Apply Engine that reads the source datastore locally and writes records that can be loaded to the same type or even a different type of target database, using a database load utility. Often this engine can use unmodified versions of the "mapping" PROCS used by the Apply Engine, and only the target datastore type will be modified to output comma separated records to one or more individual files that are then used as input to the chosen database load utility. The engine will often have to be parametrized to allow specification of the particular source descriptions (Tables, segments, records, etc). Then the parameter driven engine will be run as many times as needed to generate files for each of the specified descriptions.

The second type of unload engine typically runs on the same platform as the Apply Engine. Instead of connecting to a remote Publisher it connects remotely to the source database, reads the source from top to bottom and then, using either the same mapping PROCS as the Apply engine or a simple REPLICATE script, writes directly to the Target datastore, be it a traditional RDBMS or Kafka or HDFS.

Examples: TBD

Upgrading Db2 to 10 Byte LSN

Db2 Version 12 requires that your databases be upgraded to use a 10 Byte Log Sequence Number (LSN). While that task falls into the domain of the Db2 Database Administrator, the Connect CDC SQData Db2 Change Data Capture requires the following steps to be performed at the time the LSN length is changed:

1. Find the RBA that you want to start from in the DB2 MSTR address space, once the migration to NFM (new-function mode) is complete. Look for the following messages from the Db2 recovery manager in the system log that indicate the progress of Db2 through a restart process. You are looking for the RBA from the prior checkpoint. You may need the assistance of the Db2 DBA assigned to the migration or a System operator.

DSNR001I -DBBG RESTART INITIATED
DSNR003I -DBBG RESTART...PRIOR CHECKPOINT RBA=000000000000CFB28090

2. With the capture down completely, run the following JCL to set the global LSN and target engine LSN, using the RBA value identified in step 1.

//*----------------------------------------------
//*- Modify DB2 Capture CAB File Global LSN/RBA
//*----------------------------------------------
//STOP     EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
modify --lsn=000000000000CFB28090 /home/sqdata/db2cdc/db2cdc.cab
/*
//*
//*----------------------------------------------
//*- Modify DB2 Capture CAB File Target LSN/RBA
//*----------------------------------------------
//STOP     EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
modify --lsn=000000000000CFB28090 --target=cdc:////DB2REPL1 --datastore=cdc:////DB2TODB2 /home/sqdata/db2cdc/db2cdc.cab
//*

3. In the capture startup parm, modify the safe restart point. Note that you will need to remove the --safe-restart after the capture has been started and is running, because you do not want it used again accidentally following another restart of the Capture.

--apply --start --safe-restart=000000000000CFB28090 /home/sqdata/db2cdc/db2cdc.cab

Db2 Straight Replication

Simple replication is often used when a read only version of an existing datastore is needed or a remote hot backup is desired. The Apply Engine provides an easy to implement simple replication solution requiring very few instructions. It will also automatically detect out of sync conditions that have occurred due to issues outside of SQData's control and perform compensation by converting updates to inserts (if the record does not exist in the target), converting inserts to updates (if the record already exists in the target) and dropping deletes if the record does not exist in the target.

Note, this section assumes two things: First, that the environment on the target platform fully supports the type of datastore being replicated. Second, that a Connect CDC SQData Change Data Capture solution for the source datastore type has been selected, configured and tested.


Target Implementation Checklist

This checklist covers all the tasks required to prepare the target operating environment and configure Straight Db2 Replication. It assumes two things: First, that a Db2 subsystem exists on the target platform. Second, that the Db2 Log Reader Capture has been configured and tested on the source platform; see that Implementation Checklist.

#   Task                                                                            Sample JCL   z/OS Control Center

Prepare Environment

1   Perform the base product installation on the Target System                      Various
2   Modify Procedure Lib (PROCLIB) Members                                           N/A
3   Verify that the Connect CDC SQData product has been Linked                       SQDLINK
4   Bind the Db2 Package                                                             BINDSQD
5   Verify APF Authorization of LOADLIB                                              N/A
6   Create ZFS directories if running a Controller Daemon on Target System           ALLOCZDR
7   Identify/Authorize Operating User(s) and Started Task(s)                         N/A

Environment Preparation Complete

Engine Configuration Tasks

1   Collect DDL for tables to be replicated and Create Target tables                 N/A
2   Generate Public/Private Keys for Engine, Update Auth Key File on Source System   NACLKEYS
3   Create Straight Replication Script                                               SQDPARSE     *
4   Prepare Engine JCL                                                               SQDATA       *

Engine Configuration Complete

Verification Tasks

1   Start the Db2 Capture agent and SQDAEMON on Source System                        Various      *
2   Start the Engine on Target System                                                SQDATA
3   Apply changes to the source tables using SPUFI or other means.                   N/A
4   Verify that changes were captured and processed by Engine                        N/A

Verification Complete

The following sections focus on the Engine Configuration and communication with the Source platform Controller Daemon. Detailed descriptions of the other steps required to prepare the environment for Connect CDC SQData operation are described in previous sections.


Create Target Tables

Using SPUFI or other means, and DDL or DCLGENS from the source system, create duplicates of the Source tables on the Target system.
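As an illustration only, a target copy of the sample DEPT table used elsewhere in this reference might be created with DDL along these lines; the column layout below is a sketch based on the IBM Db2 sample DEPT table, and the actual layout must be taken from the source system's DDL or DCLGEN:

-- Sketch only: illustrative column layout, not the authoritative source DDL
CREATE TABLE SQDATA.DEPT
      (DEPTNO   CHAR(3)     NOT NULL,
       DEPTNAME VARCHAR(36) NOT NULL,
       MGRNO    CHAR(6),
       ADMRDEPT CHAR(3)     NOT NULL,
       PRIMARY KEY (DEPTNO));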


Generate Engine Public / Private Keys

As previously mentioned, Engines usually run on a different platform than the Data Capture Agent. The Controller Daemon on the Capture platform manages secure communication between Engines and their Capture/Publisher Agents. Therefore a Public / Private Key pair must be generated for the Engine on the platform where the Engine is run. The SQDUTIL program must be used to generate the necessary keys and must be run under the user-id that will be used by the Engine.

Syntax

$ sqdutil keygen

On z/OS, JCL similar to the sample member NACLKEYS included in the distribution executes the SQDUTIL program using the keygen command and generates the necessary keys.

The Public key must then be provided to the administrator of the Capture platform so that it can be added to the nacl.auth.keys file used by the Controller Daemon.

Note, there should also be a Controller Daemon on the platform running Engines to enable command and control features and the browser based Control Center. While it is critical to use unique key pairs when communicating between platforms, it is common to use the same key pair for components running together on the same platform. Consequently, the key pair used by an Engine may be the same pair used by its Controller Daemon.


Create Straight Replication Script

A Simple Replication script requires DESCRIPTIONS for each Source and Target DATASTORE as well as either a straight mapping procedure for each table or use of the REPLICATE Command as shown in the sample script below. In the example, Descriptions are in-line rather than through external files. In the sample script, a CDCzLog type Publisher uses TCP/IP to publish and transport data to the target Apply Engine. The Main Select section contains only references to the Source and Target Datastore aliases and the REPLICATE Command. Individual mapping procedures are not required in this case.

The sample script, DB2TODB2, is listed below. Note how the same table descriptions are used for both Source and Target environments, how the Schema, which may have been present in the descriptions, is overridden and how a single function, REPLICATE, performs all the work. See the Engine Reference for more details regarding the use of the REPLICATE Command.

If you choose to exercise this script, which is based on IBM's Db2 IVP tables, it will be necessary to create two copies of the DEPT and EMP tables as referenced in the script on the target system. Once that is complete, the script can be parsed and exercised.

----------------------------------------------------------------
-- DB2 REPLICATION SCRIPT FOR ENGINE: DB2TODB2
----------------------------------------------------------------
-- SUBSTITUTION VARS USED IN THIS SCRIPT:
-- %(ENGINE) - ENGINE / REPORT NAME
-- %(HOST)   - HOST OF Capture
-- %(PORT)   - TCP/IP PORT of SQDAEMON
-- %(PUBN)   - Capture/Publisher alias in sqdagents.cfg
-- %(SSID)   - DB2 Subsystem ID
----------------------------------------------------------------
-- CHANGE LOG:
-- 2018/01/01: INITIAL RELEASE
----------------------------------------------------------------
JOBNAME %(ENGINE);
RDBMS NATIVEDB2 %(SSID);
OPTIONS CDCOP('I','U','D');
----------------------------------------------------------------
-- DATA DEFINITION SECTION
----------------------------------------------------------------
---------------------------
-- Source Data Descriptions
---------------------------
BEGIN GROUP SOURCE_DDL;
DESCRIPTION DB2SQL DD:DB2DDL(EMP)  AS S_EMP;
DESCRIPTION DB2SQL DD:DB2DDL(DEPT) AS S_DEPT;
END GROUP;
---------------------------
-- Target Data Descriptions
---------------------------
-- None required for Straight Replication
---------------------------
-- Source Datastore(s)
---------------------------
DATASTORE cdc://%(HOST):%(PORT)/%(PUBN)/%(ENGINE)
          OF UTSCDC
          AS CDCIN
          RECONNECT
          DESCRIBED BY GROUP SOURCE_DDL
;
---------------------------
-- Target Datastore(s)
---------------------------
DATASTORE RDBMS
          OF RELATIONAL
          AS TARGET
          FORCE QUALIFIER TGT
          DESCRIBED BY GROUP SOURCE_DDL FOR CHANGE
;
---------------------------
-- Variables
---------------------------
-- None required for Straight Replication
---------------------------
-- Procedure Section
---------------------------
-- None required for Straight Replication
-------------------------------------------------
-- Main Section - Script Execution Entry Point
-------------------------------------------------
PROCESS INTO TARGET
SELECT
{
--    OUTMSG(0,STRING(' TABLE=',CDC_TBNAME(CDCIN)
--                   ,' CHGOP=',CDCOP(CDCIN)
--                   ,' TIME=' ,CDCTSTMP(CDCIN)))
      -- Source and Target Datastores must have the same table names
      REPLICATE(TARGET)
}
FROM CDCIN;


Prepare z/OS Engine JCL

The parsed replication Engine script in this example, DB2TODB2, will run on a z/OS platform. In this case JCL similar to sample member SQDATA included in the distribution can be edited to conform to the operating environment, including the necessary Public / Private key files.

//sqdata   JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Execute the SQDATA Engine under DB2
//*--------------------------------------------------------------------
//* Note: 1) This Job may require specification of the Public/Private
//*          Key pair in order to connect to a Capture/Publisher
//*          running on another platform
//*
//*       2) To run the SQDATA Engine as a started task, refer to
//*          member SQDAMAST
//*
//* Required DDNAME:
//*   SQDFILE DD - File that contains the Parsed Engine Script
//*
//*********************************************************************
//*
//JOBLIB   DD DISP=SHR,DSN=SQDATA.V400.LOADLIB
//         DD DISP=SHR,DSN=DSNB10.SDSNLOAD
//*
//sqdataD  EXEC PGM=SQDATA,REGION=0M
//SQDPUBL  DD DISP=SHR,DSN=SQDATA.NACL.PUBLIC
//SQDPKEY  DD DISP=SHR,DSN=SQDATA.NACL.PRIVATE
//SYSPRINT DD SYSOUT=*
//SQDLOG   DD SYSOUT=*
//*SQDLOG8 DD DUMMY
//CEEDUMP  DD SYSOUT=*
//*
//*---- PARSED ENGINE SCRIPT FILE ----
//SQDFILE  DD DISP=SHR,DSN=SQDATA.V400.SQDOBJ(DB2TODB2)

Note: The Controller Daemon (both Capture and Engine) uses a Public / Private key mechanism to ensure component communications are valid and secure. While it is critical to use unique key pairs when communicating between platforms, it is common to use the same key pair for components running together on the same platform. Consequently, the key pair used by an Engine may be the same pair used by its Controller Daemon.

Precisely recommends use of the z/OS Master Controller. Review the z/OS Master Controller Guide for configuration and operation of this utility.


Verify Straight Replication

Verification begins with the Capture Agent, and the specific steps depend on the type of Capture being used. Follow the verification steps described previously depending on which Capture has been implemented. Then start the Engine on the target system.

Using SPUFI or other means, perform a variety of insert, update and delete activities against the source tables. Then on the Target system, again using SPUFI or other means, verify that the content of the target tables matches the source.
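A sketch of the kind of source activity that can be used for this verification; the values, and the column names taken from the illustrative DEPT layout sketched earlier, are purely hypothetical:

-- Sketch only: illustrative verification activity against the source table
INSERT INTO SQDATA.DEPT (DEPTNO, DEPTNAME, ADMRDEPT) VALUES ('Z99', 'CDC VERIFY DEPT', 'A00');
UPDATE SQDATA.DEPT SET DEPTNAME = 'CDC VERIFY DEPT - RENAMED' WHERE DEPTNO = 'Z99';
DELETE FROM SQDATA.DEPT WHERE DEPTNO = 'Z99';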

Db2 Active/Active Replication

An overview of Active/Active Replication is provided in the Change Data Capture Guide. Implementing such a configuration for Db2 is a two step process:

1. Create a single Straight Replication Engine script that can be reused on each system.

2. Create Capture and Apply configurations on each of the systems.

add --active | --inactive --key=<TABLE_NAME> --datastore=cdc:////<engine_agent_alias> <cab_file_name>

Note: The Db2 Log Reader Capture, by default, excludes from capture all updates made by an Apply Engine running under the same plan name used by the capture to connect to the Db2 subsystem, including the default Db2 Plan, SQDDB2D. The reason for this is to avoid circular replication.

Db2 Operational Issues

This section describes some common operational issues that may be encountered while using the Db2 Log Reader Capture agent.


Db2 Source Database Reorgs and Load Replace

Before the Connect CDC SQData Db2 capture agents can read the Db2 recovery log records, Db2 must decompress the log records of any table that is stored in a compressed table space. Db2 uses the current compression dictionary for decompression. If the CDC process is run asynchronously, for some reason gets behind or is configured to recapture older logs, the proper Compression Dictionary may be unavailable if a database Reorg or Load Replace has occurred.

The solution to this problem is to specify the KEEPDICTIONARY=YES parameter when using the Db2 REORG or LOAD utility. In Precisely's experience, customers have already made this parameter a standard operational setting but it should be confirmed prior to implementation of the Db2 Change Data Capture.

Alternatively, ensure that the Connect CDC SQData Db2 Capture has processed all log records for a table before performing any activity that affects the compression dictionary for that table. The following activities can affect the compression dictionaries:

1. Altering a table space to change its compression setting.

2. Using DSN1COPY to copy compressed table spaces from one subsystem to another, including from data sharing to non-data-sharing environments.

3. Running the REORG or LOAD utility on the table space.


Data Sharing Environments

For capture in a data sharing environment, select one data sharing group name to capture from. Capture against multiple members in a data sharing group will result in capturing the same data multiple times.


Changes Not Being Captured

The common culprits for changed data not being captured are:

· The table has not been altered to activate the capture. See Configure Db2 Tables for Capture above for information on activating a table for capture.

· The table has not been added to the capture configuration CAB file. See Create Db2 Capture CAB file and Capture New Db2 Data.

· The capture agent is not active. Even though the capture agent was MOUNTED and STARTED successfully, it may have terminated due to the unavailability of an archived log.

· Switch to the diagnostic (dev) version of the executable code by switching to the XQDDB2C version of the Capture program and un-commenting //*SQDLOG8 DD DUMMY, see Prepare Db2 Capture Runtime JCL.


Db2 Table Names

Fully qualified table names in Db2 frequently contain very long fully qualified schemas. In order to shorten the name when it is used to qualify columns in Apply Engine scripts, an ALIAS is often used.

In the example database one table is named HumanResources.EmployeeDepartmentHistory which contains the column StartDate. The ALIAS parameter can be used to shorten the table name to something more manageable using the syntax below:

DESCRIPTION DB2 ALIAS(HumanResources.EmployeeDepartmentHistory_StartDate  AS HR.Dept_StartDate
                      HumanResources.EmployeeDepartmentHistory_EmployeeID AS HR.Dept_EmployeeID
                      HumanResources.EmployeeDepartmentHistory_DeptID     AS HR.Dept_DeptID
                      HumanResources.EmployeeDepartmentHistory_ShiftID    AS HR.Dept_ShiftID)

Then in a subsequent procedure statement in the script a reference to StartDate could look like this:

If HR.Dept_StartDate < V_Current_Year

Rather than:

If HumanResources.EmployeeDepartmentHistory_StartDate < V_Current_Year


Flush DB2 Log Buffer to Reduce Delays

The Connect CDC SQData Db2 Log Reader Capture constantly monitors the Db2 transaction log for new data. However, in some environments with low transaction activity, the DB2 log buffer may not be flushed frequently, which can delay the capture of recent data changes. To reduce the delay of captured data, the DB2 Log Reader Capture can force a DB2 flush more often by creating a SQDCDC.CHURNING table that will pick up all recent log changes.

To create a SQDCDC.CHURNING table, use the following schema and the DB2 Log Reader Capture will automatically pick up all DB2 log changes.

CREATE TABLE SQDCDC.CHURNING (COMMENT VARCHAR(128) NOT NULL);
ALTER TABLE SQDCDC.CHURNING DATA CAPTURE CHANGES;

Next, stop and restart the DB2 capture agent.


Upgrading Db2 to 10 Byte LSN

Db2 Version 12 requires that your databases be upgraded to use a 10 Byte Log Sequence Number (LSN). While that task falls into the domain of the Db2 Database Administrator, the Connect CDC SQData Db2 Change Data Capture requires the following steps to be performed at the time the LSN length is changed:

1. Find the RBA that you want to start from in the DB2 MSTR address space, once the migration to NFM (new-function mode) is complete. Look for the following messages from the Db2 recovery manager in the system log that indicate the progress of Db2 through a restart process. You are looking for the RBA from the prior checkpoint. You may need the assistance of the Db2 DBA assigned to the migration or a System operator.

DSNR001I -DBBG RESTART INITIATED
DSNR003I -DBBG RESTART...PRIOR CHECKPOINT RBA=000000000000CFB28090

2. With the capture down completely, run the following JCL to set the global LSN and target engine LSN, using the RBA value identified in step 1.

//*----------------------------------------------
//*- Modify DB2 Capture CAB File Global LSN/RBA
//*----------------------------------------------
//STOP     EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
modify --lsn=000000000000CFB28090 /home/sqdata/db2cdc/db2cdc.cab
/*
//*
//*----------------------------------------------
//*- Modify DB2 Capture CAB File Target LSN/RBA
//*----------------------------------------------
//STOP     EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
modify --lsn=000000000000CFB28090 --target=cdc:////DB2REPL1 --datastore=cdc:////DB2TODB2 /home/sqdata/db2cdc/db2cdc.cab
//*

3. In the capture startup parm, modify the safe restart point. Note that you will need to remove the --safe-restart parameter after the capture has been started and is running, because you do not want it used again accidentally following another restart of the Capture.

--apply --start --safe-restart=000000000000CFB28090 /home/sqdata/db2cdc/db2cdc.cab
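
Once the capture has been restarted and is running, the startup parm reverts to its normal form with --safe-restart removed, for example:

--apply --start /home/sqdata/db2cdc/db2cdc.cab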


Compensation Analysis and Elimination

If your use case involves replication of identical source/target tables then you will have generated an Apply Engine Replication script as described in the Db2 Quickstart. If you have confirmed that primary indexes have been created on the target tables, then the Apply Engine can use the target database catalog to identify the primary keys. Alternatively, you may have decided to specify the target keys in the replication script, which would be required for those source tables that had no primary keys. This is often true with "audit" type tables.
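
Where target keys are specified in the script, they typically appear as a KEY IS clause on the target table description. The following is a hypothetical sketch only — the DDL path, target name, and key columns are illustrative assumptions, and the exact form produced for your replication script may differ:

DESCRIPTION DB2 ./DB2DDL/EMP_AUDIT.ddl AS TGT_EMP_AUDIT
            KEY IS EMP_NO, CHANGE_TS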

In either case you are likely to see "compensation" occur on one or more of the target tables. Compensation is an optional feature of the Apply Engine which automatically detects out of sync conditions between source and target. This condition frequently occurs during the initial implementation of replication, either because there was no initial load of the target from the source or because the initial load was performed while the source was still subject to concurrent application database activity. In both cases there will be a period of "catch up" replication where a CDC Update or Delete for a missing row, or a CDC Insert for an existing row, was captured while the initial load or refresh was in progress.

Why perform compensation, you may ask?

Compensation in these cases will drop the deletes if the record does not exist in the target, convert an update to an insert, and convert an insert to an update. Without compensation, all of these situations would normally cause a SQL error when a CDC Update or Delete for a missing row is processed or a CDC Insert for an existing row is attempted. Compensation processing is optional but nearly always utilized in Apply Engines performing Replication. Compensation not only eliminates the impact of these timing related errors during the initial implementation, it ensures that once the initial "catch up" phase of replication is complete the source and target are fully synchronized. During this time, out of sequence inserts, updates and deletes should reduce in number and eventually reach zero. Once the initial replication has "caught up" with processing occurring on the source side, compensation should no longer occur.

There are three consequences to the use of compensation:

1. Write to Operator (WTO) messages are generated by the Apply Engine to identify their occurrence, and we recommend using existing console monitoring tools to alert staff, as they typically indicate an unexpected but non-catastrophic event after the initial "catch up" implementation phase of replication.

2. Performing compensation requires inserts, updates and deletes to be preceded by a select using the key of the CDC record. This additional operation adds some overhead, but not enough to be concerned about in most cases. Options provide for the elimination of the pre-insert select on tables known to have large volumes of insert activity, like an audit table.

3. Compensations that continue past the initial synchronization period indicate a problem. Either there were errors introduced by the initial load process, or the keys of the target do not match those defined for the source, or they do not provide for identification of a unique row.

The remainder of this section addresses the diagnosis and remediation of errors related to incorrect target keys. It is important to acknowledge that full resolution of the problem will often require repeating the initial load and "catch up" processing, but only for those targets affected.

1. Identify the engine(s) that continue to produce compensation messages and the tables being compensated, and collect statistics to identify the relative number of compensations occurring for each target table in order to prioritize the order in which to correct them.

2. Gather the DDL for the source tables from the DBA including their primary key and index specifications.


3. Using the prioritized list of target tables, examine each table one-by-one, comparing the primary keys defined in the source database with the keys defined in the target database and the columns listed in the script, if the KEY IS clause has been specified, and make the necessary corrections.

4. Reparse the engine script and resume processing.

5. Monitor the output of the engine to confirm compensation has ceased for the target table and, if it has not, reconfirm the source and target key specifications for the table and make further corrections.

6. Determine if the nature of the table requires re-load and resynchronization. There may be tables that are periodically emptied or purged in the course of source application processing. Depending on the purpose of the target table and the method used to empty it of rows, it may be possible to simply wait for the source and target to be emptied through replication and for natural resynchronization to occur. Alternatively, it may be possible to simply drop and re-create the target table and allow replication to slowly bring the table current.

7. If the table contains data that will have to be reloaded to achieve synchronization, determine the best time and method to perform the individual table reload as previously outlined in the introductory Quick Start Approach.

If possible use the Capture Based Refresh to reload the target table since it requires the least intervention.
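
As a hypothetical sketch only — the actual command form and options for requesting a Capture Based Refresh are described elsewhere in this reference, and the CAB file and table shown are simply the examples used in this chapter — marking a single table for refresh might look like:

sqdconf refresh /home/sqdata/db2cdc/db2cdc.cab --key=SQDATA.EMP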

If another method of refresh will be used, it will be necessary to disconnect the Apply Engine that normally processes change data for the target table. This should be done immediately before the unload/reload process is initiated to ensure that concurrent changes made by applications to the table are captured but not processed until after the load refresh has been completed. Once the load has been completed the Apply Engine can be restarted.

With either method of Refresh, there will again be some expected compensation while the resynchronization occurs.

Repeat these steps with the next table in the same or another engine and continue the process for each table that was identified with compensation until all are eliminated.


Adding Uncataloged Tables

Occasionally it may be convenient to add a table to the Capture Agent Configuration (.cab) file before it is present in the Db2 Catalog. This scenario might occur when a new Db2 business application is being implemented. The tables will have been created in the test environment for the new application and, because a downstream application requires that data to be captured, the capture configuration in the test environment has also been updated so the downstream application can also be tested.

While the normal capture configuration maintenance process supports adding tables, marking them Inactive and then subsequently changing them to Active, the tables must be in the Db2 Catalog even when they are marked Inactive. Because the scale of implementation is large, it would be desirable to create, or more likely update, the production capture configuration in advance. That can be accomplished by adding the table with a Pending status (--pending) rather than Inactive; adding it as Inactive would cause the capture to fail immediately because the production catalog does not yet contain those tables.

The SQDCONF Utility will be used to add the Pending tables and later to modify the configuration when it is time for them to be activated.

Syntax

sqdconf add <cab_file_name> --schema=<name> --table=<name> | --key=<name> --datastore=<url> [--pending]

Keyword and Parameter Descriptions

<cab_file_name> Must be specified and must match the name specified in a previous create command.

--schema=<name> Schema name, owner, or qualifier of a table. Different databases use different semantics, but a table is usually uniquely identified as S.T where S is referenced here as schema. This parameter cannot be specified with --key.

--table=<name> A qualified table name in the form of schema.name that identifies the source. This may be used in place of two parameters, --schema and --table. Both cannot be specified. In our example the first table is SQDATA.EMP.

--key=<name> Same as --table

--datastore=<url> | -d <url> - While most references to the term datastore describe physical entities, a datastore URL represents a target subscription and takes the form cdc://[<host_name>]/<agent_alias>/<target_name> (a worked example follows the parameter descriptions below) where:

o <host_name> - Optional, typically specified as either cdc:///... or cdc://[localhost | localhost IP]... since we are describing the server side of the socket connection.

o <agent_alias> The alias name assigned to the Capture/Publisher agent; it must match the <agent_name> defined in the Controller Daemon sqdagents.cfg configuration file. Engine scripts will use the <agent_alias> when specifying the source "URL" and also on sqdmon <agent_name> display commands.

o <target_name> The subscriber name presented by a requesting target agent. Also referred to as the Engine name, the name provided here does not need to match the one specified in a local Controller Daemon sqdagents.cfg configuration file. In our example we have used DB2TODB2.


[--pending] This parameter allows a table to be added to the configuration before it exists in the database catalog.
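
As a worked example of the URL form — the host and agent alias shown here are illustrative assumptions, while DB2TODB2 is the subscriber name used in this chapter:

cdc://zoslpar1/DB2CDC/DB2TODB2

where zoslpar1 is the <host_name>, DB2CDC is the <agent_alias> and DB2TODB2 is the <target_name> (Engine).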

Notes: Like any table being added, if there are multiple target datastores (Engines), an add command must be processed for each individual table/datastore pair.
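
For example, to add SQDATA.EMP in Pending status for the DB2TODB2 subscription used in this chapter — the second datastore name, DB2TOAUX, is purely an illustrative assumption for a configuration with two subscribing Engines:

sqdconf add /home/sqdata/db2cdc/db2cdc.cab --key=SQDATA.EMP --datastore=cdc:////DB2TODB2 --pending
sqdconf add /home/sqdata/db2cdc/db2cdc.cab --key=SQDATA.EMP --datastore=cdc:////DB2TOAUX --pending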

Finally, when it is time to begin to capture the new tables, use the modify command to change the status of the tables from Pending to Active (or Inactive).

Syntax

sqdconf modify <cab_file_name> --schema=<name> --table=<name> | --key=<name> [--active | --inactive]

Keyword and Parameter Descriptions

<cab_file_name> Must be specified and must match the name specified in a previous create command.

--schema=<name> Schema name, owner, or qualifier of a table. Different databases use different semantics, but a table is usually uniquely identified as S.T where S is referenced here as schema. This parameter cannot be specified with --key.

--table=<name> A qualified table name in the form of schema.name that identifies the source. This may be used in place of two parameters, --schema and --table. Both cannot be specified. In our example the first table is SQDATA.EMP.

--key=<name> Same as --table

[--active | --inactive] This parameter marks the added source active for capture when the change is applied and the agent is (re)started. If this parameter is not specified the default is --inactive.

Notes:

1. The sqdconf modify command only needs to be run once for each Pending table regardless of the number of datastores (Engines) subscribed.

2. Like all modifications to the Capture Agent Configuration (.cab) file, the changes must be activated; see Applying the Configuration File Changes.
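
Continuing the example, a minimal sketch activating the previously Pending table (run once, regardless of how many datastores subscribe to it):

sqdconf modify /home/sqdata/db2cdc/db2cdc.cab --key=SQDATA.EMP --active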


Signal Errors

A Signal error indicates that an internal error has occurred within Connect CDC SQData. Signal errors are accompanied by a return code of 16. Each signal error is accompanied by a list of values that are used for diagnostic purposes by Precisely support personnel.

When you encounter a signal error, please save the runtime report from the component and contact Precisely technical support.


Index

A
ALIAS 82
allow-refresh 65
ALTER TABLE 31
Apply 50, 62
Archived Logs 25

B
block-refresh 65

C
Capture Based Refresh 85
CDCSTORE 25
compensation 85
Controller Daemon 38
CRDREFR 65

D
daemon 45
data sharing environment 80
data sharing group 80
Datastores 60
DB2 Catalog 87
DB2 LOAD 79
DB2 Package 16
DB2 Plan 16
DB2 REORG 79
Db2 Version 12 67, 84
Disabling table refresh 65
display 53
display action 52

E
--exclude 63

I
initial load 65

K
KEY IS 85

L
Log Reader Capture 12
LSN 48
--lsn 48
LSN/RBA 48

M
MASTER 45
modify 60

N
NACL encryption 35
NACLKEYS 22, 72

O
Operator (WTO) messages 85

P
primary keys 85
Private 22, 72
Public 22, 72
Public / Private key 22, 72

R
refresh 65
refresh control table 65
reload 47
re-load 85
reorg 79
resynchronization 85

S
Signal error 89
SQDAEMON 38
sqdconf 53, 56
SQDDB2C 12
SQDDDB2D 16
sqdmon 53
SQDZLOGC 36
start 45
statistics 53
stop 56
Straight replication 69
synchronization 65

U
Un-Block 65
Uncataloged 87
Unit-of-Work 9
unmount 56

Z
z/OS Master Controller 24
zFS SQDATA Variable Directory 16
zIIP 35
zIIP processors 35


2 Blue Hill Plaza
Pearl River, NY 10965
USA

precisely.com

© 2001, 2021 SQData. All rights reserved.