Expanding clustered topologies for WebSphere Process Server and WebSphere Enterprise Service Bus

Configuration patterns and design decisions

7 January 2009

Eric Herness, WebSphere Business Integration Chief Architect, IBM
Graham Wallis, Senior Technical Staff Member, IBM
Charlie Redlin, WebSphere Process Server Architect, IBM
Karri Carlson-Neumann, Advisory Software Engineer, IBM


Notices and trademarks

IBM and WebSphere are trademarks or registered trademarks of IBM Corporation in the United States, other countries, or both. Microsoft and Windows are registered trademarks of Microsoft Corporation in the United States, other countries, or both. Java and all Java-based trademarks and logos, and Solaris, are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Other company, product, and service names may be trademarks or service marks of others.


Abstract

Learn how and when to grow clustered topologies that use IBM® WebSphere® Process Server and WebSphere Enterprise Service Bus (ESB). When new business process management (BPM) and ESB applications are deployed, you may need to expand the initial topology to take advantage of increased IT resources (such as memory) or to isolate applications. This article describes a standard "golden topology" widely used in production deployments, and then examines how to grow the topology from within the existing clusters, or by adding new clusters. It describes good and bad design patterns, what to consider, and the costs and limitations of each approach. The article also describes how to plan for service integration bus connectivity and desired messaging engine behavior.


Contents

Introduction
Example deployment topology
    Distributed platforms and z/OS without a full exploitation of a sysplex
    Same topology on z/OS using a sysplex
    Points of interest for all environments
Growth within existing clusters
Growing the topology by creating new clusters
    Reasons for growth
        Isolation
        Growth
        Simplicity
    How the new expanded topology relates to the original topology
    Costs and limitations associated with creating additional clusters
    General tasks for creating additional clusters
Common patterns for creating additional clusters
    Pattern 1: new Process Server and WebSphere ESB deployment target with remote messaging
    Pattern 2: new deployment target with remote messaging and remote support
    Configurations that should not be used
        Discouraged pattern #1: one application deployment target with multiple messaging targets
        Discouraged pattern #2: multiple application deployment targets with a single messaging target
Messaging engine startup time
    How to determine startup time
    Number of destinations impacts startup time
Service integration bus connectivity
    Service integration bus background
    Target properties
        The example application: BusinessProcessApplication1
        Example 1: BusinessProcessApplication1 in a non-expanded topology
        Example 2: BusinessProcessApplication1 is entirely deployed to a single target in a multi-clustered topology
        Example 3: BusinessProcessApplication1 is deployed across multiple targets in a multi-clustered topology
        Summary of target properties examples
        When are target properties required?
        Consuming messages
        Order of message arrival
        Event Sequencing (ES)
        Isolation
        Summary
    Controlling bus connection patterns by adding local messaging engines
        Additional messaging engines and consuming messages
        Additional messaging engines and order of message arrival
        Additional messaging engines and Event Sequencing (ES)
        Additional messaging engines and isolation
        Summary of additional messaging engines
Summary
Appendix A: References
    Configuring efficient messaging in multicluster WebSphere Process Server cells
    Building WebSphere Process Server and WebSphere ESB topologies
    WebSphere Application Server environments
    WebSphere Application Server SIBus topics
    WebSphere Application Server Core Group policies
    Event Sequencing
Appendix B: Vocabulary
Appendix C: Fixes and enhancements
    Tips for reducing startup time for MEs
    Fixes
Appendix D: Target significance properties
    Service Component Architecture
    Business Process Choreography
    Common Event Infrastructure
    Other
Appendix E: Basic information about the Common Event Infrastructure


Introduction

WebSphere Process Server (hereafter referred to as Process Server) and WebSphere Enterprise Service Bus (hereafter referred to as WebSphere ESB) offer a robust and scalable platform for hosting Business Process Management (BPM) and Enterprise Service Bus (ESB) based solutions. Initial deployment topologies accommodate high availability and workload management, all leveraging the underlying capabilities of WebSphere Application Server.

Often, when new BPM and ESB applications are put into production, the existing topology can handle the load and be reused as is by simply installing the new solution onto the existing topology. However, in some cases you need to attain scalability beyond what is afforded by the first production topology. In some cases you need scalability in terms of the IT resources (memory or throughput, for example), and in other cases you need application isolation. Application isolation can happen at many levels, but for maximum administrative efficiency, an entirely new deployment environment is not always needed. The same cells and some of the same clusters might be reused, and the scaling can occur in conjunction with isolation.

This article describes a deployment environment configured with a standard topology. In a distributed environment (including Linux on IBM System z®, Linux®, UNIX®, Microsoft® Windows®, and Solaris™ environments, and z/OS environments without a full exploitation of a sysplex), the standard topology is the "golden topology". This topology is also known as ND7, and in WebSphere Application Server V6.1 it was known as remote messaging and remote support.

Using the "golden topology" as a baseline, the article examines how the topology can be expanded by growth within the existing clusters and by adding new clusters. There are a number of configuration patterns for new clusters. The basic design decisions are similar to the decisions that are faced for the original application deployment and topology. You must choose a topology pattern that meets the functional and non-functional run time requirements of the application to be deployed. Further, the scaling requirements that lead to scaling by adding new clusters are different from the scaling requirements that lead to scaling by adding more members to existing clusters.

In addition to cluster growth, the article describes how you can use target properties to control the connections to the service integration bus and to alter the routing path used for the messaging. It also addresses event sequencing. The maintenance of event order has implications for the maintenance of message order, which in turn has implications for the messaging topology and connectivity patterns.

This article is intended for architects, practitioners, and system administrators who are using WebSphere Process Server and WebSphere ESB at V6.0.2 or later. It assumes familiarity with these products and their associated common run time topologies. If you currently have applications deployed on a run time topology, and are planning to


deploy new applications into the same cell, this article guides you in choosing the most appropriate configurations that meet your requirements and expectations.

The article is written largely from the point of view of distributed platforms. In general, the concepts described here also apply to the WebSphere z/OS platform. However, because the z/OS scalable server design enables you to add servant regions, the starting point for creating and expanding Process Server or WebSphere ESB clustered topologies is slightly different.

Process Server and WebSphere ESB are both horizontally and vertically scalable. This article provides guidance and understanding about how to get the most out of this middleware platform.


Example deployment topology

This section describes a baseline topology, which is used as a starting point to illustrate growth patterns. For details on the purpose of this topology and how to build it, see Appendix A: References.

Distributed platforms and z/OS without a full exploitation of a sysplex

Figure 1 illustrates a standard deployment environment for distributed platforms, including Windows, i5, and all UNIX platforms. This configuration could also apply to z/OS platforms without a full exploitation of a sysplex. This deployment environment is configured in the "golden topology" style. This pattern is also close to what is referred to in V6.1 terminology as a remote messaging and remote support pattern.

Figure 1: Example deployment environment in the "golden topology" style


Same topology on z/OS using a sysplex

For Process Server and WebSphere ESB on z/OS, the WebSphere Application Server z/OS scalable server adds a natural dimension for expandability, unique to z/OS. The scalable server introduces the notion of n servant regions, which are usually statically defined on server startup. This presents another dimension that should be considered when expanding the Process Server or WebSphere ESB topology.

The difference between the topology configuration for distributed platforms and for z/OS with full use of a sysplex is the method of scalability. In the distributed environment, WebSphere scalability is achieved by creating additional cluster members in the existing clusters, while on z/OS with a full sysplex environment, scalability is achieved by creating additional servant regions. Naturally, the two scalability methods result in two different usage patterns of system functionality.

Points of interest for all environments

For topologies in all environments, the fundamental pieces of Process Server and WebSphere ESB are always similar. In all Process Server and WebSphere ESB cells, the deployment manager is the central point of administration for the cell. There are three sets of database tables:

• The Common Database contains sets of tables that are shared on a cell-wide basis for multiple Process Server capabilities, such as business rules and relationships. This group of tables is commonly referred to as WPRCSDB.

• Each messaging engine requires a unique set of database tables. The set of all of these tables is commonly referred to as MEDB.

• Each deployment target that is configured for Business Process Choreographer (BPC) requires a set of tables for BPC. This set of tables is commonly referred to as BPEDB.
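These table sets are reached through WebSphere data sources. If you need to confirm which data sources exist in a given cell (the JNDI names vary by configuration), a short wsadmin (Jython) sketch such as the following can list them. It is an inspection aid only, not a required configuration step:

    # List every data source configured in the cell, with its JNDI name
    for ds in AdminConfig.list('DataSource').splitlines():
        print AdminConfig.showAttribute(ds, 'jndiName'), ds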

A fourth set of tables is the "EVENT" tables, used for the persistence of Common Base Events for the CEI. However, due to the potential size and volume of persisted event data, it is not advisable to use this capability of the CEI in a production environment, and therefore it is not included here.[1]

In the distributed environment, functionality is spread across multiple task-specific clusters:

[1] If you have gone to the trouble of causing events to be emitted, but the events are not stored in a database, then how are the events to be obtained for use? The recommended usage pattern for CEI is to take advantage of event distribution, meaning that the CEI matches the events into specific Event Groups and then pushes the events to each Event Group's JMS destinations. Custom event consumers can be developed to read the events from the JMS destinations and handle them appropriately. This event distribution pattern is the same pattern that WebSphere Business Monitor uses.


• The MECluster hosts the messaging engines. The MECluster is a member of each of the four service integration buses.

• The AppTargetCluster is the deployment target for customer applications. This cluster is configured to provide functionality for Business Process Choreographer, human tasks, and SCA.

• The SupportCluster hosts the applications that provide some utility for, but should not have to contribute to the workload of, the AppTargetCluster. For example, the SupportCluster may host the CEI EventService application and the Business Rules Manager. In this example, Common Base Events that are generated via the execution of applications in the AppTargetCluster will be “emitted” over to the SupportCluster, where the CEI will match the CBE into the appropriate EventGroup(s), and from there the CBEs are distributed to the Event Groups’ JMS destination(s).

Service integration buses can be present in any WebSphere environment. Process Server uses four service integration buses:

• SCA.SYSTEM.<CellName>.Bus
• SCA.APPLICATION.<CellName>.Bus
• BPC.<CellName>.Bus
• CommonEventInfrastructure_Bus [2]

[2] In our example Process Server topology, the SCA, BPC/HTM, and CEI are all configured in the cell. In a subset topology, it is possible that either the BPC/HTM or the CEI is not required. The SIBus for Process Choreography is necessary only if the Business Process Choreographer and Human Task Manager are configured within the cell. The SIBus for CEI is necessary only if the CEI has been configured within the cell.

At least one server or cluster will be a member of each of the SIBuses. On distributed platforms, the MECluster is a member of each of the SIBuses. The bus member hosts a single messaging engine associated with each SIBus. The messaging engines in the diagrams have the following names:

• <BusMemberName>.000-SCA.SYSTEM.<CellName>.Bus
• <BusMemberName>.000-SCA.APPLICATION.<CellName>.Bus
• <BusMemberName>.000-BPC.<CellName>.Bus
• <BusMemberName>.000-CommonEventInfrastructure_Bus
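If you want to verify which buses exist in a cell and which messaging engines they host, the wsadmin scripting client can list them. The following Jython sketch is illustrative only; the cell name embedded in the bus name ("MyCell") is an assumed placeholder:

    # List all service integration buses defined in the cell
    print AdminTask.listSIBuses()

    # List the messaging engines defined on one of the buses
    # (substitute your own cell name in the bus name)
    print AdminTask.listSIBEngines('[-bus SCA.SYSTEM.MyCell.Bus]')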

Messaging engines are singleton objects. Figure 1 shows that each of the four messaging engines is active in one of the cluster members (ME_member1). By not explicitly showing the messaging engines in the other cluster member, the diagram conveys that the messaging engines are at standby there. In other words, the other cluster member itself is open for e-business, but the messaging engines within it are not active.

All messaging engines are not required to be active in the same cluster member. In Figure 1, all of the messaging engines are shown as active in the same cluster member to underscore the fact that they are singleton objects. In practice, it is generally useful to spread the active messaging engines across the members of the


MECluster with the use of Core Group policies (sometimes referred to as HA Manager policies). To learn more about Core Group policies, see Appendix A: References.

The BPC, SCA, and CEI use messaging engines that are hosted by the bus member. The messaging engine <BusMemberName>.000-SCA.SYSTEM.<CellName>.Bus hosts destinations with names like:

• WBI.FailedEvent.<AppDeploymentTargetName>
• sca/<ModuleName>/*

where <ModuleName> is the name of a deployed SCA module. The destinations used internally by the BPC engine are hosted by the messaging engine <BusMemberName>.000-BPC.<CellName>.Bus:

• BPEIntQueue_<AppDeploymentTargetName>
• BPEHldQueue_<AppDeploymentTargetName>
• BPERetQueue_<AppDeploymentTargetName>
• BFMJMSAPIQueue_<AppDeploymentTargetName>
• BFMJMSReplyQueue_<AppDeploymentTargetName>
• HTMIntQueue_<AppDeploymentTargetName>
• HTMHldQueue_<AppDeploymentTargetName>

The CEI EventServer application uses a queue destination for incoming CBE messages and then distributes the CBE messages to at least one topic destination. The messaging engine <BusMemberName>.000-CommonEventInfrastructure_Bus is hosting the following destinations:

• <BusMemberName>.CommonEventInfrastructureQueueDestination [3]
• <BusMemberName>.CommonEventInfrastructureTopicDestination
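Because destination names encode the deployment target and module names, listing the destinations on each bus is a quick way to review what a messaging engine is hosting before you expand the topology. A minimal wsadmin (Jython) sketch, assuming a cell named "MyCell":

    # Enumerate the destinations hosted on each of the four buses
    for bus in ['SCA.SYSTEM.MyCell.Bus', 'SCA.APPLICATION.MyCell.Bus',
                'BPC.MyCell.Bus', 'CommonEventInfrastructure_Bus']:
        print '--- Destinations on ' + bus + ' ---'
        print AdminTask.listSIBDestinations('[-bus ' + bus + ']')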

We have now established a starting environment for both distributed platforms (including z/OS without a sysplex), and for z/OS with a sysplex. The following sections discuss ways in which to grow the starting environment. The general principles outlined here apply equally to all distributed and z/OS platforms. However, a z/OS system with a sysplex has additional factors that distributed environments do not have. Those factors are the use of multiple regions and the ability to categorize transactions. These factors add considerations to the growth path, and will not be covered in this article. From this point forward, this article focuses on the example environment of distributed platforms.

[3] In Process Server and WebSphere ESB V6.0.x, the names of the CEI destinations were simply CommonEventInfrastructureQueueDestination and CommonEventInfrastructureTopicDestination.


Growth within existing clusters

This section discusses what it means to grow within the existing clusters. Specifically, growing a cluster refers to creating additional cluster members. It may be desirable to scale up an existing configuration in order to improve message throughput and application throughput.

Figure 2 illustrates that additional cluster members have been added to the MECluster to improve message throughput and to the AppTargetCluster to improve application throughput. In addition, some Core Group policies have been created for the messaging engines on the MECluster. The same four messaging engines shown in Figure 1 are shown here, but now they are dispersed among the cluster members.

One member has been added to the AppTargetCluster and one member has been added to the MECluster. You are not required to extend both clusters at the same time, nor to create the same number of new members for both clusters. The decision to extend each cluster depends on the type of need to be addressed. The AppTargetCluster should be extended to meet the requirements for throughput and failover for the applications deployed on it. The expansion of the AppTargetCluster is ultimately bound by the ability of the rest of the cell to provide adequate support, such as messaging engine throughput. As for the MECluster, because each messaging engine is a singleton, it makes sense to extend the MECluster so that the number of cluster members matches the number of active messaging engines. It is also valid to create additional cluster members for the SupportCluster; however, that is not illustrated here.

In Figure 2, the new member of the MECluster is ME_member3, and the new member of the AppTargetCluster is ATC_member3. After the new member of the MECluster is created, you can create or modify a Core Group policy in order to pin an existing messaging engine to the new process, or to give the engine a preference for that process.
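Both steps, creating the new cluster member and pinning or preferring a messaging engine, can be scripted with wsadmin. The following Jython sketch is a minimal illustration under assumed names (cell MyCell, node Node01, core group DefaultCoreGroup, and the SCA.SYSTEM messaging engine from Figure 1); the policy factory class and match criteria shown are the documented ones for service integration bus resources, but verify them against your product level before use:

    # 1. Add a third member to the MECluster on an existing node
    AdminTask.createClusterMember('[-clusterName MECluster '
        '-memberConfig [-memberNode Node01 -memberName ME_member3]]')

    # 2. Create a "One of N" policy in the core group
    cg = AdminConfig.getid('/Cell:MyCell/CoreGroup:DefaultCoreGroup/')
    policy = AdminConfig.create('OneOfNPolicy', cg,
        [['name', 'Policy for SCA.SYSTEM ME'],
         ['policyFactory',
          'com.ibm.ws.hamanager.coordinator.policy.impl.OneOfNPolicyFactory'],
         ['failback', 'true']])

    # 3. Match criteria tie the policy to one specific messaging engine
    AdminConfig.create('MatchCriteria', policy,
        [['name', 'type'], ['value', 'WSAF_SIB']])
    AdminConfig.create('MatchCriteria', policy,
        [['name', 'WSAF_SIB_MESSAGING_ENGINE'],
         ['value', 'MECluster.000-SCA.SYSTEM.MyCell.Bus']])

    # Preferred servers (to pin or prefer a process) can then be added
    # to the policy before saving the configuration
    AdminConfig.save()

A policy created this way gives you the failback and preferred-server controls that determine whether the engine is pinned to one member or merely prefers it.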



Figure 2: Creating additional members of existing clusters

There are many reasons for growing within existing clusters, even if no new applications are to be deployed. For example, the system may face an increased volume of usage for the existing applications due to seasonal requirements, or additional capacity must be prepared for failover, migration, or other maintenance purposes.

If new applications are to be deployed into an existing cluster, it may be necessary to create additional cluster members to handle the increased workload. Before that can happen, however, take care to ensure that the new applications are a good fit for the existing clusters. The new applications must have the same run time requirements as the previously deployed applications. Both the new and the old applications should be trusted to remain well behaved when they are collocated, even if each is well behaved in an independent environment. Finally, make sure that installing and using the new applications does not push the entire system beyond the limits of its capabilities.


There are some circumstances in which growing within an existing cluster is not recommended. For example, the new applications to be deployed may not serve the same business purposes as the previously deployed applications. Consult your company's governance rules to determine whether you are permitted to extend the cluster.

Isolation is a key factor. There may be business reasons for enforcing isolation between certain applications or groups of applications, and this level of isolation may include separate deployment targets. For example, imagine that a new group of applications is being introduced to your deployment environment, that you have a lower confidence level that they will behave 100% correctly, and that you do not want to risk disrupting your existing production applications.

In Figure 2, the MECluster has been expanded by creating an additional cluster member, and Core Group policies have been created to disperse the active messaging engines across all of the members of the MECluster. Some benefit can be obtained by dispersing the workload in this way. However, if one of the messaging engines itself turns out to be a bottleneck, additional members of the MECluster will not resolve the bottleneck. Because the messaging engine is a singleton, the work of a single messaging engine is fully contained within the single cluster member in which it is active.

To expand the existing clusters, first determine whether sufficient capacity exists on the current hardware. If additional hardware is required, obtain the hardware, install the products, create a custom node, and federate it to the deployment manager. At this point, additional cluster members can be created on the new custom node. If the current hardware provides sufficient capacity, and the current arrangement of nodes allows for node-level failure and future migration (meaning there are at least two nodes), then it is sufficient to create the additional cluster members on the existing nodes.

When growing an existing cluster, the existing cluster members can remain active. Furthermore, as the new cluster members are brought online, new work requests can begin to be routed to them. For example, if the new work requests are inbound HTTP, then the router (which is spraying the HTTP requests across cluster members) must be updated to include the new cluster members.


Growing the topology by creating new clusters

This section discusses the rationale for using multiple sets of clusters, and presents common patterns and considerations for creating additional clusters in an existing topology. We begin by addressing the concepts that prompt the use of multiple sets of clusters. Then, in order to relate an expanded topology back to the original "golden topology", we walk through a common example of additional clusters, showing how the configuration of the new clusters relates to the configuration of the original clusters. Then we discuss tradeoffs regarding the creation of additional clusters, and close with a discussion of the multiple patterns of additional clusters.

Reasons for growth

There are multiple reasons for using an expanded topology: isolation, growth, and simplicity.

Isolation

The first reason for using additional clusters in a topology is to satisfy isolation requirements. These requirements may exist for business purposes, or for technical or operational reasons. For example, the new applications might be on a unique schedule for maintenance and updates, and that schedule may not be acceptable for the original applications running in the original environment. Another reason for isolation is that the new applications may not be known to be well behaved; it may be vital to minimize any risk of errors in a new application having an impact on the execution of the original applications. In addition to logical cluster-only isolation, the new clusters can be created on separate hardware, thereby allowing for dedicated physical machine capacity.

Another reason for isolation stems from functional necessity: the applications may have different functional requirements, such as different failover requirements, different requirements for qualities of service, or different requirements for run time capabilities. For example, suppose that an application is initially deployed and it uses SCA and business rules. Because this application did not require process choreography, neither the BPC nor the Human Task Manager (HTM) was configured. Later, a second application is to be deployed, and this second application requires process choreography. Before the second application can be deployed, either the original deployment target must be configured to host BPC and HTM, or a new deployment target must be created and configured.


Growth

The second reason for expanding a topology is growth. One motivation for using additional clusters, specifically with a 1-to-1 ratio of application deployment targets to messaging targets, is to establish a habit that offers a sustainable pattern for growth. As the number of application deployment targets increases, so does the number of messaging engines. This pattern avoids overburdening an individual application deployment target and an individual messaging engine, which could result in memory utilization issues (application deployment target), increased startup time (application deployment target and messaging engines), or increased failover times (messaging engines).

Ultimately there will be limits to the capacity of a single MECluster with a single AppTargetCluster. If there are many modules deployed, running high volumes, with many different large objects in memory, then the AppTargetCluster may become memory- or resource-constrained. For example, run time memory may be used up by having too much static XSD type information, and shared pool sizes for thread pools and activation specifications may not be optimally tuned if many modules share the pool. At that point, it may be advisable to selectively divide the modules to be deployed on separate targets. The normal case is to adhere to the 1-to-1 ratio of application deployment targets to messaging targets, and this division of modules then implies a division of the destinations for those modules.

In addition, there may be some growth over time based on versioning of applications. If the old versions of an application remain while the new versions are deployed, then the number of applications and destinations increases, which eventually leads to an overburdened situation.

Overburdened situations bring us to another growth-related reason for using additional clusters: avoiding a bottleneck. For example, a new application deployment target and messaging deployment target may be created in order to disperse the JMS destinations across multiple messaging engines. As more and more modules are deployed, more and more SIBus destinations are created, and at some point the messaging engine restart (failover) time or the messaging throughput crosses the line from acceptable to unacceptable. See Configuration Considerations in Addition to New Clusters for details.

Another bottleneck to avoid is overuse of a single set of database tables. For example, if there are many deployed Business Process Execution Language (BPEL) applications, with many process instances being started and tasks being claimed, then the BPEDB tables are heavily utilized. Having too many applications accessing a single set of BPEDB tables increases the chances that those tables become a performance bottleneck.


Simplicity

The third reason for expanding a topology is simplicity. Growing a topology by creating additional application deployment target and messaging target pairs is a repeatable and proven process: create the new clusters, configure each cluster, and include settings for SIBus connectivity. Because each application deployment target has its own specific messaging target, it is much easier to plan for messaging engine capacity and SIBus connectivity than it is to retrofit an existing topology.

Growing a topology by creating additional application deployment target and messaging target pairs is also simple because it is a simple choice: this path always provides a broadly capable level of functionality. On the other hand, it is possible to grow the topology by creating only a new application deployment target and reusing an existing messaging target. Be aware that this alternate path is limited in the number of times it can be repeated; eventually there is a danger of overburdening the single messaging target. With this in mind, the decision to maintain a 1:1 ratio or to allow multiple application deployment targets to share a single messaging target is affected by the load on, and the level of isolation desired for, the messaging target.

How the new expanded topology relates to the original topology

This section illustrates a common example of additional clusters, and shows how the configuration of the new clusters relates to the configuration of the original clusters.

In Figure 3, a new application deployment target (AppTargetCluster2) and a new messaging target (MECluster2) have been added to the existing cell. The new AppTargetCluster2 could instead be configured to use the existing MECluster. This may be suitable if the messaging engines have plenty of capacity, the AppTargetCluster is having memory issues, and there will not be much versioning of the modules (versioning requires that old destinations remain in the system). However, such a shared configuration could lead to over-utilization of the MECluster's messaging engines. It is generally advisable to adhere to a 1-to-1 relationship: create a new messaging target for every new application deployment target.



Figure 3: A new deployment target (AppTargetCluster2) and a new messaging cluster (MECluster2) are created in the existing cell


Figure 3 illustrates that the existing cell contains three clusters (MECluster, SupportCluster, and AppTargetCluster), and that two new clusters are created (MECluster2 and AppTargetCluster2). The new clusters require additional database tables. Because the AppTargetCluster2 is configured for Business Process Choreographer, a new set of BPEDB tables is required. The BPEL applications that are installed to AppTargetCluster have their process template information, and for long-running processes their process instance information, stored in the BPEDB tables. The BPEL applications that are installed to AppTargetCluster2 have this information stored in the BPEDB2 tables.

Figure 4: Each deployment target for Business Process Choreographer requires a unique set of database tables

Each configuration of BPC/HTM has its own BPC Explorer application. The BPC Explorer on AppTargetCluster will only allow you to administer processes and tasks of applications that are deployed to AppTargetCluster and use the BPEDB tables. The BPC Explorer on AppTargetCluster2 will only allow you to administer processes and tasks of applications that are deployed to AppTargetCluster2 and use the BPEDB2 tables. Because the BPC Explorer is a web application, if a single web server is being used, then take care to give each of the BPC Explorer web applications a unique context root. The MECluster2 is hosting three new messaging engines. Each of the messaging engines requires a unique set of database tables. In this example, they are grouped together in MEDB2.


Figure 5: Each messaging engine requires a set of database tables

The MECluster2 is a member of only three of the four service integration buses. It is not a member of the CommonEventInfrastructure_Bus because the existing CEI configuration can be used by the new deployment target. The Common Base Events that are generated via application execution in the new AppTargetCluster2 can be emitted to the existing CEI resources. The CEI’s queue and topic destinations already exist and are hosted in the messaging engine in the original MECluster.


The MECluster2 is a member of the BPC SIBus, and therefore has a messaging engine named MECluster2.000-BPC.<CellName>.Bus. The destinations created to support the BPC configuration on AppTargetCluster2 are distinct from the destinations used to support the BPC configuration on AppTargetCluster. This is very important. Take note of the names assigned to the destinations.


BPC SIBus:

Messaging engine MECluster.000-BPC.<CellName>.Bus (hosted by MECluster) hosts:
• BPEIntQueue_AppTargetCluster
• BPEHldQueue_AppTargetCluster
• BPERetQueue_AppTargetCluster
• BFMJMSAPIQueue_AppTargetCluster
• BFMJMSReplyQueue_AppTargetCluster
• BFMJMSCallbackQueue_AppTargetCluster
• HTMIntQueue_AppTargetCluster
• HTMHldQueue_AppTargetCluster

Messaging engine MECluster2.000-BPC.<CellName>.Bus (hosted by MECluster2) hosts:
• BPEIntQueue_AppTargetCluster2
• BPEHldQueue_AppTargetCluster2
• BPERetQueue_AppTargetCluster2
• BFMJMSAPIQueue_AppTargetCluster2
• BFMJMSReplyQueue_AppTargetCluster2
• BFMJMSCallbackQueue_AppTargetCluster2
• HTMIntQueue_AppTargetCluster2
• HTMHldQueue_AppTargetCluster2

Figure 6: The queue destinations required for the BPC configurations on AppTargetCluster and AppTargetCluster2 are separate and distinct

The MECluster2 is also a member of the SCA SIBuses. Just as the destinations for the two BPC configurations are distinct from each other, so are the destinations used for SCA.


SCA.SYSTEM SIBus:

Messaging engine MECluster.000-SCA.SYSTEM.<CellName>.Bus (hosted by MECluster) hosts:
• WBI.FailedEvent.AppTargetCluster
• sca/<ModuleName>
• sca/<ModuleName>/component/*
• sca/<ModuleName>/export/*
• sca/<ModuleName>/import/*

Messaging engine MECluster2.000-SCA.SYSTEM.<CellName>.Bus (hosted by MECluster2) hosts:
• WBI.FailedEvent.AppTargetCluster2
• sca/<Module2Name>
• sca/<Module2Name>/component/*
• sca/<Module2Name>/export/*
• sca/<Module2Name>/import/*

Figure 7: The destinations used to support SCA on the AppTargetCluster and the AppTargetCluster2 are distinct from each other. In this figure, <ModuleName> is the name of an SCA module deployed to AppTargetCluster, and <Module2Name> is the name of an SCA module deployed to AppTargetCluster2.

The destinations used for Failed Events are also distinct. However, one different aspect of the Failed Event messages is that the messages are ultimately picked up by the Failed Event Manager's message-driven beans and stored in a single set of database tables. This allows a single Failed Event Manager application to handle all of the Failed Events for the entire cell.

In summary, this example illustrates that the creation of additional clusters provides both additional capacity and isolation. Each cluster uses separate databases and separate JMS destinations, and can be used to increase scalability without compromising isolation. The use of additional clusters should be considered as a first option when scaling up deployment environments.


Costs and limitations associated with creating additional clusters

Creating additional clusters incurs costs: costs for creating and maintaining new database tables, for adding memory for each of the new server processes, and for the full battery of performance testing and tuning required for the new set of clusters.

Further, you must continue to observe cell-scoped limitations; creating an additional set of clusters does not provide a way around them. For example, a current SCA limitation is that every module deployed in a cell must be uniquely named. If a module named <ModuleName> is already deployed to AppTargetCluster, then another module with the same <ModuleName> cannot be deployed to AppTargetCluster, AppTargetCluster2, or any other deployment target in the cell. In addition, a module named <ModuleName> can only be deployed to a single target, meaning that it can be deployed to either AppTargetCluster or AppTargetCluster2, but definitely not both.

This cell-wide SCA limitation on module names can be visualized via the module's destinations. When an application containing an SCA module is deployed, the JMS destinations for SCA are automatically created, and the names of those JMS destinations are based on <ModuleName>. Therefore, if a module with <ModuleName> has already been deployed, then destinations based on that specific module name already exist, and a subsequent deployment of an application with that same module name fails.

General tasks for creating additional clusters

If your current hardware has sufficient capacity, then the new clusters can be created on the existing hardware, product installation, and nodes. If additional hardware is required, first obtain the hardware, install the products on it, create custom nodes, and federate them to the deployment manager. At this point, additional clusters can be created on the new custom nodes.

Once the new clusters have been created, configure them for their intended purposes (a scripted sketch of the cluster-creation steps follows the list below). Examples of configuration that may need to be done to a cluster include:

• SCA
• Business Process Choreographer
• CEI
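The following wsadmin (Jython) sketch outlines the cluster-creation and bus-membership steps for a new pair of clusters, using assumed names (AppTargetCluster2, MECluster2, Node01, cell MyCell). It deliberately omits the message store options and the product-specific configuration of SCA, BPC, and CEI, which you would perform afterward with the administrative console or the product's own configuration scripts:

    # Create the new application deployment target and messaging cluster
    AdminTask.createCluster('[-clusterConfig [-clusterName AppTargetCluster2]]')
    AdminTask.createCluster('[-clusterConfig [-clusterName MECluster2]]')
    AdminTask.createClusterMember('[-clusterName AppTargetCluster2 '
        '-memberConfig [-memberNode Node01 -memberName ATC2_member1]]')
    AdminTask.createClusterMember('[-clusterName MECluster2 '
        '-memberConfig [-memberNode Node01 -memberName ME2_member1]]')

    # Make the new messaging cluster a member of the buses it must host.
    # Three buses are joined here, matching Pattern 1 (CEI is shared, so
    # the CommonEventInfrastructure_Bus is not joined). Data store
    # parameters are omitted for brevity.
    for bus in ['SCA.SYSTEM.MyCell.Bus', 'SCA.APPLICATION.MyCell.Bus',
                'BPC.MyCell.Bus']:
        AdminTask.addSIBusMember('[-bus ' + bus + ' -cluster MECluster2]')

    AdminConfig.save()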


Depending on the types of configuration that are necessary for each cluster, additional database considerations exist. A number of new tables and new schemas may be required. The following section provides more specific information on the various common patterns for creating additional clusters.

Common patterns for creating additional clusters

There are a number of patterns for creating additional clusters. This section addresses the two most important patterns to recognize. The growth patterns are the same for both Process Server and WebSphere ESB topologies.

The process of choosing the correct pattern for additional clusters is exactly the same as choosing the correct pattern for the original deployment. This includes meeting the run time requirements, failover requirements, and scalability and volume requirements of the application. You must also determine how much isolation is necessary between the old and new applications, and whether there is any configuration that the old and the new clusters should share. For example, depending on utilization, it may be useful to have a single CEI configuration that can handle the Common Base Events emitted from both the old clusters and the new clusters.

Note also that the patterns presented here do not prescribe a particular arrangement of hardware or nodes. There are many valid arrangements. The separation of the logical architecture enabled by WebSphere Application Server, through the idea of separating nodes, deployment targets, and servers, allows maximum use of hardware resources no matter what the scaling and isolation requirements might demand.

A final consideration is service integration bus connectivity. An additional bus member introduces another connection point to the bus. If you want to control how the connection points are used, then you may need to specify guidance for how the connection factories and activation specifications actually connect to the SIBus. This "guidance" can be specified via target properties on the connection factories and activation specifications, as sketched below; target properties are discussed in detail later in this article.
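As a preview of that discussion, the sketch below shows where target properties live: they are ordinary parameters on the SIB JMS resources. All names are assumed examples; the sketch creates a cell-scoped connection factory whose connections prefer the MECluster2 bus member, that is, the bus member hosting the application's own destinations:

    # Create a JMS connection factory whose connections prefer the
    # MECluster2 bus member (target/targetType/targetSignificance are
    # the "guidance" properties described later in this article)
    cellScope = AdminConfig.getid('/Cell:MyCell/')
    AdminTask.createSIBJMSConnectionFactory(cellScope,
        '[-name BPApp1CF -jndiName jms/BPApp1CF '
        '-busName SCA.SYSTEM.MyCell.Bus '
        '-target MECluster2 -targetType BusMember '
        '-targetSignificance Preferred]')
    AdminConfig.save()

Activation specifications accept the same target properties, so the connection behavior of message consumers can be controlled in the same way.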


Figure 8 provides an overview of two common patterns. These are not the only patterns that are possible, but they are the most recommended.

Pattern 1: New WPS/WESB deployment target with remote messaging
• Number of new clusters: 2
• New BPC/HTM: Yes, for WPS
• New messaging engines: Yes, remote (non-partitioned destinations)
• Common Event Infrastructure: Share existing
• Pick this because: Most common. Pick this if you can utilize a central CEI or do not require CEI.

Pattern 2: New WPS/WESB deployment target with remote messaging and remote support
• Number of new clusters: 3
• New BPC/HTM: Yes, for WPS
• New messaging engines: Yes, remote (non-partitioned destinations)
• Common Event Infrastructure: New
• Pick this because: You must maintain isolation of CEI events, or need to relieve a CEI-based bottleneck.

Figure 8: This table presents two common patterns of additional clusters.

Pattern 1: new Process Server and WebSphere ESB deployment target with remote messaging

Pattern 1 (Figure 9) is expected to be the most common pattern of growth (it was used in How the new expanded topology relates to the original topology). Note that within the newly expanded cell, some items are unique per deployment target, while some items are shared throughout the cell. The shared items include:

• The Common Database (WPRCSDB) is cell-scoped and is shared among all deployment targets.

• The support applications in the SupportCluster are shared among all deployment targets. Both the AppTargetCluster and the AppTargetCluster2 have been configured to route their Common Base Events to this instance of the CEI. In addition, the Business Rules Manager administers the business rules included in all deployment targets.

The unique items include:

• New deployment target for applications.
• Each of the new messaging engines has a unique set of database tables.
• For Process Server, the new BPC deployment target has a unique set of BPEDB tables.



Figure 9: Pattern 1: new Process Server deployment target with remote messaging

Pattern 2: new deployment target with remote messaging and remote support

This pattern is similar to Pattern 1, with the addition of a cluster for support functions such as the CEI. The shared items include:


• The Common Database (WPRCSDB) is cell-scoped and is shared among all deployment targets.

• The Business Rules Manager administers the business rules included in all deployment targets. (The BRM reads data from the Common Database.)

The unique items include:

• New deployment target for applications.
• Each of the new messaging engines has a unique set of database tables.
• The new BPC deployment target has a unique set of BPEDB tables.
• The Common Base Events emitted from the new deployment target are routed to a new CEI configuration.



Figure 10: Pattern 2: new deployment target with remote messaging and remote support


For the distributed operating system environments, choosing between Pattern 1 and Pattern 2 depends on whether the CEI infrastructure can be shared. Of course, there are patterns not illustrated here. In general, any pattern can be characterized by which functionalities and roles are shared, and which are unique to the new clusters.

Configurations that should not be used

This section illustrates one example pattern that should always be avoided and one that has limited applicability and is not recommended.

Discouraged pattern #1: one application deployment target with multiple messaging targets

Later sections of this article discuss the challenges created when too many destinations are hosted in a single messaging deployment target. It may therefore be tempting to retain a single application deployment target while creating additional messaging deployment targets, creating a ratio of application deployment targets to messaging targets of 1 to n. Figure 11 illustrates such a configuration; do not use it.



Figure 11: A single application deployment target using two messaging deployment targets

This pattern cannot work because a single Process Server or WebSphere ESB deployment target can only configure its destinations on a single member of an SIBus. This means that if you have one application deployment target and two messaging deployment targets, you cannot choose to put half of the destinations on one messaging target and the other half on the other. Even if you were able to force this configuration, you would eventually develop problems: every time you wanted to install or uninstall an application, you would have to reconfigure the specification of the location of the destinations used by the application deployment target. This becomes impossible to manage.


Discouraged pattern #2: multiple application deployment targets with a single messaging target

Previously, we recommended that the ratio of application deployment targets to messaging targets be 1:1. Discouraged pattern #1 had a ratio that must be avoided: one application deployment target to many messaging targets. Discouraged pattern #2 has the inverse ratio, many application deployment targets to a single messaging target, and should also be avoided.

It is possible to configure a single messaging target as the host of the messaging engines used by many application deployment targets. Figure 12 shows a configuration that has limited applicability and limited growth potential: multiple application deployment targets are using a single messaging deployment target.


Figure 12: Multiple application deployment targets using a single messaging deployment target.

This pattern may be tempting in several situations:


• If the deployed applications are not explicitly making heavy use of messaging engines (for example, they are using only microflows and synchronous SCA communication), then sharing messaging engines among all application deployment targets may seem acceptable.

• If there is a hardware limitation such that you need to limit the total number of JVMs. For example, there is not enough memory remaining on the existing hardware to support additional JVMs.

This pattern suffers from limited applicability and limited growth potential. When considering this pattern, ask the following:

(1) Will these applications, and all future versions of these applications, continue to use only microflows and synchronous interaction styles?

(2) How frequently are the applications versioned, and are the prior versions of the applications left on the system for some time?

(3) Are shared messaging engines acceptable given the isolation requirements?

(4) How will the configuration of clusters for future new applications be determined?

This pattern makes a bold assumption that messaging engines will not be heavily used by any individual application. However, SIBus destinations for SCA still exist for every application, and SIBus destinations for BPC and HTM still exist for every business process container and human task container configuration. Therefore, if a single messaging target hosts the messaging engines for many application deployment targets, there may be a large number of destinations on the SIBuses. The resulting slowed startup time and failover time of the messaging engines may not seem particularly important if no applications are using asynchronous communications. However, perhaps a future version of an application is reworked and then needs to use some asynchronous communication. If multiple future versions of applications are reworked to require asynchronous communication, then suddenly many applications depend on, and compete for, messaging resources.

Depending on versioning strategy, the old version of an application may remain even after the new version is deployed. In this case, the new version of the application uses a unique module name, and when the new version was deployed, SCA destinations were created for this unique module name. Be aware that versioning of deployed applications will result in an increasing number of SCA destinations. See Messaging engine startup time for details on the effect of a large number of destinations on an individual messaging engine.

Are shared messaging engines acceptable given the isolation requirements? The destinations used by applications deployed to separate application deployment targets are uniquely named. In other words, the destinations used by an application on AppDeploymentCluster1 are not the same destinations used by an application on AppDeploymentCluster2; the destinations have different names.


However, in this deployment pattern, those separate destinations are all hosted by a single messaging engine, which increases the load on that messaging engine.

Another factor is memory. Each application imposes load on the messaging engine; in some cases that load may be negligible, in others considerable. Eventually the total load increases and impacts throughput, and finally the load from one more application may push the total past a critical point: the heap runs out, resulting in an out-of-memory exception.

The first time this pattern is used, there are two application deployment targets using a single messaging target. The second time, there are three. The use of this pattern cannot continue indefinitely. Because it is commonly recommended that all configuration and deployment be scripted, it seems logical that the investment in scripting should be directed to a repeatable pattern. For example, the 1:1 ratio of application deployment target to messaging target is a repeatable and sustainable pattern.

Patterns summary

Maintaining a 1:1 ratio of application deployment targets to messaging targets is the general recommendation. When there is deviation from this ratio, additional considerations and tradeoffs must be addressed. As the cell expands, so do the challenges presented by the topology. Some of the new challenges are rooted in the service integration bus. The next sections of this article address two major considerations in the use of the bus: startup time of a messaging engine, and service integration bus connectivity.

Messaging engine startup time

The startup time of a single messaging engine depends on multiple factors, but the factor of interest for this article is the total number of destinations hosted by the messaging engine. While this says little about the throughput and capacity of the messaging engine, it matters because startup times are important not only during maintenance windows, but also during failover situations.

How to determine startup time

To find the startup time for a specific messaging engine, consult the SystemOut.log file of the cluster member where the messaging engine is active. The startup time is the difference between the timestamp at which the messaging engine entered state Starting and the timestamp at which it reached state Started. In the cluster member's SystemOut.log, you will find output that looks like this:


...SibMessage A [:] CWSIC2001I: Messaging connections are being accepted.
...
...SibMessage I [SCA.SYSTEM.WPSCell.Bus:MECluster1.000-SCA.SYSTEM.WPSCell.Bus] CWSID0016I: Messaging engine MECluster1.000-SCA.SYSTEM.WPSCell.Bus is in state Joined.
...
...SibMessage I [SCA.SYSTEM.WPSCell.Bus:MECluster1.000-SCA.SYSTEM.WPSCell.Bus] CWSID0016I: Messaging engine MECluster1.000-SCA.SYSTEM.WPSCell.Bus is in state Starting.
...
...SibMessage I [SCA.SYSTEM.WPSCell.Bus:MECluster1.000-SCA.SYSTEM.WPSCell.Bus] CWSIS1538I: The messaging engine, ME_UUID=511A8E1C8E8C85B7, INC_UUID=7304730417147CBD, is attempting to obtain an exclusive lock on the data store.
...
...SibMessage I [SCA.SYSTEM.WPSCell.Bus:MECluster1.000-SCA.SYSTEM.WPSCell.Bus] CWSIS1537I: The messaging engine, ME_UUID=511A8E1C8E8C85B7, INC_UUID=7304730417147CBD, has acquired an exclusive lock on the data store.
...
...SibMessage I [SCA.SYSTEM.WPSCell.Bus:MECluster1.000-SCA.SYSTEM.WPSCell.Bus] CWSID0016I: Messaging engine MECluster1.000-SCA.SYSTEM.WPSCell.Bus is in state Started.

The output in the log file indicates the process by which the messaging engine "wakes up". As the servers start, there are messages indicating that messaging connections are being accepted, and there is a message about each messaging engine being in state Joined. Each messaging engine is a singleton, so at this point the HA Manager selects one instance of each messaging engine to proceed, and for that instance there is a message saying that it is in state Starting. The messaging engine then attempts to obtain an exclusive lock on its data store. After the exclusive lock is acquired, the instance of the messaging engine proceeds to state Started. This last stage of processing requires that the ME publish information about its destinations so that they can be discovered and located by other servers, and this processing is dependent on the number of destinations. If there is a large number of destinations, the cluster member's JVM may be open for e-business for some time before the messaging engine is in state Started. As a side note, for those running over to their test machines to try this right now: the very first start (ever) of a messaging engine takes slightly longer than a restart.
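If you measure this often, the subtraction can be scripted. The following is a minimal sketch, not from the original article, that assumes the default SystemOut.log timestamp format (for example [1/7/09 10:15:30:123 CST]) and the CWSID0016I state messages shown above; adjust the pattern to match your locale's log format.

import re
from datetime import datetime

# Minimal sketch: derive a messaging engine's startup time from SystemOut.log.
# Assumes the default timestamp format, e.g. [1/7/09 10:15:30:123 CST].
TIMESTAMP = re.compile(r"^\[(\d+/\d+/\d+ \d+:\d+:\d+:\d+) \w+\]")

def me_startup_seconds(log_path, me_name):
    starting = started = None
    for line in open(log_path):
        # Only the CWSID0016I state messages for the engine of interest
        if "CWSID0016I" not in line or me_name not in line:
            continue
        match = TIMESTAMP.match(line)
        if match is None:
            continue
        stamp = datetime.strptime(match.group(1), "%m/%d/%y %H:%M:%S:%f")
        if "is in state Starting" in line:
            starting = stamp
        elif "is in state Started" in line:
            started = stamp
    if starting is None or started is None:
        return None  # the engine never went through Starting/Started in this log
    return (started - starting).total_seconds()

print(me_startup_seconds("SystemOut.log",
                         "MECluster1.000-SCA.SYSTEM.WPSCell.Bus"))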

Number of destinations impacts startup time

The definition of what is an acceptable or unacceptable startup time is ultimately determined by the customer and the particular situation. Therefore it is difficult to prescribe exact numbers that apply to every case.


The startup time of a messaging engine increases with each additional destination, and the increase is not linear. However, this is an area in which significant improvements are expected soon.


Figure 13: Messaging engine startup time increases significantly with each additional destination that it is hosting

This behavior is particularly notable on the SCA.SYSTEM service integration bus. Each SCA module that is installed to an application deployment target triggers the automatic creation of some number of destinations on the SCA service integration bus. The number of destinations per module depends on the number of imports and exports defined for the module; this might be as few as four, or more than 20. As a result, if there are tens of modules deployed to a single server or cluster, the messaging engine hosting the SCA destinations for that server or cluster may be hosting hundreds of queue destinations. The accumulation of so many queue destinations on a single messaging engine makes the startup time and failover time for that messaging engine increasingly long. However, once the messaging engine is started, the larger number of destinations does not negatively impact its run time capabilities.

The current practice for coping with slow messaging engine startup times due to large numbers of destinations is to distribute the destinations across multiple bus members. However, each deployment target will only use a single messaging target per SIBus. For example, you may have application deployment target 1 configured to use the SCA messaging engine on messaging deployment target 1. All of the SCA queue destinations for the SCA modules deployed to application deployment target 1 will be hosted by the messaging engine on messaging deployment target 1. In order to have any SCA queue destinations hosted by a different messaging engine, you must first create and configure a new messaging deployment target (messaging deployment target 2), and then create and configure a new application deployment target


(application deployment target 2) that uses messaging deployment target 2. When you deploy new SCA modules to application deployment target 2, the new queue destinations will be hosted by the new messaging engine, as shown in Figure 14.


Figure 14: Module1 is deployed to Deployment Target 1, and the destinations for Module1 are hosted by the messaging engine active in Messaging Target 1. Module2 is deployed to Deployment Target 2, and the destinations for Module2 are hosted by the messaging engine active in Messaging Target 2.

As a result of creating an additional application deployment target and messaging deployment target, the cell faces increased administrative overhead and maintenance. The long startup time of a single messaging engine is traded for the work associated with creating, configuring, and maintaining additional JVMs.


An important restriction that comes into play with this configuration is that a single SCA module must only be deployed once per cell, meaning that an SCA module having a specific name can only be deployed to a single deployment target. This is because at the time that the SCA module is deployed, a number of SCA destinations are automatically created. The names of these destinations are based on the SCA module names. Finally, another result of creating these additional clusters is that there is now more than one member of the SCA SIBuses. This opens the door to SIBus connectivity considerations, which are addressed in the next section.

Service integration bus connectivity

When there are multiple application deployment targets and messaging deployment targets, there are multiple members per service integration bus. In this environment, there are additional considerations for service integration bus connectivity.

Service integration bus background

A service integration bus provides location-transparent messaging, meaning that an application can connect to any bus member and send to (or receive from) a destination deployed to any bus member. The destination does not need to be deployed to the same bus member that the application connected to; the bus routes the message to (or from) the destination. In addition, the default settings of the SIBus resources provide workload balancing, so connections are deliberately spread across the bus members, but with preference for the local server or host.

In the non-extended golden topology, the location transparency of the bus is not exploited because there is only one bus member: all destinations are deployed to it and all applications connect to it. With the extended topologies described in this article, there are multiple bus members, so by default the bus workload-balances connections across the bus members and transparently routes messages to destinations. This may not necessarily be what you want; in general, you can benefit from applying at least some control to the connection patterns.

There are two approaches to controlling the connection patterns to the bus. One is configuring target properties; the other is adding messaging engines for each member of the ApplicationTargetCluster. These approaches are described in the following sections: Target properties and Controlling bus connection patterns by adding local messaging engines.

For additional understanding of SIBus connectivity, the developerWorks article "Configuring efficient messaging in multicluster WebSphere Process Server cells" by Matt Roberts describes the best practices for configuring the IBM WebSphere Service Integration Bus component of WebSphere Process Server in a large-scale environment in which there are multiple clusters. The article references many critical fixes, specific


configuration steps, and scripting. The article is entirely focused on Service Integration Bus, and provides a critical understanding of this fundamental component. A link to this article is provided in Appendix A: References.

Target properties

Application resources that make connections to the service integration buses, such as connection factories and activation specifications, can use additional properties known as target properties. A target property configuration prescribes the bus member or messaging engine to which the connection should be established. With no configured target properties to constrain where applications can connect, the bus will workload-balance connections, without regard for which destination the import or export uses or where it is deployed. The target properties allow you to specify the name of a target and the type of the target; for example, "the target is a bus member called MECluster1". The SIBus code attempts to locate that target when connecting to the bus. Some examples of targets are:

• Messaging engine: The connection will target the named messaging engine.

• Bus member: The connection will target a messaging engine that is associated with the named bus member.

The property target significance can be set to Preferred or Required:

• Preferred: The connection will be attempted to the specified target. If the specified target is not available, but other targets on the same service integration bus are available, then the connection may be established to an alternate target.

• Required: The connection will be attempted to the specified target. If the specified target is not available, then the connection will fail. The behavior after the exception occurs depends on the caller. If the caller itself was invoked by asynchronous SCA, this may result in a failed event. If the caller was synchronously invoked, this exception may trigger a rollback that propagates back through the synchronous call chain.

The default is Preferred, which means that if the specified target is not available, it is still acceptable to connect to something else that is not the target. For example, if the ME in MECluster1 is not available, it is acceptable to connect to ME2 in MECluster2 instead. If the target significance is set to Required, then only the specified target is acceptable; if it is not available, the connection should fail, because you do not want the application to connect anywhere else. Target properties are associated with:


• A connection factory: JMS connection factories are configurable resources that have target properties.

• An activation specification: JMS and SIB resource adapter activation specifications have configurable target properties.

• An SCA binding: SCA bindings programmatically create connections to the bus. Prior to iFix JR29484 it was not possible to configure target properties for them. JR29484 made this possible by using a WebSphere variable (SCA_TARGET_SIGNIFICANCE) and introduced a default behavior in which the connection prefers the member of the SIBus that hosts the destination the SCA binding intends to send to or receive from.

With the exception of the SCA iFix, target properties are not automatically configured when deploying a module. The default behavior is that a connection is made to any available messaging engine, without preference. If you want different behavior, you must configure the target properties manually.
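Target properties can be set in the administrative console or scripted. The following wsadmin (Jython) sketch is illustrative only and is not taken from this article; the cluster and resource scope shown are hypothetical, and you should verify the command parameters against your WebSphere version (for example, with print AdminTask.help('modifySIBJMSConnectionFactory')).

# wsadmin -lang jython sketch: pin an application's JMS resources to the
# bus member MECluster1, following the advice in this article
# (Preferred for senders, Required for consumers).

scope = AdminConfig.getid('/ServerCluster:AppTargetCluster1/')

# Take the first connection factory and activation spec defined at this
# scope; a real script would select the specific resources by name.
cf   = AdminTask.listSIBJMSConnectionFactories(scope).splitlines()[0]
spec = AdminTask.listSIBJMSActivationSpecs(scope).splitlines()[0]

# Senders: prefer the bus member that hosts the destinations.
AdminTask.modifySIBJMSConnectionFactory(cf,
    ['-target', 'MECluster1',
     '-targetType', 'BusMember',
     '-targetSignificance', 'Preferred'])

# Consumers: only connect where the destination actually lives.
AdminTask.modifySIBJMSActivationSpec(spec,
    ['-target', 'MECluster1',
     '-targetType', 'BusMember',
     '-targetSignificance', 'Required'])

AdminConfig.save()

The following examples illustrate the use of target properties in various environments with various deployments.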

The example application: BusinessProcessApplication1

This application is used in all of the following examples. BusinessProcessApplication1 contains two modules, Module1 (M1) and Module2 (M2). Module1 has an export (E1) and an import (I1). Module2 has an export (E2). Module1's import (I1) is wired to Module2's export (E2).


Figure 15: Module1 (M1) contains Export1 (E1) and Import1 (I1). Module2 (M2) contains Export2 (E2). Import1 is wired to Export2.

There are a number of SCA destinations associated with these modules. These destinations include:

• Module1's destination (sca/Module1, illustrated as destination M1)
• Module1's export destination (sca/Module1/export/E1, illustrated as destination E1)
• Module1's import destination (sca/Module1/import/I1, illustrated as destination I1)
• Module2's destination (sca/Module2, illustrated as destination M2)
• Module2's export destination (sca/Module2/export/E2, illustrated as destination E2)


Figure 16: Some of the destinations used by Module1 and Module2. Module1's import I1 has a forward routing path to Module2's export E2. E2 has a forward routing path to Module2's module destination M2.

Module1's destination I1 has a forward routing path (FRP) to destination E2. Destination E2 has an FRP to destination M2 (the module destination for module M2). A message put to I1 will actually be routed to M2 – it is not stored on I1.
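As a toy illustration only (this is not SIBus code, and the routing table shown is a simplification), forward routing path resolution amounts to following a chain of destinations until one with no onward path is reached:

# Toy model of forward routing path (FRP) resolution; not SIBus code.
frp = {
    'sca/Module1/import/I1': 'sca/Module2/export/E2',  # I1 -> E2
    'sca/Module2/export/E2': 'sca/Module2',            # E2 -> M2
}

def resolve(destination):
    # Follow the FRP chain until a destination with no onward path remains.
    while destination in frp:
        destination = frp[destination]
    return destination

print(resolve('sca/Module1/import/I1'))  # prints sca/Module2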

Example 1: BusinessProcessApplication1 in a non-expanded topology

If BusinessProcessApplication1 is deployed to an application deployment target with a single messaging deployment target, then the SCA destinations are automatically created on the SCA.SYSTEM SIBus. These destinations include:

• Module1's destination (sca/Module1, illustrated below as destination M1)
• Module1's export destination (sca/Module1/export/E1, illustrated as destination E1)
• Module1's import destination (sca/Module1/import/I1, illustrated as destination I1)
• Module2's destination (sca/Module2, illustrated as destination M2)
• Module2's export destination (sca/Module2/export/E2, illustrated as destination E2)

This is illustrated in Figure 17.



Figure 17: BusinessProcessApplication1 (containing Module1 and Module2) is deployed to ApplicationTargetCluster1 (ATC1). During the application deployment process, a number of SCA destinations are automatically created on the SCA.SYSTEM SIBus.

Because there’s only one messaging deployment target (bus member, in this case MECluster1) all the destinations are deployed to it and all connections will be made to it. Therefore no messages need to be routed to other MEs, because everything is local to the messaging deployment target. In this case, the definition of any target properties would be of little value, because there are no alternate targets to which a connection could be established. In a topology with multiple messaging targets there are other possible outcomes.

Example 2: BusinessProcessApplication1 is entirely deployed to a single target in a multi-clustered topology

In this example, the topology consists of multiple sets of clusters. In addition, both of the modules (Module1 and Module2) of the example application are deployed to a single application deployment target (ApplicationTargetCluster1). There are no cross-cluster invocations. All of the SCA destinations required for Module1 and Module2 are hosted in the bus member MECluster1, as shown in Figure 18.


Figure 18: How the application and its modules are deployed to these clusters. BusinessProcessApplication1 (Module1 and Module2) is deployed to ApplicationTargetCluster1 (ATC1). The SCA destinations for Module1 and Module2 are hosted by the bus member MECluster1.

This example illustrates six variations, as shown in the following table:

Variation  Target props           ME on MECluster1  ME on MECluster2  Connection made
2A         none                   available         available         MECluster1
2B         none                   available         available         MECluster2
2C         Preferred MECluster1   available         available         MECluster1
2D         Preferred MECluster1   Not available     available         MECluster2
2E         Required MECluster1    available         available         MECluster1
2F         Required MECluster1    Not available     available         Exception

With no target properties specified, the connection to the bus can be routed to either ME1 or ME2. Whichever ME is connected to, upon sending the message to I1, the bus resolves the forward routing paths, and the message is routed to destination M2.

Variation 2A: If no target properties are set, and the SCA import connects to ME1, then the destination M2 is local and the message is enqueued to the destination, as shown in Figure 19.


Figure 19: No target significance properties are set. The connection is made to bus member MEC1 (as indicated by the shaded area around MEC1). All of the destinations are found locally.


Variation 2B: Alternatively, if no target properties are set and the connection is made to ME2, then after the forward routing paths are resolved, destination M2 turns out to be deployed to the other bus member (ME1), so the message must be routed to ME1. To do this store-and-forward, ME2 places the message onto a remote queue point (RQP) and then transmits it to ME1. Note that the message passes through the RQP very quickly. This is illustrated in Figure 20.

Figure 20: No target properties are set. The connection is made to bus member MEC2 (as indicated by the shaded area around MEC2). After all the forward routing paths have been resolved, the message must reach destination M2. Because destination M2 is actually located on ME1, a remote queue point is created on ME2.


Variation 2C: If target properties are set such that the bus member MECluster1 is Preferred, then provided ME1 is available, the import will always connect to MEC1 and the message will always be handled locally by ME1. The forward routing path on I1 will resolve to E2 and finally to M2, which is also deployed to ME1. The message will be delivered locally to M2. Note that this is similar to Variation 2A.

Figure 21: Target properties specify that the bus member MECluster1 is Preferred. The messaging engine ME1 is available. The connection is made to bus member MEC1, as indicated by the shaded area around MEC1.


Variation 2D is shown in Figure 22. If target properties are set such that bus member MECluster1 is Preferred, and ME1 is not available (for example, it has been stopped or is in a failover), then the import I1 can connect to bus member MEC2, where ME2 will handle the message. On ME2, the forward routing path will resolve to destination M2. Because destination M2 is on ME1, and ME1 is not available, an RQP will be created. The message is stored in the RQP, and when ME1 and destination M2 become available again, the message is forwarded from the RQP to destination M2. The message stays on the RQP for as long as ME1 is not available.

Figure 22: Target properties are set so that bus member MECluster1 is Preferred. However, that messaging engine is not available. The connection is established to MEC2, as indicated by the shaded area.


Variation 2E: If target properties are set such that bus member MECluster1 is Required, then if ME1 is available, the import will connect to MEC1 and the message will be handled locally in ME1. Note that the result is similar to Variation 2A.

Figure 23: The target properties are set so that bus member MECluster1 is Required, and ME1 is available. The connection is established to MEC1, as indicated by the shaded area.


Variation 2F is illustrated in Figure 24. If target properties are set such that bus member MECluster1 is Required and ME1 is not available, then the connection will fail and BusinessProcessApplication1 will roll back.

Figure 24: The target properties are set so that bus member MECluster1 is Required. However, MECluster1 is not available. Therefore, a connection cannot be made.

Summary of example 2

The variations of example 2 illustrate that target properties and the availability of messaging engines dictate the route that a message takes. In these variations, all of the modules were deployed to a single cluster and all of those modules' destinations were hosted by a single bus member. In the next set of variations (example 3), the modules are deployed to different application deployment targets: the destinations for Module1 are on a different messaging target than the destinations for Module2.

Example 3: BusinessProcessApplication1 is deployed across multiple targets in a multi-clustered topology

In this example, BusinessProcessApplication1 is not deployed in its entirety to ApplicationTargetCluster1; instead it is split so that some modules of BusinessProcessApplication1 are in ATC1 and others are in ATC2. Module M1 is deployed to ATC1 and module M2 to ATC2.


Figure 25: Module1 (M1) is deployed to ApplicationTargetCluster1 (ATC1). Module1's destinations are hosted by the bus member MECluster1. Module2 (M2) is deployed to ApplicationTargetCluster2 (ATC2). Module2's destinations are hosted by the bus member MECluster2.

The destinations for module M1 are deployed to MECluster1 and those for module M2 to MECluster2. Again, there are multiple variations to this example:

Variation  Target props           ME on MECluster1  ME on MECluster2  Connection made
3A         none                   available         available         MECluster1
3B         none                   available         available         MECluster2
3C         Preferred MECluster1   available         available         MECluster1
3D         Preferred MECluster1   available         Not available     MECluster1
3E         Preferred MECluster1   Not available     available         MECluster2
3F         Required MECluster1    available         available         MECluster1
3G         Required MECluster1    available         Not available     MECluster1
3H         Required MECluster1    Not available     available         Exception


With no target properties specified, the connection to the bus can be routed to either bus member MEC1 or MEC2. Whichever bus member is chosen, upon sending the message to I1, the bus will resolve the forward routing paths, and the message will be routed to destination M2. Even though the modules are deployed to different clusters, the flow is still the same.

Figure 26: Even though the modules are now deployed to different clusters, the flow is still the same.


Variation 3A: If no target properties are set, the messaging engines on bus members MEC1 and MEC2 are both available, and the SCA import connects to ME1, then destination M2 is hosted by ME2. Therefore, the message is enqueued to an RQP and then forwarded to destination M2 on ME2. Note that the usage of the RQP is very brief. This is shown in Figure 27.

Figure 27: The connection is made to MEC1, as indicated by the shaded area. The forward routing paths resolve to Module 2's module destination (M2). Because M2 is hosted by a different bus member (MEC2), then a remote queue point (RQP) is created on ME1, and from there the message is forwarded to destination M2.


Variation 3B: If no target properties are set and the connection is made to MEC2, then after the forward routing paths are resolved, destination M2 turns out to be hosted on the same messaging engine to which the connection is already established. Destination M2 is local and the message is processed locally. This is simply good fortune in this particular example; if there were additional bus members in the topology, they would need to use an RQP to reach ME2, just as ME1 does.

Figure 28: The connection is made to MEC2, as indicated by the shaded area. The forward routing paths resolve to Module 2's module destination (M2). Because M2 is hosted by the same messaging engine (ME2) to which the connection has been established, the message can be processed locally. In this case, a remote queue point is not necessary.


Variation 3C: Target significance is set to Preferred for MECluster1. The messaging engine on MECluster1 is available, and the connection is made to MECluster1. Because the destination M2 is hosted by ME2, the message will be enqueued to an RQP, and then forwarded to destination M2 on ME2. The usage of the RQP is very brief.

Variation 3C is similar to Variation 3A.

Variation 3D: The target significance is set to Preferred for MECluster1. MECluster1 is available, but MECluster2 is not available.

Because the destination M2 is hosted by ME2, the message will be enqueued to an RQP. Because the messaging engine in MECluster2 is not available, the message cannot immediately be forwarded to destination M2 on ME2. However, as soon as ME2 becomes available, the message will be forwarded.

Variation 3D is simply Variation 3C (and Variation 3A) with a small delay.

Variation 3E: Target significance is set to Preferred for MECluster1. The messaging engine on MECluster1 is not available, and the connection is made to MECluster2. After the forward routing paths are resolved, destination M2 turns out to be hosted on the same messaging engine to which the connection is already established. Destination M2 is local and the message is processed locally; this is simply good fortune in this particular example.

Note that Variation 3E is similar to Variation 3B.

Variation 3F: Target significance is set to Required for MECluster1. The messaging engine on MECluster1 is available. The connection is made to ME1. The message is enqueued to an RQP and then forwarded to destination M2 on ME2. The usage of the RQP is very brief.

Variation 3F is similar to Variation 3A and Variation 3C.

Variation 3G: The target significance is set to Required for MECluster1. The messaging engine on MECluster1 is available, but MECluster2 is not available.

The connection will be made to MEC1. Because the destination M2 is hosted by ME2, the message will be enqueued to an RQP. Because the messaging engine in MECluster2 is not available, the message cannot immediately be forwarded to destination M2 on ME2. However, as soon as ME2 becomes available, the message will be forwarded.

Variation 3G is similar to Variation 3D (and similar to Variations 3A, 3C, and 3F with a small delay).


Variation 3H: Target significance is set to Required for MECluster1. The messaging engine on MECluster1 is not available. The connection cannot be established and an exception occurs.

Summary of target properties examples

The lesson to take away from these examples is to understand the location of the destinations, where the connections will be established, and whether a target significance of Preferred or Required will provide the behavior that is expected during the flow of the application. The expected behavior is the route that the message must take to get to its destination. This may mean that the application expects an exception if no connection is established, or it may mean that the application does not care about the route of the message as long as the message gets to its destination. The route of the message is important to message order, as described in a following section.

When are target properties required?

In general, target properties are not required. Target properties and the availability of the messaging engines dictate the route that a message takes; without target properties, a message is still routed to its destination, although the route may vary. Be aware that a variation in route is often a desirable thing, as opposed to a failure. Specific routes may be required for performance reasons: optimizing for performance implies optimizing to the shortest possible path, which in this case implies a direct route for messages. Specific routes are also necessary if the order of message delivery is important, such as in event sequencing (see Additional messaging engines and Event Sequencing (ES)).

Consuming messages

The variations in examples 2 and 3 focused on sending messages. To complete the picture, the receipt of messages from a destination must be considered. It is always better for a consumer to connect to the ME that hosts the destination to be received from, so setting target significance to Required is advised in this case. This generally affects activation specification resources (both the J2C activation specifications used by SCA and the JMS activation specifications used by Business Process Choreographer) and connection factories. The rationale for this advice is that if the messaging engine that hosts the destination is available, you should connect to it; if it is not available, then do not connect. There is no point connecting to a different messaging engine (one that does not host the destination) and then attempting to receive from the destination, because if the hosting ME is not available, neither is the destination.

Order of message arrival

The order of the messages as they arrive at their ultimate destination may be impacted by the SIBus connection. If it is important to maintain the order of message arrival, then you must regulate the route that the message takes to its destination. (This message "route" has been mentioned multiple times. The route is the path that the message travels on its way to its destination. If the connection is made to the bus member that is hosting the destination, then the route is direct and as short as possible. If the connection is made to a bus member that is not hosting the destination, then an RQP is used, and the route is a little longer.)


The most manageable way to achieve regulated routing is to ensure that the messages are only handled by the bus member that is hosting the destinations. Anything else becomes very complicated very quickly. If maintaining the order of message arrival is not critical, then the choice of target properties is not significant to this case.

Event Sequencing (ES)

Event Sequencing is provided by Process Server. It depends on the SIBus and the proper configuration of the cell to ensure the order of message arrival. Therefore you must set target significance to Required in order to force the connections to a predictable messaging target. Any time that RQPs are used, message order can be altered, and a configuration that uses them cannot support event sequencing. Although event sequencing uses Required for its consumption of messages, the delivery order of messages (sent by SCA) depends on the configuration of the target properties. See Appendix D: Target significance properties for details on using the target significance of Required.

Isolation

It may be important to prevent data from straying away from its required or intended target and into an unrelated deployment target. In such cases, the importance of controlling the SIBus connections is very clear. Target properties can also be used to constrain SIBus connectivity, making it easier to locate a message during problem diagnosis.

Summary

In the single set of clusters case (such as a single golden topology), there is no need to set targets; there is only a single member of each service integration bus. In the case of multiple sets of clusters, there are multiple members of each service integration bus, and bus connectivity options such as the target properties become very important.

In a topology with multiple deployment targets and multiple bus members, all of the modules that are dependent on each other may be deployed to a single common deployment target, or some modules may be deployed to other sets of clusters. In the case of cross-cluster invocation, it is important to understand the locations of the destinations and the SIBus connectivity used during cross-cluster invocations.

There may be more than two members of a SIBus. When taking a broader view of topology configuration, it is useful to consider that there may be three or more members of a single SIBus. In this way, assumptions based on a two-bus-member environment can be avoided, and the long-term growth path of the cell is ensured.


The following tables provide a quick reference to expected behavior for a deployment environment where all modules are deployed to a single target, and for an environment where some of the modules are deployed to separate targets. Be aware that one assumption of these tables is that there are 3 or more bus members.


ME1 state      ME2 state      No target         Prefer MECluster1  Require MECluster1
ME1 available  ME2 available  May use RQP       No RQP             No RQP
ME1 available  ME2 failed     May use RQP       No RQP             No RQP
ME1 failed     ME2 available  Will use RQP      Will use RQP (1)   Rollback
ME1 failed     ME2 failed     Will use RQP (1)  Will use RQP (1)   Rollback

Figure 29: Table 1, Multiple sets of clusters, all modules deployed to a single target.

(1) If there is a third set of clusters, there will be a third messaging engine on the bus, and an RQP is created on ME3 in the Preferred case even if ME1 and ME2 have failed. If there is no third messaging engine and ME1 and ME2 have failed, then no connection is possible and the transaction will roll back.

ME1 state      ME2 state      No target            Prefer MECluster1    Require MECluster1
ME1 available  ME2 available  May use RQP          Will use RQP         Will use RQP
ME1 available  ME2 failed     Will use RQP         Will use RQP         Will use RQP
ME1 failed     ME2 available  May use RQP          May use RQP          Rollback (4)
ME1 failed     ME2 failed     Will use RQP (2)(3)  Will use RQP (2)(3)  Rollback

Figure 30: Table 2, Multiple sets of clusters, some modules deployed on the first target and some modules deployed to the second target, with a wire between modules on separate targets.

(2) If there is a third set of clusters, there will be a third messaging engine on the bus, and in that case an RQP will be created on ME3 in the Preferred case even if ME1 and ME2 have failed. If there is no third messaging engine and ME1 and ME2 have failed, then no connection is possible and the transaction will roll back.

(3) If there are only two messaging engines, an RQP is not necessary, because the connection will be established to the messaging engine that is hosting the destination to which the FRPs have resolved. However, if there are more than two messaging engines, RQPs will generally be used, because there is no guarantee that the connection will be made to the specific messaging engine that is hosting the destination to which the FRPs have resolved.

(4) The transaction will roll back even though the messaging engine ME2 and destinations E2 and M2 are available.

Proper planning and use of target properties is critical for SIBus connectivity in large topologies. The examples described here should be sufficient in most cases. In some


specific cases, the following section on creating additional messaging engines may be applicable.

Controlling bus connection patterns by adding local messaging engines

This method uses target properties to dictate connectivity and then extends that behavior by adding bus members. To illustrate this case, imagine that BusinessProcessApplication1 is deployed to AppTargetCluster1, and that MECluster1 is hosting the destinations. MECluster1 is the only member of the SCA.SYSTEM SIBus, and ME1 is not available. In this case, it does not matter whether target properties are set to Preferred or Required: because there is only a single bus member (MEC1), if ME1 is not available, an SIBus connection cannot be established at all.

Figure 31: MECluster1 is the only member of the SIBus. ME1 is not available. A connection from ATC1 to the bus member cannot be established.

Now imagine almost the same scenario. BusinessProcessApplication1 is deployed to AppTargetCluster1, and MEC1 is hosting the application's modules' destinations. MEC1 is not the only bus member, because each member of ATC1 has also been added as a bus member. ME1 is not available. In this case, if the target properties used by the modules' resources are set so that MEC1 is Required, then no connection is established, and the appropriate exception is raised. If the target properties used by the modules' connection factories are set so that MEC1 is Preferred, then a connection to an alternate bus member is allowed and an RQP is created. The Preferred behavior is a good thing, and we want to take advantage of it. Because there are additional bus members, the client continues to perceive that the JMS service (the destination) is still available, and does not have to deal with the exception associated with not being able to put to the destination.


The additional messaging engines have been created so that there is one messaging engine per member of ATC1. These are illustrated as MEx, MEy, and MEz in Figure 32. The target properties for the senders are set so that the bus member MEC1 is Preferred. The target properties for the readers are set so that the bus member MEC1 is Required.

Figure 32: MECluster1 is a member of the SIBus. ME1 is not available. Each member of ATC1 is also a member of the SIBus. Because target properties are set so that MEC1 is Preferred, a connection is allowed to an alternate bus member. In this case, an SIBus connection is established with the local bus member.

As shown in this example, if the intended messaging engine is lost (for example, the network between ATC1 and MEC1 is dropped), and if the client must perceive that the JMS service is still available, then it is possible to force this behavior with additional configuration. The goal is to create a messaging engine on each member of the application deployment target cluster. This can be achieved by using one of two options:

Option 1: Each member of the AppTargetCluster is a bus member

Each cluster member of the application deployment target cluster is added as a member to the service integration bus. This means that each cluster member is itself a member of the bus. Each bus member hosts its own unique messaging engine.

Option 2: The AppTargetCluster is a single bus member with multiple messaging engines.

The application deployment target is added as a member to the service integration bus. This means that the cluster itself is a member of the


SIBus. Additional messaging engines are created for this bus member until the number of messaging engines equals the number of members of the application target cluster. Finally, an HA Policy is created for each messaging engine, in order to assign the messaging engine to a specific cluster member.
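Both options can be scripted. The following wsadmin (Jython) sketch is illustrative only and is not taken from this article; the bus, node, server, and cluster names are hypothetical, each new engine also needs its own data store (the schema-qualified tables mentioned below), and you should verify the parameters against your WebSphere version (for example, with print AdminTask.help('addSIBusMember')).

# wsadmin -lang jython sketch of the two options described above.
bus = 'SCA.SYSTEM.WPSCell.Bus'

# Option 1: an individual application cluster member becomes its own bus
# member, hosting one messaging engine with no failover. Repeat per member.
AdminTask.addSIBusMember(['-bus', bus,
                          '-node', 'Node1', '-server', 'ATC1_member1'])

# Option 2: the cluster itself is a single bus member; create additional
# engines until there is one per cluster member. An HA (Core Group) policy
# must then be configured to pin each engine to a specific member.
AdminTask.addSIBusMember(['-bus', bus, '-cluster', 'AppTargetCluster1'])
AdminTask.createSIBEngine(['-bus', bus, '-cluster', 'AppTargetCluster1'])

AdminConfig.save()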

With Option 2, there will be only one additional bus member. There can be failover for each of the messaging engines, but you must properly configure an HA Policy for each of them. With Option 1, there will be many additional bus members. There will not be any failover for the messaging engines, but you will not have to configure any HA Policies. With either option, an additional schema-qualified set of database tables is required for each new messaging engine (see the note below). With either option, all senders must be configured to use Preferred target significance, and all consumers (all MDBs) must be configured to specify that the target is Required. In both the Required and Preferred cases, the target is not the local messaging engine, but rather the bus member that is hosting the real destination.

The net result is that each member of the application deployment target is now also hosting a messaging engine. The destination is still hosted in the messaging deployment target, but if that becomes unavailable, the connection is made from the client to the messaging engine found locally. The remote queue point is established in the local messaging engine. When the original messaging deployment target becomes available again, the message is forwarded to its intended destination. Figure 33 expands the previous example's topology. In this diagram, there are multiple bus members and multiple deployment targets.

Note: Consider putting the tables for these new messaging engines on a separate DBMS from the one already being used for the messaging engine tables for MECluster1 and MECluster2 (or both). If a single DBMS were used for all of the messaging engine tables, and that single DBMS failed, then all of the messaging engines would go down together.


Figure 33: Each member of the ApplicationTargetCluster1 has its own messaging engine. These messaging engines do not host any destinations. The destinations for the SCA modules still exist on the messaging engine in MECluster1.

Creating and configuring additional messaging engines offers advantages as well as tradeoffs. One benefit is that the service represented by the destination always appears to be available to senders. Another is control of the location of remote queue points. The price for this perceived availability is the creation, configuration, and administration of additional messaging engines, additional sets of messaging engine tables, and possibly Core Group Policy (HA Policy) resources. Additional considerations are discussed in the following sections.

Additional messaging engines and consuming messages

Consumers should be configured to use a target significance of Required and a target type of bus member. The target should be the bus member that is hosting the actual destination. There is no point in the readers obtaining a connection to a local messaging engine if the destinations are remote.


Figure 34: The activation specification for the module is configured for a Required target significance and target type is bus member. The consumer should only connect to the target providing the real destination.

In Figure 34, the activation specification for Module2’s MDB is configured to use a Required target significance. The target type is set for BusMember and the target is MECluster1.

Additional messaging engines and order of message arrival

As mentioned in Order of message arrival, the order of the messages as they arrive at their ultimate destination may be impacted by the SIBus connection. If it is important to maintain the order of message arrival, then it is important to regulate the connection so that the messages all travel the same route to the ultimate destination. However, by definition this configuration of additional messaging engines will use RQPs, which are unique per member of the ATC. Therefore, if there are n members of the ATC, there are n routes to the destination. The order of the messages is preserved per route. This means that operations that occur completely within a single member of the ATC1 cluster will have the order of messages preserved. However, for operations that jump across cluster members, the order of messages cannot be guaranteed, because of the different messaging routes used by the messages sent from each cluster member. If maintaining the order of message arrival is not critical, the use of RQPs on the local messaging engines is not important.


Additional messaging engines and Event Sequencing (ES)

As stated in Event Sequencing (ES), Event Sequencing is provided by Process Server. It depends on the SIBus and the proper configuration of the cell to ensure the order of message arrival. Using a local messaging engine on each member of the ATC, each with its associated RQP, results in multiple routes to the destination. Any time that RQPs are used, message order can be altered, so this configuration cannot be used to support event sequencing. Although event sequencing uses Required for its consumption of messages, the delivery order of messages (sent by SCA) depends on the configuration of the target properties. See Appendix D: Target significance properties for details on using the target significance of Required.

Additional messaging engines and isolation

Additional messaging engines are not necessary in many topologies, but they can be beneficial in a topology that includes more than two sets of clusters and where strict isolation is required. For example, imagine a topology that has four sets of clusters (four MEClusters and four AppTargetClusters). Of these four sets of clusters, imagine that sets 1 and 2 interact, and that sets 3 and 4 interact. If strict isolation must be preserved between the first group and the second group, and the destinations must always appear to be available, then this additional configuration may be a solution. Additional factors may apply, such as not having a requirement for ordering of message arrival.

Summary of additional messaging engines

Target properties are critical to SIBus connectivity. In situations where additional isolation requirements and perception of destination availability apply, it may be useful to configure an extra layer of messaging capability local to the application deployment target. Even though the examples in this article have shown only two sets of clusters, it is generally useful to take the point of view that there could be three or more sets of clusters. The following two tables provide a quick reference to expected behavior for environments using additional messaging engines.


ME1 state      ME2 state      Local ME state      Result
ME1 available  any            any                 Direct put on ME1
ME1 failed     ME2 available  Local ME available  RQP on local ME
ME1 failed     ME2 available  Local ME failed     RQP on ME2
ME1 failed     ME2 failed     Local ME available  RQP on local ME
ME1 failed     ME2 failed     Local ME failed     Rollback (1)

Figure 35: Table 1 Multiple sets of clusters, all modules deployed to a single target, all destinations hosted by a remote messaging cluster, and an additional messaging engine has been created on each member of the application deployment target. All senders from ATC1 are configured to use Preferred target significance to MEC1. All readers on ATC1 are configured to use Required target significance to MEC1.

(1) If there is an additional set of clusters, such as ATC3 and MEC3, there will be a messaging engine (ME3) on the bus, and in that case an RQP will be created on ME3 in the Preferred case even if all other MEs have failed. If there is no such messaging engine and all other MEs have failed, then no connection is possible and the transaction will roll back.


ME1 state      ME2 state      Local ME state      Result
ME1 available  any            any                 RQP on ME1
ME1 failed     ME2 available  Local ME available  RQP on local ME
ME1 failed     ME2 available  Local ME failed     Direct put to ME2
ME1 failed     ME2 failed     Local ME available  RQP on local ME
ME1 failed     ME2 failed     Local ME failed     Rollback (2)

Figure 36: Table 2, Multiple sets of clusters, some modules are deployed on the first target and some modules deployed to the second target, and there is a wire between modules on separate targets. All destinations hosted by a remote messaging cluster, and an additional messaging engine has been created on each member of the application deployment target. All senders from ATC1 are configured to use Preferred target significance to MEC1. All readers on ATC1 are configured to use Required target significance to MEC1. All readers on ATC2 are configured to use Required target significance to MEC2.

(2) If there is an additional set of clusters, such as ATC3 and MEC3, there will be a messaging engine (ME3) on the bus, and in that case an RQP will be created on ME3 in the Preferred case even if all other MEs have failed. If there is no such messaging engine and all other MEs have failed, then no connection is possible and the transaction will roll back.

Summary

This article describes an initial deployment topology for Process Server and WebSphere ESB and then uses it as a baseline for explaining how to scale up a production environment. It describes how to grow an existing topology either within the cluster or by adding new clusters, and recommends growth patterns given specific application requirements. It also describes how to use target properties to control the connections to the service integration bus and to alter the routing path used for messaging. Process Server and WebSphere ESB production environments are architected to grow. With the appropriate planning and testing, these environments can


grow both horizontally and vertically to support the most demanding BPM and ESB applications.


Appendix A: References

Configuring efficient messaging in multicluster WebSphere Process Server cells

"Configuring efficient messaging in multicluster WebSphere Process Server cells" by Matt Roberts. This developerWorks article describes the best practices for configuring the IBM WebSphere Service Integration Bus component of WebSphere Process Server in a large-scale environment in which there are multiple clusters. The article references many critical fixes, specific configuration steps, and scripting.
http://www.ibm.com/developerworks/websphere/library/techarticles/0811_roberts/0811_roberts.html

Building WebSphere Process Server and WebSphere ESB topologies

• Redbook: Production Topologies for WebSphere Process Server and WebSphere ESB V6 http://www.redbooks.ibm.com/abstracts/SG247413.html?Open

• developerWorks article describing topologies
o WebSphere Process Server and WebSphere Enterprise Service Bus deployment patterns, Part 1: Selecting your deployment pattern http://www-128.ibm.com/developerworks/websphere/library/techarticles/0610_redlin/0610_redlin.html

• developerWorks articles describing how to create the initial "golden topology"
o V602: Clustering WebSphere Process Server V6.0.2, Part 2: Install and configure WebSphere Process Server clusters http://www.ibm.com/developerworks/websphere/library/techarticles/0704_chilanti2/0704_chilanti2.html
o V61x: Building clustered topologies in WebSphere Process Server V6.1 http://www.ibm.com/developerworks/websphere/library/techarticles/0803_chilanti/0803_chilanti.html

WebSphere Application Server Environments
• Redbooks
o WebSphere Application Server V6 Scalability and Performance Handbook http://www.redbooks.ibm.com/abstracts/SG246392.html?Open
o WebSphere Application Server Network Deployment V6: High Availability Solutions http://w3.itso.ibm.com/abstracts/sg246688.html?Open
o Techniques for Managing Large WebSphere Installations http://www.redbooks.ibm.com/abstracts/sg247536.html?Open

WebSphere Application Server SIBus topics
• WebSphere Application Server InfoCenter
o "Remote Queue Points Collection" http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.pmc.nd.doc/sibresources/SIBRemoteQueuePoint_CollectionForm.html


o Planning issues common to all bus topologies http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.pmc.nd.doc/tasks/tjj0025_.html

WebSphere Application Server Core Group Policies (sometimes referred to as "HA Manager Policies")
• WebSphere Application Server InfoCenter
o "Service integration high availability and workload sharing configurations" http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.pmc.nd.doc/concepts/cjt0007_.html
• IBM Redbook
o An out-of-context example may be found in the IBM Redbook "Production Topologies for WebSphere Process Server and WebSphere ESB V6", section 8.3.2, "Creating HA Manager policies" http://www.redbooks.ibm.com/abstracts/SG247413.html?Open

Event sequencing
• WebSphere Process Server TechNote
o "WebSphere Enterprise Service Bus V6.1 PDFs incorrectly refer to event sequencing" http://www-1.ibm.com/support/docview.wss?rs=2346&context=SS7J6S&dc=DA400&uid=swg27013063&loc=en_US&cs=UTF-8&lang=en&rss=ct2346websphere


Appendix B: Vocabulary

Run time requirements: The application that is to be deployed requires certain things from the deployment environment that enable it to do its work and behave the way you want it to behave. These requirements fall into two categories: functional and non-functional. Examples of functional requirements include configuration for Process Choreography, human tasks, SCA, CEI, Business Rules, and Adapters. Functional requirements also include consideration of import and export types, interactions between components, and other resource requirements. Non-functional requirements include expectations of behavior, such as availability, failover, and disaster recovery.

Remote, non-partitioned destinations: The application deployment target is not the messaging deployment target. The messaging deployment target is a member of the SIBus, and as a member of the SIBus it has exactly one messaging engine. Therefore, the destinations created on this SIBus member are fully contained within that single messaging engine, and they are not partitioned. As shown in Figure 37, the MECluster is using local, non-partitioned destinations. "Local" means that the destinations are in the deployment target being referred to; "remote" means that the destinations are not in the deployment target being referred to.


Figure 37: The AppTargetCluster is using remote, non-partitioned destinations

Local, non-partitioned destinations: The application deployment target is also the messaging deployment target. It is a member of the SIBus, and as a member of the SIBus it has exactly one messaging engine. Therefore, the destinations created on this SIBus member are fully contained within that single messaging engine, and they are not partitioned. This configuration is discouraged because of MDB starvation: only the MDBs in the same cluster member as the active messaging engine are able to read messages from the destinations.

Figure 38: The AppTargetCluster is using local, non-partitioned destinations

Local, partitioned destinations: The application deployment target is also the messaging deployment target. The messaging deployment target is a member of the SIBus, and it was originally configured with one messaging engine. Additional messaging engines were created for this SIBus member, and HA Manager policies were then created to associate each active messaging engine with a specific cluster member. Because there are multiple active messaging engines for this SIBus member, the destinations are partitioned across the active messaging engines. Partitioned queues introduce many unique challenges to a deployment environment, and it is strongly recommended that they be avoided. In Figure 39, there are three active messaging engines for this member of the SIBus.

Figure 39: The AppTargetCluster is using local, partitioned destinations.


Multiple members of a single Service Integration Bus: It is possible for multiple deployment targets to be members of a single Service Integration Bus, and this does not result in partitioned destinations. Each SIBus member has its own messaging engine. If a destination is created on the first SIBus member, that destination is hosted only in the messaging engine of the first SIBus member. In Figure 40, queue destination A is hosted only by the messaging engine in MECluster, and queue destination B is hosted only by the messaging engine in MECluster2.


Figure 40: Multiple members of a single Service Integration Bus (this does NOT result in partitioned queues)

Remote queue points (RQP): Remote queue points provide store-and-forward delivery of messages. They are used when the destination to which a message is addressed is not local to the messaging engine to which the application is connected. The "regular" queue point is the physical instantiation of the destination in the ME that hosts that destination. A remote queue point is a physical queue of messages that are on their way to the regular queue point, from another ME that has received the messages and needs to forward them. RQPs can be used when the ME that hosts the destination is unavailable. They can also be used in a more transient fashion to route messages, because the producing application needs only one connection to the bus (to one ME) but may send to multiple destinations that are deployed on other MEs.


Forward Routing Path (FRP): A forward routing path is a property of a destination. Messages put to this destination are automatically forwarded to the destination specified in the default forward routing path. Many SCA destinations use a forward routing path. For example, export destinations specify a forward routing path that leads back to the SCA module destination, as shown in Figure 41. To find the default forward routing path in the administrative console, select Service Integration => Buses => BusName => Destinations => DestinationName, then scroll down. A scripted way to inspect the same information is sketched after Figure 41.

Figure 41: The default forward routing path can be found on the destination
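To inspect the same information from a script rather than the console, a minimal wsadmin (Jython) sketch along the following lines could list each destination and its default forward routing path. The bus name SCA.SYSTEM.myCell.Bus and the defaultForwardRoutingPath attribute name are assumptions to verify against your own configuration:

# Hypothetical wsadmin (Jython) sketch: print the default forward routing
# path of each destination on a bus. Bus and attribute names are assumptions.
busName = 'SCA.SYSTEM.myCell.Bus'
for dest in AdminTask.listSIBDestinations('[-bus ' + busName + ']').splitlines():
    name = AdminConfig.showAttribute(dest, 'identifier')
    frp = AdminConfig.showAttribute(dest, 'defaultForwardRoutingPath')
    print name + ' -> ' + frp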


Appendix C: Fixes and enhancements

The developerWorks article "Configuring efficient messaging in multicluster WebSphere Process Server cells" by Matt Roberts references many critical fixes. http://www.ibm.com/developerworks/websphere/library/techarticles/0811_roberts/0811_roberts.html

Tips for reducing startup time for MEs

Messaging engine startup time, particularly for messaging engines hosting the SCA destinations, can be greatly reduced in either of the following ways:
• Making the startup time more linear per each additional destination
• Reducing the total number of destinations that are automatically created for each SCA module

Fixes

In earlier versions of WebSphere Enterprise Service Bus and WebSphere Process Server, the connection factories and activation specifications used for SCA did not allow you to configure the Target Significance, Target Type, and Target properties. If you are using earlier versions of these products, apply the following maintenance:

PK54128: A Service Integration Bus fix, introduced in 4Q 2007, allows these properties to be set on WebSphere Application Server V6.0.2x installations (this capability has always existed on V6.1.x installations). The fix allows you to set TargetSignificance, Target, and TargetType as additional properties on the J2C activation specification resources. It is included in WebSphere Application Server V6.0.2.27 (WebSphere Process Server V6.0.2.4). See also:

• http://www-1.ibm.com/support/docview.wss?rs=180&context=SSEQTP&q1=PK54128&uid=swg1PK54128&loc=en_US&cs=utf-8&lang=en

JR29484: This SCA fix, introduced in summer 2008, allows the Target Significance property to be set on the connection factories used for SCA. These connection factories are created programmatically and are not found in the Integrated Solutions Console. With this fix, the default behavior for the SCA connection factories remains Target Significance = Preferred, and you can specify that the Target Significance should be Required. A WebSphere variable, SCA_TARGET_SIGNIFICANCE, is created and can be set to either 'preferred' or 'required'. In the 'required' case, the Target Type property is set to Destination and the Target Group is set to the destination name (which is determined at run time).


Appendix D: Target significance properties

This section contains general instructions for common settings of target significance properties. Although generally useful, these instructions are not appropriate for every situation, and they do not provide examples for every possible connection factory or activation specification that may be in your environment. Use common sense when applying this information to your individual environment.

Service Component Architecture

Each SCA module uses a number of connection and activation specification resources for the SCA.SYSTEM.<CellName>.Bus. In addition, there are adapters that use connections and activation specifications for the SCA.APPLICATION.<CellName>.Bus. For each SCA module, a number of destinations were created on the SCA.SYSTEM SIBus.

Destinations for SCA modules (note: not a complete list; each destination belongs to the cluster named in its column):

On AppTargetCluster1                        On AppTargetCluster2
sca/Module1                                 sca/Module2
sca/Module1/component/Module1               sca/Module2/component/*
sca/Module1/import/*                        sca/Module2/import/*
sca/BFMIF_AppTargetCluster1                 sca/BFMIF_AppTargetCluster2
sca/BFMIF_AppTargetCluster1/component/*     sca/BFMIF_AppTargetCluster2/component/*
sca/BFMIF_AppTargetCluster1/export/*        sca/BFMIF_AppTargetCluster2/export/*
sca/BFMIF_AppTargetCluster1/import/*        sca/BFMIF_AppTargetCluster2/import/*
sca/HTMIF_AppTargetCluster1                 sca/HTMIF_AppTargetCluster2
sca/HTMIF_AppTargetCluster1/component/*     sca/HTMIF_AppTargetCluster2/component/*
sca/HTMIF_AppTargetCluster1/export/*        sca/HTMIF_AppTargetCluster2/export/*
sca/HTMIF_AppTargetCluster1/import/*        sca/HTMIF_AppTargetCluster2/import/*

At the same time, a number of JMS resources were created for various connections to those destinations.

Connections and Activation Specifications for SCA (not a complete list; each resource is scoped to its own cluster):

Resource type                     AppTargetCluster1              AppTargetCluster2
SCA connection factories          (created dynamically)          (created dynamically)
J2C activation specification      Module1_AS                     Module2_AS
J2C activation specification      BFMIF_AppTargetCluster1_AS     BFMIF_AppTargetCluster2_AS
JMS activation specification      HTMIF_AppTargetCluster1_AS     HTMIF_AppTargetCluster2_AS
J2C activation specification      failedevent_AS                 failedevent_AS

The SCA connection factories are created dynamically at run time and cannot be seen in the administrative console; their target significance is specified via the SCA_TARGET_SIGNIFICANCE variable.

The J2C connection factories can be updated as follows (a scripted sketch of step 2 appears after the list):

1. Apply ifix JR29484.
2. Create the SCA_TARGET_SIGNIFICANCE variable and set the value to required.
3. Verify that this is working by setting the trace specification to *=info:SCA.*=all and restarting the clusters. When executing the application, examine the trace.log file and confirm that the following line is printed: SCA_TARGET_SIGNIFICANCE found at cluster scope with value required.
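For repeatable configuration, step 2 can be scripted. The following is a minimal wsadmin (Jython) sketch, assuming a cluster named AppTargetCluster1 and that the variable is wanted at cluster scope; if no VariableMap exists at that scope yet, one is created first. Verify the names against your own cell before using it:

# Hypothetical wsadmin (Jython) sketch: create the SCA_TARGET_SIGNIFICANCE
# WebSphere variable at cluster scope. The cluster name is an assumption.
clusterId = AdminConfig.getid('/ServerCluster:AppTargetCluster1/')
maps = AdminConfig.list('VariableMap', clusterId).splitlines()
if len(maps) == 0 or maps[0] == '':
    # No variable map exists at this scope yet, so create one
    varMap = AdminConfig.create('VariableMap', clusterId, [])
else:
    varMap = maps[0]
AdminConfig.create('VariableSubstitutionEntry', varMap,
                   [['symbolicName', 'SCA_TARGET_SIGNIFICANCE'],
                    ['value', 'required']])
AdminConfig.save()
# Synchronize the nodes and restart the clusters for the change to take effect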

Each of the activation specifications can be updated as follows (a scripted equivalent is sketched after the list):
1. In the administrative console, select Resources => Resource Adapters.
2. Select the correct scope (AppTargetCluster1 or AppTargetCluster2) and click Apply.
3. Select "Platform Messaging Component SPI Resource Adapter".
4. Under Additional Properties, click J2C Activation Specification.
5. Click the name of the activation specification.
6. Under Additional Properties, click J2C Activation Specification Custom Properties, and set:
a. Target = MECluster1 (you may wish to use a different target)
b. Target type = BusMember (you may wish to use a different targetType)
c. Target significance = Preferred or Required
7. Save your changes and synchronize with the repository.
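The same change can be sketched in wsadmin (Jython). The cluster, resource adapter, and activation specification names below (AppTargetCluster1, Platform Messaging Component SPI Resource Adapter, Module1_AS) and the target values are assumptions taken from this section; verify the containment path against your configuration:

# Hypothetical wsadmin (Jython) sketch: set target properties on one SCA
# activation specification. All names below are assumptions from this section.
raName = 'Platform Messaging Component SPI Resource Adapter'
asId = AdminConfig.getid('/ServerCluster:AppTargetCluster1/J2CResourceAdapter:'
                         + raName + '/J2CActivationSpec:Module1_AS/')
desired = {'target': 'MECluster1',          # you may wish to use a different target
           'targetType': 'BusMember',       # or a different target type
           'targetSignificance': 'Required'}
# The custom properties are J2EEResourceProperty objects under the spec
for prop in AdminConfig.list('J2EEResourceProperty', asId).splitlines():
    name = AdminConfig.showAttribute(prop, 'name')
    if desired.has_key(name):
        AdminConfig.modify(prop, [['value', desired[name]]])
AdminConfig.save()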


Figure 42: Custom properties on an activation specification used by SCA

Business Process Choreographer

Each business process container and human task container uses a number of connection and activation specification resources for the BPC.<CellName>.Bus. For each configured business process container and human task container, a number of destinations were created on the BPC SIBus.

Destinations for BPC/HTM on AppTargetCluster1    Destinations for BPC/HTM on AppTargetCluster2
BPEApiQueue_AppTargetCluster1                    BPEApiQueue_AppTargetCluster2
BPEHldQueue_AppTargetCluster1                    BPEHldQueue_AppTargetCluster2
BPEIntQueue_AppTargetCluster1                    BPEIntQueue_AppTargetCluster2
BPERetQueue_AppTargetCluster1                    BPERetQueue_AppTargetCluster2
HTMHldQueue_AppTargetCluster1                    HTMHldQueue_AppTargetCluster2
HTMIntQueue_AppTargetCluster1                    HTMIntQueue_AppTargetCluster2


At the same time, a number of JMS resources were created for various connections to those destinations.

Connections and Activation Specifications for BPC/HTM (each resource is scoped to its own cluster):

Resource type                     AppTargetCluster1             AppTargetCluster2
JMS connection factory            BPECF                         BPECF
JMS connection factory            BPECFC                        BPECFC
JMS connection factory            HTMCF                         HTMCF
JMS activation specification      BPEApiActivationSpec          BPEApiActivationSpec
JMS activation specification      BPEInternalActivationSpec     BPEInternalActivationSpec
JMS activation specification      HTMInternalActivationSpec     HTMInternalActivationSpec

Each of the connection factories can be updated as follows (a scripted sketch follows the list):

1. In the administrative console, select Resources => JMS Providers => Default messaging.
2. Specify the proper scope and click Apply.
3. Under Connection Factories, click JMS Connection Factory.
4. Click the name of the connection factory.
5. You may wish to use values such as:
a. Target = MECluster1 (you may wish to use a different target)
b. Target type = Bus member name (you may wish to use a different targetType)
c. Target significance = Preferred or Required
6. Click OK.
7. Save your changes and synchronize with the repository.
8. Repeat for each of the BPC/HTM JMS connection factories for connecting to the BPC.<CellName>.Bus.
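WebSphere Application Server V6.1 also exposes these properties through the SIB JMS administrative commands, so the same change can be scripted. A minimal wsadmin (Jython) sketch follows, assuming the BPECF factory at AppTargetCluster1 scope and the target values shown above; treat the scope, names, and values as assumptions to verify in your release:

# Hypothetical wsadmin (Jython) sketch: set target properties on the BPECF
# JMS connection factory at cluster scope. Names and values are assumptions.
scope = AdminConfig.getid('/ServerCluster:AppTargetCluster1/')
for cf in AdminTask.listSIBJMSConnectionFactories(scope).splitlines():
    if AdminConfig.showAttribute(cf, 'name') == 'BPECF':
        AdminTask.modifySIBJMSConnectionFactory(cf,
            '[-target MECluster1 -targetType BusMember -targetSignificance Preferred]')
AdminConfig.save()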

The JMS activation specification resources can be updated as follows (a scripted sketch follows the list):
1. In the administrative console, select Resources => Resource Adapters.
2. Specify the proper scope and click Apply.
3. Select the SIB JMS Resource Adapter.
4. Under Additional Properties, click J2C Activation Specifications.
5. Click the name of the activation specification (for example, BPEApiActivationSpec).
6. Under Additional Properties, click J2C activation specification custom properties.
7. This list of properties includes the three you want to set: target, targetType, and targetSignificance. You may wish to use values such as:
a. target = MECluster1 (you may wish to use a different target)
b. targetType = BusMember (you may wish to use a different targetType)
c. targetSignificance = Preferred or Required
8. Save your changes and synchronize with the repository.
9. Repeat for each of the BPC/HTM activation specification resources.
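The analogous wsadmin (Jython) sketch for an activation specification, with the same caveats (scope, names, and values are assumptions):

# Hypothetical wsadmin (Jython) sketch: set target properties on the
# BPEApiActivationSpec at cluster scope. Names and values are assumptions.
scope = AdminConfig.getid('/ServerCluster:AppTargetCluster1/')
for spec in AdminTask.listSIBJMSActivationSpecs(scope).splitlines():
    if AdminConfig.showAttribute(spec, 'name') == 'BPEApiActivationSpec':
        AdminTask.modifySIBJMSActivationSpec(spec,
            '[-target MECluster1 -targetType BusMember -targetSignificance Required]')
AdminConfig.save()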

Common Event Infrastructure (CEI)

Each configuration of the CEI Event Server uses a number of connection and activation specification resources for the CommonEventInfrastructure_Bus. Many of the names, scopes, and locations of the CEI resources changed slightly between V6.0x and V6.1x; this section uses data reflective of V6.1.x. For each configured CEI Event Server, a number of destinations were created on the CommonEventInfrastructure_Bus.

Destinations for CEI on SupportCluster1                 Destinations for CEI on SupportCluster2
Cluster1.CommonEventInfrastructureQueueDestination      Cluster2.CommonEventInfrastructureQueueDestination
Cluster1.CommonEventInfrastructureTopicDestination      Cluster2.CommonEventInfrastructureTopicDestination

At the same time, a number of JMS resources were created for various connections to those destinations.

Connections and Activation Specifications for CEI (each resource is scoped to its own cluster):

Resource type                     SupportCluster1                        SupportCluster2
JMS queue connection factory      CommonEventInfrastructure_QueueCF      CommonEventInfrastructure_QueueCF
JMS topic connection factory      CommonEventInfrastructure_TopicCF      CommonEventInfrastructure_TopicCF


Each of the connection factories can be updated as follows (V6.1.x; a scripted sketch follows the list):
1. In the administrative console, select Resources => JMS Providers => Queue (or Topic) Connection Factories.
2. Specify the proper scope and click Apply.
3. Click the name of the connection factory.
4. You may wish to use values such as:
a. Target = MECluster1 (you may wish to use a different target)
b. Target type = Bus member name (you may wish to use a different targetType)
c. Target significance = Preferred or Required
5. Click OK.
6. Save your changes and synchronize with the repository.
7. Repeat for each of the CEI JMS queue and topic connection factories for connecting to the CommonEventInfrastructure_Bus.
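A minimal wsadmin (Jython) sketch for the two CEI connection factories, under the same assumptions as the earlier sketches. Whether queue and topic connection factories are returned by listSIBJMSConnectionFactories can vary by release; if they are not, locate them via AdminConfig instead:

# Hypothetical wsadmin (Jython) sketch: set target properties on the CEI
# queue and topic connection factories. Names, scope, and values are assumptions.
scope = AdminConfig.getid('/ServerCluster:SupportCluster1/')
names = ['CommonEventInfrastructure_QueueCF', 'CommonEventInfrastructure_TopicCF']
for cf in AdminTask.listSIBJMSConnectionFactories(scope).splitlines():
    if AdminConfig.showAttribute(cf, 'name') in names:
        AdminTask.modifySIBJMSConnectionFactory(cf,
            '[-target MECluster1 -targetType BusMember -targetSignificance Preferred]')
AdminConfig.save()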

The JMS activation specification resources can be updated as follows:
1. In the administrative console, select Resources => JMS => Activation specifications.
2. Specify the proper scope and click Apply.
3. Click the name of the activation specification (for example, CommonEventInfrastructure_ActivationSpec).
4. Under Additional Properties, click J2C activation specification custom properties.
5. This list of properties includes the three you want to set: target, targetType, and targetSignificance. You may wish to use values such as:
a. target = MECluster1 (you may wish to use a different target)
b. targetType = BusMember (you may wish to use a different targetType)
c. targetSignificance = Preferred or Required
6. Save your changes and synchronize with the repository.

Other

There may be additional Service Integration Buses defined in your environment. Locate all connection factories and activation specifications, and set the target significance properties appropriately.


Appendix E: Basic information about the CEI

The ApplicationTargetCluster ("A" in Figure 43) knows which CEI emitter resource to use because the administrator has set the JNDI name of the CEI emitter factory resource in the "Common Event Infrastructure Destination" service of the ApplicationTargetCluster. Note: Figure 43 uses V6.0 names. Some names changed slightly in V6.1; for example, the EventServer became the Event Service.

Figure 43: Basic flow of an event through the CEI

The CEI Emitter Factory resource (2) specifies the type and JNDI name of the CEI transmission profile resource that it will use. If the emitter resource is configured for synchronous transmission, it uses the Synchronous Transmission Profile (3), also called the Event Bus Transmission or Event Service Transmission Profile. If the emitter resource is configured for asynchronous transmission, it uses the Asynchronous Transmission Profile (4a), also called the JMS Transmission Profile. The Synchronous Transmission Profile points directly into the configured EventServer (5) (EventService); this invokes an EJB.


The Asynchronous Transmission Profile specifies a JMS queue and a JMS queue connection factory resource. This queue connection factory and queue point to a destination (4b) on the CommonEventInfrastructure_Bus (SIBus). It is on this JMS queue connection factory resource that target significance properties can be specified for the inbound event traffic. A message-driven bean (4c) picks up the messages of the inbound events and routes them into the CEI EventServer (5) (EventService). The message-driven bean uses a J2C activation specification resource, on which the custom properties for target significance can be set.

The Event Server Profile (6) is found as a resource prior to V6.1. At V6.1 and higher, this information is found directly on the EventService. This resource allows you to specify whether you want to use event distribution or persist the events in the database. For production environments, it is advised to disable event persistence to the database.

From the Event Server (Event Service), the Common Base Events can be persisted in the EVENT database tables (7). From the Event Server (Event Service), the Common Base Events may be matched into Event Groups (8). Each Event Group may specify a JMS topic and topic connection factory, or JMS queues and queue connection factories, or both. Target significance properties can be set on these connection factories for event distribution. By default there is an Event Group named "All events", which points to a JMS topic (9) and a JMS topic connection factory. These in turn point to a topic space on the CommonEventInfrastructure_Bus.

As shown in Figure 44, the JNDI name of the CEI emitter factory resource is set. This is used to route the events generated from this server/cluster over to the location of the CEI EventServer (for example, the CEI EventServer may be configured in the SupportCluster).


Figure 44: The CEI destination

Only one member of the CommonEventInfrastructure_Bus is commonly necessary, because there is often only a single CEI EventServer configured per cell. If multiple CEI EventServers are configured in a cell, then multiple SIBus members may be desired; this is especially true in cases of high-volume asynchronous event activity.


Appendix F: About the authors

Eric Herness is currently the Chief Architect for WebSphere Business Integration and is from the Rochester, Minnesota development lab. He is a senior member of the WebSphere Foundation Architecture Board and a core member of the Software Group Architecture Board. Eric has been involved in product architecture and product development in object technology and distributed computing for over 15 years.

Graham Wallis is a Senior Technical Staff Member at Hursley, UK and is responsible for the messaging componentry in WebSphere Application Server and the products built on it. Graham has been with IBM for 22 years and has worked on a variety of technologies, including data communications, parallel processing, asynchronous messaging, and high availability.

Charlie Redlin is an architect on the WebSphere Process Server development team in Rochester, Minnesota. He has worked in the development of WebSphere clusters and network deployment environments for many years. He currently works in a bring-up lab and is focused on the deployment and integration of WebSphere Process Server.

Karri Carlson-Neumann is an Advisory Software Engineer on the WebSphere Process Server development team in Rochester, Minnesota. She has been involved with the development of WebSphere Business Integration Server Foundation and WebSphere Process Server for many years. She currently works in a bring-up lab and is focused on the deployment and integration of WebSphere Process Server.
