Transcript of “Technology Architecture Concerns and Principles”

Page 1

Hello. Welcome to this computer-based training course, titled “Technology Architecture Concerns and Principles.” This is part of a series of computer-based and classroom training courses for Technology Architect practitioners across Accenture, aimed at providing us a common terminology, an understanding of key concepts and considerations, and a consistent approach to the work we do in this space for our teams and our clients.


Page 2

The Master Technology Architect Program is one of the most important initiatives we have underway in Technology Architecture right now. The goal of the MTA Program is to build appropriate depth as well as breadth of technology specialization and Architecture skills in our people. The CBTs are a key ingredient in MTA progression: they give you an overview of the areas of specialization that are important for you to understand, and they help you identify where to build additional depth, whether in our specialization areas or in other Principles and Concepts that are important to advancing your career as a Technology Architect and pursuing advancement through the Master Technology Architect Program.

Page 3

The intended audience for this course includes individuals who are already contributing towards Architecture design and development, aspiring Technology Architects, as well as Technology Architects looking for a refresher on various principles and implementation options. It is very important for any Architect to be aware of the various Architecture Principles involved when solving complex problems. This course provides an overview and description of the more frequently encountered Architecture Principles.

This training uses a presentation style approach, with quizzes to occasionally check your knowledge. Architecture Principles and approaches will continue to evolve, so this presentation is closely paired with an accompanying web site that will provide more detail and adjust over time. At relevant places in this course, you will see an “External ADA Link” tag; clicking this link will take you to the external site. In addition, you can always navigate to the external site using the ATTACHMENTS list above.

Please take a few minutes now to familiarize yourself with the screen components, such as the playback control bar, the speaker-notes view available during playback, and the screen-size controls. Use the search tab off to the left if you’d like to search any of the presentation’s content. Sites and materials presented here are linked to within course slides and are also available in the ‘ATTACHMENTS’ list from the top of the screen. In addition, a downloadable PDF version of this content is available to you from the ‘ATTACHMENTS’ list. At any point during the playback of the course, you can click on ‘SCREEN COMPONENTS OVERVIEW’ from the top of the screen to pause the presentation and learn about the player’s screen components. A glossary of terminology used in the course is available at the top of the player window as well. Once you have familiarized yourself with these and are ready to continue, please click the play button at the bottom of the screen.

Page 4

This course provides an overview of various Technology Architecture Constraints, Concerns, and Principles. Architecture Constraints tend to reflect the vision and principles of an organization, also known as Guiding Principles or Enterprise Architecture Standards. Just like that vision, Architecture Constraints don’t change frequently, irrespective of market conditions or the introduction of new technologies. Within the context defined by the Constraints, there are many Concerns that need to be addressed when designing and building a system. Architecture Principles are proven design techniques that allow you to design an Architecture that addresses Concerns. Not only do Architects need to know and understand the Constraints and Concerns that need to be addressed, but also how to apply the Principles in order to successfully design and build an Architecture. Since the Architecture Concerns are interrelated and compete with each other, Technology Architects need to perform trade-off analyses to make informed decisions. There are many resources available to Technology Architects to assist in solving some of the complexities around Concerns and Principles, which will also be introduced in this course.

Page 5

We will start the course by briefly introducing the various Constraints driving Architecture, as well as the common Concerns that need to be addressed, and finally the Principles that can be used to design the Architecture. After the brief introduction, we will delve deeper into each of the Concerns and Principles. The session will conclude with additional details around the issues and tradeoffs among the Principles and Concerns.


Page 6

Before undertaking the analysis and design of an Architecture, it is important to understand the scope, boundaries, and constraints that will shape the work. These “environment setters” affecting the Architecture are termed Architecture Constraints. They are sometimes referred to as “Guiding Principles” or Enterprise Architecture Standards. Architecture Constraints provide the overall context from which Architectural decisions are made, and often include client biases or preferences. Sometimes, the Constraints may not be written down; if they are not captured on paper, they need to be understood, agreed upon, and documented with the client as a part of Expectations Management.

Architecture Constraints include guidelines for applications and systems based on their particular technologies. Examples of these guidelines include directives such as “all data needs to be stored in Oracle,” “applications that are built or bought should be platform independent,” “applications must be managed centrally regardless of the distribution of their processing,” or even general directives like “buy before build.” Constraints can also include system- or industry-specific requirements, such as those found in military, medical, and financial systems. A common example in these cases is for entire systems to be SOX compliant. SOX refers to the Sarbanes-Oxley legislation in the US, which reformed public company accounting and strengthened investor protection through reporting and controls. Architecture Constraints are often driven by external forces, including regulatory requirements or governmental controls, as well as future plans for the organization.

There are three other fundamental factors which govern the Architecture Constraints. They are:

• Cost – which refers to developing an application within the allotted budget,

• Quality – which refers to developing an application which meets the needs of its users, and

• Time – which refers to delivering the application as promised, on time.

These three factors highly influence the Constraints, and impact many of the tradeoffs and design decisions within an Architecture because they are interrelated. For example, if there needs to be an increase in quality and a decrease in time, that will quite possibly lead to an increase in cost as well.

Page 7

There are additional forces which indirectly affect the integrity of a system Architecture and may not be fully captured by functional analysis. These non-functional forces, or requirements, are known as Architecture Concerns. They are also called the “ilities,” since many of the words end with “-ility.” In this course, these terms will be used interchangeably.

The Architect might not be responsible for defining the concerns, but they're definitely responsible for fulfilling them. Often, Architecture Concerns are defined ambiguously—if at all—by customers, with statements such as “the system must be fast.” In such situations, an Architect must step in and help the customers to define the requirements clearly. For example, if the customer says “fast,” the Architect might ask leading questions like “how fast? Is it 3 seconds? 5 seconds?” An Architect must keep an eye on the design of every software and infrastructure component while keeping the Concern in mind. A key responsibility of an Architect is to properly identify and quantify the requirements for each area of Concern and constantly use those requirements as decisions are made in the design and implementation of the Architecture. Each area of Concern impacts, and is impacted by, the other areas of Concern. Understanding those interrelationships is fundamental in making the trade-off decisions that Architects must make.

There is a well-known set of Concerns which need to be understood and quantified using the “SMART” criteria (Specific, Measurable, Attainable, Realizable, and Traceable) so they can be made precise, measurable, and actionable. Concerns are an integral part of the entire software development process, used not only to drive the design process by providing key input requirements, but also to act as selection criteria. Some examples of these Concerns include: performance, scalability, and availability.

Page 8

Designing an Architecture is fundamentally an exercise in abstraction – organizing complexity into a set of understandable constructs that can be implemented and reused. Architecture Principles are the proven techniques and guidelines for designing an Architecture that supports abstraction, helps to address complexity, and manages risk.

The Architecture Principles provide guidance on how to design effective Architectures. They are well-established procedures that can be used in the context of addressing Architecture Concerns. The outlined approaches and techniques allow for maintainable and flexible Architecture designs, and have proven successful because they have survived the test of time.

Some examples include: isolation of business logic, encapsulation of data, the use of layering, and simplicity in design (also known as KISS, short for “keep it simple, stupid”).

Page 9

Now let us see how these three facets come into play. Architecture Constraints provide a set of boundaries within which decisions can be made. Architecture Concerns provide requirements in addition to the technical requirements coming from the functional analysis. Architecture Principles are the proven approaches used in designs that help address the Concerns. The resulting outcome or solution is the Architecture. Architects are responsible for working within the Architecture Constraints to address Architecture Concerns using Architecture Principles so they can design and deliver a successful Architecture. They must create the blueprint and structure for designers and developers to follow in completing their work. This requires “balancing the forces” brought by these three areas, as there are many tradeoffs to consider. The Architect is in the center, managing the pull of each facet on the others, to come up with a solution that effectively addresses the overall needs.

Page 10

Bringing all these concepts together, Architecture is influenced by Constraints, Concerns, and Principles. Architecture Principles provide a means to address the Architecture Concerns, with the ultimate goal of satisfying the Architecture Constraints and Requirements. A helpful way to see how these all tie together is a mnemonic phrase using the prism analogy: “On the Architect’s DESK is a PRISM that helps achieve our GOAL.”

The Architecture Principles can be thought of as being a prism, which separates white light (the collection of requirements) into a spectrum of light (the individual Architecture Concerns). There are seven primary Architecture Concerns that apply to all Architectures, as shown in the prism, analogous to the seven colors in a rainbow. Note that this phrase does not address all possible Principles and Concerns, but it does cover several common ones. The business and technology requirements and implications of each Concern need to be understood before developing an Architecture. It is also important to note that no Concern stands on its own – each is related to one or more of the others, and addressing one Concern may make matters worse in another. In order to reach a balanced response to the requirements, the Architect must be pragmatic and drive compromise. Architects need to be able to look at the overall impact of Concerns in a solution and weigh the various tradeoffs.

Architecture Principles are standalone guidelines that Architects use to address Concerns, but following a Principle may have adverse effects on some of the Concerns that need to be solved. A Principle such as DRY (Don’t Repeat Yourself) has a significant impact on maintainability, but somewhat less impact on availability. It may improve performance by reducing code size, but it can also introduce processing bottlenecks where the shared logic becomes a chokepoint. KISS (Keep It Simple, Stupid) runs in parallel to DRY, but bears more on the operability of a system. Efforts to improve performance often involve specialized routines that violate the DRY and KISS Principles; yet if those Principles are never violated, the solution may not scale or perform efficiently.

Page 11


Page 12

Let’s examine some of the Concerns in more detail.


Page 13

Let us further examine the prism analogy and how it affects an Architect’s decision making. The prism represents the base Architecture Principles that the Architect adheres to when designing a solution. On the left, a series of Constraints and Requirements come in, and they reveal themselves on the right side as additional Architecture Concerns that need to be taken into account. It is easy to obtain the functional and technical requirements, since they are usually formally documented and given to the Architect. However, the non-functional requirements (NFRs) are much harder to obtain, as they are usually just implied by the client and not clearly identified or described. They surface through questions such as “How important is this function to your business? Can you lose a day of data and still be OK?”

The ability to define Architecture Concerns is a key task of Technology Architects. One needs the ability to read between the lines and determine the details of NFRs. It is important to note that these are simply another type of requirement that needs to be gathered; they do not come after business and technical requirements. NFRs are additional requirements that are often not explicitly written down, but still need to be documented. One way of doing so is to ask a series of “What if…” questions to obtain more clarification around availability and recoverability issues.

Typically, NFRs are defined in the Plan and Analyze stages of a project. They should be defined in the earliest phase possible because they have a significant impact on project timelines and budget. One of the main things that drives up the cost of a system is the complexity and time involved in building it. If the Concerns are not properly identified and designed for, things can go haywire and issues can build up quite quickly. In the next several slides, we will dive into each of the Architecture Concerns in more detail.

Page 14

Scalability is the ability for a system to grow and meet performance goals with an increased volume of activity and data. Simply put, scalability boils down to how the system copes (or would cope) with additional demand. In other words, if an application is able to handle 100 users on single-CPU hardware, then it should be able to handle 200 users when the number of processors is doubled.

Structuring a business for growth is an extremely important factor when building a business for profit and success. Every product will either reach its limits one day or tend to become inefficient over a period of time. One can’t have infinite scalability, so one needs to decide which types and how much scalability to buy. To frame the scalability question effectively, it’s critical to define three parameters: the target system life, the key scalability dimensions, and the required range of scaling.

The above-mentioned parameters that define scalability can be detailed into the factors, or key considerations, shown here.

User Volume: the current number of users (say 1,000) is expected to grow in x time frame (to 100,000 users).

Transaction Volume: the current transaction volume (say 10 per hour) has the potential to grow in x time frame (to 2,000 per hour).

Data Size and Volume: the current size of data (say 100 GB) has the potential to grow in x time frame (to 500 GB).

Additional questions include the location of data and whether or not the data is centralized or distributed around the world. These answers would in turn help the Architects to design the application to better suit the scalability demands for the future.

Page 15

Some of the approaches and proven practices to solving scalability issues are shown on this slide.

Horizontal and Vertical Scalability – A system can either be scaled up or scaled out. To scale vertically (or scale up) means to add resources to a single node in a system, typically by adding CPUs or memory to a single computer; it can also mean expanding the number of running processes. Horizontal scalability (or scaling out) means adding more machines of similar memory and CPU capacity. Both scaling techniques have their own pros and cons. Vertical scaling is bounded by the maximum capacity of a single physical server, so when a scalability solution needs to be implemented, one should look at scaling vertically first before trying horizontal scalability. Even though vertical scaling is cheaper, it has a negative impact on reliability, as it may leave the system as a single point of failure. On the other hand, vertical scaling has a positive impact on manageability. One rule of thumb for choosing between horizontal and vertical scalability is based on throughput: if the bottleneck is throughput, the better choice is to scale horizontally.

Design for Interchangeability – Whenever you can generalize a resource, you make it interchangeable. In contrast, each time you add detailed state to a resource, you make it less interchangeable. For example, if a database connection is unique to a specific user, you cannot pool the connection for other users. Instead, database connections that are to be pooled should use role-based security, which associates connections with a common set of credentials. For connection pooling to work, all details in the connection string must be the same. Also, database connections should be explicitly closed to ensure their return to the pool as soon as possible; relying on automatic disconnection to return the connection to the pool is a poor programming practice.
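
To make the explicit-close guidance concrete, here is a minimal Java sketch, not taken from this course, assuming a pooled javax.sql.DataSource configured with shared, role-based credentials; OrderDao, countOrders, and the orders table are hypothetical names. The try-with-resources statement guarantees the connection is returned to the pool promptly:

```java
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OrderDao {
    private final DataSource pool; // pooled, role-based credentials shared by all callers

    public OrderDao(DataSource pool) {
        this.pool = pool;
    }

    public int countOrders(String customerId) throws SQLException {
        // try-with-resources closes the connection explicitly, returning it
        // to the pool immediately rather than waiting for finalization.
        try (Connection con = pool.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "SELECT COUNT(*) FROM orders WHERE customer_id = ?")) {
            ps.setString(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }
}
```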

Have both Proactive and Reactive Scaling Strategies – To be effective, organizations must plan for both proactive and reactive scalability. Proactive scaling enables organizations to anticipate increased demand and pre-allocate system capacity, while reactive scaling ensures that extra resources are available to handle sudden, unanticipated demand. Proactive scalability demands a strong Architecture, sound data center practices, metric gathering tools, and a predictable scaling model. Reactive scalability requires an Architecture that provides simple horizontal scaling, ensures seamless scaling at the back end, and enables rapid reallocation of resources as business priorities change. Even when it is possible to plan for demand, it may not be possible or cost-effective to proactively purchase additional hardware resources to support new or existing applications. As a result, organizations must find ways to create an infrastructure that enables resources to be applied where they are needed most.

Use the Proper Caching Technique – Architecting and implementing a solution that keeps scalability linear and leaves enough room for increasing load as the business grows is a difficult task that requires experience. Caching is the most important tool in your toolbox. For frequently accessed information, even a short cache lifespan can be productive. Watch your cache hit rates: a non-effective cache is worse than no cache. Also consider cache farms; the idea behind cache farms is to move the memory devoted to the various caching layers into one large farm of caches.
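
As a hedged illustration of “watch your cache hit rates,” not course material, here is a small Java TTL cache that instruments itself; the class name and the time-based eviction policy are assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import java.util.function.Function;

/** A minimal TTL cache that tracks its own hit rate. */
public class TtlCache<K, V> {
    private record Entry<V>(V value, long expiresAtMillis) {}

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;
    private final LongAdder hits = new LongAdder();
    private final LongAdder misses = new LongAdder();

    public TtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public V get(K key, Function<K, V> loader) {
        Entry<V> e = entries.get(key);
        long now = System.currentTimeMillis();
        if (e != null && e.expiresAtMillis() > now) {
            hits.increment();
            return e.value();
        }
        misses.increment();
        V value = loader.apply(key); // fall through to the slow path
        entries.put(key, new Entry<>(value, now + ttlMillis));
        return value;
    }

    /** Watch this number: a low hit rate means the cache is hurting, not helping. */
    public double hitRate() {
        long h = hits.sum(), m = misses.sum();
        return (h + m) == 0 ? 0.0 : (double) h / (h + m);
    }
}
```

If hitRate() stays low in production, the cache is adding memory pressure and complexity without paying for itself, which is the “worse than no cache” case described above.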

Page 16

Scalability impacts other Architecture Concerns, and so the Architect must consider some of the many tradeoffs when designing a scalable system.

As the number of users on the system at a given time rises and the focus on scalability increases, the performance may continue to decrease until some action is taken. Additional interaction with the application would likely increase the demand on caching and database connection pooling, thus impacting the performance. A scalable system demands having complex component designs and infrastructure. Any further modifications to the system to adjust for the performance would in turn increase maintainability costs. The ability to configure, administer, and perform day-to-day operations on a system effectively influences application maintainability and operability. Without stress-testing the system in a real-world production scenario, one can't say that a given system is, or is not, scalable enough.

For more explanations and detailed examples of scalability, click on the link shown here.

Page 17

Performance represents the responsiveness of the system. This can be measured by the time required to respond to events, known as latency, or by the number of events processed in a period of time under specified constraints, also called throughput. Latency is the time delay between the moment something is initiated and the moment one of its effects begins or becomes detectable; the latency measure is most often applied to networks. Throughput measures the units of work processed, as identified by an end user, in a unit of time. Throughput is expressed in various units depending on the context, for example bytes/sec or transactions/sec. Throughput is the measure typically specified for batch operations or message queues.

Response time is the time period between the initiation of a user’s request to the system and the first appearance of a response from the system. Response time is measured for online applications, including user interfaces and integration with external online systems. Performance requirements should provide quantification around items such as the number of active users, number of transactions, average message size per transaction, or peak load conditions. For example, a performance requirement for a payment gateway could be: “The response time of the gateway for a payment should be less than 5 sec for a load of 100,000 payments per second. For loads between 100,000 and 500,000 transactions, no more than 4% of the transactions should time out (take more than 5 sec).”
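
Requirements like the gateway example above must ultimately be verified by measurement. Here is an illustrative Java probe, not from the course, that reports throughput and tail latency; for rigorous numbers a dedicated harness such as JMH would be used, and doWork is a hypothetical stand-in for the operation under test:

```java
import java.util.Arrays;

public class LatencyProbe {
    public static void main(String[] args) {
        int n = 10_000;
        long[] latenciesNanos = new long[n];
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            long t0 = System.nanoTime();
            doWork();                       // the operation under test
            latenciesNanos[i] = System.nanoTime() - t0;
        }
        long elapsed = System.nanoTime() - start;

        Arrays.sort(latenciesNanos);
        // Throughput: units of work per second over the whole run.
        double throughput = n / (elapsed / 1e9);
        // Worst-case behavior matters for predictability, so report the tail.
        System.out.printf("throughput: %.0f ops/sec%n", throughput);
        System.out.printf("p50 latency: %d us%n", latenciesNanos[n / 2] / 1_000);
        System.out.printf("p99 latency: %d us%n", latenciesNanos[(int) (n * 0.99)] / 1_000);
    }

    private static void doWork() {
        // Placeholder for the real request; here just a tiny computation.
        Math.sqrt(Math.random());
    }
}
```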

Several factors to consider when determining system performance are shown here.

Transaction Volume: represents the number of business transactions per unit time. For example, 500 transactions/hour, with a possible peak load of 800 transactions/hour.

Business Conditions: objectives of the business have implications on the performance of systems within the enterprise. The performance can be affected by the geographic distribution of users or future plans to provide additional constraints around transactions and data sizes. Additionally, the cost of running the service may impact profitability.

Concurrent Users: the number of users demanding services from the system at a given point of time. This indirectly measures data and transaction loads of the system.

Additional questions include the current storage Architecture, as well as response times and throughputs of existing systems. This information would help in architecting the application so that it can scale up without losing performance.

Page 18

A misconception about performance is that it equates to speed: the notion that poor performance can be improved simply by adding more processors. This is not true for many systems, especially real-time systems, because in real-time computing it is key to meet the individual timing requirements of each service. Moreover, mechanisms such as caching, pipelining, and multithreading, which can reduce average response time, can make worst-case response times unpredictable. Performance is not about raw speed but about predictability, whether worst case or best case. Predictability provides greater control over making decisions. There are several Architecture approaches and proven practices to address performance and enable a system to be more predictable.

Decoupling – This occurs when we can separate the different parts of a distributed system so that no one process ever needs to stop processing to wait for the others. This enables parallel processing.

Asynchronous Processing – Components operating in an asynchronous fashion operate in the most efficient way possible for the workload. Components linked by synchronous connections, on the other hand, tend to operate at the speed of the slowest part of the whole system.
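
As an illustration of these two practices together, and not an example from the course, the following Java sketch issues two independent back-end calls asynchronously so that neither blocks the other; fetchProfile and fetchOrders are hypothetical stand-ins for real service calls:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncPipeline {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4);

        // Kick off two independent calls; neither blocks waiting for the other.
        CompletableFuture<String> profile =
                CompletableFuture.supplyAsync(AsyncPipeline::fetchProfile, workers);
        CompletableFuture<String> orders =
                CompletableFuture.supplyAsync(AsyncPipeline::fetchOrders, workers);

        // Join only at the point where both results are actually needed.
        String page = profile.thenCombine(orders, (p, o) -> p + " | " + o).get();
        System.out.println(page);
        workers.shutdown();
    }

    private static String fetchProfile() { sleep(200); return "profile"; }
    private static String fetchOrders()  { sleep(300); return "orders"; }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```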

Segregation of Workloads – Each application style can have different and conflicting resource acquisition and release requirements. For example, a batch process works through many rows at a time, in contrast to an online system. To design high-performance operational workloads, one should look at scheduling ‘time windows’ and segregating nodes for conflicting workloads.

Apply Occam’s Razor Principle – When you are tackling performance problems, everything under the sun can seem to be the cause. In such a scenario, with multiple theories floating around, try the simplest solution first, whether it is related to database tuning, reducing network chattiness, or something else.

Look at the Whole then the Parts – Most people jump into tuning databases when they hear about performance issues. Instead, look at the entire system before zeroing in on the database, even if the customer says the “database is too slow.”

Create Repeatable Tests – During the process of tuning, one will be modifying code, queries, and designs. Never throw away the old versions. Also, when creating tests, ensure they can run against both the old and the new changes. This not only serves as a knowledge repository, but also makes it possible to compare changes in performance.

Page 19

Performance has significant impacts on many other Architecture Concerns, and so the Architect must take all of these into consideration when attempting to increase a system’s performance.

Scalability and performance go hand in hand. Decoupled systems are a key for scalability, which also increases performance. Security techniques, on the other hand, require additional processing steps which adversely affect performance. A system that performs well can usually handle an increase in activity or load, and is therefore more available. Also, when changes are made to a system to improve data load times or response times, aspects such as the system’s maintainability and operability may become less than optimal.

For additional information about performance, including techniques around performance modeling, click on the link shown here.

Page 20

A system is not immune to everything, and there are several external forces, such as natural disasters, that may cause a system to stop functioning entirely. The ability for a system to resume operation quickly after such a disaster occurs is the system’s recoverability. A key aspect of recoverability is for a system to maintain data integrity after any sort of abnormal termination. The data that the application may be processing should be recovered and put back online within a reasonable amount of time.

There are three levels of catastrophic failure in which a portion of a system is destroyed and needs to be recovered. The first is loss of the entire application code base, meaning the deployed source code. To prevent such an incident from being fatal, plans can be made to store code versions off site or to keep copies of a ghost image of the deployment environment. The second level is the set of configuration files used to set up the system. If these are damaged or lost, it can be catastrophic for distributed applications with several components and tiers. To reduce the severity of the damage in this case, scripts can be created to build and automate the setup of this data, and backups of the configuration files should be kept as well. Avoid hard-coding information wherever possible. The third level is the database environment. In this case, all application code tables and database configuration (essentially the data that is not mutable) can be recreated on the fly using DDL scripts. However, the transactional or business data that was gathered by the application over time cannot be recovered unless a backup strategy is implemented.

Most of the factors and considerations around recoverability involve system downtime.

System Uptimes and Downtimes: if a highly available system has a serious failure, the impact to the business could be drastic. Recoverability is essential to maintaining an application along with its data, in order to sustain a profitable business and avoid potential legal consequences.

Time to Restore Service: if a system is brought offline, the time it takes to recover necessary data and bring the system back online is a key measure. This turnaround time can have serious implications in life and death situations, such as in hospital monitoring systems. A time-tested recoverability strategy can be critical.

Scheduled Operations: having periodic backups and maintenance tasks can improve a system’s recoverability. The most recent backup can be used to bring a system back online and functioning quickly, even though the data may be a little stale.

Page 21

There are approaches to addressing recoverability issues in both hardware and software. It doesn’t necessarily have to involve frequent backups of data. The ability to monitor a system closely is helpful as well, so if an outage or disaster were to occur, Architecture teams can be more prepared to respond and take action to recover the system and its data.

Component and Application Watchdogs – Watchdogs are special components that can be used to monitor the operation of a system. They can be put in place at both the component and application levels. The observed system has to send a periodic life-sign, called a heartbeat, to the watchdog. If this life-sign fails to arrive at the watchdog within a certain period, the watchdog assumes a system failure has occurred, and therefore moves the controlled system into a fail-safe or fail-operational state.
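
A minimal Java sketch of this heartbeat pattern follows; it is illustrative rather than from the course, and the timeout value and fail-safe action are assumptions:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class Watchdog {
    private final AtomicLong lastHeartbeat = new AtomicLong(System.currentTimeMillis());
    private final long timeoutMillis;

    public Watchdog(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    /** Called periodically by the observed component as its life-sign. */
    public void heartbeat() {
        lastHeartbeat.set(System.currentTimeMillis());
    }

    /** Starts checking for missed heartbeats at the timeout interval. */
    public void start(Runnable failSafeAction) {
        ScheduledExecutorService checker = Executors.newSingleThreadScheduledExecutor();
        checker.scheduleAtFixedRate(() -> {
            long silence = System.currentTimeMillis() - lastHeartbeat.get();
            if (silence > timeoutMillis) {
                // No life-sign within the window: assume failure and
                // move the controlled system to its fail-safe state.
                failSafeAction.run();
            }
        }, timeoutMillis, timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```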

Session Replication – The value of clustering web applications lies in the ability to replicate user sessions to a secondary server in the cluster. Unfortunately, many application developers still do not take this into consideration: they store objects in sessions that cannot be replicated, or construct session objects that cause significant performance overhead during session replication. One should establish a good session replication design.
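
As a hedged sketch, not an excerpt from the course: in a typical Java servlet container, session attributes intended for replication must be serializable and kept small. CartItem below is a hypothetical example of replication-friendly session state:

```java
import java.io.Serializable;

/**
 * Session state intended for replication must be serializable and small.
 * Keep connections, threads, and other non-serializable resources out of it.
 */
public class CartItem implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String sku;
    private final int quantity;

    public CartItem(String sku, int quantity) {
        this.sku = sku;
        this.quantity = quantity;
    }

    public String sku() { return sku; }
    public int quantity() { return quantity; }
}
```

A hypothetical use inside a servlet would then be session.setAttribute("cartItem", new CartItem("A-100", 2)), keeping only small, serializable state in the session.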

Develop Contingency Plans – One can never be too prepared for the unexpected. A contingency plan should always be in place, especially for high-availability systems. The business should also prioritize which operations need to be recovered. There may be additional dependencies or data synchronization requirements to take into account, so that when a system is recovered, the corresponding data is also recovered to a consistent state and the system can function properly.

Periodic Test Recoveries – It is not enough to simply have a strategy for recovery. That strategy needs to be tested repeatedly for effectiveness and accuracy. The tests should start from scratch and measure the time and effort needed to get the application, as well as the data, back online and usable. Architecture teams must know where and how the data is stored and backed up in order to test the recovery strategy.

Page 22

We have seen how availability and recoverability go hand in hand. There is a direct correlation between the amount of uptime a system requires and the effort to recover data for that system and bring it back online in case of a disaster. A high level of recoverability will therefore maintain high availability. The additional measures to duplicate or back up data may cause significant overhead in system performance. The addition of such complex component designs and infrastructure also increases complexity and maintainability costs.

More details and explanations for the recoverability of a system can be found at the link shown here.

Page 23

Security is unusual in that it can form part of a cross-cutting Architectural aspect of a system as an Architecture Concern, while at the same time it can also identify a set of related functional requirements. For example, requirements stating that “only users with administrative rights can delete items on page X” are truly functional. However, the related role-based authentication and auditing needs are examples of concerns that are not always recognized completely in functional requirements, and need to be treated by an Architect as an overall Architecture Concern, particularly since they also impact maintainability and usability. One way of seeing the cross-cutting aspect of security is to realize that it is often provided by a number of items spanning architectures. These include: application services in an execution architecture, network and operating system layer hardware and software components in the infrastructure architecture, data encryption and backup standards in the data architecture, as well as organizational procedures and standards for maintaining and monitoring all of them.

Security is a measure of the system's ability to resist unauthorized attempts at usage and denial of service, while still providing its services to legitimate users. It is categorized in terms of the types of threats that might be made to the system. Security is an essential part of a solution, yet it is often overlooked until the system is nearing completion. Just like an effective error- or exception-handling scheme, security must be taken into consideration throughout the design and build of an Architecture.

When securing a system, all aspects need to be secure. This includes business confidential materials, such as ideas and business processes, engineering implementations, such as frameworks and protocols, as well as internal computations, such as logic and algorithms. The most common security measures include authentication, authorization, data protection, and auditing.

There are primarily four types of security: identity management, transactions, software, and information.

Identity management: can be subdivided into authentication and authorization. Authentication is the process a system uses to ensure that an entity is who or what it claims to be. Implementation mechanisms for authentication include passwords, smart cards, and biometrics such as retina scans and thumbprints. Authorization is about identifying who can do what. The most common approach is to have a trusted user, such as the system administrator, define user rights by individual or by class. For example, if you’re designing a medical records system, you might permit a patient to access all of his own history but not any data about anyone else.

Transaction security: supports the ability for a system to be audited. This includes properties such as integrity, confidentiality, and accountability. Integrity is where data and system resources are only changed in appropriate ways by appropriate people. Confidentiality refers to data that is only available to the people intended to access it. As part of accountability, non-repudiation means users can’t perform an action and then later deny performing it.

Software security: can add significantly to the cost of developing, maintaining, and supporting the software. Obfuscation is often helpful in foiling attempts by hackers, but it can make programs extremely difficult to debug.

Information security: the primary approach to information security is not to protect the information once it has been accessed, but to prevent access to it in the first place. To accomplish this, a variety of tools can be used, including intrusion detection software, user management tools such as password policy checkers, and network tools such as firewalls.

Local Laws: when working with sensitive data, different countries may have different levels of permissibility around what you can and cannot do with that data. Regulations such as SOX or HIPAA are enforced heavily in insurance and pharmaceutical industries and restrict the amount of data that can be shared. Applications that span geographical boundaries need to ensure the proper legalities are adhered to when dealing with sensitive client information.

Page 24

The authorization of a system works to ensure that correctly identified people, systems, and processes have access to perform business activities within the IT infrastructure. Since authentication and authorization services run throughout the application, it is better to have a separate security layer. It is a good idea to isolate the systems you use for authorization by wrapping them with your own authorization layer. This way you can swap out one type for another as the requirements of the system change over time. Here are some techniques that protect against illegal access, identity theft, and data exposure.

Challenge-Response Authentication –

This is a family of protocols in which one party presents a question (“challenge”) and another party must provide a valid answer (“response”) in order to be authenticated.
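
As a hedged illustration, not taken from the course, here is a minimal HMAC-based challenge-response exchange in Java; the pre-shared key and the single-process simulation of both client and server are assumptions made for brevity:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class ChallengeResponseDemo {
    public static void main(String[] args) throws Exception {
        byte[] sharedSecret = "pre-shared-key".getBytes(); // illustrative only

        // Server: issue a random, single-use challenge.
        byte[] challenge = new byte[16];
        new SecureRandom().nextBytes(challenge);

        // Client: prove knowledge of the secret without sending it.
        byte[] response = hmac(sharedSecret, challenge);

        // Server: recompute and compare in constant time.
        boolean authenticated =
                MessageDigest.isEqual(hmac(sharedSecret, challenge), response);
        System.out.println("authenticated = " + authenticated);
    }

    private static byte[] hmac(byte[] key, byte[] message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(message);
    }
}
```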

Authorization Techniques –

Some of the common technologies and techniques used for authorization include the Lightweight Directory Access Protocol (LDAP); role-based access control (RBAC), a generic name for technologies that provide authorization based upon user role; and file-system authorization provided through ACLs (access control lists). Additionally, a single sign-on (SSO) utility can be implemented to manage user authentication and authorization in a secure fashion.
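
To show the RBAC idea in miniature, here is an illustrative sketch rather than course material; the mapping from roles to permitted actions is hard-coded here, whereas a real system would load it from LDAP or a policy store behind its own authorization layer:

```java
import java.util.Map;
import java.util.Set;

public class RbacAuthorizer {
    // Role -> permitted actions; in practice this would come from LDAP or a policy store.
    private static final Map<String, Set<String>> ROLE_PERMISSIONS = Map.of(
            "admin",  Set.of("read", "write", "delete"),
            "editor", Set.of("read", "write"),
            "viewer", Set.of("read"));

    /** A user may act if any of their roles grants the action. */
    public static boolean isAllowed(Set<String> userRoles, String action) {
        return userRoles.stream()
                .map(ROLE_PERMISSIONS::get)
                .anyMatch(perms -> perms != null && perms.contains(action));
    }

    public static void main(String[] args) {
        Set<String> roles = Set.of("editor");
        System.out.println(isAllowed(roles, "write"));  // true
        System.out.println(isAllowed(roles, "delete")); // false
    }
}
```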

Security Policies –

A security policy is a specification document that defines how IT assets should be protected from attacks. Security policies typically include custom security requirements, privacy requirements, coding best practices, application design rules, and testing benchmarks. Ideally, security policies should also require that all security-related operations be concentrated in one segment of the application. You can then focus your resources on verifying and maintaining the security of that one critical module.

Penetration Testing –

Once the security policy is implemented in the code, a smoke test is needed to verify whether the security mechanisms operate correctly. This is done through penetration testing, which involves manually or automatically trying to mimic an attacker's actions and checking if any tested scenarios result in security breaches. When penetration testing is performed in this manner, it can provide reasonable assurance of the application's security after it has verified several paths through each security-related function.

Page 25

Application security needs to focus on protecting sensitive information from different users and systems. Ensuring the correct level of security can have an impact on the cost and complexity of a system. The complex component designs and infrastructure will have an impact on performance as well as usability. However, utilizing a packaged solution, such as a third-party single sign-on data store that is optimized for authentication and authorization, will most likely be more efficient than a similar mechanism designed and constructed by an organization. Additional demands for security can also create dependencies on metrics to maintain the third-party authentication application, and thus adversely affect maintainability. Since there are more components in the application, additional possible points of failure are introduced, and availability may decrease as well.

For more information regarding security, click on the link shown here.

Page 26

Maintainability is defined as the ability to perform successful repair actions within acceptable time, cost, and quality constraints. In other words, maintainability measures the ease and speed with which a system can be restored to operational status after a failure occurs or an enhancement is made.

Inherent in maintaining a system is the ability to change the system without injecting defects or degrading any non-functional abilities. This plays into the system design, which should allow modifications to be made in a modular fashion. Any changes to the system should be made quickly and at a low cost. Some key things to consider when addressing maintainability issues are shown here.

Availability and Uptime: any fixes or upgrades should be performed within the availability requirements of the system. Systems that provide account information need to be available 24/7, so routine maintenance tasks need to be done during off-peak hours. Service Level Agreements can also identify time windows that will impact a system’s maintainability.

Application Complexity: has a direct correlation to maintainability. It is relatively easy to maintain a system that is self-contained and has a web-facing front end to monitor its status. However, a system that has multiple dependencies or no direct access to server and log information is significantly more difficult to maintain. The client may have strict enforcement around sensitive data and SOX compliance, which can make issues hard to identify and increase the time to perform maintenance.

External Dependencies: may impose limits on the maintainability of a system. For a three-tier web application that is self-contained, maintenance can be performed routinely with minimal impact to users. On the other hand, a process-based application using message queues and web services to feed data to downstream systems will have several moving parts. If any of these components fail, the application will face major problems, affecting other critical systems as well, and the problem could be difficult to repair.

As an example, if it is said that a particular component has 90% maintainability in one hour, this means that there is a 90% probability that the component will be repaired within an hour. This is also time during which the system may not be available to its users. This is a clear tradeoff between maintainability and availability that one needs to keep in mind while describing maintainability. Another example is upgrading an OS with a security patch as part of the maintenance process: can this maintenance be done while keeping the application available to the users?

Maintainability is not just about how the system is built for maintenance, but also about the capability of the organization to maintain the system; usually the second part is not discussed. Some of the characteristics of a maintainable application include changeability, modifiability, and understandability. Changeability, or modifiability, is the ability to add, modify, or delete methods and components in a modular fashion. Consider a scenario where one needs to change the discount given to online shopping users. If this change in discount can be implemented by changing a value in an external configuration or property file, without restarting the application, then the system can be termed maintainable in this situation. Understandability refers to building the software so that it is easily understood and changes can be made.
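
A minimal Java sketch of that discount scenario follows; DiscountConfig, the discount.properties file, and the property key are assumed names, not course material. The discount is re-read from an external property file on each request, so it can be changed without a restart:

```java
import java.io.IOException;
import java.io.InputStream;
import java.math.BigDecimal;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class DiscountConfig {
    private final Path configFile;

    public DiscountConfig(Path configFile) {
        this.configFile = configFile;
    }

    /**
     * Re-reads the file on each call, so operations staff can change the
     * discount by editing discount.properties with no redeploy and no restart.
     */
    public BigDecimal currentDiscount() throws IOException {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(configFile)) {
            props.load(in);
        }
        return new BigDecimal(props.getProperty("online.discount.percent", "0"));
    }
}
```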

Page 27

Architectural design for maintainability requires an application that is serviceable, which means it must be easily repaired, and supportable, which means it must be cost-effectively kept in or restored to a usable condition. Better yet, if the design includes a durability feature called reliability, which is the absence of failures, then one can have the best of all worlds. Shown here are some approaches and proven practices to addressing maintainability issues.

Redundancy – If maintenance is necessary and system operations will be interrupted, redundant installations should be considered in order to permit maintenance without interrupting system operation.

Use Hooks –

One “architecture trap” is caused by the fact that the Architecture is widely used throughout the system, so a change to the Architecture may impact almost all of the application logic. If the application is advanced in testing (or live), the risk may be so great that you simply will not be able to make the amendment. You must therefore provide ways of adding logic later without opening up the existing code. This is normally done by inserting “hooks” (also called user exits, or named procedures) in the logic, usually a call to a dummy function, which can be replaced by a real one later if needed.
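
One way to realize such a hook in Java, as a sketch under assumed names rather than the course’s own example, is an interface whose default implementation is the deliberate no-op, so real logic can be plugged in later without opening up existing code:

```java
/** A named extension point inserted where future logic may be needed. */
public interface PricingHook {
    /** Default is a deliberate no-op: the "dummy function" described above. */
    default double adjustPrice(double basePrice) {
        return basePrice;
    }
}

class PricingEngine {
    private PricingHook hook = new PricingHook() {}; // no-op until replaced

    /** Later, a real implementation can be plugged in without reopening this class. */
    public void setHook(PricingHook hook) {
        this.hook = hook;
    }

    public double priceFor(double basePrice) {
        return hook.adjustPrice(basePrice); // hook call at the designated point
    }
}
```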

Robust Error Handling – Building a robust error- and exception-handling mechanism keeps the system’s availability and uptime high, and makes maintenance tasks easier to complete. The ability to closely monitor a system as it runs may help predict when outages or problems will occur. When the application does encounter errors or exceptions, the ability to gracefully recover or continue providing services is a big plus in terms of availability and maintainability.
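
A small Java sketch of graceful recovery follows; it is illustrative only, and the retry count and fallback value are assumptions. The operation is retried a few times and then degrades to a fallback instead of failing outright:

```java
import java.util.function.Supplier;
import java.util.logging.Logger;

public class Retry {
    private static final Logger LOG = Logger.getLogger(Retry.class.getName());

    /** Retries a call a few times, then degrades gracefully to a fallback. */
    public static <T> T withRetry(Supplier<T> call, int attempts, T fallback) {
        for (int i = 1; i <= attempts; i++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                // Log enough detail for maintenance staff to diagnose later.
                LOG.warning("attempt " + i + " of " + attempts + " failed: " + e);
            }
        }
        return fallback; // keep providing (degraded) service instead of crashing
    }

    public static void main(String[] args) {
        String result = withRetry(() -> {
            throw new RuntimeException("backend unavailable");
        }, 3, "cached default");
        System.out.println(result); // prints the fallback
    }
}
```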

Prevent Domino Effect – For applications that have many dependencies, any changes made may have unforeseen negative effects on other systems as well. Preventing these downstream problems caused by changes in one system is important for maintainability.

Page 28

As we have discussed, maintainability is closely tied to the availability of a system. As efforts are made to keep uptimes high, the system’s maintenance costs are also kept low. Some changes that optimize database calls or the display of presentation objects may have intricate component designs and violate the original design of a system to be modular. In this case, the emphasis on maintainability can cause a performance hit. The simplicity may also direct the design away from a complex infrastructure that would allow for scalability.

Even though Fred Brooks, a renowned software engineer and computer scientist, noted that the maintenance burden of a system increases as changes accumulate, this may not be true all the time. If the design is modular, or componentized, then the change area is limited and may not impact the entire system. So, having a modular system reduces maintenance cost. Maintainability does not restrict itself to the application level, either; it also includes the ability to manage and maintain the code. Improving software maintainability helps reduce lifecycle costs as well.

For more details on ways to improve maintainability, click on the link shown here.

Page 29

There are various definitions for the quality attribute of operability. While some definitions compare this with reliability, operability should be defined from the system paradigm. It can be simplified as the usability plus efficiency of resources. Usability is the effectiveness, efficiency, and satisfaction with which specified users can achieve specified goals in particular environments. Thus operability relates to the ability of an Architecture to allow operations to be performed accurately and consistently, at the desired cost, and with the expected impacts.

The three parameters generally considered while defining operability are effort, customizability of components, and administrability of components, modules, and systems. Customizability and administrability can be implemented by adding tracing levels and moving commonly changing information to external configurable files, such as XML or property files. Increasing customization and reducing administrability increases cost.

There are many questions one can ask to help determine operability and usability requirements. Is the system giving the users what they need? Can they effectively use the system? Can they see what it has (and hasn’t) done? The answers to these questions can help determine if a system is performing and operating as desired. Some additional considerations are also shown here.

Ease of Operation: users can wreck a system unintentionally if the system is not designed to be usable. If a user can’t work out whether the system has updated, he will often go in and look (creating unnecessary further transactions that overload the system), or he may try to do the update again (and may succeed in creating duplicate records).

Awareness of End-Users: the other side of the coin is that real users often don’t want or care about the “gold-plated” features that are stated as “requirements.” Usually, as soon as users see something that meets 80% of their real day-to-day needs well, they are delighted to take it, with or without the bells and whistles. For example, if the key objective of going GUI or client-server is to gain concurrent access to multiple databases at once, you may not need to develop new applications at all; a common front end giving users access to existing systems via windows may be all that is required.

Page 30

A key to designing a system with high operability is to make sure it will be utilized for the correct purpose. Here are some approaches and proven practices for operable systems.

Understand System Use –

The system will be of no value to end users if it does not fulfill the intended purpose for its existence. Ask users what they really want and design accordingly. Once users understand the purpose of the system, they will also learn to use the system effectively, since it adds value to their daily work.

Consistency –

By adhering to a standard look and feel, users will become more accustomed to the behavior and interaction of a system. As their familiarity increases, fewer errors will occur due to unfamiliarity.

Principle of Locality –

Having related operations close to each other on the screen makes working with the application more efficient. This has a direct correlation to the consistency of the UI design as well.

Define Standards and Processes –

One of the ways to increase the operability of a system is to use standard protocols and APIs. Even the use of hooks for later modification has a positive impact on operability. A component with fewer interfaces and many parameters is more customizable than a component with many interfaces and few parameters.

WITS: Walk In Their Shoes –

Remember the key words: walk in the customer’s shoes and think like them. Pay attention to their working environment and the language they use, which can reveal a lot about day-to-day processes. By talking to them and observing them, you can help them articulate what they want and in turn design the correct solution.

RIFT: Remember It’s For Them –

This is similar to WITS, where the primary goal is to build a usable system for the end user. Take into consideration their objectives, not just yours. Design the solution to ensure efficiency in achieving those objectives. Continue to ask for feedback on progress, and hold checkpoints to make sure both sets of goals stay aligned. Once a solution is in place, remember to provide training and support so the customer can continue forward on their own.

Page 31

Operability and usability have many impacts on the other Architecture Concerns. An increase in scalability can impair the ability to configure, administer, and perform day-to-day operations on a system effectively, and therefore negatively affect operability. Additional protocols to enforce security will also impact usability. A main focus for users is to have a system that is easy to use; minimizing the effort and “fuss” needed to operate the system tends to have a positive impact on operability.

Additional information around operability and usability can be found on the link shown here.

Page 32

The availability of a system is the probability that it will be operational when it is needed. Availability is dependent on the reliability of a system. The more reliable a system is, the more available the system will be.

Availability is measured from the user’s point of view. A system is available if the user can use the application - otherwise it is unavailable. Accordingly, availability must be measured end to end - all components that are needed to run the application must be available. Many IT organizations mistakenly believe that availability is simply equal to main server or network availability. Some may only measure the availability of critical system components. These are grave mistakes. A user may equally be prevented from using an application because his or her PC is broken, or infected with a virus so that personal data is unavailable.

Availability is typically characterized by the mean time to failure (MTTF) and the mean time to repair (MTTR). The mean time to failure refers to the average length of time the application runs before failing, and the mean time to repair refers to the average length of time needed to repair and restore service after a failure. Availability is usually expressed as a percentage of uptime in a specified time frame. When uptime is specified, one should also look at availability from a downtime perspective. Generally, the term downtime refers to periods when a system is unavailable. Downtime can be either planned or unplanned. Unplanned downtime is disruptive because its timing is difficult to predict; causes include software and hardware failures, power fluctuations, network connectivity issues, and natural disasters. Planned downtime, on the other hand, is the result of repair, backup, and upgrade operations, including database backups, batch processing, and periodic network maintenance.
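Although the course does not spell it out, a standard steady-state formulation ties these two measures together:

```latex
\text{Availability} = \frac{\text{MTTF}}{\text{MTTF} + \text{MTTR}}
```

For example, a system that runs 990 hours on average between failures and takes 10 hours on average to restore has an availability of 990 / (990 + 10) = 0.99, or 99% uptime.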

In addition to the uptimes and downtimes of a system, there are other driving factors to be considered for availability.

Cost of Downtime: depends on the actual loss of revenue based on existing benchmarks, such as lost sales or idle time, as well as the opportunity cost of potential customers turning to competitors. While the actual loss of revenue can be measured, opportunity costs are more abstract and difficult to calculate. The cost of downtime is the prime factor in building high availability systems.

Available Budget: also factors into a system’s availability. High availability is achieved primarily by reducing unplanned downtime, and unplanned downtime is reduced primarily through redundancy. As redundancy increases, it has a direct impact on cost, complexity, and manageability. For example, additional checkpoint hardware needs to be installed to manage network, server, and disk redundancies. As more and more redundancy gets built into the system, the system tends to become more complex, which in turn requires skilled engineers to manage it. The installation of monitoring systems, dashboards to keep track of faults, and extensive audit trails also adds to the manageability effort.

Page 33

There are several approaches to building Architectures that support high availability within systems. In an effort to minimize the amount of unplanned downtime, systems can be designed to tolerate faults. There are three main components to fault tolerant systems: detection, recovery, and treatment.

Fault detection is achieved by conducting replication checks, where multiple replicas of the same component perform the same service and the results are compared. Additionally, timing checks are used for timing faults, where processing times are recorded for known intervals. Fault detection can also be accomplished through other diagnostic checks and background audits where results are measured against known inputs.
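As a simple illustration of a replication check, here is a minimal Java sketch (the checkedCall helper is hypothetical, not part of the course material): the same request is sent to two independent replicas and the results are compared, with a mismatch treated as a detected fault.

```java
import java.util.function.Supplier;

public class ReplicationCheck {
    // Run the same service on two independent replicas and compare the
    // results; a mismatch signals a fault in one of the replicas.
    public static <T> T checkedCall(Supplier<T> primary, Supplier<T> replica) {
        T a = primary.get();
        T b = replica.get();
        if (!a.equals(b)) {
            throw new IllegalStateException(
                "Replication check failed: " + a + " vs " + b);
        }
        return a;
    }
}
```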

Fault recovery, and successful confinement of damage, consists of determining a boundary of error propagation in the event of a failure. The system is then restored to a previous known valid state. This employs checkpoint/rollback techniques, in which the system is rolled back to the last check-pointed state; in this technique, the system is taken offline to perform the recovery.

Fault treatment consists of various types of system standby to address and recover from faults. Hot standby implies the component is fully active and duplicating the function of the primary component. Thus, if an error occurs, recovery can be practically instantaneous. Warm standby indicates the standby component is used to keep the last checkpoint of the operational component that it is backing up. When the principal component fails, the backward error recovery can be relatively short. Lastly, cold standby means the standby component is not operational, so that its state needs to be changed fully when the cutover occurs.

Along with fault-tolerant design, many activities around planned downtime can also be put in place to increase availability.

Repairs are intended to remove faulty components and restore the system to a functional state. Several health monitoring techniques enable the location and repair of faulty components.

Backups are intended to preserve critical data on storage media to avoid loss of data from disk or storage failures. Hot backups can provide a convenient solution because, unlike a conventional cold backup, they do not require downtime.

Upgrades are implemented to replace current hardware and software with newer or enhanced versions. Software deployment techniques, along with dynamically loadable components, make hot upgrades possible without restarting the system.

One of the traps encountered when solving Architecture availability issues is to jump immediately to infrastructure solutions. In fact, there are other ways to address availability needs, the most prominent being the software design angle. Availability through software design means designing software components so that changes can be made even while the system is running, without bringing it down. For example, by implementing dynamic loading of data and using external configuration, one can implement a good solution to increase availability. Consider a situation where tax-related data can be changed in an external configuration file, and the change is automatically made known to all clients. The application in turn refreshes with the new tax data, without bringing down the system.
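A minimal Java sketch of that idea follows, assuming a hypothetical tax-rates.properties file with one region=rate entry per line. The class polls the file and atomically swaps in the new rates, so readers are never blocked and the system never has to come down. (In a real system the scheduler would also be shut down cleanly.)

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TaxRates {
    private final File file;
    private volatile Properties rates = new Properties();
    private volatile long lastLoaded = 0;

    public TaxRates(String path) {
        this.file = new File(path);
        reloadIfChanged();
        // Poll for changes so updates take effect without a restart.
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(this::reloadIfChanged, 30, 30, TimeUnit.SECONDS);
    }

    private void reloadIfChanged() {
        if (file.lastModified() <= lastLoaded) return;
        Properties fresh = new Properties();
        try (FileInputStream in = new FileInputStream(file)) {
            fresh.load(in);
            rates = fresh;                      // swap atomically; readers never block
            lastLoaded = file.lastModified();
        } catch (IOException e) {
            // Keep serving the previous rates if the reload fails.
        }
    }

    public double rateFor(String region) {
        return Double.parseDouble(rates.getProperty(region, "0.0"));
    }
}
```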

Page 34

Availability is not only a function of reliability; it is also a function of maintainability. From the maintainability perspective, availability is the probability that a system is not failing or undergoing a repair action when it needs to be used. For example, if we have to apply patches to the system as part of maintenance, does the system need to be brought down as planned downtime, or can the upgrade be done without affecting end users? As the time to repair or fix a system increases, availability decreases.

Several performance techniques, such as co-location and local caching, may limit a system’s availability because these techniques build on machine affinities. Additionally, unrealistic customer expectations about a system’s availability can impact operability.

Several other techniques and approaches in dealing with availability issues are explained following the link shown here.

Page 35
Page 36

Now that we have seen the various factors driving Architecture, and the corresponding Concerns emanating from them, let’s dive into the Principles that can be used to address those Concerns.


Page 37

How are Architectural Concerns addressed? Through proven techniques and approaches summarized as Architecture Principles. Architecture Principles help to address complexity and manage risk, primarily through abstraction - the ability to organize complexity into a set of understandable constructs that can be implemented and reused. There is a wide range of Principles that could be covered; we’ll focus on a few of the key ones in this section, as an introduction to the topic.

Page 38

If you have more than one way to express the same thing, at some point the two or three different representations will most likely fall out of step with each other. Even if they don't, you're guaranteeing yourself the headache of maintaining them in parallel whenever a change occurs – and changes will occur. “Don't Repeat Yourself” is important if you want flexible and maintainable software.

While designing components, the following questions need to be asked: What is the function of this component? What is the minimum information required to perform this function? If you are altering an existing component, you must take even more care, as you can affect many other parts of an application. You should ask yourself these questions when adding functionality to a component, while also keeping any additional dependencies in mind.

Componentization -

This Principle can be achieved through componentization, where an application is designed as independent components, each with specific responsibilities. In order to create a maintainable system, one needs to gather commonly used behavior together into individual components. Maintaining such independent components is easier than managing objects with common behavior scattered all over the place. With component-based designs, any change that needs to be made can be made in one place. To put it simply, information must pass down from component to component, but never back up the component hierarchy. Componentization also helps in distributing work among developers, and eases the deployment effort.
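As a small illustration of DRY through componentization, the Java sketch below (the class and the business rule are invented for the example) gathers a rule that might otherwise be copy-pasted across modules into a single authoritative component.

```java
// Before: every module re-implemented its own date check, and the
// copies drifted apart. After: one component owns the rule.
public final class DateRules {
    private DateRules() {}

    // Single authoritative definition, reused everywhere; a change
    // to the rule now happens in exactly one place.
    public static boolean isBusinessDay(java.time.LocalDate date) {
        java.time.DayOfWeek day = date.getDayOfWeek();
        return day != java.time.DayOfWeek.SATURDAY
            && day != java.time.DayOfWeek.SUNDAY;
    }
}
```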

Reuse Plug-ins -

Reusing plug-ins instead of writing new ones can accelerate applications’ time to market. Organizations can eliminate time spent writing code to integrate applications with legacy software systems, and quickly meet new business needs by editing existing plug-in functionality instead of starting from scratch. This also reduces the time it takes for new developers to become productive.

Several other techniques and applications of DRY are explained following the link shown here.

Page 39

The KISS Principle states that design simplicity should be a key goal and unnecessary complexity should be avoided. Extra features are often not needed; an approach that seems “too easy to be true” is frequently the best way. A very straightforward approach may seem less glamorous and less dramatic, but that simple approach should indeed be taken. If something is complex, the best strategy is to break it down until it becomes simple. The Principle also emphasizes a standard style across applications within an organization, whether in the design of components, user interfaces, or implementation techniques. Standardization is a key factor for simplicity.

This Principle goes closely with another popular acronym in the Agile world, YAGNI, short for “You Ain’t Gonna Need It.” YAGNI suggests that programmers should not add functionality until it is necessary. Ron Jeffries, the popular XP evangelist, writes, “Always implement things when you actually need them, never when you just foresee that you need them.” The reason for promoting YAGNI is that until a feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work correctly even if it is eventually needed.

By keeping things simple, one can avoid expensive maintenance and manageability costs.

For additional explanation of KISS and its applications, click on the link shown here.

Page 40

The Architecture Principle to Isolate ensures that Technology Architecture contains only Architecture logic. It makes sure that business logic is included in the Business Architecture and not in the Technology Architecture. Similarly, it ensures that data itself is not included in the Technology Architecture; the Data Architecture is, but the actual data is not. The ability to Isolate also contributes to the resilience of a system by providing failure boundaries that permit part of a system to fail without compromising the whole.

Isolation protects system integrity by preventing one process from interfering with another. It is generally achieved by separating frequently changed components from less frequently changed ones, and by separating data from process. Components can also be designed with low coupling, to relieve some of the interdependencies within a system. Nowadays many data-related configurations are moved to external configuration files, such as XML and properties files; any changes can then be made there without touching the business logic.

The “Model, View, Controller” design pattern is often useful in this scenario as well. It decouples the data by letting the Model handle that side of things, while the View decouples the presentation of that data. The Controller is the only part that knows about both the Model and the View; it controls the flow between the two components and links them together. When the view changes, the business logic does not need to change alongside it. This in turn aids the maintainability and extensibility of the application.
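A bare-bones Java sketch of the pattern (class names invented for illustration) shows how only the Controller touches both sides:

```java
// Model: owns the data, knows nothing about presentation.
class Account {
    private double balance;
    double getBalance() { return balance; }
    void deposit(double amount) { balance += amount; }
}

// View: renders the data, knows nothing about business rules.
class AccountView {
    void render(double balance) {
        System.out.printf("Balance: %.2f%n", balance);
    }
}

// Controller: the only part that knows about both Model and View.
class AccountController {
    private final Account model;
    private final AccountView view;

    AccountController(Account model, AccountView view) {
        this.model = model;
        this.view = view;
    }

    void deposit(double amount) {
        model.deposit(amount);          // update the Model
        view.render(model.getBalance()); // refresh the View
    }
}
```

Because the View never touches the Model directly, either side can be replaced - a new screen layout, a new storage scheme - without changing the other.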

Keep Data and Process Separate -

One technique for isolation is keeping data separate from the process. Data related to an application changes more often than the process acting on that data. Separating the two makes it possible to modify, update, and maintain only the areas of change rather than the unchanged modules. Components should also be designed so that failed components can be isolated, reducing the domino effect on the rest of the components.

Differentiate Data and Code -

Almost any aspect of the system that needs to be flexible can be structured as data, at least in theory. However, there are a number of practical considerations as well. The key point to remember is that designing with the Isolation Principle does not eliminate the Encapsulation Principle. The ability to differentiate which aspects of an application should be stored as data versus code is significant for isolation. Current interest rates are an obvious example of something that must be held as a data variable because of its likelihood to change. The biggest issue will be not whether, but where, to hold it as data, given the speed and unpredictability of changes.

For more information on addressing concerns with isolation, follow the link shown here.

Page 41

The Architecture Principle to Encapsulate is used as a generic term for techniques that package and “wrap” functionality, or that realize data abstraction. The goal is to design systems so that functions can be optimized independently of other functions, so that failure of one function does not cause other functions to fail, and in general to make complex interdependent systems easier to understand, design, and manage. Encapsulation therefore implies mechanisms to support both modularity and information hiding. Like abstraction, the word “encapsulation” can describe either a process or an entity. As a process, encapsulation means the act of enclosing one or more items within a physical or logical container; as an entity, it refers to a package or enclosure that contains one or more items. This in turn helps not only to separate concerns into multiple layers, but also to abstract the necessary information for each layer.

The idea of encapsulation comes from the need to cleanly distinguish between the specification and the implementation of an operation and also the need for modularity. Each layer then provides a standard set of interfaces to the layers above it and below it. It’s a type of modularity that keeps most application developers from having to concern themselves with the “lower-level plumbing” of a system.

Most programming paradigms aid developers in improving encapsulation. For example, object-oriented programming languages such as Java can separate concerns into objects, and a design pattern like MVC can separate content from presentation and data processing (the model) from content. Encapsulation enables object-oriented experts to build flexible systems that can extend as the business extends. Every module of the system can change independently, without impact to the other modules.
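As a small Java illustration (the sensor example is invented), the public interface below acts as the specification, while the hidden state is the implementation that can change freely without affecting callers:

```java
// The public methods are the specification; the internals are hidden
// and can change without affecting any caller.
public class TemperatureSensor {
    private double celsius;   // hidden state; callers cannot touch it directly

    public void record(double celsius) {
        this.celsius = celsius;
    }

    // Callers depend only on this operation, not on how the value is
    // stored internally (Celsius today, Kelvin tomorrow).
    public double readFahrenheit() {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}
```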

To find more applications and explanations of this Principle, click on the link shown here.

Page 42

The Architecture Principle to Group Related Functions together, with low coupling and high cohesion, leads to modular design. The goal of grouping related functions is to increase modularity and to ensure that changes in one module have little effect on other modules. The rule of modularity indicates that the only way to write complex software that won’t fall on its face is to build it out of simple modules connected by well-defined interfaces, so that most problems are local and you can have some hope of fixing or optimizing a part without breaking the whole.

Research has found that more defects are discovered when there are many modules with few lines of code than when there are fewer modules with more lines of code. So one has to consider the right module granularity at design time.

Coupling -

Coupling, or dependency, is the degree to which each program module relies on each of the other modules. Coupling can be “low” or “high.” Low coupling refers to a relationship in which one module interacts with another module through a stable interface and does not need to be concerned with the other module’s internal implementation. With low coupling, a change in one module will not require a change in the implementation of another module. This in turn helps in upgrading components separately and reducing the defects arising out of dependencies. Low coupling is often a sign of a well-designed system and, when combined with high cohesion, supports the general goals of high readability and maintainability.
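The following Java sketch (types invented for illustration) shows low coupling through a stable interface: the report module never sees the calculator’s internals, so the implementation can be swapped without touching the report code.

```java
// The stable interface: the only thing ReportService depends on.
interface TaxCalculator {
    double taxOn(double amount);
}

class ReportService {
    private final TaxCalculator calculator;   // no knowledge of the implementation

    ReportService(TaxCalculator calculator) {
        this.calculator = calculator;
    }

    double totalWithTax(double amount) {
        return amount + calculator.taxOn(amount);
    }
}

// The implementation can be replaced (flat rate, tiered, remote service)
// without any change to ReportService.
class FlatRateCalculator implements TaxCalculator {
    public double taxOn(double amount) { return amount * 0.07; }
}
```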

Cohesion -

Cohesion describes how closely the activities within a single component, or a group of components, are related to each other. It is a measure of how strongly related and focused the responsibilities of a single module are. A highly cohesive component consists of subcomponents or activities designed toward a common goal. Cohesion is an ordinal type of measurement and is usually expressed as “high cohesion” or “low cohesion.” Modules with high cohesion tend to be preferable because high cohesion is associated with several desirable traits of software, including modularity, flexibility, and scalability, whereas low cohesion is associated with undesirable traits such as being difficult to maintain, test, reuse, and even understand.

Grouping related functions together provides common shared utilities and shields most developers from complexity in other areas. Good modularity in a Technical Architecture can significantly aid in maintenance and future extension. It also supports code reuse, simplicity, testing, and interoperability.

Additional information for the Principle around grouping related functions is displayed at the link shown here.

Page 43

The use of layering is closely related to the separation of concerns - the process of breaking a complex program into distinct features that overlap in functionality as little as possible. A key consideration for this Principle is to isolate the impact of change by separating the concerns; it is therefore closely related to the Isolate Principle. Generally, changes are either functional or technical in nature - few are both - and the objective is to avoid having to tinker with complex technical logic to implement a simple functional change (or vice versa).

One way to implement layering is by separating business and technology logic into separate layers. Requirements mostly describe the business problems faced by a client, and the logic to solve those business problems is complemented by the technology logic. There is no simple definition of what can be regarded as technical, but two guidelines are: if the processing is platform or environment specific, it is technical; and if the average business analyst does not have the skills to design it, it is technical. However, one key consideration is not to exclude too much detail of the database design from business analysts. While the physical database implementation may be technical, the data model and logical database design are based on the business, and business designers will usually require some detailed knowledge of the database design to design efficient database calls.

Candidates for hiding behind layers include anything temporary or likely to change, and anything technically complex, such as communications, database access logic, and file handling logic. Most changes affect only the modules within one layer; the layer can be redesigned and slotted back into the Architecture. Skills can be focused on a layer-by-layer basis. Layering creates a conceptual model of the system for people to work within, and each layer shields the layer above it from the layer below, so when changes are made only one layer has to be amended.

Technical logic changes less frequently than business logic. By keeping them separate, one can create a stable and maintainable system: whenever changes need to be made to the business logic, they can be made without worrying about any impact on the overall Technical Architecture. Beware of people stuffing business fixes into the Architecture “because it’s simpler”; this adds complexity to the Architecture and limits its flexibility. The use of layering helps the Architect create checkpoints at logical boundaries, which in turn reduces the maintenance effort. Points of failure can also be easily identified if a layered Architecture is in place. This also helps in building a scalable system.
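Here is a minimal Java sketch of layering (class names are illustrative): each layer talks only to the layer directly below it, and the technical database logic is hidden behind an interface so the physical implementation can change without touching the layers above.

```java
// Presentation layer: talks only to the business layer below it.
class OrderScreen {
    private final OrderService service;
    OrderScreen(OrderService service) { this.service = service; }
    void submit(String item) { service.placeOrder(item); }
}

// Business layer: functional logic, no database or UI details.
class OrderService {
    private final OrderRepository repository;
    OrderService(OrderRepository repository) { this.repository = repository; }
    void placeOrder(String item) { repository.save(item); }
}

// Technical layer: database access hidden behind an interface, so the
// physical implementation can be swapped without amending other layers.
interface OrderRepository {
    void save(String item);
}
```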

Several applications of layering and other uses of this Principle are explained following the link shown here.

Page 44

Lastly, let’s look at some tradeoffs among the Principles and Concerns that Architects must consider.


Page 45

As described in Wikipedia, a trade-off usually refers to losing one quality or aspect of something in return for gaining another quality or aspect. It implies a decision to be made with full comprehension of both the upside and the downside of a particular choice.

Architects must know these tradeoffs, as this knowledge helps in prioritizing the system-quality Principles and Concerns, and in turn aids the design decision-making process. Even though the customer might have various NFRs as part of their requirements, it is the duty of the Architect to help educate the customer about the competing Concerns and how to choose the ones that benefit the application the most. Good tradeoffs produce usable applications and winning solutions.

Page 46

Architects must work closely with the client to identify and address the Architecture Constraints, Concerns, and Principles. At a high level, some of the steps in conducting a tradeoff analysis with the client are shown on this slide.

First, understand the constraints as well as the functional and technical requirements imposed by the client. From these requirements, identify the Architecture Concerns.

Next, make a note of all the issues that arise from the initial discussions, and explain to the client the nature of the Concerns - how they impact each other positively as well as negatively, depending on the implementation of the Architecture.

Finally, help the customer to prioritize the Concerns while keeping the domain, technology, and requirements in mind. The client may not be aware of the significance of choosing one Concern over another, and it is the job of the Technology Architect to provide the necessary advice and guidance in making a successful decision.

Page 47

This diagram represents the impact of each of the Architecture Concerns on the others, as described in the previous sections.

For example, if the quality attributes mentioned by the customer are security and availability, we have to make a tradeoff: security strives for minimality (restricting access and exposure), whereas availability strives for maximality (maximizing access and redundancy). It would therefore be difficult to build a highly secure system without compromising the availability of that system. If the customer is looking to build a high-performing application, the design would need a multi-threaded system with a good caching mechanism; such systems need more maintenance than a single-threaded system. A modular system is easier to maintain than a non-modular application, and modularity also impacts scalability.

Feel free to click on the various Architecture Concerns to link to additional material and further explore some of the impacts and strategies associated with them.

Page 48

We touched very briefly on many different topics in this course, so be sure to visit the “ATTACHMENTS” link at the top to explore areas in further detail. You can download the content of the presentation and audio transcripts via the same link.

You will now take a short quiz to help reinforce what you have learned. You will need to answer all of the questions correctly to pass this final quiz. Once you pass the final quiz, mark yourself complete in myLearning to obtain credit for this course.

Thank you.

Page 49

Please take the time now to reinforce what you have learned from this course. To pass this assessment, you need to answer all of the questions correctly.

Once you have successfully completed this course, please mark yourself complete in myLearning.