

Twenty-first Americas Conference on Information Systems, Puerto Rico, 2015

Enhancing Organizational Performance through Event-based Process Predictions

Full paper

Julian Krumeich
Institute for Information Systems
German Research Center for Artificial Intelligence (DFKI GmbH)
[email protected]

Dirk Werth
Institute for Information Systems
German Research Center for Artificial Intelligence (DFKI GmbH)
[email protected]

Peter Loos
Institute for Information Systems
German Research Center for Artificial Intelligence (DFKI GmbH)
[email protected]

Abstract

Enterprises in today’s globalized world are compelled to react to threats and opportunities in a highly flexible manner. Due to technological advancements, real-time information availability, especially in manufacturing operations, has reached new dimensions and increasingly produces Big Data. With Complex Event Processing (CEP), the technology required to analyze and correlate heterogeneous event data is already available. Yet, these techniques are only sporadically applied to Predictive Analytics, especially in the Event-driven Business Process Management domain. Most approaches rest on purely descriptive analytics that do not appropriately consider the current context situation. To enable stronger situation awareness, the paper at hand proposes the concept of event-based process predictions, which combines CEP with Predictive Analytics, and outlines its potential in particular for a proactive control of manufacturing processes.

Keywords

Predictive analytics, Proactive, Big Data, Business process intelligence, Business process forecast and simulation, Complex event processing, Event-driven business process management.

Introduction

Motivation

Today, enterprises compete in a globalized world characterized by constantly changing economic conditions. To be successful in this highly competitive environment, enterprises are forced to react to threats and opportunities in a timely manner. In this regard, it is a mandatory task to continuously monitor and optimize business processes with respect to current business situations. With advancements in systems integration and new technologies such as the Internet of Things (IoT), real-time information availability, especially in manufacturing operations, has reached a new dimension (Bruns and Dunkel 2010). This allows for in-depth insights into intra-organizational as well as cross-company business processes. Consequently, myriads of internal and external business events become visible, increasingly forming big data (Dhar et al. 2014).

To turn such an enormous quantity of low-level events (such as single sensor signals) into business value (such as an early discovery of machinery failures or breakdowns), it is crucial to filter event streams to detect meaningful patterns that indicate important situations with a decisive impact on the efficiency of business processes (Luckham 2012). Hence, it is vital to initiate reactions to detected event patterns in real time. The temporal distance between the occurrence of such complex event patterns and the initiation of corresponding actions represents a potential loss of business value in terms of information advantage (cf. Figure 1). With Complex Event Processing (CEP), the technology required to detect complex event patterns in real time is already available. CEP is considered an important driver to further advance the domain of BPM (Dixon and Jones 2011). In recent years, this has motivated numerous research efforts coining the term Event-Driven Business Process Management (EDBPM) (Krumeich et al. 2014a).

Figure 1. Gained Business Value through Event Prediction and Proactive Actions (following Schwegmann et al. (2013) and Fülöp et al. (2012))

In this regard, the earlier complex events are detected, or even predicted while still emerging through Predictive Analytics, the more valuable the knowledge about them is. An intuitive example: consider a hurricane as a complex event whose occurrence is predicted (cf. “Event predicted” in Figure 1) and for which corresponding actions such as evacuation processes are initiated. The earlier this complex event is predicted rather than reactively detected, the more valuable this information is (cf. Fülöp et al. 2012; “Business Value Gained” in Figure 1). This general idea can be transferred to a business context in the form of predicting (complex) events in order to initiate corresponding proactive business process actions (cf. Figure 1).

In the future, those companies that are able to analyze their business operations based on the rapidly growing mass of data, predict the best proceeding process flow, and proactively control their processes with this knowledge will be a vital step ahead of their competitors. Such a company sketches the vision of a “Predictive Enterprise” as the next stage in the evolution of real-time enterprises in an age in which data is a decisive competitive asset (Bruns and Dunkel 2010; Luckham 2012).

Problem Statement and Research Contribution

Considering existing research, Predictive Analytics is applied to CEP only in scattered cases. Moreover, concepts and implementations incorporating both aspects, especially with the aim of a proactive control of business processes, are missing (Krumeich et al. 2014a). Such “Event-driven Predictive Analytics” (EDPA) approaches are almost exclusively used for monitoring purposes in the EDBPM domain. There is a research gap in terms of using EDPA approaches for the actual control of business processes (Krumeich et al. 2014a). In this regard, EDPA yields considerable potential for computing situation-aware forecasts of individual business process instances (Redlich et al. 2012; Janiesch et al. 2012). With this knowledge in mind, processes can be proactively controlled.

To address this research gap, the paper at hand extends and refines the concept of event-based process predictions, which was introduced in previous work (cf. Krumeich et al. 2014b-d), and illustrates its usefulness for a proactive control of business processes. For this purpose, the paper elucidates a motivating scenario from the manufacturing industry, in particular from the analytical process manufacturing industry, which has proven to be a valuable application domain (cf. Krumeich et al. 2014b-d). A typical representative of this kind of industry is steel production, which is characterized by various co- and end-products of different quality. These quality differences need to be considered dynamically in ongoing production planning and control. If resulting steel qualities were detected earlier, or even predicted along the production process, proactive planning actions could be performed, entailing economic and ecological benefits (cf. Figure 1 for the potential business value gained through proactive actions).

Applied Research Methodology and Paper Structure

This paper applies a design-oriented research approach following the seven design science research guidelines proposed by Hevner et al. (2004). In this regard, the concept of “proactive process control using event-based predictions” as the underlying design science artifact is outlined in section 4 (Guideline 1). The relevance of constructing the underlying artifact as well as the related research gap is pointed out in the introduction (Guideline 2). To comply with Guideline 3, the paper applies two evaluation methods that analyze the artifact from an abstract point of view. First, a motivating scenario describes the artifact’s utility in a general manner (cf. section 2). Furthermore, a revelatory single case study, published before in Krumeich et al. (2014b-d), exemplifies which process and context data a typical steel producing company is currently able to collect with its deployed sensor technology, forming a potential foundation for the concept’s practical realization (cf. sections 2 and 3). Following the principle of design as an iterative process (Guideline 6), the refined concept is based on previously published work and incorporates feedback from several workshop and conference presentations (cf. Krumeich et al. 2014b-d). Guideline 5 is accomplished by outlining the applied research methodology in this section. Last but not least, the submission of this paper aims at fulfilling Guideline 7, the dissemination of research results.

Motivating Scenario

In the past, datasets sensed from manufacturing processes were rather small and covered operational context situations only insufficiently, leading to imprecise forecasts. Generally speaking, the larger the quantity and the higher the level of detail of available process observations, the more accurate a process prediction will be. More specifically, the accuracy of predictions increases with the “square root of the number of independent observations” (Feindt and Jarke 2014). While it was possible in principle to expand and detail process data bases, the related data gathering has proven too complex, too expensive, and not accomplishable in a timely manner. Conventionally, processes have been planned and controlled based on predictions computed by stochastic means in the context of business process intelligence, i.e., mean values indicate the likelihood of certain process outcomes and results. However, this approach of descriptive analytics neither takes into account nor reflects the current process and context situation in which a manufacturing process takes place.
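To make this relationship concrete, the following minimal sketch (an illustration, not taken from the paper; all numbers are assumptions) simulates how the average error of a mean-value estimate shrinks roughly with the square root of the number of independent observations:

```python
# Minimal sketch (illustrative assumption): the error of a mean estimate
# shrinks with the square root of the number of independent observations,
# the relationship cited from Feindt and Jarke (2014).
import random
import statistics

random.seed(42)

def estimation_error(n_observations, true_value=90.0, noise=5.0, trials=200):
    """Average absolute error of the mean estimate over several trials."""
    errors = []
    for _ in range(trials):
        sample = [random.gauss(true_value, noise) for _ in range(n_observations)]
        errors.append(abs(statistics.mean(sample) - true_value))
    return statistics.mean(errors)

for n in (10, 40, 160, 640):  # quadrupling n should roughly halve the error
    print(f"n={n:4d}  mean error = {estimation_error(n):.3f}")
```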

Today, manufacturing companies still perform insufficient analyses and forecasts based on their collected sensor data, even though doing so would positively influence their economic and ecological performance (Unni 2012). This is in contrast to industries such as insurance or banking that have fully integrated predictive analytics into their business models (Minelli et al. 2013). Yet, recent technological progress in the fields of IoT and Cyber-Physical Systems makes it possible to equip production processes with sensors in a relatively cost-neutral way. This allows internal and external process parameters to be measured at an unprecedented level of detail. Thereby, the technical foundation has been created to establish and continuously enrich a database allowing for highly accurate predictions to control processes.


Shortcomings in Utilizing Conventional Techniques of Descriptive Analytics

In this motivating scenario, a steel manufacturing company receives two customer orders for which production planning has to be carried out and the corresponding manufacturing processes have to be controlled. For the first order, a final product D with a minimum quality of 90 quality units (QU) has to be manufactured; the second order only requires a minimum quality of 70 QU of product D. Based on the statistical likelihood of occurrence of certain quality characteristics after passing through manufacturing process A, conventional descriptive analytics approaches determine an 80% probability that final products of type D will have qualities of 95 QU (D1.1) or 80 QU (D1.2; see Figure 2a). Thus, both customer orders could be satisfied. As the process proceeds in this context, the production plan assumes further processing of the intermediate C, which is created with a quality of 90 QU. The timing of customer orders and the machine allocation plan would be based on this production plan.

[Figure 2 (not reproduced): event-driven process chains of manufacturing process A (panel a) and manufacturing process B (panel b) under situations x and y, annotated with branch probabilities (80%/20%, 90%/10%, 75%/25%, 100%) and product qualities in QU for the intermediates C1/C2 and the end products D1.1, D1.2, D2.1, D2.2, and D3.]

Figure 2. Descriptive Analytics of Scenario Manufacturing Processes

In addition to manufacturing process A, manufacturing process B makes it possible to produce the final product D out of intermediate products C (see Figure 2b). This manufacturing variant is particularly suitable if the resulting product C (in process A) does not have the required quality of at least 90 QU to support further processing to D > 80 QU. However, this alternative processing consumes more time and a higher amount of material. Moreover, the end products of this production process meet the required quality criteria in only 25% of all cases. In the stochastically more likely case (75%), only one product with 70 QU can be manufactured and eventually sold (see the lower process path in Figure 2b). In the worst case, this final product must therefore be disposed of separately or, alternatively, needs to be returned to manufacturing process A (e.g., in steelmaking by melting it down again).

Thus, relying on pure descriptive analytics would lead to the process execution plan outlined above with a probability of 80% (visualized in red). At the same time, however, this means that this assumption is simply wrong in a fifth of all cases and would lead to final products that cannot be delivered without additional expense (cf. the lower process path in Figure 2a). Thus, the use of descriptive analytics proves insufficient to control individual process instances, since it may lead to incorrect production planning by not considering the current individual situation of process executions.
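The arithmetic behind this shortcoming can be sketched in a few lines. The following illustrative snippet (an assumption-laden sketch, not the authors’ implementation; branch probabilities and QU values follow Figure 2a) shows how a purely descriptive plan always commits to the 80% path regardless of the running instance:

```python
# Minimal sketch of the purely descriptive view used in the scenario:
# process outcomes are ranked only by historical branch probabilities,
# ignoring the context of the running instance. Numbers follow Figure 2a.
branches = [
    # (probability, quality of intermediate C in QU, resulting D qualities in QU)
    (0.80, 90, {"D1.1": 95, "D1.2": 80}),  # likely path: both orders satisfiable
    (0.20, 70, {"D2.1": 65, "D2.2": 55}),  # unlikely path: neither order satisfiable
]

def descriptive_plan(min_quality):
    """Pick the statistically most likely branch and check it against an order."""
    prob, _, products = max(branches, key=lambda b: b[0])
    satisfiable = any(q >= min_quality for q in products.values())
    return prob, satisfiable

for order_qu in (90, 70):
    prob, ok = descriptive_plan(order_qu)
    print(f"order >= {order_qu} QU: planned on the {prob:.0%} path, satisfiable: {ok}")
# The plan is wrong whenever the 20% branch materializes, which is
# exactly the shortcoming described above.
```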

Stronger Situation-awareness through Predictive Process Analytics

If the considered manufacturing processes were equipped with appropriate sensors, a database could be developed that maps diverse production situations and the corresponding manufacturing context patterns. This underlying data could then be correlated with the current process situation based on prediction models. Hence, in the outlined case, a significantly more accurate prediction could be derived from knowledge about situations x and y. This is possible since the production plan would not build on type-related descriptive analytics, but rather on instance-related predictive analytics capturing the current process and event situation.

For the underlying scenario this means: if, for instance, after completion of process step A within manufacturing process A, a certain quality of the input raw material M2, a machine variance va1, ..., van of the machines used, as well as the participation of employee m in process step B are detected, a special situation is identified that is correlated with a prediction model built on historical data sets. Based on this recognition, the process forecast would either significantly strengthen the probability of process variant A or, in contrast, predict the stochastic exception (see green process steps in Figure 2a). In the latter case, the intermediate C with a quality of 70 QU would result with high significance. For its further processing to product D, a final quality of 65 QU is determined (see green process steps in Figure 2a). Accordingly, the computation forecasts that the customer orders cannot be satisfied as initially planned. Based on the determined values for C2 and other process parameters, and in accordance with the diagnosed situation y, it will also be predicted that further processing by manufacturing process B will significantly contribute to the stochastically unlikely final product D that is of sufficient quality to satisfy the customer orders (see green process steps in Figure 2b).

This shows that by incorporating process and event parameters, the further course of a process can be predicted with considerably higher precision. Hence, production could be planned more precisely and controlled proactively. For instance, certain setup procedures for starting manufacturing process B could already be performed in parallel to the running process steps B and C2 in production process A. Stretched production times, increased material requirements, as well as personnel and equipment utilization could be scheduled earlier or examined when computing possible alternatives.
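A minimal sketch may clarify what instance-related prediction could look like here. All feature names, records, and the nearest-neighbour method are illustrative assumptions, not the case-study data or the authors’ model:

```python
# Hypothetical sketch of instance-level prediction: correlate the context
# of a running instance (raw-material quality, machine variance, operator)
# with historical situations and predict the resulting quality of C.
historical = [
    # (raw_material_qu, machine_variance, operator_id) -> resulting quality of C
    ((92, 0.02, 1), 90),
    ((91, 0.03, 1), 90),
    ((84, 0.08, 2), 70),
    ((83, 0.07, 2), 70),
]

def predict_quality(context, k=2):
    """k-nearest-neighbour forecast over historical context situations.

    For brevity the features are used unscaled; a real model would
    normalize them (and use far richer methods, e.g. neural networks).
    """
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(historical, key=lambda rec: distance(rec[0], context))[:k]
    return sum(q for _, q in nearest) / k

# A running instance whose sensed context resembles the "exception" cases:
print(predict_quality((84, 0.075, 2)))  # 70.0 QU, flagging the 20% branch early
```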

Case Study: Big Data in Steel Bar Production

Characteristics of Steel Production Processes

Steel manufacturers are typical representatives of the analytical process industry, which differs fundamentally from classic discrete assembly (Rapp 2002). Analytical manufacturing processes feature 1:n relationships between input and output factors. This means one input factor is analytically processed into several so-called co-products (cf. Figure 3). Opposed to that, n:1 relationships are characteristic of synthetic manufacturing processes, as they are typical for automotive assembly. In addition, the analytically resulting main products as well as the various types of co- and by-products differ in quality. These quality distinctions have to be considered in ongoing production planning and control (Hahn and Laßmann 1999).

Moreover, resulting co-products often flow cyclically into the manufacturing process or are continuously required for additional production lines (cf. Figure 3). These cyclical relations between output and input factors are again diametrically opposed to classic assembly industries (Rapp 2002).

[Figure 3 (not reproduced): the stages reduction, steelmaking, casting, rolling/forming, and fabrication depicted as analytical manufacturing processes connected by cyclical material flows.]

Figure 3. Schematic Representation of Steel Production Processes and their Characteristics (following Allwood and Cullen 2011)

Due to fluctuations in production, as they typically occur in the process industry, accurate and timely planning and control of production processes is complicated (Scheer 1998). These fluctuations have different causes: they can result from varying quality of raw materials, from external influences such as temperature or pressure, or from internal influences such as reaction rates, e.g., in the chemical industry (Rapp 2002; Scheer 1998). These influencing factors can be detected via sensors as events.


Furthermore, the required end products are highly customer-specific in terms of their quality levels; standard commodities only rarely exist (Rapp 2002). This is again in contrast to discrete assembly manufacturing. Even if, for example, complexity arises in the automotive industry due to the wide range of variants, there is no heterogeneity in terms of actual product quality that has to be measured and according to which the logistical process flow has to be controlled. The individual material properties, as per specific customer needs, force process manufacturing companies to individually analyze and control each of their processes.

For example, while it is not possible in automobile production to figuratively build in air-conditioning systems instead of flawed engines, quality gradations of products are common practice in steelmaking, depending on the subsequent application, and serve as valid intermediate, by-, co-, or end products. If the target deviation of a co-product is too high for customer order A, in which high-precision steel for the aircraft or aerospace industry is needed, it can instead be used for customer order Z, in which steel for the building industry is needed. To avoid excess capacities, reduce the demand for raw materials, and free up production capacities, such re-scheduling should be taken into account early in production planning, so that the co-products required for customer order Z do not have to be scheduled separately.

Thus, production in the process manufacturing industry is neither characterized by strong linearity (as in discrete assembly manufacturing), nor is the quality of resulting co-products easily detectable or even deterministic, e.g., based on bills of materials. Hence, choosing this type of manufacturing as the underlying object of investigation proves particularly appropriate to elucidate event-based process predictions and to illustrate their advantages for a proactive control of individual manufacturing processes.

Data Characterization, Challenge of Big Data and Requirement Derivation

The case study illustrated in this section analyzes a steel manufacturer located in Germany that has fully specialized in highest-quality steel products. To achieve these standards, the production process is equipped with various sensor technologies supporting sophisticated quality assurance measures. The company is therefore an exemplary observation point to illustrate which enormous amounts of process and context data are already collectable in steel manufacturing processes, building a potential foundation for deriving event-based process predictions.

In order to meet customers’ specific quality requirements for the various existing end products, the examined steel manufacturer conducts comprehensive quality checks within its production line, providing masses of sensor data at the lowest system level (L1). In addition, ambient and positioning sensors are installed to track the steel bars via a material flow tracking system (L2 system level). Based on this basic data and the available customer orders, the production planning and control system calculates a rough schedule (L3 to L4 system level).

In the following, sample data obtained from the sensor networks deployed at the analyzed steel manufacturer are classified according to the Big Data characteristics proposed by Beyer and Laney (2012). One example from the sensor network illustrates the immense volume of data generated in monitoring the production process. In rolling mills 31 and 32, two optical surface test sensors continuously provide real-time data for the detection of surface defects during the rolling process. Basically, this makes it possible to take into account the varying customer demands for a particular surface quality. The unit can already prototypically detect errors and differentiate error types. This optical testing generates several hundred terabytes of data annually (volume). Currently, only a sporadic, reactive analysis of these data is possible. Moreover, other context data from the sensor network and the systems settled on the L2 or L3 level can currently not be linked due to the volume of data to be analyzed. While these systems could in principle detect production deviations in batch mode, this takes too long to allow timely reactions.

While this is just one example of the very large data volumes of individual sensors in a particular section of the factory, another example illustrates the high data diversity (variety) continuously collected by different sensors and sensor networks at various points throughout the production process. This places high demands on an analysis following big data principles. For instance, the further processing of steel bars already provides half a million sensor data records, which reflect one production area in a particular context. In the next couple of months, the sensor infrastructure will be extended such that over 1.5 million sensor data records will be available on the L1 and L2 levels. According to the principles of CEP, however, only the identification of relevant events in this torrent of both homogeneous and heterogeneous data, as well as their correlation, allows patterns and deviations to be derived. This is possible only by using highly structured technical knowledge. At this point, the basic requirement of a scalable solution becomes clear, since sensor networks should be flexibly expandable but must also allow analyses and forecasts within a required time frame. The company plans to increase sensing in this subsection to an output of more than five million records, which underlines the need for scalability.

Thus, in terms of the analysis of these large and diverse data, the response time is crucial, since speed is a decisive competitive factor in the analysis (velocity / analytics). Classic reporting or batch processing would be significantly too slow, so that so-called high-velocity technologies must perform analyses in near real time. It is also crucial to conduct accurate forecasts of the process sequences. Each day, an average of one terabyte of video data is recorded in a single subsection of the plant. However, pure video analysis is not sufficient for predictive analytics methodologies. In the existing system, it has been shown that only some production deviations could be detected by this classical approach. In addition, there is no feedback for process optimization. Therefore, process data need to be included in model formation and forecasting. Here, as outlined, over one million data records will accrue in the coming months. For analyzing the dependencies among process and video data, data from a long period of time must be used for model training. In this case, the data volume may rapidly exceed 50 terabytes. For a real-time adaptive prediction, on average one tenth of the data should be used. At present, however, such an amount of data can hardly be processed in real time. Compressing the data is impossible because its variety has to be preserved.
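As a small illustration of the "one tenth of the data" idea, the following sketch (an assumption, not the plant’s actual pipeline) downsamples an unbounded event stream before adaptive model updates, without ever buffering the full stream:

```python
# Sketch (illustrative assumption): keep roughly one tenth of an unbounded
# event stream for adaptive model retraining, as suggested in the text.
import random

def sample_stream(events, fraction=0.1, seed=7):
    """Yield approximately `fraction` of the incoming events."""
    rng = random.Random(seed)
    for event in events:
        if rng.random() < fraction:
            yield event

# Usage: feed sensor records through the sampler before model updates.
kept = list(sample_stream(range(100_000)))
print(len(kept))  # ~10,000 events retained for near-real-time retraining
```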

Proactive Process Control using Event-based Process Predictions

In the course of recent technological progress in the fields of IoT and Cyber-Physical Systems, it is now possible to equip production processes with sensors in a relatively cost-neutral way. This allows the measurement of internal and external parameters of production processes at an unprecedented level of detail, even in real time. Thereby, the technical foundation is created to establish and continuously enrich a database allowing accurate prediction models of business processes to be built.

[Figure 4 (not reproduced): 1) a sensor-equipped manufacturing plant emitting atomic events, 2) Complex Event Processing correlating them into complex events, 3) historical process and context information, 4) event-based prediction models computing instance-level forecasts (e.g., forecasted quality Q: 75, forecasted processing time t: 56' 14''), and 5) PPC systems consuming the forecasts.]

Figure 4. Basic Infrastructure to Realize a Proactive Process Control using Event-based Process Predictions

From the obtained mass of process and context parameters, which can be regarded as elementary events, it is essential to identify patterns that significantly influence the progress of the underlying manufacturing processes (Figure 4, 1). To detect events matching these defined patterns, the research and technology field of CEP has turned out to be promising. CEP has already been successfully used for years for fraud detection in banking and finance, where financial transaction processes can be examined at a very fine-grained level by means of information technology (von Ammon et al. 2010). Thus, with CEP, complex event patterns of currently running process instances can be detected (see Figure 4, 2). Conventionally, CEP does not require storing masses of historic events (see Figure 4, 3). However, the concept of event-based process predictions uses this potential data basis for deriving event-based prediction models by utilizing various methods of predictive analytics, e.g., artificial neural networks (see Figure 4, 4). These models allow the computation of individual forecasts for running process instances by correlating them with the current process and context situation detected via CEP. In contrast to descriptive analytics, these predictions are of substantially higher accuracy (see Figure 4, 4), and the results can feed into production planning and control systems to achieve a proactive process control (see Figure 4, 5).


[Figure 5 (not reproduced): an event cloud over time; the atomic events w, x, and y constituting complex event CE1 have been detected, and the computed likelihood of the still-missing atomic event z rises from 0.05 to 0.35, 0.59, and finally 0.87, at which point the complex event CE2 is predicted.]

Figure 5. Basic Concept of Event-based Process Predictions

Besides using this potential database (cf. Figure 4, 3) to compute forecasts of process progressions as well as corresponding KPIs, prediction models can be built that calculate the likelihood of occurrence of certain events that are part of complex ones. As depicted in Figure 5, the likelihood of occurrence of event z, as an atomic event, can be predicted based on already detected process and context events, e.g., w, x, and y. In case w, x, and y have occurred, the likelihood of z has been computed as 0.87. This number may exceed a certain predefined threshold to trigger z as a predicted event. In this case, the corresponding complex event CE2 will be detected by CEP and associated actions will be proactively initiated. Of course, this example abstracts from different characteristics of the composite events that constitute complex ones, such as temporal or spatial properties.
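The triggering mechanism described here can be sketched compactly. The probability table mirrors the values shown in Figure 5; the threshold of 0.80 and all identifiers are illustrative assumptions:

```python
# Minimal sketch of the mechanism in Figure 5: once the computed likelihood
# of the still-missing atomic event z exceeds a threshold, z is injected as
# a *predicted* event so that the complex event CE2 can fire proactively.
likelihood_of_z = {
    frozenset(): 0.05,
    frozenset({"w"}): 0.35,
    frozenset({"w", "x"}): 0.59,
    frozenset({"w", "x", "y"}): 0.87,
}
THRESHOLD = 0.80  # assumed predefined trigger threshold

def maybe_predict_z(detected_events):
    """Return a predicted event if the likelihood of z exceeds the threshold."""
    p = likelihood_of_z.get(frozenset(detected_events), 0.0)
    return ("z_predicted", p) if p >= THRESHOLD else None

prediction = maybe_predict_z({"w", "x", "y"})
if prediction:
    # CE2 relies on w, x, y and z; with z predicted, CEP can already
    # detect CE2 and initiate the associated proactive actions.
    print(f"inject {prediction[0]} (p={prediction[1]:.2f}) -> detect CE2")
```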

To realize event-based predictions from a technical perspective, i.e., incorporating increasingly large amounts of events, a powerful event processing technology is needed in the first place (Bruns and Dunkel 2010). In this respect, CEP engines are generally regarded as the basic enabler for realizing real-time process control (cf. Figures 6 and 1). CEP is defined as a technology for dynamically processing multiple events at the same time. In doing so, it allows causal, temporal, spatial, and other relations between events to be expressed. These relationships specify patterns, which can be used for real-time event monitoring within a set of detected events (Bruns and Dunkel 2010). The required knowledge about possible event types and their dependencies is provided to Event Processing Agents (EPA) by means of event models (cf. Figure 6, 1).

[Figure 6 (not reproduced): three layers, namely a Complex-Event Processing Layer (an event processing engine with EPA1 ... EPAn, fed by an event model and event rules, transforming a pre-processed into a post-processed event stream; 1), a Prediction Layer (event-based prediction models exchanging complex and predicted events with the engine; 2), and a Process-Engine Layer (process models and process instances; 3).]

Figure 6. Proactive Process Control using Event-based Process Predictions


Declarative event rules specify event patterns and the actions to be started after their detection. While deductive rules derive new aggregated or complex events, reactive rules initiate certain procedures within connected event handlers, such as those provided by predictive analytics and process engines (cf. Figure 6, 2 and 3). By utilizing predictive analytics on event streams, the likelihoods of occurrence of complex events can be computed of which only parts have been detected so far. This results in predicted events that feed back into the CEP engine (cf. Figure 6, 2), which in turn triggers actions within the connected process engine (cf. Figure 6, 3). Since the detected complex event has not yet fully occurred but relies on certain predicted events, the initiated action in the process engine can be considered a proactive process control.
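A toy sketch of the two rule kinds may help; the event types, the pattern, and the handler are invented for illustration and do not reflect a particular CEP engine’s rule language:

```python
# Hedged sketch of the two rule kinds described above: a deductive rule
# derives a complex event from the stream, a reactive rule hands it to an
# event handler (here: a stand-in for the process engine).
deductive_rules = [
    # pattern (set of required event types) -> derived complex event
    ({"temp_spike", "vibration_high"}, "machine_degradation"),
]
reactive_rules = {
    # complex event -> action on the connected process engine
    "machine_degradation": lambda: print("process engine: schedule process B setup"),
}

def evaluate(window):
    """Run one evaluation pass over a window of detected event types."""
    for pattern, complex_event in deductive_rules:
        if pattern <= window:                 # all pattern events present
            handler = reactive_rules.get(complex_event)
            if handler:
                handler()                     # proactive initiation

evaluate({"temp_spike", "vibration_high", "pressure_ok"})
```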

Related Work

Process Data and Context Data Acquisition and its Real-Time Analysis

In order to detect context situations and the progress of ongoing processes, data must be collected continuously. Technically, this can be done by means of physical and virtual sensors, which are often connected in a wireless network. Physical sensors are able to measure, for instance, pressure, temperature, or vibration and to recognize complex structures such as audio signals (Stäger et al. 2007). Additional data is obtained from IT systems by evaluating, for example, log files or exchanged messages at the lowest system level (Choi 2010).

The next step is to analyze this mass of collected data. A common approach is that atomic events detected by sensors are first aggregated into more complex ones, which is called CEP. CEP combines methods, techniques, and tools to analyze masses of events in real time and to obtain higher-aggregated information from this most elementary data. Such approaches can be found, for example, in Wu et al. (2006), who apply CEP to real-time data streams, and, with particular respect to RFID-based data streams, in Wang et al. (2006). Often, ontologies are used to semi-automatically identify the semantic context of events (Taylor and Leidinger 2011). Operations on data of atomic and complex events require algorithms that are well adapted to the process context as well as to big data characteristics.

A well-known approach to big data analytics is MapReduce, which decomposes a problem into independent sub-problems that are distributed to and solved by so-called mappers. However, traditional forms of MapReduce follow a batch approach, which is inapplicable to data streams. In this regard, the research field of stream mining and stream analytics has formed recently (Brito et al. 2011).
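To contrast the two styles, the following sketch (illustrative only) maintains an incrementally updated mean with constant state, the kind of per-event computation stream analytics relies on, whereas a MapReduce batch job would first materialize the full dataset:

```python
# Sketch: a running mean updated per event needs O(1) state and is
# available at any time, unlike a batch job over the collected dataset.
class RunningMean:
    """Incrementally maintained mean over an unbounded event stream."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, value):
        self.n += 1
        self.mean += (value - self.mean) / self.n
        return self.mean

rm = RunningMean()
for reading in (71.0, 70.5, 72.1, 69.8):  # sensor readings arriving one by one
    current = rm.update(reading)
print(f"mean after {rm.n} events: {current:.2f}")
```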

Process Prediction and Simulation

Predictive analytics refers to the forecast of prospectively occurring events. The necessary mathematical modeling can be done using historical data. A characteristic example of a simple forecast calculation is the moving average. Such simple models only work if they are mostly independent of external influencing events, which is rarely the case. Especially in business processes, several dependencies and influencing factors exist. Thus, modern statistical methods are required to recognize dependencies and patterns in large amounts of data, such as decision trees, univariate and multivariate statistics, and data warehouse algorithms (Brito et al. 2011).
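As a concrete instance of the moving average mentioned above, a minimal sketch (all values are illustrative):

```python
# The simplest forecast model named in the text: predict the next value
# as the mean of the last `window` observations.
def moving_average_forecast(history, window=3):
    """Forecast the next value from the last `window` observations."""
    if len(history) < window:
        raise ValueError("not enough observations")
    return sum(history[-window:]) / window

qualities = [88, 90, 89, 91, 90]           # illustrative past quality values (QU)
print(moving_average_forecast(qualities))  # 90.0 -- ignores context, as criticized
```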

Conventional approaches combining business intelligence with BPM are coined Business Process Intelligence (BPI), yet these typically use conventional descriptive analytics (cf. Linden et al. (2011) for a discussion of existing definitions). Since it is feasible to increase context awareness in the era of IoT, it is promising to use the big data collected by sensors to increasingly apply predictive analytics to business processes (Feindt and Jarke 2014). Fülöp et al. (2012) present a first conceptual framework for combining CEP with predictive analytics. Janiesch et al. (2012) also provide first conceptual works; however, they state that "[e]vent-based or event-driven business intelligence" approaches are only rudimentarily researched and find limited integration in software tools. Even if Schwegmann et al. (2013) already combine BAM with BPI, the question of how findings from this connection can be used for adapting and optimizing business processes is still open.


Business Process Adaptation and Optimization

In traditional BPM approaches, processes are typically adapted and improved at the type level at design time after passing through a business process lifecycle. This ex-post handling, as regularly applied in business process controlling and mining, however, causes considerable time delay (Loos et al. 2010). Since process execution data has to be aggregated first before processes can eventually be optimized at the type level, problematic process executions have already been completed and can no longer be optimized. Thus, scientists from the process mining community explicitly call for stronger "operational support" (van der Aalst et al. 2012). This requires the existence of immediate feedback channels to the business process execution. Without the possibility of such feedback loops, only ex-post considerations are possible. At this level of run-time adaptation and optimization of business processes, first approaches already exist (Sinur et al. 2012).

In the scope of commercial BPM systems, there are some early adopters characterized by the term Intelligent Business Process Management Suites (Sinur et al. 2012). For example, IBM’s Business Process Manager provides means to analyze operations at the instance level in real time, to control them, and to respond with further process steps in an ad-hoc manner (IBM 2013); yet, full automation cannot be attested. On the market for real-time CEP, further vendors are present; e.g., Bosch Software Innovations and Software AG have implemented CEP functionality in their BPM solutions (Sinur et al. 2012).

However, in order to speak of intelligent BPM systems, a more powerful automated analysis of the variety of events must be implemented as a basis for automated event-based predictions and process control (Krumeich et al. 2014a).

Discussion, Limitations and Future Work

Enterprises in today’s globalized world are compelled to react to threats and opportunities in a highly flexible manner. Due to technological advancements, real-time information availability, especially in manufacturing operations, has reached new dimensions. This data can frequently be considered big data, as illustrated in section 3. It allows highly accurate forecasts to be computed and thus opens new application possibilities for predictive analytics. The paper at hand revises the concept of event-based process predictions, a combination of predictive analytics and CEP, and outlines its potential for a proactive control of business processes (cf. sections 2 and 4). In this regard, the paper contrasts conventional descriptive analytics, as typically found in business process intelligence, with predictive analytics, which provides a basis for deriving process-instance-specific forecasts that dedicatedly consider the currently existing process and context situation. Hence, process data is not only used to describe, as in descriptive analytics, how a certain process has performed in the past, e.g., by computing different mean values, but in particular to predict, i.e., predictive analytics, future probabilities and trends regarding how a certain process instance will proceed.

In future work, it is of particular interest how to use predictive analytics to reach the third stage of business analytics: prescriptive analytics (Gröger et al. 2014). In this regard, prescriptive analytics uses prediction models to simulate optimized process executions and recommends the best course of action to achieve these simulated process outcomes based on certain key performance indicators.

As outlined in section 3, the finance and insurance industries are no longer the only industries that have fully implemented predictive analytics in their business models due to the sufficient availability of fine-granular data. Increasingly, industries with a dedicated focus on physical objects, such as manufacturing, have reached new dimensions in data sensing through the era of IoT (Dhar et al. 2014). Whereas the paper sketches how large quantities of low-level data can be transformed into business value, it abstracts from the technical requirements needed to analyze these masses of data. These primarily technical issues will be of particular interest in an ongoing research project (cf. the acknowledgements). Nevertheless, especially from the lens of information systems research, it is promising to develop innovative concepts for how big data can generally be turned into business value.

With a stronger focus on the actual concept of event-based process predictions, future work should deal with questions on how to (automatically) induce complex event patterns based on data analytics and on how to depict complex event patterns in standard process visualization methods to process and domain experts.


Acknowledgements

This research was funded in part by the German Federal Ministry of Education and Research under grant numbers 01IS12050 (project IDENTIFY) and 01IS14004A (project iPRODICT).

REFERENCES

Allwood, J., and Cullen, J. 2011. Sustainable Materials - with Both Eyes Open: Future Buildings, Vehicles, Products and Equipment - Made Efficiently and Made with Less New Material, Cambridge, UK: UIT Cambridge.

Brito et al. 2011. “Scalable and low-latency data processing with stream MapReduce,” in Proceedings of the 3rd IEEE Conference on Cloud Computing Technology and Science (CLOUDCOM ‘11), pp. 48-58.

Bruns, R., and Dunkel, J. 2010. Event-Driven Architecture: Softwarearchitektur für ereignisgesteuerte Geschäftsprozesse, Heidelberg, Germany: Springer.

Choi, J. 2010. “RFID context-aware systems,” in Sustainable radio frequency identification solutions, C. Turcu et al., Eds. Rijeka: InTech, pp. 307-330.

Dhar, V., Jarke, M., and Laartz, J. 2014. “Big Data,” Business & Information Systems Engineering (6:5), pp. 257-259.

Dixon, J. and Jones, T. 2011. Hype Cycle for Business Process Management, Stamford, CT: Gartner, https://www.gartner.com/doc/1751119.

Feindt, M., and Jarke, M. 2014. “Interview with Michael Feindt on ‘Prescriptive Big Data Analytics’,” Business & Information Systems Engineering (6:5), pp. 301-302.

Fülöp et al. 2012. “Predictive Complex Event Processing: A Conceptual Framework for Combining Complex Event Processing and Predictive Analytics,” in Proceedings of the 5th Balkan Conference in Informatics, New York, NY: ACM Press, pp. 26-31.

Gröger, C., Schwarz, H. and Mitschang, B. 2014. “Prescriptive Analytics for Recommendation-based Business Process Optimization,” in Proceedings of the 17th International Conference on Business Information Systems, A. Witold and A. Kokkinaki (eds.), pp. 25-37.

Hahn, D., and Laßmann, G. 1999. Produktionswirtschaft – Controlling industrieller Produktion, Heidelberg, Germany: Physica.

Hevner, A. R., March, S. T., Park, J. and Ram, S. 2004. “Design Science in Information Systems Research,” MIS Quarterly (28:1), pp. 75-105.

IBM Eds. 2013. IBM Business Process Manager Standard, IBM, http://www-03.ibm.com/software/products/us/en/business-process-manager-standard.

Janiesch, C., Matzner, M., and Müller, O. 2012. “Beyond process monitoring: A proof-of-concept of event-driven business activity management,” Business Process Management Journal (18:4), pp. 625-643.

Krumeich, J., Weis, B., Werth, D., and Loos, P. 2014a. “Event-driven business process management: Where are we now? – A comprehensive synthesis and analysis of literature,” Business Process Management Journal (20:4), pp. 615-633.

Krumeich, J., Jacobi, S., Werth, D., and Loos, P. 2014b. “Towards planning and control of business processes based on event-based predictions,” in Proceedings of the 17th International Conference on Business Information Systems, A. Witold and A. Kokkinaki (eds.), pp. 38-49.

Krumeich, J., Jacobi, S., Werth, D., and Loos, P. 2014c. “Advanced planning and control of manufacturing processes in steel industry through big data analytics: Case study and architecture proposal,” in Proceedings of the 3rd IEEE Congress on Big Data, pp. 530-537.

Krumeich, J., Werth, D., Loos, P., Schimmelpfennig, J., and Jacobi, S. 2014d. “Big Data Analytics for Predictive Manufacturing Control – A Case Study from Process Industry,” in Proceedings of the 2014 IEEE Conference on Big Data, pp. 16-24.

Linden, M., Felden, C., and Chamoni, P. 2010. “Dimensions of business process intelligence,” in Proceedings of Business Process Management Workshops, pp. 208–213.

Loos, P. 1997. Produktionslogistik in der chemischen Industrie – Betriebstypologische Merkmale und Informationsstrukturen, Wiesbaden, Germany: Gabler.

Loos, P., Balzert, S., and Werth, D. 2010. “Controlling von Geschäftsprozessen,” in Prozessmanagement, R. Jochem., K. Mertins, and T. Knothe (eds.), Düsseldorf, Germany: Symposium Publishing, pp. 443-471.


Luckham, D. 2012. Event Processing for Business: Organizing the Real-Time Enterprise, Hoboken, NJ: John Wiley & Sons.

Minelli, M., Chambers, M., and Dhiraj, A. 2013. Big data, big analytics: Emerging business intelligence and analytic trends for today’s business, Hoboken, NJ: John Wiley & Sons.

Rapp, W.V. 2002. Information Technology Strategies: How Leading Firms Use IT to Gain an Advantage, New York, NY: Oxford University Press.

Redlich, D., and Gilani, W. 2012. “Event-Driven Process-Centric Performance Prediction via Simulation,” in Proceedings of Business Process Management Workshops, pp. 473-478.

Scheer, A.-W. 1998. Wirtschaftsinformatik: Referenzmodelle für industrielle Geschäftsprozesse, Heidelberg, Germany: Springer.

Schwegmann, B., Matzner, M., and Janiesch, C. 2013. “A method and tool for predictive event-driven process analytics,” in Proceedings of the 11th International Conference on Wirtschaftsinformatik, pp. 721-735.

Sinur, J., Schulte, W. R., Hill, J. B., and Jones, T. 2012. Magic Quadrant for Intelligent Business Process Management Suites, Stamford, CT: Gartner, http://www.gartner.com/doc/2179715/magic-quadrant-intelligent-business-process.

Stäger, M., Lukowicz, P., and Tröster, G. 2007. “Power and accuracy trade-offs in sound-based context recognition systems,” Pervasive and Mobile Computing (3:3), pp. 300-327.

Taylor, K., and Leidinger, L. 2011. “Ontology-driven complex event processing in heterogeneous sensor networks,” in Proceedings of the 8th Extended Semantic Web Conference, pp. 285-299.

Unni, K. 2012. Steel Manufacturing Could Use More Sensing and Analysis, http://www.sensorsmag.com/process-industries/steel-manufacturing-could-use-more-sensing-and-analysis-10249.

van der Aalst et al. 2012. “Process mining manifesto,” in Proceedings of Business Process Management Workshops, pp. 169–194.

von Ammon, R., Ertlmaier, T., Etzion, O., Kofman, A., and Paulus, T. 2010. “Integrating Complex Events for Collaborating and Dynamically Changing Business Processes,” in Proceedings of the International Workshops on ICSOC/ServiceWave, pp. 370-384.

Wang, F., Liu, S., and Bai, Y. 2006. “Bridging physical and virtual worlds: Complex event processing for RFID data streams,” in Proceedings of the 10th International Conference on Extending Database Technology, pp. 588–607.

Wu, E., Diao, Y., and Rizvi, S. 2006. “High-performance complex event processing over streams,” in Proceedings of the International Conference on Management of Data, pp. 407–418.