
Research Article

Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

Jie Wang,1 Jun Wang,1 Wen Fang,2 and Hongli Niu1

1 School of Science, Beijing Jiaotong University, Beijing 100044, China
2 School of Economics and Management, Beijing Jiaotong University, Beijing 100044, China

Correspondence should be addressed to Jie Wang; wangjiebjtu@yeah.net

Received 9 June 2015; Accepted 30 August 2015

Academic Editor: Sandhya Samarasinghe

Copyright © 2016 Jie Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Computational Intelligence and Neuroscience, Volume 2016, Article ID 4742515, 14 pages, http://dx.doi.org/10.1155/2016/4742515

In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we develop an architecture which combines Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and by comparing the model with the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these networks in financial time series forecasting. Further, the predictive effects of the established model are tested empirically on the SSE, TWSE, KOSPI, and Nikkei225 indices, and the corresponding statistical comparisons of these market indices are also exhibited. The experimental results show that this approach performs well in predicting the values of stock market indices.

1. Introduction

Predicting a stock price index is difficult because of the uncertainties involved. In the past decades, stock market prediction has played a vital role for investment brokers and individual investors, and researchers are constantly looking for a reliable method of predicting stock market trends. In recent years, artificial neural networks (ANNs) have been applied to many areas of statistics; one of these areas is time series forecasting. References [1–3] present different time series forecasting by ANN methods. ANNs have also been employed, independently or as an auxiliary tool, to predict time series. ANNs are nonlinear methods which mimic the nerve system; they are self-organizing, data-driven, self-studying, and self-adaptive, and they have associative memory. ANNs can learn from patterns and capture hidden functional relationships in given data even if the functional relationships are unknown or difficult to identify. A number of researchers have utilized ANNs to predict financial time series, including backpropagation neural networks, radial basis function neural networks, generalized regression neural networks, wavelet neural networks, and dynamic artificial neural networks [4–9]. Statistical theories and methods play an important role in financial time series analysis because both financial theory and its empirical time series contain an element of uncertainty. Some statistical properties of stock market fluctuations are uncovered in the literature, such as power-law behavior of logarithmic returns and volumes, heavy-tailed distributions of price changes, volatility clustering, and long-range memory of volatility [1, 10–12].

The backpropagation neural network (BPNN) is a neural network training algorithm for financial forecasting which has powerful problem-solving ability. The multilayer perceptron (MLP) is one of the most prevalent neural networks; it is capable of complex mappings between inputs and outputs, which makes it possible to approximate nonlinear functions. Reference [13] employs the MLP in trading with hybrid time-varying leverage effects, and [14] employs it in forecasting of time series. The two architectures have at least three layers. The first layer is called the input layer (the number of its nodes corresponds


to the number of explanatory variables). The last layer is called the output layer (the number of its nodes corresponds to the number of response variables). An intermediary layer of nodes, the hidden layer, separates the input from the output layer; its number of nodes defines the amount of complexity the model is capable of fitting. In previous studies, feedforward networks have frequently been used for financial time series prediction, while, unlike feedforward networks, a recurrent neural network uses feedback connections to model spatial as well as temporal dependencies between the input and output series, so that the initial states and the past states of the neurons can be involved in a series of processing. References [15–17] show applications of recurrent neural networks in different areas. This ability makes them applicable to time series prediction, with satisfactory prediction results [18]. As a special recurrent neural network, the Elman recurrent neural network (ERNN) is used in the present paper for prediction. The ERNN is a time-varying predictive control system that was developed with the ability to keep a memory of recent events in order to predict future output.

The nonlinear and nonstationary characteristics of the stock market make forecasting stock indices in a reliable manner difficult and challenging. Particularly in current stock markets, rapid changes of trading rules and management systems have made it difficult to reflect the markets' development using early data. However, if only the recent data are selected, a lot of useful information (which the early data hold) will be lost. In this research, a stochastic time effective neural network (STNN) and the corresponding learning algorithm are presented. References [19–22] introduce the corresponding stochastic time effective models and use them to predict financial time series. In particular, [23] presents a random data-time effective radial basis function neural network, which is also applied to the prediction of financial price series. The present paper optimizes the ERNN model, which differs from the above models; also, in the first step of the procedure we employ input variables different from those of [23]. In the last section of this paper, two new error measures are introduced for the first time to show that the predicting results of the proposed model are better than those of other traditional models. For this improved network model, each historical data point is given a weight depending on the time at which it occurs. The degree of impact of historical data on the market is expressed by a stochastic process, where a drift function and Brownian motion are introduced in the time strength function in order to give the model the effect of random movement while maintaining the original trend. In the present work, we combine the MLP with the ERNN and a stochastic time effective function to develop a stock price forecasting model called ST-ERNN.

In order to show that the ST-ERNN can provide higher accuracy in financial time series forecasting, we compare its forecasting performance with the BPNN model, the STNN model, and the ERNN model on different global stock indices: the Shanghai Stock Exchange (SSE) Composite Index, the Taiwan Stock Exchange Capitalization Weighted Stock Index (TWSE), the Korean Stock Price Index (KOSPI), and the Nikkei 225 Index (Nikkei225) are applied in this work to analyze the forecasting models by comparison.

2. Proposed Approach

2.1. Elman Recurrent Neural Network (ERNN). The Elman recurrent neural network, a simple recurrent neural network, was introduced by Elman in 1990 [24]. As is well known, a recurrent network has advantages such as time series and nonlinear prediction capabilities, faster convergence, and more accurate mapping ability. References [25, 26] combine the Elman neural network with different application areas for their purposes. In this network, the outputs of the hidden layer are allowed to feed back onto themselves through a buffer layer, called the recurrent layer. This feedback allows the ERNN to learn, recognize, and generate temporal patterns as well as spatial patterns. Every hidden neuron is connected to only one recurrent layer neuron through a constant weight of value one. Hence, the recurrent layer virtually constitutes a copy of the state of the hidden layer one instant before; the number of recurrent neurons is consequently the same as the number of hidden neurons. To sum up, the ERNN is composed of an input layer, a recurrent layer which provides state information, a hidden layer, and an output layer. Each layer contains one or more neurons which propagate information from one layer to another by computing a nonlinear function of their weighted sum of inputs.

In Figure 1 a multi-input ERNN model is exhibited, where the number of neurons in the input layer is $m$, the number in the hidden layer is $n$, and there is one output unit. Let $x_{i,t}$ ($i = 1, 2, \dots, m$) denote the input vector of neurons at time $t$, $y_{t+1}$ denote the output of the network at time $t+1$, $z_{j,t}$ ($j = 1, 2, \dots, n$) denote the outputs of the hidden layer neurons at time $t$, and $u_{j,t}$ ($j = 1, 2, \dots, n$) denote the recurrent layer neurons. $w_{ij}$ is the weight that connects node $i$ in the input layer to node $j$ in the hidden layer; $c_j$ and $v_j$ are the weights that connect node $j$ in the hidden layer to the corresponding node in the recurrent layer and to the output, respectively. The hidden layer stage is as follows: the inputs of all neurons in the hidden layer are given by

$$\mathrm{net}_{j,t}(k) = \sum_{i=1}^{m} w_{ij}\, x_{i,t}(k-1) + c_j\, u_{j,t}(k), \qquad u_{j,t}(k) = z_{j,t}(k-1), \quad i = 1, 2, \dots, m,\; j = 1, 2, \dots, n. \tag{1}$$

The outputs of the hidden neurons are given by

$$z_{j,t}(k) = f_H\big(\mathrm{net}_{j,t}(k)\big) = f_H\!\left( \sum_{i=1}^{m} w_{ij}\, x_{i,t}(k-1) + c_j\, u_{j,t}(k) \right), \tag{2}$$


[Figure 1: Topology of the Elman recurrent neural network: the input layer ($x_{1,t}, x_{2,t}, \dots, x_{m,t}$), the recurrent layer ($u_{1,t}, \dots, u_{n,t}$), the hidden layer ($z_{1,t}, \dots, z_{n,t}$), and the output layer ($y_{t+1}$), with connective weights $w_{ij}$, $c_j$, and $v_j$.]

where the sigmoid function $f_H(x) = 1/(1 + e^{-x})$ is selected as the activation function in the hidden layer. The output of the network is given as follows:

$$y_{t+1}(k) = f_T\!\left( \sum_{j=1}^{n} v_j\, z_{j,t}(k) \right), \tag{3}$$

where $f_T(x)$ is an identity map as the activation function.
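As an illustration only, the following is a minimal NumPy sketch of the forward pass in (1)-(3); the layer sizes, random weights, and dummy input sequence are placeholder assumptions, not the paper's trained configuration.

```python
import numpy as np

def elman_forward(x, u, W, c, v):
    """One forward step of eqs. (1)-(3).

    x : input vector at the current step, shape (m,)
    u : recurrent (context) units, i.e., the hidden outputs z of the
        previous step, shape (n,)
    W : input-to-hidden weights w_ij, shape (m, n)
    c : recurrent-to-hidden weights c_j, shape (n,)
    v : hidden-to-output weights v_j, shape (n,)
    """
    net = x @ W + c * u                # eq. (1)
    z = 1.0 / (1.0 + np.exp(-net))     # eq. (2), sigmoid f_H
    y = v @ z                          # eq. (3), identity f_T
    return y, z                        # z becomes the next step's u

# placeholder usage: m = 4 inputs, n = 9 hidden units
rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, (4, 9))
c = rng.uniform(-1, 1, 9)
v = rng.uniform(-1, 1, 9)
u = np.zeros(9)
for x in rng.normal(size=(5, 4)):      # a short dummy input sequence
    y, u = elman_forward(x, u, W, c, v)
```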

2.2. Algorithm of ERNN with a Stochastic Time Effective Function (ST-ERNN). The backpropagation algorithm is a supervised learning algorithm which minimizes the global error $E$ by using the gradient descent method [18, 21]. For the ST-ERNN model, we assume that the error of the output is given by $\varepsilon_{t_n} = d_{t_n} - y_{t_n}$, and the error of the sample $n$ is defined as

$$E(t_n) = \frac{1}{2}\,\varphi(t_n)\,\big(d_{t_n} - y_{t_n}\big)^2, \tag{4}$$

where $t_n$ is the time of the sample $n$ ($n = 1, \dots, N$), $d_{t_n}$ is the actual value, $y_{t_n}$ is the output at time $t_n$, and $\varphi(t_n)$ is the stochastic time effective function, which endows each historical data point with a weight depending on the time at which it occurs. We define $\varphi(t_n)$ as follows:

$$\varphi(t_n) = \frac{1}{\beta} \exp\!\left\{ \int_{t_0}^{t_n} \mu(t)\, dt + \int_{t_0}^{t_n} \sigma(t)\, dB(t) \right\}, \tag{5}$$

where $\beta$ ($> 0$) is the time strength coefficient, $t_0$ is the time of the newest data in the data training set, and $t_n$ is an arbitrary time point in the data training set; $\mu(t)$ is the drift function, $\sigma(t)$ is the volatility function, and $B(t)$ is the standard Brownian motion.

Intuitively, the drift function is used to model deterministic trends, the volatility function is often used to model a set of unpredictable events occurring during this motion, and Brownian motion is usually thought of as the random motion of a particle in liquid (where the future motion of the particle at any given time is not dependent on the past). Brownian motion is a continuous-time stochastic process, and it is the limit of, or continuous version of, random walks. Since Brownian motion's time derivative is everywhere infinite, it is an idealised approximation to actual random physical processes, which always have a finite time scale. We begin with an explicit definition. A Brownian motion is a real-valued, continuous stochastic process $\{Y(t),\, t \ge 0\}$ on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, with independent and stationary increments. In detail, we have the following: (a) continuity: the map $s \mapsto Y(s)$ is continuous $\mathbb{P}$-a.s.; (b) independent increments: if $s \le t$, then $Y_t - Y_s$ is independent of $\mathcal{F}_s = \sigma(Y_u,\, u \le s)$; (c) stationary increments: if $s \le t$, then $Y_t - Y_s$ and $Y_{t-s} - Y_0$ have the same probability law. From this definition, if $\{Y(t),\, t \ge 0\}$ is a Brownian motion, then $Y_t - Y_0$ is a normal random variable with mean $rt$ and variance $\sigma^2 t$, where $r$ and $\sigma$ are constant real numbers. A Brownian motion is standard (we denote it by $B(t)$) if $B(0) = 0$ $\mathbb{P}$-a.s., $\mathbb{E}[B(t)] = 0$, and $\mathbb{E}[B(t)]^2 = t$. In the above random


data-time effective function, the impact of the historical data on the stock market is regarded as a time-variable function: the efficiency of the historical data depends on its time. Then the corresponding global error of all the data at each repeated network training, in the output layer, is defined as

$$E = \frac{1}{N} \sum_{n=1}^{N} E(t_n) = \frac{1}{2N} \sum_{n=1}^{N} \frac{1}{\beta} \exp\!\left\{ \int_{t_0}^{t_n} \mu(t)\, dt + \int_{t_0}^{t_n} \sigma(t)\, dB(t) \right\} \big(d_{t_n} - y_{t_n}\big)^2. \tag{6}$$
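As a hedged illustration of how the weight $\varphi(t_n)$ in (5) might be evaluated numerically, the sketch below discretizes the two integrals on a time grid and draws the Brownian increments $dB(t)$ as i.i.d. normals; the function name and the generic callables `mu` and `sigma` are assumptions of this sketch (the paper's concrete choices appear in (13) below).

```python
import numpy as np

def phi_weights(mu, sigma, N, beta=1.0, dt=1.0, seed=0):
    """Discretized stochastic time effective weights phi(t_n) of eq. (5).

    mu, sigma : callables mu(t) and sigma(t)
    N         : number of sample times t_1 .. t_N (t_0 = 0 here)
    The integrals from t_0 to t_n are approximated by cumulative sums
    over steps of size dt; dB(t) is drawn as i.i.d. N(0, dt) increments.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(1, N + 1, dtype=float) * dt
    drift = np.cumsum(mu(t)) * dt              # ~ int_{t0}^{t_n} mu(t) dt
    dB = rng.normal(0.0, np.sqrt(dt), N)       # Brownian increments
    diffusion = np.cumsum(sigma(t) * dB)       # ~ int_{t0}^{t_n} sigma(t) dB(t)
    return np.exp(drift + diffusion) / beta
```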

The main objective of the learning algorithm is to minimize the value of the cost function $E$ until it reaches the preset minimum value $\xi$ by repeated learning. On each repetition the output is calculated and the global error $E$ is obtained. The gradient of the cost function is given by $\Delta E = \partial E / \partial W$. For the weight nodes in the input layer, the gradient of the connective weight $w_{ij}$ is given by

$$\Delta w_{ij} = -\eta\, \frac{\partial E(t_n)}{\partial w_{ij}} = \eta\, \varepsilon_{t_n}\, v_j\, \varphi(t_n)\, f'_H\big(\mathrm{net}_{j,t_n}\big)\, x_{i,t_n}; \tag{7}$$

for the weight nodes in the recurrent layer, the gradient of the connective weight $c_j$ is given by

$$\Delta c_j = -\eta\, \frac{\partial E(t_n)}{\partial c_j} = \eta\, \varepsilon_{t_n}\, v_j\, \varphi(t_n)\, f'_H\big(\mathrm{net}_{j,t_n}\big)\, u_{j,t_n}; \tag{8}$$

and for the weight nodes in the hidden layer, the gradient of the connective weight $v_j$ is given by

$$\Delta v_j = -\eta\, \frac{\partial E(t_n)}{\partial v_j} = \eta\, \varepsilon_{t_n}\, \varphi(t_n)\, f_H\big(\mathrm{net}_{j,t_n}\big), \tag{9}$$

where $\eta$ is the learning rate and $f'_H(\mathrm{net}_{j,t_n})$ is the derivative of the activation function. So the update rules for the weights $w_{ij}$, $c_j$, and $v_j$ are given by

$$w^{k+1}_{ij} = w^{k}_{ij} + \Delta w^{k}_{ij} = w^{k}_{ij} + \eta\, \varepsilon_{t_n}\, v_j\, \varphi(t_n)\, f'_H\big(\mathrm{net}_{j,t_n}\big)\, x_{i,t_n},$$
$$c^{k+1}_{j} = c^{k}_{j} + \Delta c^{k}_{j} = c^{k}_{j} + \eta\, \varepsilon_{t_n}\, v_j\, \varphi(t_n)\, f'_H\big(\mathrm{net}_{j,t_n}\big)\, u_{j,t_n},$$
$$v^{k+1}_{j} = v^{k}_{j} + \Delta v^{k}_{j} = v^{k}_{j} + \eta\, \varepsilon_{t_n}\, \varphi(t_n)\, f_H\big(\mathrm{net}_{j,t_n}\big). \tag{10}$$
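Read alongside (7)-(10), the following is a minimal sketch (not the authors' code) of one weighted gradient step for a single sample; the sigmoid derivative $f'_H = z(1 - z)$ follows from the choice of $f_H$ in Section 2.1.

```python
import numpy as np

def st_ernn_step(x, u, d, W, c, v, phi_n, eta=0.001):
    """One stochastic-time-weighted gradient step, eqs. (7)-(10).

    x     : input vector at t_n, shape (m,)
    u     : recurrent (context) units at t_n, shape (n,)
    d     : target value d_{t_n} (next day's closing price)
    phi_n : stochastic time effective weight phi(t_n)
    Returns the updated weights and the new context state z.
    """
    net = x @ W + c * u                    # eq. (1)
    z = 1.0 / (1.0 + np.exp(-net))         # eq. (2)
    y = v @ z                              # eq. (3)
    eps = d - y                            # output error epsilon_{t_n}
    fp = z * (1.0 - z)                     # sigmoid derivative f'_H(net)
    W = W + eta * eps * phi_n * np.outer(x, v * fp)   # eq. (7)
    c = c + eta * eps * phi_n * v * fp * u            # eq. (8)
    v = v + eta * eps * phi_n * z                     # eq. (9), f_H(net) = z
    return W, c, v, z
```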

Note that the training aim of the stochastic time effective neural network is to modify the weights so as to minimize the error between the network's prediction and the actual target. In Figure 2 the training algorithm procedures of the stochastic time effective neural network are displayed, which are as follows.

Step 1. Perform input data normalization. In the ST-ERNN model we choose four kinds of stock prices as the input values in the input layer: daily opening price, daily highest price, daily lowest price, and daily closing price. The output is the closing price of the next trading day. Then determine the parameters of the network, such as the learning rate $\eta$ (between 0 and 1), the maximum number of training iterations $K$, and the initial connective weights. Also set the topology of the network architecture, that is, in this paper, the number of neural nodes in the hidden layer.

Step 2. At the beginning of data processing, the connective weights $w_{ij}$, $v_j$, and $c_j$ follow the uniform distribution on $(-1, 1)$.

Step 3. Introduce the stochastic time effective function $\varphi(t)$ into the error function $E$. Choose the drift function $\mu(t)$ and the volatility function $\sigma(t)$. Give the transfer function from the input layer to the hidden layer and the transfer function from the hidden layer to the output layer.

Step 4. Establish the error-acceptance condition and set the preset minimum error $\xi$. Based on the network training objective $E = (1/N)\sum_{n=1}^{N} E(t_n)$: if $E$ is below the preset minimum error, go to Step 6; otherwise go to Step 5.

Step 5. Modify the connective weights: calculate the gradients of the connective weights $\Delta w^{k}_{ij}$, $\Delta v^{k}_{j}$, and $\Delta c^{k}_{j}$, and then modify the weights from each layer to the previous layer: $w^{k+1}_{ij}$, $v^{k+1}_{j}$, and $c^{k+1}_{j}$.

Step 6. Output the predictive value $y_{t+1} = f_T\big( \sum_{j=1}^{n} v_j\, f_H( \sum_{i=1}^{m} w_{ij} x_{i,t} + c_j u_{j,t} ) \big)$.
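Purely as a sketch of Steps 1-6 (under the same placeholder assumptions as the snippets above, whose `phi_weights` and `st_ernn_step` it reuses), a training loop might look as follows; `X` holds the normalized open/high/low/close inputs and `d` the next day's normalized closing prices.

```python
import numpy as np

def train_st_ernn(X, d, phi, n_hidden=9, eta=0.001, K=300, xi=1e-5, seed=0):
    """Training loop following Steps 1-6; X has shape (N, 4), d shape (N,),
    and phi holds the weights phi(t_n) from the phi_weights sketch."""
    N, m = X.shape
    rng = np.random.default_rng(seed)
    # Step 2: connective weights uniform on (-1, 1)
    W = rng.uniform(-1, 1, (m, n_hidden))
    c = rng.uniform(-1, 1, n_hidden)
    v = rng.uniform(-1, 1, n_hidden)
    for k in range(K):                     # at most K training iterations
        z = np.zeros(n_hidden)             # reset the context state
        E = 0.0
        for t in range(N):
            net = X[t] @ W + c * z
            y = v @ (1.0 / (1.0 + np.exp(-net)))
            E += 0.5 * phi[t] * (d[t] - y) ** 2          # eq. (4)
            W, c, v, z = st_ernn_step(X[t], z, d[t], W, c, v, phi[t], eta)
        if E / N < xi:                     # Step 4: stop once E < xi
            break
    return W, c, v
```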

3. Forecasting and Statistical Analysis of Stock Price

3.1. Selecting and Preprocessing of the Data. To evaluate the performance of the proposed ST-ERNN forecasting model, we select the daily data from the Shanghai Stock Exchange (SSE) Composite Index, the Taiwan Stock Exchange Capitalization Weighted Stock Index (TWSE), the Korean Stock Price Index (KOSPI), and the Nikkei 225 Index (Nikkei225) to analyze the forecasting models by comparison. In Table 1 we show that the selected number of data points of each index is 2000. The SSE data cover the time period from 16/03/2006 up to 19/03/2014, the TWSE from 09/02/2006 up to 19/03/2014, the KOSPI used in this paper from 20/02/2006 up to 19/03/2014, and the Nikkei225 from 27/01/2006 up to 19/03/2014. Usually the nontrading time periods are treated as frozen, such that we adopt only the time during trading hours. To reduce the impact of noise in the financial market, and finally to lead to a better prediction, the collected data should be properly adjusted and normalized at the beginning of the modelling. Different normalization methods have been tested to improve the network training [27, 28]; among them, "the normalized data in the range of [0, 1]" of the following equation is adopted in this work:

$$S(t)' = \frac{S(t) - \min S(t)}{\max S(t) - \min S(t)}, \tag{11}$$

where the minimum and maximum values are obtained on the training set during the training process.


Table 1: Data selection.

Index     | Date sets             | Total number | Hidden number | Learning rate
SSE       | 16/03/2006–19/03/2014 | 2000         | 9             | 0.001
TWSE      | 09/02/2006–19/03/2014 | 2000         | 12            | 0.001
KOSPI     | 20/02/2006–19/03/2014 | 2000         | 10            | 0.05
Nikkei225 | 27/01/2006–19/03/2014 | 2000         | 10            | 0.01

[Figure 2: Training algorithm procedures of the ST-ERNN: construct the model and set the topology of the network architecture; initialize the connective weights $w_{ij}$, $v_j$, and $c_j$; set up the learning algorithm with the stochastic time effective function; establish the update rule for the weights and compute the cost function $E$; if $E < \xi$, output the predictive value $y_{t+1}$, otherwise modify the weights and repeat.]

In order to obtain the true value after the forecasting, we can revert the output variables as

$$S(t) = S(t)'\,\big(\max S(t) - \min S(t)\big) + \min S(t). \tag{12}$$
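A minimal sketch of (11) and (12), assuming the min and max are taken from the training portion only:

```python
import numpy as np

def normalize(s, s_min, s_max):
    """Eq. (11): scale prices into [0, 1] using training-set min/max."""
    return (s - s_min) / (s_max - s_min)

def denormalize(s_norm, s_min, s_max):
    """Eq. (12): revert normalized predictions to the price scale."""
    return s_norm * (s_max - s_min) + s_min

train = np.array([2100.0, 2250.0, 2180.0, 2320.0])   # placeholder prices
lo, hi = train.min(), train.max()
assert np.allclose(denormalize(normalize(train, lo, hi), lo, hi), train)
```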

3.2. Training and Forecasting by the ST-ERNN Model. In the ST-ERNN model, after repeated experiments on the different index data, different numbers of neural nodes in the hidden layer were chosen as optimal; see Table 1. The dataset was divided into two subsets: the training set and the testing set. It is noteworthy that the lengths of the data we chose for these four time series are the same, and the lengths of the training and testing data are also set the same. The training set, with 75% of the data, was used for model building, and the testing set, with the last 25%, was used to test the predictive part of the model on the out-of-sample set. The training set totals 1500 data points: for the SSE from 16/03/2006 to 22/02/2012, for the TWSE from 09/02/2006 to 08/03/2012, for the KOSPI from 20/02/2006 to 09/03/2012, and for the Nikkei225 from 27/01/2006 to 09/03/2012. The remaining 500 points are defined as the testing set. We preset the learning rate and the maximum training cycle by referring to [21, 29, 30]; then, after repeated experiments on the training set of each index, the numbers of neural nodes in the hidden layer were chosen as optimal; see Table 1. The maximum number of training iterations $K$ is 300; each dataset has a different learning rate $\eta$, and here, after many experiments on the training data, we choose 0.001, 0.001, 0.05, and 0.01 for the SSE, TWSE, KOSPI, and Nikkei225, respectively. The predefined minimum training threshold is $\xi = 10^{-5}$. When using the ST-ERNN model to predict the daily closing price of a stock index, we assume that $\mu(t)$ (the drift function) and $\sigma(t)$ (the volatility function) are as follows:

$$\mu(t) = \frac{1}{(c - t)^2}, \qquad \sigma(t) = \left[ \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2 \right]^{1/2}, \tag{13}$$

where $c$ is a parameter equal to the number of samples in the dataset and $\bar{x}$ is the mean of the sample data. Then the corresponding cost function can be written as

$$E = \frac{1}{N} \sum_{n=1}^{N} E(t_n) = \frac{1}{2N} \sum_{n=1}^{N} \frac{1}{\beta} \exp\!\left\{ \int_{t_0}^{t_n} \frac{dt}{(c-t)^2} + \int_{t_0}^{t_n} \left[ \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2 \right]^{1/2} dB(t) \right\} \big(d_{t_n} - y_{t_n}\big)^2. \tag{14}$$
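Plugging (13) into the `phi_weights` sketch above might look as follows; the placeholder series, the unit time grid, and the handling of the pole of $1/(c-t)^2$ at $t = c$ (weights computed only up to $t_{N-1}$) are assumptions of this sketch.

```python
import numpy as np

# Concrete drift and volatility of eq. (13): c equals the number of
# samples, so 1/(c - t)^2 grows as t nears c and the newest samples
# receive the largest weights.
x = np.linspace(0.2, 0.8, 1500)                    # placeholder training series
c = float(len(x))
mu = lambda t: 1.0 / (c - t) ** 2
sigma = lambda t: np.full_like(t, x.std(ddof=1))   # sample std of eq. (13)
w = phi_weights(mu, sigma, N=len(x) - 1)           # weights for t_1 .. t_{N-1}
```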

Figure 3 shows the predicting results on the training and testing data for the SSE, TWSE, KOSPI, and Nikkei225 with the ST-ERNN model, correspondingly. The curves of the actual data and the predictive data are intuitively very close, which means that, after many experiments, the financial time series have been well trained and the forecasting results of the ST-ERNN model are as desired.

The plots of the real and the predictive data for these four price series are respectively shown in Figure 4. Through linear regression analysis we make a comparison of the predictive values of the ST-ERNN model with the real price data. It is known that linear regression can be used to fit a predictive model to an observed data set of $Y$ and $X$. The linear equations of the SSE, TWSE, KOSPI, and Nikkei225 are exhibited respectively in Figures 4(a)–4(d). We can observe that all the slopes of the linear equations are close to 1, which implies that the predictive values and the real values do not deviate too much.

Table 2: Linear regression parameters of market indices.

Parameter | SSE    | TWSE   | KOSPI  | Nikkei225
a         | 0.9873 | 0.9763 | 0.9614 | 0.9661
b         | 7.582  | 173    | 38.04  | 653.5
R         | 0.992  | 0.9952 | 0.9963 | 0.9971

A valuable numerical measure of association between two variables is the correlation coefficient $R$. Table 2 shows the values of $a$, $b$, and $R$ for the above indices. $R$ is given as follows:

$$R = \frac{\sum_{t=1}^{N} \big(d_t - \bar{d}\big)\big(y_t - \bar{y}\big)}{\sqrt{\sum_{t=1}^{N} \big(d_t - \bar{d}\big)^2 \, \sum_{t=1}^{N} \big(y_t - \bar{y}\big)^2}}, \tag{15}$$

where $d_t$ is the actual value, $y_t$ is the predicting value, $\bar{d}$ is the mean of the actual values, $\bar{y}$ is the mean of the predicting values, and $N$ is the total number of data points.
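For illustration, the fit $Y = aX + b$ of Figure 4 and the coefficient $R$ of (15) can be computed as below; this is a generic sketch, not the authors' code.

```python
import numpy as np

def correlation_r(d, y):
    """Correlation coefficient R of eq. (15)."""
    dc, yc = d - d.mean(), y - y.mean()
    return np.sum(dc * yc) / np.sqrt(np.sum(dc ** 2) * np.sum(yc ** 2))

def linear_fit(d, y):
    """Slope a and intercept b of the regression y = a*d + b (Table 2)."""
    a, b = np.polyfit(d, y, 1)
    return a, b
```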

3.3. Comparisons of Forecasting Results. We compare the proposed and conventional forecasting approaches (the BPNN, STNN, and ERNN models) on the four indices mentioned above, where the STNN is based on the BPNN combined with the stochastic time effective function [19]. For these four different models we set the same network inputs, including four kinds of series: daily opening price, daily closing price, daily highest price, and daily lowest price. The network output is the closing price of the next trading day. In the stock markets, practical experience shows that the above four kinds of data of the last trading day are very important indicators when predicting the closing price of the next trading day. To choose better parameters we have carried out many experiments on these four different indices. In order to achieve the optimal network for each forecasting approach, the most appropriate numbers of neural nodes in the hidden layer differ, and the learning rates also vary when training different models; see Table 3. In Table 3, "Hidden" stands for the number of neural nodes in the hidden layer and "L. r." stands for the learning rate. The hidden number is also chosen by referring to [21, 29, 30]. The experiments have been done repeatedly to determine the hidden nodes and the training cycle in the training process. The principle for choosing the hidden number is as follows: if the number of neural nodes in the input layer is $N$, the number of neural nodes in the hidden layer is set to be nearly $2N + 1$, and the number of neural nodes in the output layer is 1. Since the ERNN model and the ST-ERNN model have similar topology structures, the number of neural nodes in the hidden layer and the learning rate in Table 3 are chosen approximately equal for these two models in the training process. Also, the BPNN model and the STNN model are similar, so their chosen parameters are basically the same. Figures 5(a)–5(d) show the predicting values of the four indices on the test set. From these plots, the predicting values of the ST-ERNN model are closer to the actual values than the other models' curves.


[Figure 3: Comparisons of the predictive data and the actual data of the ST-ERNN forecasting model over the training and test sets for (a) SSE, (b) TWSE, (c) KOSPI, and (d) Nikkei225.]

To compare the training and forecasting results more clearly, the performance measures RMSE, MAE, MAPE, and MAPE(100) are presented in the next part.

To analyze the forecasting performance of the four considered forecasting models more deeply, we use the following error evaluation criteria [31–35]: the mean absolute error (MAE), the root mean square error (RMSE), and the mean absolute percentage error (MAPE); the corresponding definitions are given as follows:

$$\mathrm{MAE} = \frac{1}{N} \sum_{t=1}^{N} \big| d_t - y_t \big|,$$
$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{t=1}^{N} \big(d_t - y_t\big)^2},$$
$$\mathrm{MAPE} = 100 \times \frac{1}{N} \sum_{t=1}^{N} \left| \frac{d_t - y_t}{d_t} \right|, \tag{16}$$

where $d_t$ and $y_t$ are the real value and the predicting value at time $t$, respectively, and $N$ is the total number of data points. Noting that MAE, RMSE, and MAPE are measures of the deviation between the prediction values and the actual values, the prediction performance is better when the values of these evaluation criteria are smaller. However, if the results are not consistent among these criteria, we choose the MAPE as the benchmark, since MAPE is relatively more stable than the other criteria [16].
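A direct sketch of the criteria in (16); `mape_100` below simply applies MAPE to the latest 100 test days, matching the paper's use of MAPE(100).

```python
import numpy as np

def mae(d, y):
    """Mean absolute error, eq. (16)."""
    return np.mean(np.abs(d - y))

def rmse(d, y):
    """Root mean square error, eq. (16)."""
    return np.sqrt(np.mean((d - y) ** 2))

def mape(d, y):
    """Mean absolute percentage error, eq. (16), in percent."""
    return 100.0 * np.mean(np.abs((d - y) / d))

def mape_100(d, y):
    """MAPE over the latest 100 days of the testing data."""
    return mape(d[-100:], y[-100:])
```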

Figures 6(a)–6(d) show the forecasting results of the SSE, TWSE, KOSPI, and Nikkei225 for the four forecasting models. The empirical research shows that the proposed ST-ERNN model has the best performance; the ERNN and the STNN both outperform the common BPNN model. The stock markets showed large fluctuations, which are reflected in Figure 6; we can see that forecasting in the large-fluctuation periods is relatively inaccurate for all four models.


[Figure 4: Comparisons and linear regressions of the actual data and the predictive values for SSE, TWSE, KOSPI, and Nikkei225. The fitted lines are (a) SSE: $Y = 0.9873X + 7.582$; (b) TWSE: $Y = 0.9763X + 173$; (c) KOSPI: $Y = 0.9614X + 38.04$; (d) Nikkei225: $Y = 0.9661X + 653.5$.]

Table 3: Different parameters for different models ("Hidden" is the number of neural nodes in the hidden layer; "L. r." is the learning rate).

Index data | BPNN Hidden | BPNN L. r. | STNN Hidden | STNN L. r. | ERNN Hidden | ERNN L. r. | ST-ERNN Hidden | ST-ERNN L. r.
SSE        | 8  | 0.01 | 8  | 0.01 | 10 | 0.001 | 9  | 0.001
TWSE       | 10 | 0.01 | 10 | 0.01 | 12 | 0.001 | 12 | 0.001
KOSPI      | 8  | 0.02 | 8  | 0.02 | 10 | 0.03  | 10 | 0.05
Nikkei225  | 10 | 0.05 | 10 | 0.05 | 10 | 0.01  | 10 | 0.01

When the stock market is relatively stable, the forecasting results are nearer to the actual values. Compared with the BPNN, the STNN, and the ERNN models, the forecasting results are also presented in Table 4, where MAPE(100) stands for the MAPE of the latest 100 days in the testing data. Table 4 shows that the evaluation criteria of the ST-ERNN model are almost always smaller than those of the other models. From Table 4 and Figure 6 we can conclude that the proposed ST-ERNN model is better than the other three models. In Table 4 the evaluation criteria of the STNN model and the ERNN model are almost always smaller than those of the BPNN for the four considered indices.


[Figure 5: Predictive values on the test set for (a) SSE, (b) TWSE, (c) KOSPI, and (d) Nikkei225 from the BPNN, STNN, ERNN, and ST-ERNN models together with the real values.]

This illustrates that the effect of financial time series forecasting of the STNN model is superior to that of the BPNN model, and that the dynamic neural network is more effective, robust, and precise than the original BPNN for these four indices. Besides, most values of MAPE(100) are smaller than those of MAPE for all stock indices; therefore the short-term prediction outperforms the long-term prediction. Overall, the training and testing results are consistent with the measured data, which demonstrates that the ST-ERNN predictor has higher forecast accuracy.

In Figures 7(a), 7(b), 7(c), and 7(d) we consider the relative errors of the ST-ERNN forecasting results. Figure 7 depicts that most of the predicting relative errors for these four price series are between −0.1 and 0.1. Moreover, there are some points with large relative errors of the forecasting results in the four models, especially on the SSE index, which can be attributed to the large fluctuations that lead to the large relative errors. The definition of the relative error is given as follows:

$$e(t) = \frac{d_t - y_t}{d_t}, \tag{17}$$

where $d_t$ and $y_t$ denote the actual value and the predicting value, respectively, at time $t$, $t = 1, 2, \dots$.
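The relative error series of (17) plotted in Figure 7 can be computed elementwise, for example:

```python
import numpy as np

def relative_error(d, y):
    """Relative error e(t) of eq. (17) at each time point."""
    return (d - y) / d
```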

4. CID and MCID Analysis

The analysis and forecasting of time series have long been a focus of economic research, aiming at a clearer understanding of the mechanisms and characteristics of financial markets [36–42]. In this section we employ an efficient complexity invariant distance (CID) for time series.


[Figure 6: Comparisons of the actual data and the predictive data of the BPNN, STNN, ERNN, and ST-ERNN models for (a) SSE, (b) TWSE, (c) KOSPI, and (d) Nikkei225, with insets enlarging parts of the test sets.]

Reference [43] shows that the complexity invariant distance measure can produce improvements in classification and clustering in the vast majority of cases.

Complexity invariance uses information about complexity differences between two time series as a correction factor for existing distance measures. We begin by introducing the Euclidean distance and use this as a starting point to bring in the definition of CID. Suppose we have two time series $P$ and $Q$ of length $n$:

$$P = p_1, p_2, \dots, p_i, \dots, p_n, \qquad Q = q_1, q_2, \dots, q_i, \dots, q_n. \tag{18}$$

The ubiquitous Euclidean distance is

$$\mathrm{ED}(P, Q) = \sqrt{\sum_{i=1}^{n} \big(p_i - q_i\big)^2}. \tag{19}$$

The Euclidean distance $\mathrm{ED}(P, Q)$ between two time series $P$ and $Q$ can be made complexity invariant by introducing a correction factor:

$$\mathrm{CID}(P, Q) = \mathrm{ED}(P, Q) \times \mathrm{CF}(P, Q), \tag{20}$$

where CF is a complexity correction factor defined as

$$\mathrm{CF}(P, Q) = \frac{\max\big(\mathrm{CE}(P), \mathrm{CE}(Q)\big)}{\min\big(\mathrm{CE}(P), \mathrm{CE}(Q)\big)}, \tag{21}$$

and $\mathrm{CE}(T)$ is a complexity estimate of a time series $T$, which can be computed as follows:

$$\mathrm{CE}(T) = \sqrt{\sum_{i=1}^{n-1} \big(t_{i+1} - t_i\big)^2}. \tag{22}$$
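Equations (19)-(22) translate directly into code; the following is a minimal sketch:

```python
import numpy as np

def complexity_estimate(t):
    """CE of eq. (22): the 'stretched length' of the series."""
    return np.sqrt(np.sum(np.diff(t) ** 2))

def cid(p, q):
    """Complexity invariant distance of eqs. (19)-(21)."""
    ed = np.sqrt(np.sum((p - q) ** 2))          # eq. (19)
    ce_p, ce_q = complexity_estimate(p), complexity_estimate(q)
    cf = max(ce_p, ce_q) / min(ce_p, ce_q)      # eq. (21)
    return ed * cf                              # eq. (20)
```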


[Figure 7: ((a), (b), (c), and (d)) Relative errors of the forecasting results from the ST-ERNN model for SSE, TWSE, KOSPI, and Nikkei225.]

It is worth noticing that CF accounts for differences in the complexities of the time series being compared; CF forces time series with very different complexities to be further apart. In the case that all time series have the same complexity, CID simply degenerates to the Euclidean distance. The prediction performance is better when the CID distance is smaller; that is to say, the curve of the predictive data is closer to the actual data. The actual values can be seen as the series $P$ and the predicting results as the series $Q$. Table 5 shows the CID distances between the real index values of the SSE, TWSE, KOSPI, and Nikkei225 and the corresponding predictions from each network model. It is clear that the CID distance between the real index values and the prediction by the ST-ERNN model is the smallest one; moreover, the distances of the STNN model and the ERNN model are smaller than those of the BPNN for all four considered indices.

In general the complexity of a real system is not constrained to a sole scale. In this part we consider a development of the CID analysis, namely the multiscale CID (MCID). The MCID analysis takes into account multiple time scales while measuring the predicting results, and it is applied in this work to the analysis of the actual data and the predicting data of the stock prices. The MCID analysis comprises two steps. (i) Considering a one-dimensional discrete time series $x_1, x_2, \dots, x_i, \dots, x_N$, we construct consecutive coarse-grained time series $y^{(\tau)}$ corresponding to the scale factor $\tau$ according to the following formula:

$$y^{(\tau)}_j = \frac{1}{\tau} \sum_{i=(j-1)\tau + 1}^{j\tau} x_i, \qquad 1 \le j \le \frac{N}{\tau}. \tag{23}$$

For scale one, the time series $y^{(1)}$ is simply the original time series; the length of each coarse-grained time series is equal to the length of the original time series divided by the scale factor $\tau$. (ii) Calculate the CID for each coarse-grained time series and then plot it as a function of the scale factor. Figure 8 shows the MCID values between the forecasting results and the real market prices for the BPNN, ERNN, STNN, and ST-ERNN models. In Figure 8 it is obvious that the MCID of the


[Figure 8: ((a), (b), (c), and (d)) MCID values, as a function of the scale factor $\tau$ (1 to 40), between the forecasting results and the real market prices from the BPNN, ERNN, STNN, and ST-ERNN models for SSE, TWSE, KOSPI, and Nikkei225.]

ST-ERNN with the actual values is the smallest one at every scale; that is, the ST-ERNN (with the stochastic time effective function) is effective for forecasting stock prices.
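Combining (23) with the `cid()` sketch above, the MCID curve of Figure 8 might be produced as follows (scales 1-40 as plotted there):

```python
import numpy as np

def coarse_grain(x, tau):
    """Coarse-grained series y^(tau) of eq. (23)."""
    x = np.asarray(x, dtype=float)
    n = len(x) // tau
    return x[: n * tau].reshape(n, tau).mean(axis=1)

def mcid(d, y, scales=range(1, 41)):
    """CID between actual d and predicted y at each scale factor."""
    return [cid(coarse_grain(d, tau), coarse_grain(y, tau)) for tau in scales]
```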

5. Conclusion

The aim of this research is to develop a predictive model to forecast financial time series. In this study we have developed a predictive model by using an Elman recurrent neural network with a stochastic time effective function to forecast the indices of the SSE, TWSE, KOSPI, and Nikkei225. The linear regression analysis implies that the predictive values and the real values do not deviate too much. We then compared the proposed model with the BPNN, STNN, and ERNN forecasting models. Empirical examinations of the predicting precision for the price time series (by comparisons of predicting measures such as MAE, RMSE, MAPE, and MAPE(100)) show that the proposed neural network model has the advantage of improving the precision of forecasting, and the forecasting of this proposed model closely approaches the real financial market movements. Furthermore, from the curves of the relative errors, it can be concluded that large fluctuations lead to large relative errors. In addition, by calculating the CID and MCID distances, this conclusion is illustrated more clearly. The study and the proposed model contribute significantly to the time series literature on forecasting.


Table 4: Comparisons of indices' predictions for different forecasting models.

Index, errors | BPNN     | STNN     | ERNN     | ST-ERNN
SSE
MAE           | 45.3701  | 24.9687  | 37.2626  | 12.7390
RMSE          | 54.4564  | 40.5437  | 49.3907  | 37.0693
MAPE          | 2.0199   | 1.1895   | 1.8211   | 0.4135
MAPE(100)     | 0.5064   | 0.3687   | 0.4318   | 0.2681
TWSE
MAE           | 252.7225 | 140.5971 | 151.2830 | 105.6377
RMSE          | 316.8197 | 186.8309 | 205.4236 | 136.1674
MAPE          | 3.2017   | 1.7303   | 1.8449   | 1.3468
MAPE(100)     | 2.2135   | 1.1494   | 1.3349   | 1.2601
KOSPI
MAE           | 74.3073  | 56.3309  | 47.9296  | 18.2421
RMSE          | 77.1528  | 58.2944  | 50.8174  | 21.0479
MAPE          | 16.6084  | 12.4461  | 10.9608  | 4.2257
MAPE(100)     | 7.4379   | 5.9664   | 4.9176   | 2.1788
Nikkei225
MAE           | 203.8034 | 138.1857 | 166.2480 | 68.5458
RMSE          | 238.5933 | 169.7061 | 207.3395 | 89.0378
MAPE          | 1.8556   | 1.2580   | 1.5398   | 0.6010
MAPE(100)     | 0.7674   | 0.5191   | 0.4962   | 0.4261

Table 5: CID distances for four network models.

Index     | BPNN   | STNN   | ERNN   | ST-ERNN
SSE       | 3052.1 | 1764.7 | 2320.2 | 1659.9
TWSE      | 2780.5 | 987.63 | 1083.0 | 615.81
KOSPI     | 3350.4 | 2312.8 | 2551.0 | 1006.0
Nikkei225 | 4454.1 | 2372.6 | 3289.5 | 2542.1

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors were supported in part by the National Natural Science Foundation of China under Grant nos. 71271026 and 10971010.

References

[1] Y. Kajitani, K. W. Hipel, and A. I. McLeod, "Forecasting nonlinear time series with feed-forward neural networks: a case study of Canadian lynx data," Journal of Forecasting, vol. 24, no. 2, pp. 105–117, 2005.
[2] T. Takahama, S. Sakai, A. Hara, and N. Iwane, "Predicting stock price using neural networks optimized by differential evolution with degeneration," International Journal of Innovative Computing, Information and Control, vol. 5, no. 12, pp. 5021–5031, 2009.
[3] K. Huarng and T. H.-K. Yu, "The application of neural networks to forecast fuzzy time series," Physica A, vol. 363, no. 2, pp. 481–491, 2006.
[4] F. Wang and J. Wang, "Statistical analysis and forecasting of return interval for SSE and model by lattice percolation system and neural network," Computers and Industrial Engineering, vol. 62, no. 1, pp. 198–205, 2012.
[5] H.-J. Kim and K.-S. Shin, "A hybrid approach based on neural networks and genetic algorithms for detecting temporal patterns in stock markets," Applied Soft Computing, vol. 7, no. 2, pp. 569–576, 2007.
[6] M. Ghiassi, H. Saidane, and D. K. Zimbra, "A dynamic artificial neural network model for forecasting time series events," International Journal of Forecasting, vol. 21, no. 2, pp. 341–362, 2005.
[7] M. R. Hassan, B. Nath, and M. Kirley, "A fusion model of HMM, ANN and GA for stock market forecasting," Expert Systems with Applications, vol. 33, no. 1, pp. 171–180, 2007.
[8] D. Devaraj, B. Yegnanarayana, and K. Ramar, "Radial basis function networks for fast contingency ranking," International Journal of Electrical Power and Energy Systems, vol. 24, no. 5, pp. 387–395, 2002.
[9] B. A. Garroa and R. A. Vazquez, "Designing artificial neural networks using particle swarm optimization algorithms," Computational Intelligence and Neuroscience, vol. 2015, Article ID 369298, 20 pages, 2015.
[10] Q. Gan, "Exponential synchronization of stochastic Cohen-Grossberg neural networks with mixed time-varying delays and reaction-diffusion via periodically intermittent control," Neural Networks, vol. 31, pp. 12–21, 2012.
[11] D. Xiao and J. Wang, "Modeling stock price dynamics by continuum percolation system and relevant complex systems analysis," Physica A: Statistical Mechanics and its Applications, vol. 391, no. 20, pp. 4827–4838, 2012.
[12] D. Enke and N. Mehdiyev, "Stock market prediction using a combination of stepwise regression analysis, differential evolution-based fuzzy clustering, and a fuzzy inference neural network," Intelligent Automation and Soft Computing, vol. 19, no. 4, pp. 636–648, 2013.
[13] G. Sermpinisa, C. Stasinakisa, and C. Dunisb, "Stochastic and genetic neural network combinations in trading and hybrid time-varying leverage effects," Journal of International Financial Markets, Institutions & Money, vol. 30, pp. 21–54, 2014.
[14] R. Ebrahimpour, H. Nikoo, S. Masoudnia, M. R. Yousefi, and M. S. Ghaemi, "Mixture of MLP-experts for trend forecasting of time series: a case study of the Tehran stock exchange," International Journal of Forecasting, vol. 27, no. 3, pp. 804–816, 2011.
[15] A. Bahrammirzaee, "A comparative survey of artificial intelligence applications in finance: artificial neural networks, expert system and hybrid intelligent systems," Neural Computing and Applications, vol. 19, no. 8, pp. 1165–1195, 2010.
[16] A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, and J. Schmidhuber, "A novel connectionist system for unconstrained handwriting recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, pp. 855–868, 2009.
[17] M. Ardalani-Farsa and S. Zolfaghari, "Chaotic time series prediction with residual analysis method using hybrid Elman-NARX neural networks," Neurocomputing, vol. 73, no. 13–15, pp. 2540–2553, 2010.
[18] M. Paliwal and U. A. Kumar, "Neural networks and statistical techniques: a review of applications," Expert Systems with Applications, vol. 36, no. 1, pp. 2–17, 2009.
[19] Z. Liao and J. Wang, "Forecasting model of global stock index by stochastic time effective neural network," Expert Systems with Applications, vol. 37, no. 1, pp. 834–841, 2010.
[20] L. Pan and J. Cao, "Robust stability for uncertain stochastic neural network with delay and impulses," Neurocomputing, vol. 94, pp. 102–110, 2012.
[21] H. F. Liu and J. Wang, "Integrating independent component analysis and principal component analysis with neural network to predict Chinese stock market," Mathematical Problems in Engineering, vol. 2011, Article ID 382659, 15 pages, 2011.
[22] Z. Q. Guo, H. Q. Wang, and Q. Liu, "Financial time series forecasting using LPP and SVM optimized by PSO," Soft Computing, vol. 17, no. 5, pp. 805–818, 2013.
[23] H. L. Niu and J. Wang, "Financial time series prediction by a random data-time effective RBF neural network," Soft Computing, vol. 18, no. 3, pp. 497–508, 2014.
[24] J. L. Elman, "Finding structure in time," Cognitive Science, vol. 14, no. 2, pp. 179–211, 1990.
[25] M. Cacciola, G. Megali, D. Pellicano, and F. C. Morabito, "Elman neural networks for characterizing voids in welded strips: a study," Neural Computing and Applications, vol. 21, no. 5, pp. 869–875, 2012.
[26] R. Chandra and M. Zhang, "Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction," Neurocomputing, vol. 86, pp. 116–123, 2012.
[27] D. K. Chaturvedi, P. S. Satsangi, and P. K. Kalra, "Effect of different mappings and normalization of neural network models," in Proceedings of the National Power Systems Conference, vol. 9, pp. 377–386, Indian Institute of Technology, Kanpur, India, 1996.
[28] S. Makridakis, "Accuracy measures: theoretical and practical concerns," International Journal of Forecasting, vol. 9, no. 4, pp. 527–529, 1993.
[29] L. Q. Han, Design and Application of Artificial Neural Network, Chemical Industry Press, 2002.
[30] M. Zounemat-Kermani, "Principal component analysis (PCA) for estimating chlorophyll concentration using forward and generalized regression neural networks," Applied Artificial Intelligence, vol. 28, no. 1, pp. 16–29, 2014.
[31] D. Olson and C. Mossman, "Neural network forecasts of Canadian stock returns using accounting ratios," International Journal of Forecasting, vol. 19, no. 3, pp. 453–465, 2003.
[32] H. Demuth and M. Beale, Neural Network Toolbox: For Use with MATLAB, The MathWorks, Natick, Mass, USA, 5th edition, 1998.
[33] A. P. Plumb, R. C. Rowe, P. York, and M. Brown, "Optimisation of the predictive ability of artificial neural network (ANN) models: a comparison of three ANN programs and four classes of training algorithm," European Journal of Pharmaceutical Sciences, vol. 25, no. 4-5, pp. 395–405, 2005.
[34] D. O. Faruk, "A hybrid neural network and ARIMA model for water quality time series prediction," Engineering Applications of Artificial Intelligence, vol. 23, no. 4, pp. 586–594, 2010.
[35] M. Tripathy, "Power transformer differential protection using neural network principal component analysis and radial basis function neural network," Simulation Modelling Practice and Theory, vol. 18, no. 5, pp. 600–611, 2010.
[36] C. E. Martin and J. A. Reggia, "Fusing swarm intelligence and self-assembly for optimizing echo state networks," Computational Intelligence and Neuroscience, vol. 2015, Article ID 642429, 15 pages, 2015.
[37] R. S. Tsay, Analysis of Financial Time Series, John Wiley & Sons, Hoboken, NJ, USA, 2005.
[38] P.-C. Chang, D.-D. Wang, and C.-L. Zhou, "A novel model by evolving partially connected neural network for stock price trend forecasting," Expert Systems with Applications, vol. 39, no. 1, pp. 611–620, 2012.
[39] P. Roy, G. S. Mahapatra, P. Rani, S. K. Pandey, and K. N. Dey, "Robust feed forward and recurrent neural network based dynamic weighted combination models for software reliability prediction," Applied Soft Computing, vol. 22, pp. 629–637, 2014.
[40] X. Gabaix, P. Gopikrishnan, V. Plerou, and H. E. Stanley, "A theory of power-law distributions in financial market fluctuations," Nature, vol. 423, no. 6937, pp. 267–270, 2003.
[41] R. N. Mantegna and H. E. Stanley, An Introduction to Econophysics: Correlations and Complexity in Finance, Cambridge University Press, Cambridge, UK, 2000.
[42] T. H. Roh, "Forecasting the volatility of stock price index," Expert Systems with Applications, vol. 33, no. 4, pp. 916–922, 2007.
[43] G. E. Batista, E. J. Keogh, O. M. Tataw, and V. M. de Souza, "CID: an efficient complexity-invariant distance for time series," Data Mining and Knowledge Discovery, vol. 28, no. 3, pp. 634–669, 2014.


Page 2: Research Article Financial Time Series Prediction …downloads.hindawi.com/journals/cin/2016/4742515.pdfResearch Article Financial Time Series Prediction Using Elman Recurrent Random

2 Computational Intelligence and Neuroscience

The number of nodes in the first layer, the input layer, corresponds to the number of explanatory variables. The last layer is called the output layer (the number of its nodes corresponds to the number of response variables). An intermediary layer of nodes, the hidden layer, separates the input from the output layer; its number of nodes defines the amount of complexity the model is capable of fitting. In previous studies, feed-forward networks have frequently been used for financial time series prediction. Unlike feed-forward networks, a recurrent neural network uses feedback connections to model spatial as well as temporal dependencies between input and output series, so that the initial states and the past states of the neurons can take part in a series of processing steps. References [15–17] show applications of recurrent neural networks in different areas. This ability makes them applicable to time series prediction with satisfactory prediction results [18]. As a special recurrent neural network, the Elman recurrent neural network (ERNN) is used in the present paper for prediction. The ERNN is a time-varying predictive control system that was developed with the ability to keep a memory of recent events in order to predict future output.

The nonlinear and nonstationary characteristics of the stock market make forecasting stock indices in a reliable manner difficult and challenging. Particularly in the current stock markets, rapid changes of trading rules and management systems have made it difficult to reflect the markets' development using only the early data; however, if only the recent data are selected, much useful information held by the early data is lost. In this research, a stochastic time effective neural network (STNN) and the corresponding learning algorithm were considered. References [19–22] introduce the corresponding stochastic time effective models and use them to predict financial time series. In particular, [23] presents a random data-time effective radial basis function neural network, which is also applied to the prediction of financial price series. The present paper optimizes the ERNN model, which differs from the above models; at the first step of the procedure we also employ input variables different from those of [23]. In the last section of this paper, two new error measure methods are introduced for the first time to evaluate the predicting results of the proposed model against other traditional models. In this improved network model, each historical datum is given a weight depending on the time at which it occurs. The degree of impact of the historical data on the market is expressed by a stochastic process, where a drift function and Brownian motion are introduced into the time strength function, so that the model has the effect of random movement while maintaining the original trend. In the present work, we combine the MLP with the ERNN and a stochastic time effective function to develop a stock price forecasting model called ST-ERNN.

To show that the ST-ERNN can provide higher accuracy in financial time series forecasting, we compare its forecasting performance with the BPNN model, the STNN model, and the ERNN model on different global stock indices: the Shanghai Stock Exchange (SSE) Composite Index, the Taiwan Stock Exchange Capitalization Weighted Stock Index (TWSE), the Korean Stock Price Index (KOSPI), and the Nikkei 225 Index (Nikkei225) are applied in this work to analyze the forecasting models by comparison.

2. Proposed Approach

2.1. Elman Recurrent Neural Network (ERNN). The Elman recurrent neural network, a simple recurrent neural network, was introduced by Elman in 1990 [24]. As is well known, a recurrent network has advantages such as time series and nonlinear prediction capabilities, faster convergence, and more accurate mapping ability. References [25, 26] combine the Elman neural network with different application areas. In this network, the outputs of the hidden layer are allowed to feed back onto themselves through a buffer layer called the recurrent layer. This feedback allows the ERNN to learn, recognize, and generate temporal patterns as well as spatial patterns. Every hidden neuron is connected to only one recurrent-layer neuron through a constant weight of value one; hence the recurrent layer virtually constitutes a copy of the state of the hidden layer one instant before, and the number of recurrent neurons is consequently the same as the number of hidden neurons. To sum up, the ERNN is composed of an input layer, a recurrent layer which provides state information, a hidden layer, and an output layer. Each layer contains one or more neurons which propagate information from one layer to another by computing a nonlinear function of the weighted sum of their inputs.

In Figure 1 a multi-input ERNN model is exhibited, where the number of neurons in the input layer is $m$, the number in the hidden layer is $n$, and there is one output unit. Let $x_{it}$ ($i = 1, 2, \ldots, m$) denote the set of input vectors of neurons at time $t$, $y_{t+1}$ denote the output of the network at time $t+1$, $z_{jt}$ ($j = 1, 2, \ldots, n$) denote the outputs of the hidden-layer neurons at time $t$, and $u_{jt}$ ($j = 1, 2, \ldots, n$) denote the recurrent-layer neurons. $w_{ij}$ is the weight that connects node $i$ in the input layer to node $j$ in the hidden layer; $c_j$ and $v_j$ are the weights that connect node $j$ in the hidden layer to the corresponding node in the recurrent layer and to the output, respectively. The hidden-layer stage is as follows: the inputs of all neurons in the hidden layer are given by
$$\mathrm{net}_{jt}(k) = \sum_{i=1}^{m} w_{ij}\, x_{it}(k-1) + \sum_{j=1}^{n} c_{j}\, u_{jt}(k),$$
$$u_{jt}(k) = z_{jt}(k-1), \quad i = 1, 2, \ldots, m, \; j = 1, 2, \ldots, n. \quad (1)$$
The outputs of the hidden neurons are given by
$$z_{jt}(k) = f_H\big(\mathrm{net}_{jt}(k)\big) = f_H\Big(\sum_{i=1}^{m} w_{ij}\, x_{it}(k) + \sum_{j=1}^{n} c_{j}\, u_{jt}(k)\Big), \quad (2)$$

Figure 1: Topology of the Elman recurrent neural network (input layer $x_{1t}, \ldots, x_{mt}$; recurrent layer $u_{1t}, \ldots, u_{nt}$; hidden layer $z_{1t}, \ldots, z_{nt}$; output $y_{t+1}$; weights $w_{ij}$, $c_j$, $v_j$).

where the sigmoid function $f_H(x) = 1/(1 + e^{-x})$ is selected as the activation function in the hidden layer. The output of the network is then given by
$$y_{t+1}(k) = f_T\Big(\sum_{j=1}^{n} v_j\, z_{jt}(k)\Big), \quad (3)$$
where $f_T(x)$ is an identity map as the activation function.
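To make the forward pass of equations (1)–(3) concrete, the following minimal NumPy sketch (our illustration, not the authors' code) implements one time step; the one-to-one recurrent copy described above is realized by feeding back the previous hidden outputs, and the elementwise product c * u reflects that copy connection. The shapes and the random initialization are assumptions for exposition.

```python
import numpy as np

def ernn_forward(x, z_prev, W, c, v):
    """One time step of the Elman network, eqs. (1)-(3).

    x      : inputs x_it at time t, shape (m,)
    z_prev : previous hidden outputs; the recurrent layer copies them,
             u_jt(k) = z_jt(k-1)
    W      : input-to-hidden weights w_ij, shape (m, n)
    c      : recurrent-to-hidden weights c_j, shape (n,)
    v      : hidden-to-output weights v_j, shape (n,)
    """
    u = z_prev                           # recurrent layer, eq. (1)
    net = W.T @ x + c * u                # net_jt(k)
    z = 1.0 / (1.0 + np.exp(-net))       # f_H, the sigmoid, eq. (2)
    y = v @ z                            # f_T is the identity map, eq. (3)
    return y, z

# illustrative shapes: m = 4 inputs (open, high, low, close), n = 9 hidden nodes
rng = np.random.default_rng(0)
m, n = 4, 9
W, c, v = rng.uniform(-1, 1, (m, n)), rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
y, z = ernn_forward(rng.random(m), np.zeros(n), W, c, v)
```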

2.2. Algorithm of ERNN with a Stochastic Time Effective Function (ST-ERNN). The backpropagation algorithm is a supervised learning algorithm which minimizes the global error $E$ by using the gradient descent method [18, 21]. For the ST-ERNN model, we assume that the error of the output is given by $\varepsilon_{t_n} = d_{t_n} - y_{t_n}$ and the error of sample $n$ is defined as
$$E(t_n) = \frac{1}{2}\,\varphi(t_n)\,(d_{t_n} - y_{t_n})^2, \quad (4)$$
where $t_n$ is the time of sample $n$ ($n = 1, \ldots, N$), $d_{t_n}$ is the actual value, $y_{t_n}$ is the output at time $t_n$, and $\varphi(t_n)$ is the stochastic time effective function, which endows each historical datum with a weight depending on the time at which it occurs. We define $\varphi(t_n)$ as follows:
$$\varphi(t_n) = \frac{1}{\beta}\exp\left\{\int_{t_0}^{t_n}\mu(t)\,dt + \int_{t_0}^{t_n}\sigma(t)\,dB(t)\right\}, \quad (5)$$
where $\beta > 0$ is the time strength coefficient, $t_0$ is the time of the newest data in the training set, and $t_n$ is an arbitrary time point in the training set. $\mu(t)$ is the drift function, $\sigma(t)$ is the volatility function, and $B(t)$ is the standard Brownian motion.

Intuitively, the drift function is used to model deterministic trends, the volatility function is often used to model a set of unpredictable events occurring during this motion, and Brownian motion is usually thought of as the random motion of a particle in liquid (where the future motion of the particle at any given time does not depend on the past). Brownian motion is a continuous-time stochastic process, and it is the limit of, or continuous version of, random walks. Since Brownian motion's time derivative is everywhere infinite, it is an idealised approximation to actual random physical processes, which always have a finite time scale. We begin with an explicit definition. A Brownian motion is a real-valued, continuous stochastic process $\{Y(t), t \ge 0\}$ on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, with independent and stationary increments. In detail, we have the following: (a) continuity: the map $s \mapsto Y(s)$ is continuous $\mathbb{P}$-a.s.; (b) independent increments: if $s \le t$, then $Y_t - Y_s$ is independent of $\mathcal{F}_s = \sigma(Y_u, u \le s)$; (c) stationary increments: if $s \le t$, then $Y_t - Y_s$ and $Y_{t-s} - Y_0$ have the same probability law. From this definition, if $\{Y(t), t \ge 0\}$ is a Brownian motion, then $Y_t - Y_0$ is a normal random variable with mean $rt$ and variance $\sigma^2 t$, where $r$ and $\sigma$ are constant real numbers. A Brownian motion is standard (we denote it by $B(t)$) if $B(0) = 0$ $\mathbb{P}$-a.s., $\mathbb{E}[B(t)] = 0$, and $\mathbb{E}[B(t)]^2 = t$. In the above random data-time effective function, the impact of the historical data on the stock market is regarded as a time-variable function: the efficiency of the historical data depends on its time. Then the corresponding global error of all the data at each repeated network training, in the output layer, is defined as

$$E = \frac{1}{N}\sum_{n=1}^{N}E(t_n) = \frac{1}{2N}\sum_{n=1}^{N}\frac{1}{\beta}\exp\left\{\int_{t_0}^{t_n}\mu(t)\,dt + \int_{t_0}^{t_n}\sigma(t)\,dB(t)\right\}(d_{t_n} - y_{t_n})^2. \quad (6)$$

The main objective of the learning algorithm is to minimize the value of the cost function $E$ until it reaches the preset minimum value $\xi$ by repeated learning. On each repetition the output is calculated and the global error $E$ is obtained. The gradient of the cost function is given by $\Delta E = \partial E/\partial W$. For the weight nodes in the input layer, the gradient of the connective weight $w_{ij}$ is given by
$$\Delta w_{ij} = -\eta\frac{\partial E(t_n)}{\partial w_{ij}} = \eta\,\varepsilon_{t_n} v_j\,\varphi(t_n)\, f'_H(\mathrm{net}_{jt_n})\, x_{it_n}; \quad (7)$$
for the weight nodes in the recurrent layer, the gradient of the connective weight $c_j$ is given by
$$\Delta c_j = -\eta\frac{\partial E(t_n)}{\partial c_j} = \eta\,\varepsilon_{t_n} v_j\,\varphi(t_n)\, f'_H(\mathrm{net}_{jt_n})\, u_{jt_n}; \quad (8)$$
and for the weight nodes in the hidden layer, the gradient of the connective weight $v_j$ is given by
$$\Delta v_j = -\eta\frac{\partial E(t_n)}{\partial v_j} = \eta\,\varepsilon_{t_n}\,\varphi(t_n)\, f_H(\mathrm{net}_{jt_n}), \quad (9)$$
where $\eta$ is the learning rate and $f'_H(\mathrm{net}_{jt_n})$ is the derivative of the activation function. So the update rules for the weights $w_{ij}$, $c_j$, and $v_j$ are given by
$$w^{k+1}_{ij} = w^k_{ij} + \Delta w^k_{ij} = w^k_{ij} + \eta\,\varepsilon_{t_n} v_j\,\varphi(t_n)\, f'_H(\mathrm{net}_{jt_n})\, x_{it_n},$$
$$c^{k+1}_{j} = c^k_{j} + \Delta c^k_{j} = c^k_{j} + \eta\,\varepsilon_{t_n} v_j\,\varphi(t_n)\, f'_H(\mathrm{net}_{jt_n})\, u_{jt_n},$$
$$v^{k+1}_{j} = v^k_{j} + \Delta v^k_{j} = v^k_{j} + \eta\,\varepsilon_{t_n}\,\varphi(t_n)\, f_H(\mathrm{net}_{jt_n}). \quad (10)$$

Note that the training aim of the stochastic time effective neural network is to modify the weights so as to minimize the error between the network's prediction and the actual target. The training algorithm procedures of the stochastic time effective neural network are displayed in Figure 2 and are as follows (a sketch of one resulting weight update appears after the list).

Step 1. Perform input data normalization. In the ST-ERNN model we choose four kinds of stock prices as the input values in the input layer: daily opening price, daily highest price, daily lowest price, and daily closing price. The output layer is the closing price of the next trading day. Then determine the parameters of the network, such as the learning rate $\eta$ (between 0 and 1), the maximum number of training iterations $K$, and the initial connective weights. The topology of the network architecture here is the number of neural nodes in the hidden layer.

Step 2. At the beginning of data processing, the connective weights $w_{ij}$, $v_j$, and $c_j$ follow the uniform distribution on $(-1, 1)$.

Step 3. Introduce the stochastic time effective function $\varphi(t)$ into the error function $E$. Choose the drift function $\mu(t)$ and the volatility function $\sigma(t)$. Give the transfer function from the input layer to the hidden layer and the transfer function from the hidden layer to the output layer.

Step 4. Establish an error-acceptance criterion and set the preset minimum error $\xi$. Based on the network training objective $E = (1/N)\sum_{n=1}^{N}E(t_n)$: if $E$ is below the preset minimum error, go to Step 6; otherwise go to Step 5.

Step 5. Modify the connective weights: calculate the gradients $\Delta w^k_{ij}$, $\Delta v^k_j$, and $\Delta c^k_j$, and then modify the weights from each layer to the previous layer, $w^{k+1}_{ij}$, $v^{k+1}_j$, or $c^{k+1}_j$.

Step 6. Output the predictive value $y_{t+1} = f_T\big(\sum_{j=1}^{n} v_j f_H\big(\sum_{i=1}^{m} w_{ij} x_{it} + \sum_{j=1}^{n} c_j u_{jt}\big)\big)$.
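As a compact illustration of Steps 3–5, the sketch below (our reading of equations (7)–(10), not the authors' code) performs one gradient update for a single sample; `phi_n` is assumed to be the precomputed stochastic time effective weight $\varphi(t_n)$.

```python
import numpy as np

def st_ernn_update(x, u, d, W, c, v, phi_n, eta=0.001):
    """One ST-ERNN weight update for sample n, following eqs. (7)-(10)."""
    net = W.T @ x + c * u                 # hidden pre-activation
    z = 1.0 / (1.0 + np.exp(-net))        # f_H(net)
    y = v @ z                             # network output
    eps = d - y                           # epsilon_{t_n} = d_{t_n} - y_{t_n}
    dfH = z * (1.0 - z)                   # f_H'(net) for the sigmoid
    W += eta * eps * phi_n * np.outer(x, v * dfH)   # eq. (7)
    c += eta * eps * phi_n * v * dfH * u            # eq. (8)
    v += eta * eps * phi_n * z                      # eq. (9) uses f_H(net) = z
    return y, z
```

Looping this update over the training samples and epochs, and stopping once the global error $E$ of equation (6) falls below $\xi$, reproduces the procedure of Figure 2.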

3. Forecasting and Statistical Analysis of Stock Price

3.1. Selecting and Preprocessing of the Data. To evaluate the performance of the proposed ST-ERNN forecasting model, we select the daily data of the Shanghai Stock Exchange (SSE) Composite Index, the Taiwan Stock Exchange Capitalization Weighted Stock Index (TWSE), the Korean Stock Price Index (KOSPI), and the Nikkei 225 Index (Nikkei225), and analyze the forecasting models by comparison. As Table 1 shows, 2000 data points are selected for each index. The SSE data cover the period from 16.03.2006 to 19.03.2014, the TWSE from 09.02.2006 to 19.03.2014, the KOSPI from 20.02.2006 to 19.03.2014, and the Nikkei225 from 27.01.2006 to 19.03.2014. The nontrading periods are treated as frozen, so we adopt only the time during trading hours. To reduce the impact of noise in the financial market and ultimately obtain a better prediction, the collected data should be properly adjusted and normalized at the beginning of the modelling. Different normalization methods have been tested to improve network training [27, 28]; among them, normalization of the data into the range $[0, 1]$, given in the following equation, is adopted in this work:
$$S(t)' = \frac{S(t) - \min S(t)}{\max S(t) - \min S(t)}, \quad (11)$$

where the minimum and maximum values are obtained on the training set during the training process.


Table 1: Data selection.

Index      Date sets                Total number   Hidden number   Learning rate
SSE        16.03.2006–19.03.2014    2000           9               0.001
TWSE       09.02.2006–19.03.2014    2000           12              0.001
KOSPI      20.02.2006–19.03.2014    2000           10              0.05
Nikkei225  27.01.2006–19.03.2014    2000           10              0.01

Figure 2: Training algorithm procedures of the ST-ERNN (construct the ST-ERNN model and set the topology of the network architecture; initialize the connective weights $w_{ij}$, $v_j$, and $c_j$; set up the learning algorithm with the stochastic time effective function and the update rule for the weights; compute the cost function $E$; if $E < \xi$, output the predictive value $y_{t+1}$, otherwise modify the connective weights and repeat).

In order to obtain the true value after the forecasting, we can revert the output variables as
$$S(t) = S(t)'\,(\max S(t) - \min S(t)) + \min S(t). \quad (12)$$
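In code, equations (11) and (12) amount to a min–max scaling whose bounds, as stated above, come from the training set only; the price values below are made up for illustration.

```python
import numpy as np

def normalize(s, lo, hi):         # eq. (11)
    return (s - lo) / (hi - lo)

def denormalize(s_norm, lo, hi):  # eq. (12)
    return s_norm * (hi - lo) + lo

train = np.array([2890.0, 2905.5, 2871.2, 2910.8])  # illustrative prices
lo, hi = train.min(), train.max()                   # fitted on the training set
assert np.allclose(denormalize(normalize(train, lo, hi), lo, hi), train)
```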

3.2. Training and Forecasting by the ST-ERNN Model. In the ST-ERNN model, after repeated experiments on the different index data, different numbers of neural nodes in the hidden layer were chosen as optimal; see Table 1. Each dataset was divided into two subsets, the training set and the testing set. It is noteworthy that the lengths of the data chosen for these four time series are the same, and the lengths of the training and testing data are also set the same. The training set, with 75% of the data, was used for model building, and the testing set, with the last 25%, was used to test the predictive part of the model on the out-of-sample set. The training set totals 1500 data points: for the SSE from 16.03.2006 to 22.02.2012, for the TWSE from 09.02.2006 to 08.03.2012, for the KOSPI from 20.02.2006 to 09.03.2012, and for the Nikkei225 from 27.01.2006 to 09.03.2012. The remaining 500 points are defined as the testing set. We preset the learning rate and the maximum training cycle by referring to [21, 29, 30]. The maximum number of training iterations $K$ is 300; each dataset has its own learning rate $\eta$, and after many experiments on the training data we choose 0.001, 0.001, 0.05, and 0.01 for the SSE, TWSE, KOSPI, and Nikkei225, respectively. The predefined minimum training threshold is $\xi = 10^{-5}$. When using the ST-ERNN model to predict the daily closing price of a stock index, we assume that $\mu(t)$ (the drift function) and $\sigma(t)$ (the volatility function) are as follows:

minus5 When using the ER-STNNmodel to predict thedaily closing price of stock index we assume that 120583(119905) (thedrift function) and120590(119905) (the volatility function) are as follows

120583 (119905) =

1

(119888 minus 119905)2

120590 (119905) = [

1

119873 minus 1

119873

sum

119894=1

(119909 minus 119909)2]

12

(13)

where $c$ is a parameter equal to the number of samples in the dataset and $\bar{x}$ is the mean of the sample data. Then the corresponding cost function can be written as

$$E = \frac{1}{N}\sum_{n=1}^{N}E(t_n) = \frac{1}{2N}\sum_{n=1}^{N}\frac{1}{\beta}\exp\left\{\int_{t_0}^{t_n}\frac{1}{(c-t)^2}\,dt + \int_{t_0}^{t_n}\left[\frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2\right]^{1/2} dB(t)\right\}(d_{t_n} - y_{t_n})^2. \quad (14)$$
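One possible discretization of $\varphi(t_n)$ with this drift and volatility is sketched below; the unit time step, indexing time from the start of the window, the small shift that keeps $c - t$ nonzero, and the simulated Brownian increments are all our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def time_weights(x, beta=1.0, seed=0):
    """Discretized phi(t_n) of eq. (5) with mu(t), sigma(t) from eq. (13)."""
    rng = np.random.default_rng(seed)
    N = len(x)
    c = N                                 # c equals the number of samples
    t = np.arange(1, N + 1, dtype=float)
    mu = 1.0 / (c - t + 0.5) ** 2         # drift, shifted so c - t never hits 0
    sigma = x.std(ddof=1)                 # sample standard deviation, eq. (13)
    dB = rng.normal(0.0, 1.0, N)          # Brownian increments with dt = 1
    drift = np.cumsum(mu)                 # approximates the integral of mu(t) dt
    diffusion = np.cumsum(sigma * dB)     # approximates the Ito integral
    return np.exp(drift + diffusion) / beta

weights = time_weights(np.random.default_rng(1).random(1500))  # normalized prices
```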

Figure 3 shows the predicting results on the training and testing data for the SSE, TWSE, KOSPI, and Nikkei225 with the ST-ERNN model. The curves of the actual data and the predictive data are intuitively very close, which means that after many experiments the financial time series have been well trained and the ST-ERNN model produces the desired forecasting results.

The plots of the real data and the predictive data for these four price series are shown in Figure 4, respectively. Through linear regression analysis, we compare the predictive values of the ST-ERNN model with the real price data. It is known that linear regression can be used to fit a predictive model to an observed dataset of $Y$ and $X$. The linear equations of the SSE, TWSE, KOSPI, and Nikkei225 are exhibited in Figures 4(a)–4(d), respectively. We can observe that all the slopes of their linear equations are close to 1, which implies that the predictive values do not deviate too much from the real values.

Table 2: Linear regression parameters of market indices.

Parameter   SSE      TWSE     KOSPI    Nikkei225
a           0.9873   0.9763   0.9614   0.9661
b           75.82    173      38.04    653.5
R           0.992    0.9952   0.9963   0.9971

A valuable numerical measure of association between two variables is the correlation coefficient $R$. Table 2 shows the values of $a$, $b$, and $R$ for the above indices; $R$ is given as follows:
$$R = \frac{\sum_{t=1}^{N}(d_t - \bar{d})(y_t - \bar{y})}{\sqrt{\sum_{t=1}^{N}(d_t - \bar{d})^2\,\sum_{t=1}^{N}(y_t - \bar{y})^2}}, \quad (15)$$
where $d_t$ is the actual value, $y_t$ is the predicted value, $\bar{d}$ is the mean of the actual values, $\bar{y}$ is the mean of the predicted values, and $N$ is the total number of data points.

3.3. Comparisons of Forecasting Results. We compare the proposed approach with conventional forecasting approaches (the BPNN, STNN, and ERNN models) on the four indices mentioned above, where the STNN is based on the BPNN combined with the stochastic time effective function [19]. For these four different models we set the same network inputs, consisting of four kinds of series: daily opening price, daily closing price, daily highest price, and daily lowest price. The network output is the closing price of the next trading day. In the stock markets, practical experience shows that these four kinds of data from the last trading day are very important indicators when predicting the closing price of the next trading day. To choose better parameters, we carried out many experiments on these four different indices. In order to achieve the optimal network for each forecasting approach, the most appropriate numbers of neural nodes in the hidden layer differ, and the learning rates also vary when training the different models; see Table 3, in which "Hidden" stands for the number of neural nodes in the hidden layer and "L.r." stands for the learning rate. The hidden number is also chosen by referring to [21, 29, 30], and the experiments were done repeatedly to determine the hidden nodes and the training cycle in the training process. The principle for choosing the hidden number is as follows: if the number of neural nodes in the input layer is $N$, the number of neural nodes in the hidden layer is set to be nearly $2N + 1$, and the number of neural nodes in the output layer is 1. Since the ERNN model and the ST-ERNN model have similar topology structures, the numbers of neural nodes in the hidden layer and the learning rates in Table 3 are chosen to be approximately the same for these two models; likewise, the BPNN model and the STNN model are similar, so their chosen parameters are basically the same. Figures 5(a)–5(d) show the predicted values of the four indices on the test set. In these plots, the predicted values of the ST-ERNN model are closer to the actual values than the curves of the other models.

Figure 3: Comparisons of the predictive data and the actual data for the forecasting models ((a) SSE, (b) TWSE, (c) KOSPI, (d) Nikkei225; each panel shows the real value and the ST-ERNN prediction over the training set and the test set).

To compare the training and forecasting results more clearly, the performance measures RMSE, MAE, MAPE, and MAPE(100) are presented in the next part.

To analyze the forecasting performance of the four considered forecasting models more deeply, we use the following error evaluation criteria [31–35]: the mean absolute error (MAE), the root mean square error (RMSE), and the mean absolute percentage error (MAPE). The corresponding definitions are given as follows:
$$\mathrm{MAE} = \frac{1}{N}\sum_{t=1}^{N}\left|d_t - y_t\right|,$$
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{t=1}^{N}(d_t - y_t)^2},$$
$$\mathrm{MAPE} = 100 \times \frac{1}{N}\sum_{t=1}^{N}\left|\frac{d_t - y_t}{d_t}\right|, \quad (16)$$

where $d_t$ and $y_t$ are the real value and the predicted value at time $t$, respectively, and $N$ is the total number of data points. Noting that MAE, RMSE, and MAPE measure the deviation between the predicted values and the actual values, the prediction performance is better when the values of these criteria are smaller. However, if the results are not consistent among the criteria, we choose the MAPE as the benchmark, since MAPE is relatively more stable than the other criteria [16].
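For reference, these criteria, together with the MAPE over the latest 100 days used below as MAPE(100), translate directly from equation (16) into code (a plain transcription, not tied to any particular library):

```python
import numpy as np

def mae(d, y):  return np.mean(np.abs(d - y))
def rmse(d, y): return np.sqrt(np.mean((d - y) ** 2))
def mape(d, y): return 100.0 * np.mean(np.abs((d - y) / d))

def mape_latest(d, y, k=100):
    """MAPE over the latest k observations, e.g. MAPE(100) in Table 4."""
    return mape(d[-k:], y[-k:])
```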

Figures 6(a)–6(d) show the forecasting results of the SSE, TWSE, KOSPI, and Nikkei225 for the four forecasting models. The empirical research shows that the proposed ST-ERNN model has the best performance, and that the ERNN and the STNN both outperform the common BPNN model. The stock markets showed large fluctuations, which are reflected in Figure 6: forecasting during the large-fluctuation periods is relatively inaccurate for all four models.

Figure 4: Comparisons and linear regressions of the actual data and the predictive values for the SSE, TWSE, KOSPI, and Nikkei225 (fitted lines: $Y = 0.9873X + 75.82$ for the SSE, $Y = 0.9763X + 173$ for the TWSE, $Y = 0.9614X + 38.04$ for the KOSPI, and $Y = 0.9661X + 653.5$ for the Nikkei225).

Table 3: Different parameters for different models.

            BPNN             STNN             ERNN             ST-ERNN
Index       Hidden   L.r.    Hidden   L.r.    Hidden   L.r.    Hidden   L.r.
SSE         8        0.01    8        0.01    10       0.001   9        0.001
TWSE        10       0.01    10       0.01    12       0.001   12       0.001
KOSPI       8        0.02    8        0.02    10       0.03    10       0.05
Nikkei225   10       0.05    10       0.05    10       0.01    10       0.01

When the stock market is relatively stable, the forecasting results are nearer to the actual values. The forecasting results compared with the BPNN, STNN, and ERNN models are also presented in Table 4, where MAPE(100) stands for the MAPE over the latest 100 days of the testing data. Table 4 shows that the evaluation criteria obtained by the ST-ERNN model are almost all smaller than those of the other models; from Table 4 and Figure 6 we can conclude that the proposed ST-ERNN model is better than the other three models. In Table 4, the evaluation criteria of the STNN model and the ERNN model are almost all smaller than those of the BPNN for the four considered indices.

Figure 5: Predictive values on the test set for the SSE, TWSE, KOSPI, and Nikkei225 (each panel compares the BPNN, STNN, ERNN, and ST-ERNN predictions with the real value).

This illustrates that the financial time series forecasting of the STNN model is superior to that of the BPNN model, and that the dynamic neural network is more effective, robust, and precise than the original BPNN for these four indices. Besides, most values of MAPE(100) are smaller than those of MAPE for all stock indices; therefore, the short-term prediction outperforms the long-term prediction. Overall, the training and testing results are consistent with the measured data, which demonstrates that the ST-ERNN predictor has higher forecast accuracy.

In Figures 7(a)–7(d) we consider the relative errors of the ST-ERNN forecasting results. Figure 7 shows that most of the predicting relative errors for these four price series lie between $-0.1$ and $0.1$. Moreover, there are some points with large relative errors in the forecasting results of the four models, especially on the SSE index, which can be attributed to the large fluctuations that lead to large relative errors. The relative error is defined as
$$e(t) = \frac{d_t - y_t}{d_t}, \quad (17)$$
where $d_t$ and $y_t$ denote the actual value and the predicted value, respectively, at time $t$, $t = 1, 2, \ldots$.

4. CID and MCID Analysis

The analysis and forecasting of time series have long been a focus of economic research, aimed at a clearer understanding of the mechanisms and characteristics of financial markets [36–42]. In this section we employ an efficient complexity invariant distance (CID) for time series.

Figure 6: Comparisons of the actual data and the predictive data for the SSE, TWSE, KOSPI, and Nikkei225 (each panel compares the BPNN, STNN, ERNN, and ST-ERNN predictions with the real value; insets zoom in on part of the test period).

Reference [43] shows that the complexity invariant distance measure can produce improvements in classification and clustering in the vast majority of cases.

Complexity invariance uses information about complexity differences between two time series as a correction factor for existing distance measures. We begin by introducing the Euclidean distance and use it as a starting point to bring in the definition of the CID. Suppose we have two time series $P$ and $Q$ of length $n$:
$$P = p_1, p_2, \ldots, p_i, \ldots, p_n,$$
$$Q = q_1, q_2, \ldots, q_i, \ldots, q_n. \quad (18)$$

The ubiquitous Euclidean distance is
$$\mathrm{ED}(P, Q) = \sqrt{\sum_{i=1}^{n}(p_i - q_i)^2}. \quad (19)$$
The Euclidean distance $\mathrm{ED}(P, Q)$ between two time series $P$ and $Q$ can be made complexity invariant by introducing a correction factor:
$$\mathrm{CID}(P, Q) = \mathrm{ED}(P, Q) \times \mathrm{CF}(P, Q), \quad (20)$$
where $\mathrm{CF}$ is a complexity correction factor defined as
$$\mathrm{CF}(P, Q) = \frac{\max(\mathrm{CE}(P), \mathrm{CE}(Q))}{\min(\mathrm{CE}(P), \mathrm{CE}(Q))}, \quad (21)$$
and $\mathrm{CE}(T)$ is a complexity estimate of a time series $T$, which can be computed as follows:
$$\mathrm{CE}(T) = \sqrt{\sum_{i=1}^{n-1}(t_{i+1} - t_i)^2}. \quad (22)$$
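Equations (19)–(22) translate directly into a few lines of code; this sketch assumes both series are NumPy arrays of equal length.

```python
import numpy as np

def complexity(t):
    """CE(T), eq. (22): the length of the series' increment curve."""
    return np.sqrt(np.sum(np.diff(t) ** 2))

def cid(p, q):
    """CID(P, Q), eqs. (19)-(21)."""
    ed = np.sqrt(np.sum((p - q) ** 2))          # Euclidean distance, eq. (19)
    ce_p, ce_q = complexity(p), complexity(q)
    cf = max(ce_p, ce_q) / min(ce_p, ce_q)      # correction factor, eq. (21)
    return ed * cf                              # eq. (20)
```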

Figure 7: ((a), (b), (c), and (d)) Relative errors of the forecasting results from the ST-ERNN model for the SSE, TWSE, KOSPI, and Nikkei225.

It is worth noticing that $\mathrm{CF}$ accounts for differences in the complexities of the time series being compared: it forces time series with very different complexities to be further apart, and in the case that all time series have the same complexity, the CID simply degenerates to the Euclidean distance. The prediction performance is better when the CID distance is smaller; that is to say, the curve of the predictive data is closer to the actual data. The actual values can be seen as the series $P$ and the predicting results as the series $Q$. Table 5 shows the CID distances between the real index values of the SSE, TWSE, KOSPI, and Nikkei225 and the corresponding predictions from each network model. It is clear that the CID distance between the real index values and the prediction by the ST-ERNN model is the smallest; moreover, the distances of the STNN model and the ERNN model are smaller than those of the BPNN for all four considered indices.

In general, the complexity of a real system is not constrained to a sole scale. In this part we consider a developed CID analysis, namely, the multiscale CID (MCID). The MCID analysis takes multiple time scales into account while measuring the predicting results, and it is applied here to the analysis of the actual and predicted stock prices. The MCID analysis comprises two steps. (i) Considering a one-dimensional discrete time series $x_1, x_2, \ldots, x_i, \ldots, x_N$, we construct consecutive coarse-grained time series $y^{(\tau)}$ corresponding to the scale factor $\tau$ according to the following formula:
$$y^{(\tau)}_j = \frac{1}{\tau}\sum_{i=(j-1)\tau+1}^{j\tau} x_i, \quad 1 \le j \le \frac{N}{\tau}. \quad (23)$$

For scale one, the time series $y^{(1)}$ is simply the original time series; the length of each coarse-grained time series equals the length of the original time series divided by the scale factor $\tau$. (ii) Calculate the CID for each coarse-grained time series and then plot it as a function of the scale factor (a sketch of this procedure follows). Figure 8 shows the MCID values between the forecasting results and the real market prices for the BPNN, ERNN, STNN, and ST-ERNN models.
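The two-step procedure can be sketched as follows, reusing the cid function given above; the scale range 1–40 mirrors the horizontal axes of Figure 8 and is otherwise an assumption.

```python
import numpy as np

def coarse_grain(x, tau):
    """Eq. (23): average non-overlapping windows of length tau."""
    n = len(x) // tau
    return x[: n * tau].reshape(n, tau).mean(axis=1)

def mcid(actual, predicted, scales=range(1, 41)):
    """CID between coarse-grained actual and predicted series at each scale."""
    return [cid(coarse_grain(actual, s), coarse_grain(predicted, s))
            for s in scales]
```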

Figure 8: ((a), (b), (c), and (d)) MCID values between the forecasting results and the real market prices for the BPNN, ERNN, STNN, and ST-ERNN models, plotted against the scale factor $\tau$.

In Figure 8 it is obvious that the MCID between the ST-ERNN forecast and the actual value is the smallest at every scale; that is, the ST-ERNN (with the stochastic time effective function) is effective for forecasting stock prices.

5. Conclusion

The aim of this research is to develop a predictive model for forecasting financial time series. In this study we developed a predictive model using an Elman recurrent neural network with a stochastic time effective function to forecast the indices of the SSE, TWSE, KOSPI, and Nikkei225. The linear regression analysis implies that the predictive values do not deviate too much from the real values. We then compared the proposed model with the BPNN, STNN, and ERNN forecasting models. Empirical examinations of the prediction precision for the price time series (by comparisons of the prediction measures MAE, RMSE, MAPE, and MAPE(100)) show that the proposed neural network model improves the precision of forecasting and that its forecasts closely track the real financial market movements. Furthermore, from the curves of the relative errors, it can be concluded that large fluctuations lead to large relative errors. In addition, the calculation of the CID and MCID distances illustrates this conclusion more clearly. The study and the proposed model contribute significantly to the time series literature on forecasting.


Table 4: Comparisons of the indices' predictions for the different forecasting models.

Index / error   BPNN       STNN       ERNN        ST-ERNN
SSE
  MAE           45.3701    24.9687    37.262647   12.7390
  RMSE          54.4564    40.5437    49.3907     37.0693
  MAPE          20.1994    11.8947    18.2110     4.1353
  MAPE(100)     5.0644     3.6868     4.3176      2.6809
TWSE
  MAE           252.7225   140.5971   151.2830    105.6377
  RMSE          316.8197   186.8309   205.4236    136.1674
  MAPE          3.2017     1.7303     1.8449      1.3468
  MAPE(100)     2.2135     1.1494     1.3349      1.2601
KOSPI
  MAE           74.3073    56.3309    47.9296     18.2421
  RMSE          77.1528    58.2944    50.8174     21.0479
  MAPE          16.6084    12.4461    10.9608     4.2257
  MAPE(100)     7.4379     5.9664     4.9176      2.1788
Nikkei225
  MAE           203.8034   138.1857   166.2480    68.5458
  RMSE          238.5933   169.7061   207.3395    89.0378
  MAPE          1.8556     1.2580     1.5398      0.6010
  MAPE(100)     0.7674     0.5191     0.4962      0.4261

Table 5: CID distances for the four network models.

Index       BPNN    STNN    ERNN    ST-ERNN
SSE         30521   17647   23202   16599
TWSE        27805   98763   10830   61581
KOSPI       33504   23128   25510   10060
Nikkei225   44541   23726   32895   25421

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors were supported in part by the National Natural Science Foundation of China under Grant nos. 71271026 and 10971010.

References

[1] Y. Kajitani, K. W. Hipel, and A. I. McLeod, "Forecasting nonlinear time series with feed-forward neural networks: a case study of Canadian lynx data," Journal of Forecasting, vol. 24, no. 2, pp. 105–117, 2005.
[2] T. Takahama, S. Sakai, A. Hara, and N. Iwane, "Predicting stock price using neural networks optimized by differential evolution with degeneration," International Journal of Innovative Computing, Information and Control, vol. 5, no. 12, pp. 5021–5031, 2009.
[3] K. Huarng and T. H.-K. Yu, "The application of neural networks to forecast fuzzy time series," Physica A, vol. 363, no. 2, pp. 481–491, 2006.
[4] F. Wang and J. Wang, "Statistical analysis and forecasting of return interval for SSE and model by lattice percolation system and neural network," Computers and Industrial Engineering, vol. 62, no. 1, pp. 198–205, 2012.
[5] H.-J. Kim and K.-S. Shin, "A hybrid approach based on neural networks and genetic algorithms for detecting temporal patterns in stock markets," Applied Soft Computing, vol. 7, no. 2, pp. 569–576, 2007.
[6] M. Ghiassi, H. Saidane, and D. K. Zimbra, "A dynamic artificial neural network model for forecasting time series events," International Journal of Forecasting, vol. 21, no. 2, pp. 341–362, 2005.
[7] M. R. Hassan, B. Nath, and M. Kirley, "A fusion model of HMM, ANN and GA for stock market forecasting," Expert Systems with Applications, vol. 33, no. 1, pp. 171–180, 2007.
[8] D. Devaraj, B. Yegnanarayana, and K. Ramar, "Radial basis function networks for fast contingency ranking," International Journal of Electrical Power and Energy Systems, vol. 24, no. 5, pp. 387–395, 2002.
[9] B. A. Garro and R. A. Vazquez, "Designing artificial neural networks using particle swarm optimization algorithms," Computational Intelligence and Neuroscience, vol. 2015, Article ID 369298, 20 pages, 2015.
[10] Q. Gan, "Exponential synchronization of stochastic Cohen-Grossberg neural networks with mixed time-varying delays and reaction-diffusion via periodically intermittent control," Neural Networks, vol. 31, pp. 12–21, 2012.
[11] D. Xiao and J. Wang, "Modeling stock price dynamics by continuum percolation system and relevant complex systems analysis," Physica A: Statistical Mechanics and its Applications, vol. 391, no. 20, pp. 4827–4838, 2012.
[12] D. Enke and N. Mehdiyev, "Stock market prediction using a combination of stepwise regression analysis, differential evolution-based fuzzy clustering, and a fuzzy inference neural network," Intelligent Automation and Soft Computing, vol. 19, no. 4, pp. 636–648, 2013.
[13] G. Sermpinis, C. Stasinakis, and C. Dunis, "Stochastic and genetic neural network combinations in trading and hybrid time-varying leverage effects," Journal of International Financial Markets, Institutions & Money, vol. 30, pp. 21–54, 2014.
[14] R. Ebrahimpour, H. Nikoo, S. Masoudnia, M. R. Yousefi, and M. S. Ghaemi, "Mixture of MLP-experts for trend forecasting of time series: a case study of the Tehran stock exchange," International Journal of Forecasting, vol. 27, no. 3, pp. 804–816, 2011.
[15] A. Bahrammirzaee, "A comparative survey of artificial intelligence applications in finance: artificial neural networks, expert system and hybrid intelligent systems," Neural Computing and Applications, vol. 19, no. 8, pp. 1165–1195, 2010.
[16] A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, and J. Schmidhuber, "A novel connectionist system for unconstrained handwriting recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, pp. 855–868, 2009.
[17] M. Ardalani-Farsa and S. Zolfaghari, "Chaotic time series prediction with residual analysis method using hybrid Elman-NARX neural networks," Neurocomputing, vol. 73, no. 13–15, pp. 2540–2553, 2010.
[18] M. Paliwal and U. A. Kumar, "Neural networks and statistical techniques: a review of applications," Expert Systems with Applications, vol. 36, no. 1, pp. 2–17, 2009.
[19] Z. Liao and J. Wang, "Forecasting model of global stock index by stochastic time effective neural network," Expert Systems with Applications, vol. 37, no. 1, pp. 834–841, 2010.
[20] L. Pan and J. Cao, "Robust stability for uncertain stochastic neural network with delay and impulses," Neurocomputing, vol. 94, pp. 102–110, 2012.
[21] H. F. Liu and J. Wang, "Integrating independent component analysis and principal component analysis with neural network to predict Chinese stock market," Mathematical Problems in Engineering, vol. 2011, Article ID 382659, 15 pages, 2011.
[22] Z. Q. Guo, H. Q. Wang, and Q. Liu, "Financial time series forecasting using LPP and SVM optimized by PSO," Soft Computing, vol. 17, no. 5, pp. 805–818, 2013.
[23] H. L. Niu and J. Wang, "Financial time series prediction by a random data-time effective RBF neural network," Soft Computing, vol. 18, no. 3, pp. 497–508, 2014.
[24] J. L. Elman, "Finding structure in time," Cognitive Science, vol. 14, no. 2, pp. 179–211, 1990.
[25] M. Cacciola, G. Megali, D. Pellicano, and F. C. Morabito, "Elman neural networks for characterizing voids in welded strips: a study," Neural Computing and Applications, vol. 21, no. 5, pp. 869–875, 2012.
[26] R. Chandra and M. Zhang, "Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction," Neurocomputing, vol. 86, pp. 116–123, 2012.
[27] D. K. Chaturvedi, P. S. Satsangi, and P. K. Kalra, "Effect of different mappings and normalization of neural network models," in Proceedings of the National Power Systems Conference, vol. 9, pp. 377–386, Indian Institute of Technology, Kanpur, India, 1996.
[28] S. Makridakis, "Accuracy measures: theoretical and practical concerns," International Journal of Forecasting, vol. 9, no. 4, pp. 527–529, 1993.
[29] L. Q. Han, Design and Application of Artificial Neural Network, Chemical Industry Press, 2002.
[30] M. Zounemat-Kermani, "Principal component analysis (PCA) for estimating chlorophyll concentration using forward and generalized regression neural networks," Applied Artificial Intelligence, vol. 28, no. 1, pp. 16–29, 2014.
[31] D. Olson and C. Mossman, "Neural network forecasts of Canadian stock returns using accounting ratios," International Journal of Forecasting, vol. 19, no. 3, pp. 453–465, 2003.
[32] H. Demuth and M. Beale, Neural Network Toolbox: For Use with MATLAB, The MathWorks, Natick, Mass, USA, 5th edition, 1998.
[33] A. P. Plumb, R. C. Rowe, P. York, and M. Brown, "Optimisation of the predictive ability of artificial neural network (ANN) models: a comparison of three ANN programs and four classes of training algorithm," European Journal of Pharmaceutical Sciences, vol. 25, no. 4-5, pp. 395–405, 2005.
[34] D. O. Faruk, "A hybrid neural network and ARIMA model for water quality time series prediction," Engineering Applications of Artificial Intelligence, vol. 23, no. 4, pp. 586–594, 2010.
[35] M. Tripathy, "Power transformer differential protection using neural network principal component analysis and radial basis function neural network," Simulation Modelling Practice and Theory, vol. 18, no. 5, pp. 600–611, 2010.
[36] C. E. Martin and J. A. Reggia, "Fusing swarm intelligence and self-assembly for optimizing echo state networks," Computational Intelligence and Neuroscience, vol. 2015, Article ID 642429, 15 pages, 2015.
[37] R. S. Tsay, Analysis of Financial Time Series, John Wiley & Sons, Hoboken, NJ, USA, 2005.
[38] P.-C. Chang, D.-D. Wang, and C.-L. Zhou, "A novel model by evolving partially connected neural network for stock price trend forecasting," Expert Systems with Applications, vol. 39, no. 1, pp. 611–620, 2012.
[39] P. Roy, G. S. Mahapatra, P. Rani, S. K. Pandey, and K. N. Dey, "Robust feed forward and recurrent neural network based dynamic weighted combination models for software reliability prediction," Applied Soft Computing, vol. 22, pp. 629–637, 2014.
[40] X. Gabaix, P. Gopikrishnan, V. Plerou, and H. E. Stanley, "A theory of power-law distributions in financial market fluctuations," Nature, vol. 423, no. 6937, pp. 267–270, 2003.
[41] R. N. Mantegna and H. E. Stanley, An Introduction to Econophysics: Correlations and Complexity in Finance, Cambridge University Press, Cambridge, UK, 2000.
[42] T. H. Roh, "Forecasting the volatility of stock price index," Expert Systems with Applications, vol. 33, no. 4, pp. 916–922, 2007.
[43] G. E. Batista, E. J. Keogh, O. M. Tataw, and V. M. de Souza, "CID: an efficient complexity-invariant distance for time series," Data Mining and Knowledge Discovery, vol. 28, no. 3, pp. 634–669, 2014.

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 3: Research Article Financial Time Series Prediction …downloads.hindawi.com/journals/cin/2016/4742515.pdfResearch Article Financial Time Series Prediction Using Elman Recurrent Random

Computational Intelligence and Neuroscience 3

x1t

x2t

xmt

u1t

u2t

u3t

unt

z1t

z2t

z3t

znt

cj

wij

j

yt+1

Output layer

Input layer

Hidden layer

Recurrent layer

Figure 1 Topology of Elman recurrent neural network

where the sigmoid function in hidden layer is selected as theactivation function 119891

119867(119909) = 1(1 + 119890

minus119909) The output of the

hidden layer is given as follows

119910119905+1 (

119896) = 119891119879(

119898

sum

119895=1

V119895119911119895119905 (119896)) (3)

where 119891119879(119909) is an identity map as the activation function

22 Algorithm of ERNN with a Stochastic Time Effective Func-tion (ST-ERNN) The backpropagation algorithm is a super-vised learning algorithm which minimizes the global error119864 by using the gradient descent method [18 21] For the ST-ERNNmodel we assume that the error of the output is givenby 120576119905119899

= 119889119905119899

minus 119910119905119899

and the error of the sample 119899 is defined as

119864 (119905119899) =

1

2

120593 (119905119899) (119889119905119899

minus 119910119905119899

)

2

(4)

where 119905119899is the time of the sample 119899 (119899 = 1 119873) 119889

119905119899

isthe actual value 119910

119905119899

is the output at time 119905119899 and 120593(119905

119899) is

the stochastic time effective function which endows eachhistorical data with a weight depending on the time at whichit occurs We define 120593(119905

119899) as follows

120593 (119905119899) =

1

120573

expint119905119899

1199050

120583 (119905) 119889119905 + int

119905119899

1199050

120590 (119905) 119889119861 (119905) (5)

where 120573 (gt 0) is the time strength coefficient 1199050is the

time of the newest data in the data training set and 119905119899is

an arbitrary time point in the data training set 120583(119905) is thedrift function 120590(119905) is the volatility function and 119861(119905) is thestandard Brownian motion

Intuitively the drift function is used to model determin-istic trends the volatility function is often used to model aset of unpredictable events occurring during thismotion andBrownian motion is usually thought as random motion ofa particle in liquid (where the future motion of the particleat any given time is not dependent on the past) Brownianmotion is a continuous time stochastic process and it isthe limit of or continuous version of random walks SinceBrownian motionrsquos time derivative is everywhere infinite itis an idealised approximation to actual random physical pro-cesses which always have a finite time scaleWe beginwith anexplicit definition A Brownian motion is a real-valued con-tinuous stochastic process 119884(119905) 119905 ge 0 on a probability space(ΩFP) with independent and stationary increments Indetail we have the following (a) continuity the map 119904 997891rarr

119884(119904) is continuousP as (b) independent increments if 119904 le 119905119884119905minus 119884119904is independent of F = (119884

119906 119906 le 119904) (c) stationary

increments if 119904 le 119905 119884119905minus119884119904and 119884

119905minus119904minus1198840have the same prob-

ability law From this definition if 119884(119905) 119905 ge 0 is a Brownianmotion then119884

119905minus1198840is a normal randomvariablewithmean 119903119905

and variance 1205902119905 where 119903 and 120590 are constant real numbers ABrownian motion is standard (we denote it by 119861(119905)) if 119861(0) =0 P as E[119861(119905)] = 0 and E[119861(119905)]

2= 119905 In the above random

4 Computational Intelligence and Neuroscience

data-time effective function the impact of the historical dataon the stock market is regarded as a time variable functionthe efficiency of the historical data depends on its time Thenthe corresponding global error of all the data at each networkrepeated training set in the output layer is defined as

119864 =

1

119873

119873

sum

119899=1

119864 (119905119899) =

1

2119873

sdot

119873

sum

119899=1

1

120573

expint119905119899

1199050

120583 (119905) 119889119905 + int

119905119899

1199050

120590 (119905) 119889119861 (119905)

sdot (119889119905119899

minus 119910119905119899

)

2

(6)

The main objective of learning algorithm is to minimizethe value of cost function 119864 until it reaches the presetminimum value 120585 by repeated learning On each repetitionthe output is calculated and the global error 119864 is obtainedThe gradient of the cost function is given by Δ119864 = 120597119864120597119882For the weight nodes in the input layer the gradient of theconnective weight 119908

119894119895is given by

Δ119908119894119895= minus120578

120597119864 (119905119899)

120597119908119894119895

= 120578120576119905119899

V119895120593 (119905119899) 1198911015840

119867(net119895119905119899

) 119909119894119905119899

(7)

for the weight nodes in the recurrent layer the gradient of theconnective weight 119888

119895is given by

Δ119888119895= minus120578

120597119864 (119905119899)

120597119888119895

= 120578120576119905119899

V119895120593 (119905119899) 1198911015840

119867(net119895119905119899

) 119906119895119905119899

(8)

and for the weight nodes in the hidden layer the gradient ofthe connective weight V

119895is given by

ΔV119895= minus120578

120597119864 (119905119899)

120597V119895

= 120578120576119905119899

120593 (119905119899) 119891119867(net119895119905119899

) (9)

where 120578 is the learning rate and 1198911015840

119867(net119895119905119899

) is the derivativeof the activation function So the update rules for the weights119908119894119895 119888119895 and V

119895are given by

119908119896+1

119894119895= 119908119896

119894119895+ Δ119908119896

119894119895

= 119908119896

119894119895+ 120578120576119905119899

V119895120593 (119905119899) 1198911015840

119867(net119895119905119899

) 119909119894119905119899

119888119896+1

119895= 119888119896

119895+ Δ119888119896

119895= 119888119896

119895+ 120578120576119905119899

V119895120593 (119905119899) 1198911015840

119867(net119895119905119899

) 119906119895119905119899

V119896+1119895

= V119896119895+ ΔV119896119895= V119896119895+ 120578120576119905119899

120593 (119905119899) 119891119867(net119895119905119899

)

(10)

Note that the training aim of the stochastic time effective neural network is to modify the weights so as to minimize the error between the network's prediction and the actual target. In Figure 2 the training algorithm procedures of the stochastic time effective neural network are displayed, which are as follows.

Step 1. Perform input data normalization. In the ST-ERNN model we choose four kinds of stock prices as the input values in the input layer: daily opening price, daily highest price, daily lowest price, and daily closing price. The output layer is the closing price of the next trading day. Then determine the parameters of the network, such as the learning rate $\eta$ (which is between 0 and 1), the maximum number of training iterations $K$, and the initial connective weights. Also, the topology of the network architecture here is the number of neural nodes in the hidden layer.

Step 2. At the beginning of data processing, the connective weights $w_{ij}$, $v_j$, and $c_j$ follow the uniform distribution on $(-1, 1)$.

Step 3. Introduce the stochastic time effective function $\varphi(t)$ into the error function $E$. Choose the drift function $\mu(t)$ and the volatility function $\sigma(t)$. Give the transfer function from the input layer to the hidden layer and the transfer function from the hidden layer to the output layer.

Step 4. Establish an error-acceptance criterion by setting the preset minimum error $\xi$. Based on the network training objective $E = (1/N)\sum_{n=1}^{N} E(t_n)$, if $E$ is below the preset minimum error, go to Step 6; otherwise go to Step 5.

Step 5. Modify the connective weights: calculate the gradients $\Delta w_{ij}^{k}$, $\Delta v_{j}^{k}$, and $\Delta c_{j}^{k}$ of the connective weights $w_{ij}$, $v_j$, and $c_j$. Then modify the weights from each layer to the previous layer: $w_{ij}^{k+1}$, $v_{j}^{k+1}$, or $c_{j}^{k+1}$.

Step 6. Output the predictive value $y_{t+1} = f_T\left(\sum_{j=1}^{m} v_j f_H\left(\sum_{i=1}^{n} w_{ij} x_{i,t} + \sum_{j=1}^{m} c_j u_{j,t}\right)\right)$.

3. Forecasting and Statistical Analysis of Stock Price

3.1. Selecting and Preprocessing of the Data. To evaluate the performance of the proposed ST-ERNN forecasting model, we select the daily data from the Shanghai Stock Exchange (SSE) Composite Index, the Taiwan Stock Exchange Capitalization Weighted Stock Index (TWSE), the Korean Stock Price Index (KOSPI), and the Nikkei 225 Index (Nikkei225) to analyze the forecasting models by comparison. In Table 1 we show that the selected number of data points for each index is 2000. The SSE data cover the time period from 16/03/2006 up to 19/03/2014, the TWSE is from 09/02/2006 up to 19/03/2014, the KOSPI used in this paper is from 20/02/2006 up to 19/03/2014, while the Nikkei225 is from 27/01/2006 up to 19/03/2014. Usually the nontrading time periods are treated as frozen, such that we adopt only the time during trading hours. To reduce the impact of noise in the financial market and finally lead to a better prediction, the collected data should be properly adjusted and normalized at the beginning of the modelling. There are different normalization methods that have been tested to improve the network training [27, 28], which include "the normalized data in the range of [0, 1]" in the following equation, which is also adopted in this work:

$$S(t)' = \frac{S(t) - \min S(t)}{\max S(t) - \min S(t)}, \qquad (11)$$

where the minimum and maximum values are obtained on the training set during the training process.


Table 1: Data selection.

Index       Date sets                Total number   Hidden number   Learning rate
SSE         16/03/2006-19/03/2014    2000           9               0.001
TWSE        09/02/2006-19/03/2014    2000           12              0.001
KOSPI       20/02/2006-19/03/2014    2000           10              0.05
Nikkei225   27/01/2006-19/03/2014    2000           10              0.01

Figure 2: Training algorithm procedures of the ST-ERNN model (flowchart: construct the model and set the topology of the network architecture; initialize the connective weights $w_{ij}$, $v_j$, and $c_j$; set up the learning algorithm and introduce the stochastic time effective function; compute the cost function $E$ against the preset minimum error $\xi$; if $E < \xi$, output the predictive value $y_{t+1}$, otherwise calculate the gradients, modify the connective weights, and repeat).

In order to obtain the true value after the forecasting, we can revert the output variables as

$$S(t) = S(t)'\left(\max S(t) - \min S(t)\right) + \min S(t). \qquad (12)$$
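As an illustration, equations (11) and (12) amount to the following pair of functions (a minimal sketch with our naming, taking the minimum and maximum from the training portion only, as stated above):

```python
import numpy as np

def fit_minmax(train):
    """Record min/max on the training set only, per equation (11)."""
    return float(np.min(train)), float(np.max(train))

def normalize(s, lo, hi):
    return (s - lo) / (hi - lo)                 # equation (11)

def denormalize(s_norm, lo, hi):
    return s_norm * (hi - lo) + lo              # equation (12)

prices = np.array([2300.0, 2350.0, 2280.0, 2410.0, 2500.0])
lo, hi = fit_minmax(prices[:4])                 # training portion only
print(denormalize(normalize(prices, lo, hi), lo, hi))  # recovers prices
```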

3.2. Training and Forecasting by the ST-ERNN Model. In the ST-ERNN model, after doing the experiments repeatedly on the different index data, different numbers of neural nodes in the hidden layer were chosen as the optimal; see Table 1. The dataset was divided into two subsets: the training set and the testing set. It is noteworthy that the lengths of the data we chose for these four time series are the same, and the lengths of the training data and the testing data are also set the same. The training set with 75% of the data was used for model building, and the testing set with the last 25% was used to test the predictive part of the model on the out-of-sample set.


The training set is in total 1500 data points: for the SSE it is from 16/03/2006 to 22/02/2012, for the TWSE from 09/02/2006 to 08/03/2012, for the KOSPI from 20/02/2006 to 09/03/2012, and for the Nikkei225 from 27/01/2006 to 09/03/2012. The remaining 500 points are defined as the testing set. We preset the learning rate and the maximum training cycle by referring to [21, 29, 30]; then the experiment was done repeatedly on the training set of each index, and different numbers of neural nodes in the hidden layer were chosen as the optimal; see Table 1. The maximum number of training iterations $K$ is 300. Each dataset has a different learning rate $\eta$; after many experiments on the training data we choose 0.001, 0.001, 0.05, and 0.01 for SSE, TWSE, KOSPI, and Nikkei225, respectively. The predefined minimum training threshold is $\xi = 10^{-5}$. When using the ST-ERNN model to predict the daily closing price of a stock index, we assume that $\mu(t)$ (the drift function) and $\sigma(t)$ (the volatility function) are as follows:

$$\mu(t) = \frac{1}{(c - t)^2}, \qquad \sigma(t) = \left[\frac{1}{N-1}\sum_{i=1}^{N}\left(x - \bar{x}\right)^2\right]^{1/2}, \qquad (13)$$

where $c$ is a parameter equal to the number of samples in the dataset and $\bar{x}$ is the mean of the sample data. Then the corresponding cost function can be written as

$$E = \frac{1}{N}\sum_{n=1}^{N} E(t_n) = \frac{1}{2N}\sum_{n=1}^{N} \frac{1}{\beta}\exp\left\{\int_{t_0}^{t_n}\frac{1}{(c-t)^2}\,dt + \int_{t_0}^{t_n}\left[\frac{1}{N-1}\sum_{i=1}^{N}\left(x - \bar{x}\right)^2\right]^{1/2} dB(t)\right\}\left(d_{t_n} - y_{t_n}\right)^2. \qquad (14)$$
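To make the time weighting concrete, the factor $\varphi(t_n) = (1/\beta)\exp\{\int_{t_0}^{t_n}\mu(t)\,dt + \int_{t_0}^{t_n}\sigma(t)\,dB(t)\}$ appearing in equation (14) can be simulated as below. This is our sketch, not the authors' code: time is rescaled to $(0, 1]$, and $c$, $\beta = 1$, and $t_0 = 0$ are illustrative choices; the drift integral has the closed form $1/(c - t_n) - 1/(c - t_0)$, and the Brownian integral reduces to a cumulative sum of Gaussian increments because $\sigma$ in equation (13) does not depend on $t$.

```python
import numpy as np

rng = np.random.default_rng(1)

x = rng.uniform(0.0, 1.0, 1500)        # stand-in normalized training series
sigma = x.std(ddof=1)                  # volatility of eq. (13), constant in t
N = len(x)
t = np.linspace(1.0 / N, 1.0, N)       # time rescaled to (0, 1] (our choice)
c = 2.0                                # c > max(t) after rescaling (assumed)
drift = 1.0 / (c - t) - 1.0 / c        # closed form of int_0^t (c - s)^-2 ds
dB = rng.normal(0.0, np.sqrt(1.0 / N), N)
diffusion = sigma * np.cumsum(dB)      # int_0^t sigma dB(s), sigma constant
phi = np.exp(drift + diffusion)        # time weight phi(t_n), beta = 1
```

Because the drift term grows as $t$ approaches $c$, recent samples receive larger weights, which is the intended time effect.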

Figure 3 shows the predicted results of the training and testing data for SSE, TWSE, KOSPI, and Nikkei225 with the ST-ERNN model correspondingly. The curves of the actual data and the predicted data are intuitively very close. This means that, after many repeated experiments, the financial time series have been well trained and the forecasting results of the ST-ERNN model are as desired.

The plots of the real and the predicted data for these four price series are respectively shown in Figure 4. Through linear regression analysis we make a comparison of the predicted values of the ST-ERNN model with the real price data. It is known that linear regression can be used to fit a predictive model to an observed data set of $Y$ and $X$. The linear equations of SSE, TWSE, KOSPI, and Nikkei225 are exhibited, respectively, in Figures 4(a)-4(d). We can observe that all the slopes of the linear equations are close to 1, which implies that the predicted values and the real values do not deviate too much.
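The slopes and intercepts reported in Table 2 correspond to an ordinary least-squares fit of the predicted values against the real values; a one-function sketch (our naming):

```python
import numpy as np

def linear_fit(real, predicted):
    """Least-squares fit predicted ~ a * real + b, as in Figure 4."""
    a, b = np.polyfit(real, predicted, deg=1)   # slope first, then intercept
    return a, b
```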

Table 2: Linear regression parameters of market indices.

Parameter   SSE      TWSE     KOSPI    Nikkei225
a           0.9873   0.9763   0.9614   0.9661
b           7.582    173      38.04    653.5
R           0.992    0.9952   0.9963   0.9971

A valuable numerical measure of association between two variables is the correlation coefficient $R$. Table 2 shows the values of $a$, $b$, and $R$ for the above indices. $R$ is given as follows:

$$R = \frac{\sum_{t=1}^{N}\left(d_t - \bar{d}\right)\left(y_t - \bar{y}\right)}{\sqrt{\sum_{t=1}^{N}\left(d_t - \bar{d}\right)^2 \sum_{t=1}^{N}\left(y_t - \bar{y}\right)^2}}, \qquad (15)$$

where $d_t$ is the actual value, $y_t$ is the predicted value, $\bar{d}$ is the mean of the actual values, $\bar{y}$ is the mean of the predicted values, and $N$ is the total number of the data.
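Equation (15) is the ordinary sample correlation between the actual and predicted series; for reference, a direct implementation (our names):

```python
import numpy as np

def corr_coeff(d, y):
    """Correlation coefficient R of equation (15)."""
    d, y = np.asarray(d, float), np.asarray(y, float)
    dc, yc = d - d.mean(), y - y.mean()
    return (dc * yc).sum() / np.sqrt((dc ** 2).sum() * (yc ** 2).sum())
    # equivalent to np.corrcoef(d, y)[0, 1]
```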

3.3. Comparisons of Forecasting Results. We compare the proposed approach with conventional forecasting approaches (the BPNN, STNN, and ERNN models) on the four indices mentioned above, where the STNN is based on the BPNN combined with the stochastic time effective function [19]. For these four different models we set the same inputs to the networks, including four kinds of series: daily opening price, daily closing price, daily highest price, and daily lowest price. The network output is the closing price of the next trading day. In the stock markets, practical experience shows that the above four kinds of data from the last trading day are very important indicators when predicting the closing price of the next trading day. To choose better parameters, we have carried out many experiments on these four different indices. In order to achieve the optimal network for each forecasting approach, the most appropriate numbers of neural nodes in the hidden layer differ, and the learning rates also vary when training the different models; see Table 3. In Table 3, "Hidden" stands for the number of neural nodes in the hidden layer and "L.r." stands for the learning rate. The hidden number is also chosen by referring to [21, 29, 30]. The experiments have been done repeatedly to determine the hidden nodes and the training cycle in the training process. The principle for choosing the hidden number is as follows: if the number of neural nodes in the input layer is $N$, the number of neural nodes in the hidden layer is set to be nearly $2N + 1$, and the number of neural nodes in the output layer is 1. Since the ERNN model and the ST-ERNN model have similar topology structures, in Table 3 the number of neural nodes in the hidden layer and the learning rate are chosen approximately the same for these two models in the training process. Also, the BPNN model and the STNN model are similar, so the chosen parameters are basically the same. Figures 5(a)-5(d) show the predicted values of the four indices on the test set. From these plots, the predicted values of the ST-ERNN model are closer to the actual values than the curves of the other models.


Figure 3: Comparisons of the predicted data and the actual data for the forecasting models: (a) SSE, (b) TWSE, (c) KOSPI, (d) Nikkei225; each panel shows the real values against the ST-ERNN output over the training set (first 1500 points) and the test set (last 500 points).

To compare the training and forecasting results more clearly, the performance measures RMSE, MAE, MAPE, and MAPE(100) are reported in the next part.

To analyze the forecasting performance of the four considered forecasting models more deeply, we use the following error evaluation criteria [31-35]: the mean absolute error (MAE), the root mean square error (RMSE), and the mean absolute percentage error (MAPE). The corresponding definitions are given as follows:

$$\mathrm{MAE} = \frac{1}{N}\sum_{t=1}^{N}\left|d_t - y_t\right|, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{t=1}^{N}\left(d_t - y_t\right)^2}, \qquad \mathrm{MAPE} = 100 \times \frac{1}{N}\sum_{t=1}^{N}\left|\frac{d_t - y_t}{d_t}\right|, \qquad (16)$$

where $d_t$ and $y_t$ are the real value and the predicted value at time $t$, respectively, and $N$ is the total number of the data. Noting that MAE, RMSE, and MAPE are measures of the deviation between the predicted values and the actual values, the prediction performance is better when the values of these evaluation criteria are smaller. However, if the results are not consistent among these criteria, we choose the MAPE as the benchmark, since MAPE is relatively more stable than the other criteria [16].
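For reference, the three criteria of equation (16) can be computed in a few lines (a sketch; the names are ours):

```python
import numpy as np

def metrics(d, y):
    """MAE, RMSE, and MAPE (in percent) of equation (16)."""
    d, y = np.asarray(d, float), np.asarray(y, float)
    err = d - y
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    mape = 100.0 * np.abs(err / d).mean()
    return mae, rmse, mape

# MAPE(100) in Table 4 applies the same MAPE formula to the last
# 100 days of the testing data only: metrics(d[-100:], y[-100:])[2]
```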

Figures 6(a)-6(d) show the forecasting results of SSE, TWSE, KOSPI, and Nikkei225 for the four forecasting models. The empirical research shows that the proposed ST-ERNN model has the best performance; the ERNN and the STNN both outperform the common BPNN model. The stock markets showed large fluctuations, which are reflected in Figure 6; we can see that forecasting during the large-fluctuation periods is relatively inaccurate for all four models.


Figure 4: Comparisons and linear regressions of the actual data and the predicted values for SSE, TWSE, KOSPI, and Nikkei225; the predicted values are plotted against the real values with linear fits (a) SSE: Y = 0.9873X + 7.582, (b) TWSE: Y = 0.9763X + 173, (c) KOSPI: Y = 0.9614X + 38.04, (d) Nikkei225: Y = 0.9661X + 653.5.

Table 3: Different parameters for different models.

Index data   BPNN            STNN            ERNN            ST-ERNN
             Hidden   L.r.   Hidden   L.r.   Hidden   L.r.   Hidden   L.r.
SSE          8        0.01   8        0.01   10       0.001  9        0.001
TWSE         10       0.01   10       0.01   12       0.001  12       0.001
KOSPI        8        0.02   8        0.02   10       0.03   10       0.05
Nikkei225    10       0.05   10       0.05   10       0.01   10       0.01

When the stock market is relatively stable, the forecasting result is nearer to the actual value. The forecasting results compared with the BPNN, the STNN, and the ERNN models are also presented in Table 4, where MAPE(100) stands for the MAPE over the latest 100 days of the testing data. Table 4 shows that the evaluation criteria of the ST-ERNN model are almost always smaller than those of the other models. From Table 4 and Figure 6 we can conclude that the proposed ST-ERNN model is better than the other three models. In Table 4, the evaluation criteria of the STNN model and the ERNN model are almost always smaller than those of the BPNN for the four considered indices.


Figure 5: Predictive values on the test set (points 1500-2000) for (a) SSE, (b) TWSE, (c) KOSPI, and (d) Nikkei225, comparing the BPNN, STNN, ERNN, and ST-ERNN outputs with the real values.

This illustrates that the effect of financial time series forecasting by the STNN model is superior to that of the BPNN model, and that the dynamic neural network is more effective, robust, and precise than the original BPNN for these four indices. Besides, most values of MAPE(100) are smaller than those of MAPE for all stock indices; therefore the short-term prediction outperforms the long-term prediction. Overall, the training and testing results are consistent with the measured data, which demonstrates that the ST-ERNN predictor has higher forecast accuracy.

In Figures 7(a), 7(b), 7(c), and 7(d) we consider the relative errors of the ST-ERNN forecasting results. Figure 7 shows that most of the predicting relative errors for these four price series are between -0.1 and 0.1. Moreover, there are some points with large relative errors in the forecasting results of the four models, especially on the SSE index, which can be attributed to the large fluctuations that lead to the large relative errors. The definition of the relative error is given as follows:

$$e(t) = \frac{d_t - y_t}{d_t}, \qquad (17)$$

where $d_t$ and $y_t$ denote the actual value and the predicted value, respectively, at time $t$, $t = 1, 2, \ldots$.

4. CID and MCID Analysis

The analysis and forecasting of time series have long been a focus of economic research, aiming at a clearer understanding of the mechanisms and characteristics of financial markets [36-42]. In this section we employ an efficient complexity invariant distance (CID) for time series. Reference [43] shows that


Figure 6: Comparisons of the actual data and the predictive data for (a) SSE, (b) TWSE, (c) KOSPI, and (d) Nikkei225 over the full 2000 points, comparing the BPNN, STNN, ERNN, and ST-ERNN outputs with the real values; insets magnify short test-set windows.

the complexity invariant distance measure can produce improvements in classification and clustering in the vast majority of cases.

Complexity invariance uses information about complexity differences between two time series as a correction factor for existing distance measures. We begin by introducing the Euclidean distance and use it as a starting point to bring in the definition of CID. Suppose we have two time series $P$ and $Q$ of length $n$:

$$P = p_1, p_2, \ldots, p_i, \ldots, p_n, \qquad Q = q_1, q_2, \ldots, q_i, \ldots, q_n. \qquad (18)$$

The ubiquitous Euclidean distance is

$$\mathrm{ED}(P, Q) = \sqrt{\sum_{i=1}^{n}\left(p_i - q_i\right)^2}. \qquad (19)$$

The Euclidean distance $\mathrm{ED}(P, Q)$ between two time series $P$ and $Q$ can be made complexity invariant by introducing a correction factor:

$$\mathrm{CID}(P, Q) = \mathrm{ED}(P, Q) \times \mathrm{CF}(P, Q), \qquad (20)$$

where $\mathrm{CF}$ is a complexity correction factor defined as

$$\mathrm{CF}(P, Q) = \frac{\max\left(\mathrm{CE}(P), \mathrm{CE}(Q)\right)}{\min\left(\mathrm{CE}(P), \mathrm{CE}(Q)\right)}, \qquad (21)$$

and $\mathrm{CE}(T)$ is a complexity estimate of a time series $T$, which can be computed as follows:

$$\mathrm{CE}(T) = \sqrt{\sum_{i=1}^{n-1}\left(t_{i+1} - t_i\right)^2}. \qquad (22)$$


Figure 7: ((a), (b), (c), and (d)) Relative errors of the forecasting results from the ST-ERNN model for SSE, TWSE, KOSPI, and Nikkei225.

It is worth noticing that CF accounts for differences in the complexities of the time series being compared; CF forces time series with very different complexities to be further apart. In the case that all time series have the same complexity, CID simply degenerates to the Euclidean distance. The prediction performance is better when the CID distance is smaller; that is to say, the curve of the predicted data is closer to the actual data. The actual values can be seen as the series $P$ and the predicted results as the series $Q$. Table 5 shows the CID distances between the real index values of SSE, TWSE, KOSPI, and Nikkei225 and the corresponding predictions from each network model. It is clear that the CID distance between the real index values and the prediction by the ST-ERNN model is the smallest one; moreover, the distances of the STNN model and the ERNN model are smaller than those of the BPNN for all four considered indices.
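Equations (19)-(22) translate directly into code; the following short sketch (our function names) computes the CID between an actual and a predicted series:

```python
import numpy as np

def complexity(t):
    """Complexity estimate CE of equation (22)."""
    return np.sqrt(np.sum(np.diff(t) ** 2))

def cid(p, q):
    """Complexity invariant distance, equations (19)-(21)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    ed = np.sqrt(np.sum((p - q) ** 2))          # Euclidean distance, eq. (19)
    ce_p, ce_q = complexity(p), complexity(q)
    return ed * max(ce_p, ce_q) / min(ce_p, ce_q)   # eq. (20)-(21)
```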

In general, the complexity of a real system is not constrained to a sole scale. In this part we consider a developed CID analysis, that is, the multiscale CID (MCID). The MCID analysis takes multiple time scales into account while measuring the predicting results, and in this work it is applied to the stock price analysis of the actual data and the predicted data. The MCID analysis comprises two steps. (i) Considering a one-dimensional discrete time series $x_1, x_2, \ldots, x_i, \ldots, x_N$, we construct consecutive coarse-grained time series $y^{(\tau)}$ corresponding to the scale factor $\tau$ according to the following formula:

$$y_j^{(\tau)} = \frac{1}{\tau}\sum_{i=(j-1)\tau + 1}^{j\tau} x_i, \qquad 1 \leq j \leq \frac{N}{\tau}. \qquad (23)$$

For scale one, the time series $y^{(1)}$ is simply the original time series. The length of each coarse-grained time series equals the length of the original time series divided by the scale factor $\tau$. (ii) Calculate the CID for each coarse-grained time series and then plot it as a function of the scale factor. Figure 8 shows the MCID values between the forecasting results and the real market prices for the BPNN, ERNN, STNN, and ST-ERNN models.


Figure 8: ((a), (b), (c), and (d)) MCID values between the forecasting results and the real market prices from the BPNN, ERNN, STNN, and ST-ERNN models, plotted against the scale factor $\tau$ (1 to 40) for SSE, TWSE, KOSPI, and Nikkei225.

In Figure 8 it is obvious that the MCID between the ST-ERNN forecasts and the actual values is the smallest one at every scale; that is, the ST-ERNN model (with the stochastic time effective function) is effective for forecasting stock prices.
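Combining the coarse-graining of equation (23) with the CID of the previous section yields the MCID curve; a sketch using the cid function defined above (names are ours):

```python
import numpy as np

def coarse_grain(x, tau):
    """Coarse-grained series y^(tau) of equation (23)."""
    x = np.asarray(x, float)
    n = len(x) // tau                       # number of complete windows
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def mcid(actual, predicted, scales=range(1, 41)):
    """CID between coarse-grained actual/predicted series at each scale."""
    return [cid(coarse_grain(actual, s), coarse_grain(predicted, s))
            for s in scales]
```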

5. Conclusion

The aim of this research is to develop a predictive model to forecast financial time series. In this study we have developed a predictive model by using an Elman recurrent neural network with the stochastic time effective function to forecast the indices of SSE, TWSE, KOSPI, and Nikkei225. The linear regression analysis implies that the predicted values and the real values do not deviate too much. We then compared the proposed model with the BPNN, STNN, and ERNN forecasting models. Empirical examinations of the predicting precision for the price time series (by comparisons of the predicting measures MAE, RMSE, MAPE, and MAPE(100)) show that the proposed neural network model has the advantage of improving the precision of forecasting and that its forecasts closely approach the real financial market movements. Furthermore, from the curves of the relative errors, it can be concluded that large fluctuations lead to large relative errors. In addition, by calculating the CID and MCID distances, this conclusion was illustrated more clearly. The study and the proposed model contribute significantly to the time series literature on forecasting.


Table 4: Comparisons of indices' predictions for different forecasting models.

Index, errors   BPNN       STNN       ERNN       ST-ERNN
SSE
  MAE           45.3701    24.9687    37.2626    12.7390
  RMSE          54.4564    40.5437    49.3907    37.0693
  MAPE          20.1994    11.8947    18.2110    4.1353
  MAPE(100)     5.0644     3.6868     4.3176     2.6809
TWSE
  MAE           252.7225   140.5971   151.2830   105.6377
  RMSE          316.8197   186.8309   205.4236   136.1674
  MAPE          3.2017     1.7303     1.8449     1.3468
  MAPE(100)     2.2135     1.1494     1.3349     1.2601
KOSPI
  MAE           74.3073    56.3309    47.9296    18.2421
  RMSE          77.1528    58.2944    50.8174    21.0479
  MAPE          16.6084    12.4461    10.9608    4.2257
  MAPE(100)     7.4379     5.9664     4.9176     2.1788
Nikkei225
  MAE           203.8034   138.1857   166.2480   68.5458
  RMSE          238.5933   169.7061   207.3395   89.0378
  MAPE          1.8556     1.2580     1.5398     0.6010
  MAPE(100)     0.7674     0.5191     0.4962     0.4261

Table 5: CID distances for the four network models.

Index       BPNN     STNN     ERNN     ST-ERNN
SSE         3052.1   1764.7   2320.2   1659.9
TWSE        27805    9876.3   10830    6158.1
KOSPI       3350.4   2312.8   2551.0   1006.0
Nikkei225   44541    23726    32895    25421

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors were supported in part by the National Natural Science Foundation of China under Grants nos. 71271026 and 10971010.

References

[1] Y. Kajitani, K. W. Hipel, and A. I. McLeod, "Forecasting nonlinear time series with feed-forward neural networks: a case study of Canadian lynx data," Journal of Forecasting, vol. 24, no. 2, pp. 105-117, 2005.
[2] T. Takahama, S. Sakai, A. Hara, and N. Iwane, "Predicting stock price using neural networks optimized by differential evolution with degeneration," International Journal of Innovative Computing, Information and Control, vol. 5, no. 12, pp. 5021-5031, 2009.
[3] K. Huarng and T. H.-K. Yu, "The application of neural networks to forecast fuzzy time series," Physica A, vol. 363, no. 2, pp. 481-491, 2006.
[4] F. Wang and J. Wang, "Statistical analysis and forecasting of return interval for SSE and model by lattice percolation system and neural network," Computers and Industrial Engineering, vol. 62, no. 1, pp. 198-205, 2012.
[5] H.-J. Kim and K.-S. Shin, "A hybrid approach based on neural networks and genetic algorithms for detecting temporal patterns in stock markets," Applied Soft Computing, vol. 7, no. 2, pp. 569-576, 2007.
[6] M. Ghiassi, H. Saidane, and D. K. Zimbra, "A dynamic artificial neural network model for forecasting time series events," International Journal of Forecasting, vol. 21, no. 2, pp. 341-362, 2005.
[7] M. R. Hassan, B. Nath, and M. Kirley, "A fusion model of HMM, ANN and GA for stock market forecasting," Expert Systems with Applications, vol. 33, no. 1, pp. 171-180, 2007.
[8] D. Devaraj, B. Yegnanarayana, and K. Ramar, "Radial basis function networks for fast contingency ranking," International Journal of Electrical Power and Energy Systems, vol. 24, no. 5, pp. 387-395, 2002.
[9] B. A. Garro and R. A. Vazquez, "Designing artificial neural networks using particle swarm optimization algorithms," Computational Intelligence and Neuroscience, vol. 2015, Article ID 369298, 20 pages, 2015.
[10] Q. Gan, "Exponential synchronization of stochastic Cohen-Grossberg neural networks with mixed time-varying delays and reaction-diffusion via periodically intermittent control," Neural Networks, vol. 31, pp. 12-21, 2012.
[11] D. Xiao and J. Wang, "Modeling stock price dynamics by continuum percolation system and relevant complex systems analysis," Physica A: Statistical Mechanics and its Applications, vol. 391, no. 20, pp. 4827-4838, 2012.
[12] D. Enke and N. Mehdiyev, "Stock market prediction using a combination of stepwise regression analysis, differential evolution-based fuzzy clustering, and a fuzzy inference neural network," Intelligent Automation and Soft Computing, vol. 19, no. 4, pp. 636-648, 2013.
[13] G. Sermpinis, C. Stasinakis, and C. Dunis, "Stochastic and genetic neural network combinations in trading and hybrid time-varying leverage effects," Journal of International Financial Markets, Institutions & Money, vol. 30, pp. 21-54, 2014.
[14] R. Ebrahimpour, H. Nikoo, S. Masoudnia, M. R. Yousefi, and M. S. Ghaemi, "Mixture of MLP-experts for trend forecasting of time series: a case study of the Tehran stock exchange," International Journal of Forecasting, vol. 27, no. 3, pp. 804-816, 2011.
[15] A. Bahrammirzaee, "A comparative survey of artificial intelligence applications in finance: artificial neural networks, expert system and hybrid intelligent systems," Neural Computing and Applications, vol. 19, no. 8, pp. 1165-1195, 2010.
[16] A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, and J. Schmidhuber, "A novel connectionist system for unconstrained handwriting recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, pp. 855-868, 2009.
[17] M. Ardalani-Farsa and S. Zolfaghari, "Chaotic time series prediction with residual analysis method using hybrid Elman-NARX neural networks," Neurocomputing, vol. 73, no. 13-15, pp. 2540-2553, 2010.
[18] M. Paliwal and U. A. Kumar, "Neural networks and statistical techniques: a review of applications," Expert Systems with Applications, vol. 36, no. 1, pp. 2-17, 2009.
[19] Z. Liao and J. Wang, "Forecasting model of global stock index by stochastic time effective neural network," Expert Systems with Applications, vol. 37, no. 1, pp. 834-841, 2010.
[20] L. Pan and J. Cao, "Robust stability for uncertain stochastic neural network with delay and impulses," Neurocomputing, vol. 94, pp. 102-110, 2012.
[21] H. F. Liu and J. Wang, "Integrating independent component analysis and principal component analysis with neural network to predict Chinese stock market," Mathematical Problems in Engineering, vol. 2011, Article ID 382659, 15 pages, 2011.
[22] Z. Q. Guo, H. Q. Wang, and Q. Liu, "Financial time series forecasting using LPP and SVM optimized by PSO," Soft Computing, vol. 17, no. 5, pp. 805-818, 2013.
[23] H. L. Niu and J. Wang, "Financial time series prediction by a random data-time effective RBF neural network," Soft Computing, vol. 18, no. 3, pp. 497-508, 2014.
[24] J. L. Elman, "Finding structure in time," Cognitive Science, vol. 14, no. 2, pp. 179-211, 1990.
[25] M. Cacciola, G. Megali, D. Pellicano, and F. C. Morabito, "Elman neural networks for characterizing voids in welded strips: a study," Neural Computing and Applications, vol. 21, no. 5, pp. 869-875, 2012.
[26] R. Chandra and M. Zhang, "Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction," Neurocomputing, vol. 86, pp. 116-123, 2012.
[27] D. K. Chaturvedi, P. S. Satsangi, and P. K. Kalra, "Effect of different mappings and normalization of neural network models," in Proceedings of the National Power Systems Conference, vol. 9, pp. 377-386, Indian Institute of Technology, Kanpur, India, 1996.
[28] S. Makridakis, "Accuracy measures: theoretical and practical concerns," International Journal of Forecasting, vol. 9, no. 4, pp. 527-529, 1993.
[29] L. Q. Han, Design and Application of Artificial Neural Network, Chemical Industry Press, 2002.
[30] M. Zounemat-Kermani, "Principal component analysis (PCA) for estimating chlorophyll concentration using forward and generalized regression neural networks," Applied Artificial Intelligence, vol. 28, no. 1, pp. 16-29, 2014.
[31] D. Olson and C. Mossman, "Neural network forecasts of Canadian stock returns using accounting ratios," International Journal of Forecasting, vol. 19, no. 3, pp. 453-465, 2003.
[32] H. Demuth and M. Beale, Neural Network Toolbox: For Use with MATLAB, The MathWorks, Natick, Mass, USA, 5th edition, 1998.
[33] A. P. Plumb, R. C. Rowe, P. York, and M. Brown, "Optimisation of the predictive ability of artificial neural network (ANN) models: a comparison of three ANN programs and four classes of training algorithm," European Journal of Pharmaceutical Sciences, vol. 25, no. 4-5, pp. 395-405, 2005.
[34] D. O. Faruk, "A hybrid neural network and ARIMA model for water quality time series prediction," Engineering Applications of Artificial Intelligence, vol. 23, no. 4, pp. 586-594, 2010.
[35] M. Tripathy, "Power transformer differential protection using neural network principal component analysis and radial basis function neural network," Simulation Modelling Practice and Theory, vol. 18, no. 5, pp. 600-611, 2010.
[36] C. E. Martin and J. A. Reggia, "Fusing swarm intelligence and self-assembly for optimizing echo state networks," Computational Intelligence and Neuroscience, vol. 2015, Article ID 642429, 15 pages, 2015.
[37] R. S. Tsay, Analysis of Financial Time Series, John Wiley & Sons, Hoboken, NJ, USA, 2005.
[38] P.-C. Chang, D.-D. Wang, and C.-L. Zhou, "A novel model by evolving partially connected neural network for stock price trend forecasting," Expert Systems with Applications, vol. 39, no. 1, pp. 611-620, 2012.
[39] P. Roy, G. S. Mahapatra, P. Rani, S. K. Pandey, and K. N. Dey, "Robust feed forward and recurrent neural network based dynamic weighted combination models for software reliability prediction," Applied Soft Computing, vol. 22, pp. 629-637, 2014.
[40] X. Gabaix, P. Gopikrishnan, V. Plerou, and H. E. Stanley, "A theory of power-law distributions in financial market fluctuations," Nature, vol. 423, no. 6937, pp. 267-270, 2003.
[41] R. N. Mantegna and H. E. Stanley, An Introduction to Econophysics: Correlations and Complexity in Finance, Cambridge University Press, Cambridge, UK, 2000.
[42] T. H. Roh, "Forecasting the volatility of stock price index," Expert Systems with Applications, vol. 33, no. 4, pp. 916-922, 2007.
[43] G. E. Batista, E. J. Keogh, O. M. Tataw, and V. M. de Souza, "CID: an efficient complexity-invariant distance for time series," Data Mining and Knowledge Discovery, vol. 28, no. 3, pp. 634-669, 2014.



3500

Scale120591

Actual value versus BPNNActual value versus ERNN

Actual value versus STNNActual value versus ST-ERNN

(c) KOSPI

0 5 10 15 20 25 30 35 40

05

1

15

2

25

3

35

4

45

5times104

Scale120591

Actual value versus BPNNActual value versus ERNN

Actual value versus STNNActual value versus ST-ERNN

(d) Nikkei225

Figure 8 ((a) (b) (c) and (d)) MCID values between the forecasting results and the real market prices from BPNN ERNN STNN andST-ERNNmodels

ST-ERNN with the actual value is the smallest one in anyscale that is the ST-ERNN (with the stochastic time effectivefunction) for forecasting stock prices is effective

5 Conclusion

The aim of this research is to develop a predictive modelto forecast the financial time series In this study we havedeveloped a predictive model by using an Elman recurrentneural network with the stochastic time effective function toforecast the indices of SSE TWSE KOSPI and Nikkei225Through the linear regression analysis it implies that thepredictive values and the real values are not deviating too

much Then we take the proposed model compared withBPNN STNN and ERNN forecasting models Empiricalexaminations of predicting precision for the price time series(by the comparisons of predicting measures as MAE RMSEMAPE and MAPE(100)) show that the proposed neuralnetwork model has the advantage of improving the precisionof forecasting and the forecasting of this proposed modelmuch approaches to the real financial market movementsFurthermore from the curve of the relative error it canmake a conclusion that the large fluctuation leads to thelarge relative errors In addition by calculating CID andMCID distance the conclusion was illustrated more clearlyThe study and the proposed model contribute significantly tothe time series literature on forecasting

Computational Intelligence and Neuroscience 13

Table 4 Comparisons of indicesrsquo predictions for different forecasting models

Index errors BPNN STNN ERNN ST-ERNNSSE

MAE 453701 249687 37262647 127390RMSE 544564 405437 493907 370693MAPE 201994 118947 182110 41353MAPE(100) 50644 36868 43176 26809

TWSEMAE 2527225 1405971 1512830 1056377RMSE 3168197 1868309 2054236 1361674MAPE 32017 17303 18449 13468MAPE(100) 22135 11494 13349 12601

KOSPIMAE 743073 563309 479296 182421RMSE 771528 582944 508174 210479MAPE 166084 124461 109608 42257MAPE(100) 74379 59664 49176 21788

Nikkei225MAE 2038034 1381857 1662480 685458RMSE 2385933 1697061 2073395 890378MAPE 18556 12580 15398 06010MAPE(100) 07674 05191 04962 04261

Table 5 CID distances for four network models

Index BPNN STNN ERNN ST-ERNNSSE 30521 17647 23202 16599TWSE 27805 98763 10830 61581KOSPI 33504 23128 25510 10060Nikkei225 44541 23726 32895 25421

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgment

The authors were supported in part by National NaturalScience Foundation of China under Grant nos 71271026 and10971010

References

[1] Y Kajitani K W Hipel and A I McLeod ldquoForecastingnonlinear time series with feed-forward neural networks a casestudy of Canadian lynx datardquo Journal of Forecasting vol 24 no2 pp 105ndash117 2005

[2] T Takahama S Sakai A Hara and N Iwane ldquoPredicting stockprice using neural networks optimized by differential evolutionwith degenerationrdquo International Journal of Innovative Comput-ing Information and Control vol 5 no 12 pp 5021ndash5031 2009

[3] K Huarng and T H-K Yu ldquoThe application of neural networksto forecast fuzzy time seriesrdquo Physica A vol 363 no 2 pp 481ndash491 2006

[4] F Wang and J Wang ldquoStatistical analysis and forecasting ofreturn interval for SSE and model by lattice percolation systemand neural networkrdquoComputers and Industrial Engineering vol62 no 1 pp 198ndash205 2012

[5] H-J Kim and K-S Shin ldquoA hybrid approach based on neuralnetworks and genetic algorithms for detecting temporal pat-terns in stock marketsrdquo Applied Soft Computing vol 7 no 2pp 569ndash576 2007

[6] M Ghiassi H Saidane and D K Zimbra ldquoA dynamic artificialneural network model for forecasting time series eventsrdquoInternational Journal of Forecasting vol 21 no 2 pp 341ndash3622005

[7] M R Hassan B Nath andM Kirley ldquoA fusionmodel of HMMANNandGA for stockmarket forecastingrdquo Expert Systems withApplications vol 33 no 1 pp 171ndash180 2007

[8] D Devaraj B Yegnanarayana and K Ramar ldquoRadial basisfunction networks for fast contingency rankingrdquo InternationalJournal of Electrical Power and Energy Systems vol 24 no 5 pp387ndash395 2002

[9] B A Garroa and R A Vazquez ldquoDesigning artificial neuralnetworks using particle swarm optimization algorithmsrdquo Com-putational Intelligence and Neuroscience vol 2015 Article ID369298 20 pages 2015

[10] Q Gan ldquoExponential synchronization of stochastic Cohen-Grossberg neural networks withmixed time-varying delays andreaction-diffusion via periodically intermittent controlrdquoNeuralNetworks vol 31 pp 12ndash21 2012

[11] D Xiao and J Wang ldquoModeling stock price dynamics bycontinuum percolation system and relevant complex systemsanalysisrdquo Physica A Statistical Mechanics and its Applicationsvol 391 no 20 pp 4827ndash4838 2012

[12] D Enke and N Mehdiyev ldquoStock market prediction usinga combination of stepwise regression analysis differentialevolution-based fuzzy clustering and a fuzzy inference Neural

14 Computational Intelligence and Neuroscience

Networkrdquo Intelligent Automation and Soft Computing vol 19no 4 pp 636ndash648 2013

[13] G Sermpinisa C Stasinakisa and C Dunisb ldquoStochastic andgenetic neural network combinations in trading and hybridtime-varying leverage effectsrdquo Journal of International FinancialMarkets Institutions amp Money vol 30 pp 21ndash54 2014

[14] R Ebrahimpour H Nikoo S Masoudnia M R Yousefi andM S Ghaemi ldquoMixture of mlp-experts for trend forecastingof time series a case study of the tehran stock exchangerdquoInternational Journal of Forecasting vol 27 no 3 pp 804ndash8162011

[15] A Bahrammirzaee ldquoA comparative survey of artificial intelli-gence applications in finance artificial neural networks expertsystem and hybrid intelligent systemsrdquo Neural Computing andApplications vol 19 no 8 pp 1165ndash1195 2010

[16] A Graves M Liwicki S Fernandez R Bertolami H Bunkeand J Schmidhuber ldquoA novel connectionist system for uncon-strained handwriting recognitionrdquo IEEE Transactions on Pat-tern Analysis and Machine Intelligence vol 31 no 5 pp 855ndash868 2009

[17] M Ardalani-Farsa and S Zolfaghari ldquoChaotic time seriesprediction with residual analysis method using hybrid Elman-NARX neural networksrdquoNeurocomputing vol 73 no 13ndash15 pp2540ndash2553 2010

[18] M Paliwal and U A Kumar ldquoNeural networks and statisticaltechniques a review of applicationsrdquo Expert Systems withApplications vol 36 no 1 pp 2ndash17 2009

[19] Z Liao and J Wang ldquoForecasting model of global stock indexby stochastic time effective neural networkrdquoExpert SystemswithApplications vol 37 no 1 pp 834ndash841 2010

[20] L Pan and J Cao ldquoRobust stability for uncertain stochasticneural network with delay and impulsesrdquo Neurocomputing vol94 pp 102ndash110 2012

[21] H F Liu and J Wang ldquoIntegrating independent componentanalysis and principal component analysis with neural networkto predict Chinese stock marketrdquo Mathematical Problems inEngineering vol 2011 Article ID 382659 15 pages 2011

[22] Z Q Guo H Q Wang and Q Liu ldquoFinancial time seriesforecasting using LPP and SVM optimized by PSOrdquo SoftComputing vol 17 no 5 pp 805ndash818 2013

[23] H L Niu and J Wang ldquoFinancial time series predictionby a random data-time effective RBF neural networkrdquo SoftComputing vol 18 no 3 pp 497ndash508 2014

[24] J L Elman ldquoFinding structure in timerdquo Cognitive Science vol14 no 2 pp 179ndash211 1990

[25] MCacciola GMegali D Pellicano and F CMorabito ldquoElmanneural networks for characterizing voids in welded strips astudyrdquo Neural Computing and Applications vol 21 no 5 pp869ndash875 2012

[26] R Chandra and M Zhang ldquoCooperative coevolution of Elmanrecurrent neural networks for chaotic time series predictionrdquoNeurocomputing vol 86 pp 116ndash123 2012

[27] D K Chaturvedi P S Satsangi and P K Kalra ldquoEffect of differ-ent mappings and normalization of neural network modelsrdquo inProceedings of the National Power Systems Conference vol 9 pp377ndash386 Indian Institute of Technology Kanpur India 1996

[28] S Makridakis ldquoAccuracy measures theoretical and practicalconcernsrdquo International Journal of Forecasting vol 9 no 4 pp527ndash529 1993

[29] L Q Han Design and Application of Artificial Neural NetworkChemical Industry Press 2002

[30] M Zounemat-Kermani ldquoPrincipal component analysis (PCA)for estimating chlorophyll concentration using forward andgeneralized regression neural networksrdquoApplied Artificial Intel-ligence vol 28 no 1 pp 16ndash29 2014

[31] D Olson and C Mossman ldquoNeural network forecasts ofCanadian stock returns using accounting ratiosrdquo InternationalJournal of Forecasting vol 19 no 3 pp 453ndash465 2003

[32] H Demuth and M Beale Network Toolbox For Use withMATLAB The Math Works Natick Mass USA 5th edition1998

[33] A P Plumb R C Rowe P York and M Brown ldquoOptimisationof the predictive ability of artificial neural network (ANN)models a comparison of three ANN programs and four classesof training algorithmrdquo European Journal of PharmaceuticalSciences vol 25 no 4-5 pp 395ndash405 2005

[34] D O Faruk ldquoA hybrid neural network and ARIMA model forwater quality time series predictionrdquo Engineering Applicationsof Artificial Intelligence vol 23 no 4 pp 586ndash594 2010

[35] M Tripathy ldquoPower transformer differential protection usingneural network principal component analysis and radial basisfunction neural networkrdquo Simulation Modelling Practice andTheory vol 18 no 5 pp 600ndash611 2010

[36] C E Martin and J A Reggia ldquoFusing swarm intelligenceand self-assembly for optimizing echo state networksrdquo Com-putational Intelligence and Neuroscience vol 2015 Article ID642429 15 pages 2015

[37] R S Tsay Analysis of Financial Time Series JohnWiley amp SonsHoboken NJ USA 2005

[38] P-C Chang D-D Wang and C-L Zhou ldquoA novel modelby evolving partially connected neural network for stock pricetrend forecastingrdquo Expert Systems with Applications vol 39 no1 pp 611ndash620 2012

[39] P Roy G S Mahapatra P Rani S K Pandey and K NDey ldquoRobust feed forward and recurrent neural network baseddynamic weighted combination models for software reliabilitypredictionrdquo Applied Soft Computing vol 22 pp 629ndash637 2014

[40] X Gabaix P Gopikrishnan V Plerou andH E Stanley ldquoA the-ory of power-law distributions in financial market fluctuationsrdquoNature vol 423 no 6937 pp 267ndash270 2003

[41] R N Mantegna and H E Stanley An Introduction to Econo-physics Correlations and Complexity in Finance CambridgeUniversity Press Cambridge UK 2000

[42] T H Roh ldquoForecasting the volatility of stock price indexrdquoExpert Systems with Applications vol 33 no 4 pp 916ndash9222007

[43] G E Batista E J KeoghOM Tataw andVM de Souza ldquoCIDan efficient complexity-invariant distance for time seriesrdquo DataMining and Knowledge Discovery vol 28 no 3 pp 634ndash6692014

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014


Table 1: Data selection.

Index       Data sets               Total number   Hidden number   Learning rate
SSE         16/03/2006–19/03/2014   2000           9               0.001
TWSE        09/02/2006–19/03/2014   2000           12              0.001
KOSPI       20/02/2006–19/03/2014   2000           10              0.05
Nikkei225   27/01/2006–19/03/2014   2000           10              0.01

Figure 2: Training algorithm procedures of the ST-ERNN. Step 1: construct the ST-ERNN model (set the topology of the network architecture, establish the input vector x_t and the output y_{t+1}, and set the training data). Step 2: initialize the connective weights w_ij, θ_j, and c_j and introduce the stochastic time effective function. Step 3: set up the learning algorithm (establish the update rule for the weights and apply the transfer function). Step 4: establish the error accuracy (input the preset minimum error ξ and compute the cost function E). Step 5: if E ≥ ξ, calculate the gradients Δw_ij, Δθ_j, and Δc_j, modify the connective weights, and return to the error check. Step 6: once E < ξ, output the predictive value y_{t+1}.
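To make the procedure of Figure 2 concrete, the following minimal sketch shows the forward sweep of an Elman-style network with a context layer, assuming the standard Elman equations with a sigmoid transfer function; the weight names W_in, W_ctx, and W_out and the zero-initialized context are illustrative choices, not the paper's exact notation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elman_forward(x_seq, W_in, W_ctx, W_out, b_hidden, b_out):
    """Forward sweep of a minimal Elman network.

    x_seq: array of shape (T, n_in) of input vectors x_t.
    The hidden state is fed back through a context layer, as in the
    recurrent architecture the ST-ERNN builds on.
    """
    n_hidden = W_in.shape[0]
    context = np.zeros(n_hidden)           # context units start at zero
    outputs = []
    for x_t in x_seq:
        # hidden activation combines the current input and the previous state
        hidden = sigmoid(W_in @ x_t + W_ctx @ context + b_hidden)
        y_next = W_out @ hidden + b_out    # predicted y_{t+1}
        context = hidden                   # copy hidden state into context
        outputs.append(y_next)
    return np.array(outputs)
```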

To obtain the true value after the forecasting, we can revert the output variables as

\[ S(t) = S'(t)\,\bigl(\max S(t) - \min S(t)\bigr) + \min S(t). \tag{12} \]
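As a concrete companion to (12), the sketch below maps a price series into [0, 1] for training and then reverts network outputs to the original scale; it is a minimal example assuming the min-max normalization that (12) inverts.

```python
import numpy as np

def normalize(S):
    """Map a price series S(t) into [0, 1] for network training."""
    return (S - S.min()) / (S.max() - S.min())

def revert(S_prime, S):
    """Invert the normalization as in (12):
    S(t) = S'(t) * (max S - min S) + min S."""
    return S_prime * (S.max() - S.min()) + S.min()

prices = np.array([1050.0, 1120.0, 980.0, 1230.0])
scaled = normalize(prices)
assert np.allclose(revert(scaled, prices), prices)  # round trip recovers prices
```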

3.2. Training and Forecasting by the ST-ERNN Model. In the ST-ERNN model, after repeated experiments on the different index data, different numbers of neural nodes in the hidden layer were chosen as optimal; see Table 1. Each dataset was divided into two subsets: the training set and the testing set. It is noteworthy that the lengths of the data we chose for these four time series are the same, and the lengths of the training and testing data are also set the same. The training set, with 75% of the data, was used for model building, and the testing set, with the last 25%, was used to test the predictive part of the model on the out-of-sample set.


The training set totals 1500 data points: for the SSE it runs from 16/03/2006 to 22/02/2012, for the TWSE from 09/02/2006 to 08/03/2012, for the KOSPI from 20/02/2006 to 09/03/2012, and for the Nikkei225 from 27/01/2006 to 09/03/2012. The remaining 500 data points are defined as the testing set. We preset the learning rate and the maximum training cycle by referring to [21, 29, 30]; then the experiments were repeated on the training set of each index to choose the optimal number of hidden-layer nodes (Table 1). The maximum number of training iterations K is 300. Each dataset has its own learning rate η; after many experiments on the training data we choose 0.001, 0.001, 0.05, and 0.01 for SSE, TWSE, KOSPI, and Nikkei225, respectively. The predefined minimum training threshold is ξ = 10^{-5}. When using the ST-ERNN model to predict the daily closing price of a stock index, we assume that μ(t) (the drift function) and σ(t) (the volatility function) are as follows:

\[ \mu(t) = \frac{1}{(c-t)^2}, \qquad \sigma(t) = \left[\frac{1}{N-1}\sum_{i=1}^{N}(x_i-\bar{x})^2\right]^{1/2}, \tag{13} \]

where c is a parameter equal to the number of samples in the dataset and \bar{x} is the mean of the sample data. Then the corresponding cost function can be written as

\[ E = \frac{1}{N}\sum_{n=1}^{N} E(t_n) = \frac{1}{2N}\sum_{n=1}^{N} \frac{1}{\beta} \exp\left\{ \int_{t_0}^{t_n} \frac{dt}{(c-t)^2} + \int_{t_0}^{t_n} \left[\frac{1}{N-1}\sum_{i=1}^{N}(x_i-\bar{x})^2\right]^{1/2} dB(t) \right\} \left(d_{t_n}-y_{t_n}\right)^2. \tag{14} \]
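A discretized illustration of (13) and (14) follows: it builds the stochastic time effective weight φ(t_n) = (1/β) exp{∫ μ(t) dt + ∫ σ(t) dB(t)} on a unit time grid, approximating the Brownian integral by cumulative Gaussian increments, and uses it to weight the squared prediction errors. The unit grid, the seeded random generator, and passing σ explicitly are simplifying assumptions of this sketch; in the paper σ(t) is the sample standard deviation of (13) and c equals the number of samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_effective_weights(n, c, sigma, beta=1.0):
    """phi(t_n) of (14) on a unit time grid t = 1, ..., n (requires c > n).

    drift:     int 1/(c - t)^2 dt, with mu(t) as in (13)
    diffusion: int sigma dB(t), approximated by Gaussian increments
    """
    t = np.arange(1, n + 1, dtype=float)
    drift = np.cumsum(1.0 / (c - t) ** 2)
    diffusion = np.cumsum(sigma * rng.normal(0.0, 1.0, size=n))
    return np.exp(drift + diffusion) / beta

def time_effective_cost(d, y, c, sigma, beta=1.0):
    """Cost function of (14): averaged phi-weighted squared errors, halved."""
    phi = time_effective_weights(len(d), c, sigma, beta)
    return np.mean(phi * (d - y) ** 2) / 2.0
```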

Figure 3 shows the prediction results on the training and testing data for SSE, TWSE, KOSPI, and Nikkei225 with the ST-ERNN model. The curves of the actual data and the predictive data are intuitively very close, which means that, after repeated experiments, the financial time series have been well trained and the forecasting results of the ST-ERNN model are as desired.

The plots of the real and the predictive data for these four price series are shown in Figure 4. Through linear regression analysis, we compare the predictive values of the ST-ERNN model with the real price data. It is known that linear regression can be used to fit a predictive model to an observed dataset of Y and X. The linear equations of SSE, TWSE, KOSPI, and Nikkei225 are exhibited in Figures 4(a)–4(d), respectively. We can observe that all the slopes of the linear equations are close to 1, which implies that the predictive values do not deviate too much from the real values.

Table 2: Linear regression parameters of market indices.

Parameter   SSE      TWSE     KOSPI    Nikkei225
a           0.9873   0.9763   0.9614   0.9661
b           7.582    173      38.04    653.5
R           0.992    0.9952   0.9963   0.9971

A valuable numerical measure of association between two variables is the correlation coefficient R. Table 2 shows the values of a, b, and R for the above indices. R is given as follows:

\[ R = \frac{\sum_{t=1}^{N}\bigl(d_t-\bar{d}\bigr)\bigl(y_t-\bar{y}\bigr)}{\sqrt{\sum_{t=1}^{N}\bigl(d_t-\bar{d}\bigr)^2 \sum_{t=1}^{N}\bigl(y_t-\bar{y}\bigr)^2}}, \tag{15} \]

where d_t is the actual value, y_t is the predicted value, \bar{d} is the mean of the actual values, \bar{y} is the mean of the predicted values, and N is the total number of data points.
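The slope a, intercept b, and correlation coefficient R of Table 2 can be reproduced with an ordinary least-squares fit of the predicted values against the actual ones; the sketch below is a minimal numpy version, with synthetic stand-in series rather than the paper's data.

```python
import numpy as np

def regression_summary(d, y):
    """Fit y = a*d + b and compute R as defined in (15)."""
    a, b = np.polyfit(d, y, 1)               # least-squares slope and intercept
    dc, yc = d - d.mean(), y - y.mean()
    R = np.sum(dc * yc) / np.sqrt(np.sum(dc ** 2) * np.sum(yc ** 2))
    return a, b, R

d = np.linspace(800.0, 1200.0, 500)          # stand-in actual prices
y = 0.99 * d + 5.0 + np.random.default_rng(1).normal(0.0, 10.0, d.size)
a, b, R = regression_summary(d, y)           # slope near 1, R close to 1
```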

3.3. Comparisons of Forecasting Results. We compare the proposed approach with conventional forecasting approaches (the BPNN, STNN, and ERNN models) on the four indices mentioned above, where the STNN is based on the BPNN combined with the stochastic time effective function [19]. For these four different models we set the same network inputs, comprising four kinds of series: daily open price, daily closing price, daily highest price, and daily lowest price. The network output is the closing price of the next trading day. In the stock markets, practical experience shows that these four kinds of data from the last trading day are very important indicators when predicting the closing price of the next trading day. To choose better parameters, we have carried out many experiments on these four different indices. To achieve the optimal network for each forecasting approach, the most appropriate numbers of neural nodes in the hidden layer differ, and the learning rates also vary when training the different models; see Table 3, where "Hidden" stands for the number of neural nodes in the hidden layer and "L.r." stands for the learning rate. The hidden number is also chosen by referring to [21, 29, 30]. The experiments have been repeated to determine the hidden nodes and the training cycle in the training process. The principle for choosing the hidden number is as follows: if the number of neural nodes in the input layer is N, the number of neural nodes in the hidden layer is set to be nearly 2N + 1, and the number of neural nodes in the output layer is 1. Since the ERNN model and the ST-ERNN model have similar topology structures, the numbers of hidden-layer nodes and the learning rates in Table 3 are chosen to be approximately the same for these two models in the training process. Likewise, the BPNN model and the STNN model are similar, so their chosen parameters are basically the same. Figures 5(a)–5(d) show the predicted values of the four indexes on the test set. From these plots, the predicted values of the ST-ERNN model are closer to the actual values than those of the other models.
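The input-output pairing just described (the previous day's open, close, high, and low predicting the next day's close) can be assembled as in the following sketch, which assumes the four series are equal-length numpy arrays; the 75/25 split mirrors Section 3.2.

```python
import numpy as np

def build_samples(open_p, close_p, high_p, low_p):
    """Day-t inputs [open, close, high, low] paired with day-(t+1) close."""
    X = np.column_stack([open_p[:-1], close_p[:-1], high_p[:-1], low_p[:-1]])
    y = close_p[1:]
    return X, y

def split_train_test(X, y, train_frac=0.75):
    """First 75% of samples for training, the final 25% held out for testing."""
    cut = int(train_frac * len(y))
    return (X[:cut], y[:cut]), (X[cut:], y[cut:])
```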

Figure 3: Comparisons of the predictive data and the actual data for the forecasting models (training and test sets of (a) SSE, (b) TWSE, (c) KOSPI, and (d) Nikkei225; real value versus ST-ERNN).

To compare the training and forecasting results more clearly, the performance measures RMSE, MAE, MAPE, and MAPE(100) are presented in the next part.

To analyze the forecasting performance of the four considered forecasting models more deeply, we use the following error evaluation criteria [31–35]: the mean absolute error (MAE), the root mean square error (RMSE), and the mean absolute percentage error (MAPE); the corresponding definitions are given as follows:

\[ \mathrm{MAE} = \frac{1}{N}\sum_{t=1}^{N}\bigl|d_t-y_t\bigr|, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{t=1}^{N}\bigl(d_t-y_t\bigr)^2}, \qquad \mathrm{MAPE} = 100\times\frac{1}{N}\sum_{t=1}^{N}\left|\frac{d_t-y_t}{d_t}\right|, \tag{16} \]
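Equation (16), together with the MAPE(100) variant used in Table 4 (MAPE over the latest 100 days of the testing data), translates directly into code; a minimal sketch follows.

```python
import numpy as np

def mae(d, y):
    """Mean absolute error of (16)."""
    return np.mean(np.abs(d - y))

def rmse(d, y):
    """Root mean square error of (16)."""
    return np.sqrt(np.mean((d - y) ** 2))

def mape(d, y):
    """Mean absolute percentage error of (16), in percent."""
    return 100.0 * np.mean(np.abs((d - y) / d))

def mape_last(d, y, window=100):
    """MAPE(100): MAPE restricted to the latest `window` days."""
    return mape(d[-window:], y[-window:])
```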

where d_t and y_t are the real value and the predicted value at time t, respectively, and N is the total number of data points. Noting that MAE, RMSE, and MAPE are measures of the deviation between the predicted values and the actual values, the prediction performance is better when the values of these evaluation criteria are smaller. However, if the results are not consistent among these criteria, we choose the MAPE as the benchmark, since MAPE is relatively more stable than the other criteria [16].

Figures 6(a)–6(d) show the forecasting results of SSE, TWSE, KOSPI, and Nikkei225 for the four forecasting models. The empirical research shows that the proposed ST-ERNN model has the best performance, and the ERNN and the STNN both outperform the common BPNN model. The stock markets showed large fluctuations, which are reflected in Figure 6; we can see that forecasting is relatively less accurate in the large-fluctuation periods for all four models.

Figure 4: Comparisons and linear regressions of the actual data and the predictive values for SSE, TWSE, KOSPI, and Nikkei225 (linear fits: SSE, Y = 0.9873X + 7.582; TWSE, Y = 0.9763X + 173; KOSPI, Y = 0.9614X + 38.04; Nikkei225, Y = 0.9661X + 653.5).

Table 3: Different parameters for different models.

Index data   BPNN            STNN            ERNN             ST-ERNN
             Hidden   L.r.   Hidden   L.r.   Hidden   L.r.    Hidden   L.r.
SSE          8        0.01   8        0.01   10       0.001   9        0.001
TWSE         10       0.01   10       0.01   12       0.001   12       0.001
KOSPI        8        0.02   8        0.02   10       0.03    10       0.05
Nikkei225    10       0.05   10       0.05   10       0.01    10       0.01

When the stock market is relatively stable, the forecasting result is nearer to the actual value. The forecasting results of the BPNN, STNN, ERNN, and ST-ERNN models are also compared in Table 4, where MAPE(100) stands for the MAPE of the latest 100 days in the testing data. Table 4 shows that the evaluation criteria by the ST-ERNN model are almost all smaller than those by the other models. From Table 4 and Figure 6 we can conclude that the proposed ST-ERNN model is better than the other three models. In Table 4, the evaluation criteria by the STNN model and the ERNN model are almost all smaller than those by the BPNN for the four considered indices. It illustrates that the financial time series forecasting of the STNN model is superior to that of the BPNN model, and that the dynamic neural network is more effective, robust, and precise than the original BPNN for these four indices.

Figure 5: Predictive values on the test set for SSE, TWSE, KOSPI, and Nikkei225 (BPNN, STNN, ERNN, and ST-ERNN versus the real value).

Besides, most values of MAPE(100) are smaller than those of MAPE for all stock indexes; therefore, the short-term prediction outperforms the long-term prediction. Overall, the training and testing results are consistent with the measured data, which demonstrates that the ST-ERNN predictor has higher forecast accuracy.

In Figures 7(a), 7(b), 7(c), and 7(d) we consider the relative errors of the ST-ERNN forecasting results. Figure 7 shows that most of the predicting relative errors for these four price series lie between −0.1 and 0.1. Moreover, there are some points with large relative errors in the forecasting results of the four models, especially on the SSE index, which can be attributed to the large fluctuations that lead to large relative errors. The relative error is defined as

\[ e(t) = \frac{d_t - y_t}{d_t}, \tag{17} \]

where d_t and y_t denote the actual value and the predicted value, respectively, at time t, t = 1, 2, \ldots.

4. CID and MCID Analysis

The analysis and forecasting of time series have long been a focus of economic research, aiming at a clearer understanding of the mechanisms and characteristics of financial markets [36–42]. In this section we employ an efficient complexity invariant distance (CID) for time series.

Figure 6: Comparisons of the actual data and the predictive data for SSE, TWSE, KOSPI, and Nikkei225 (BPNN, STNN, ERNN, and ST-ERNN versus the real value).

Reference [43] shows that a complexity invariant distance measure can produce improvements in classification and clustering in the vast majority of cases.

Complexity invariance uses information about complexity differences between two time series as a correction factor for existing distance measures. We begin by introducing the Euclidean distance and use it as a starting point to bring in the definition of CID. Suppose we have two time series P and Q of length n:

\[ P = p_1, p_2, \ldots, p_i, \ldots, p_n, \qquad Q = q_1, q_2, \ldots, q_i, \ldots, q_n. \tag{18} \]

The ubiquitous Euclidean distance is

\[ \mathrm{ED}(P,Q) = \sqrt{\sum_{i=1}^{n}\bigl(p_i-q_i\bigr)^2}. \tag{19} \]

The Euclidean distance ED(P, Q) between two time series P and Q can be made complexity invariant by introducing a correction factor:

\[ \mathrm{CID}(P,Q) = \mathrm{ED}(P,Q) \times \mathrm{CF}(P,Q), \tag{20} \]

where CF is a complexity correction factor defined as

\[ \mathrm{CF}(P,Q) = \frac{\max\bigl(\mathrm{CE}(P),\mathrm{CE}(Q)\bigr)}{\min\bigl(\mathrm{CE}(P),\mathrm{CE}(Q)\bigr)}, \tag{21} \]

and CE(T) is a complexity estimate of a time series T, which can be computed as follows:

\[ \mathrm{CE}(T) = \sqrt{\sum_{i=1}^{n-1}\bigl(t_{i+1}-t_i\bigr)^2}. \tag{22} \]
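Equations (19)-(22) can be computed in a few lines; the sketch below follows [43] for two equal-length numpy arrays (it assumes neither series is constant, so that the complexity estimates are nonzero).

```python
import numpy as np

def complexity_estimate(T):
    """CE(T) of (22): root of the summed squared successive differences."""
    return np.sqrt(np.sum(np.diff(T) ** 2))

def cid(P, Q):
    """CID(P, Q) of (20): Euclidean distance scaled by the CF of (21)."""
    ed = np.sqrt(np.sum((P - Q) ** 2))              # ED(P, Q) of (19)
    ce_p, ce_q = complexity_estimate(P), complexity_estimate(Q)
    return ed * max(ce_p, ce_q) / min(ce_p, ce_q)
```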

Figure 7: ((a), (b), (c), and (d)) Relative errors of forecasting results from the ST-ERNN model for SSE, TWSE, KOSPI, and Nikkei225.

It is worth noticing that CF accounts for differences in the complexities of the time series being compared; CF forces time series with very different complexities to be further apart. In the case that all time series have the same complexity, CID simply degenerates to the Euclidean distance. The prediction performance is better when the CID distance is smaller; that is to say, the curve of the predictive data is closer to that of the actual data. The actual values can be seen as the series P and the predicted results as the series Q. Table 5 shows the CID distance between the real index values of SSE, TWSE, KOSPI, and Nikkei225 and the corresponding predictions from each network model. It is clear that the CID distance between the real index values and the prediction by the ST-ERNN model is the smallest; moreover, the distances by the STNN model and the ERNN model are smaller than those by the BPNN for all four considered indices.

In general, the complexity of a real system is not constrained to a sole scale. In this part we consider a developed CID analysis, that is, the multiscale CID (MCID). The MCID analysis takes the multiple time scales into account while measuring the predicting results, and in this work it is applied to the analysis of the actual and predicted stock prices. The MCID analysis comprises two steps. (i) Considering a one-dimensional discrete time series x_1, x_2, \ldots, x_i, \ldots, x_N, we construct consecutive coarse-grained time series y^{(\tau)} corresponding to the scale factor \tau according to the following formula:

\[ y_j^{(\tau)} = \frac{1}{\tau}\sum_{i=(j-1)\tau+1}^{j\tau} x_i, \qquad 1 \le j \le \frac{N}{\tau}. \tag{23} \]

For scale one, the time series y^{(1)} is simply the original time series. The length of each coarse-grained time series equals the length of the original time series divided by the scale factor τ. (ii) Calculate the CID for each coarse-grained time series and then plot it as a function of the scale factor. Figure 8 shows the MCID values between the forecasting results and the real market prices for the BPNN, ERNN, STNN, and ST-ERNN models.
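Combining the coarse-graining of (23) with the CID above yields the MCID curve; a minimal sketch follows (reusing the cid function from the previous snippet), with the scale range 1 to 40 matching the axes of Figure 8.

```python
import numpy as np

def coarse_grain(x, tau):
    """y^(tau) of (23): non-overlapping averages over windows of length tau."""
    m = len(x) // tau                         # length N / tau (integer part)
    return x[: m * tau].reshape(m, tau).mean(axis=1)

def mcid(actual, predicted, max_scale=40):
    """CID between coarse-grained actual and predicted series at each scale."""
    return [cid(coarse_grain(actual, tau), coarse_grain(predicted, tau))
            for tau in range(1, max_scale + 1)]
```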

Figure 8: ((a), (b), (c), and (d)) MCID values between the forecasting results and the real market prices from the BPNN, ERNN, STNN, and ST-ERNN models, plotted against the scale factor τ.

In Figure 8 it is obvious that the MCID between the ST-ERNN forecasts and the actual values is the smallest at every scale; that is, the ST-ERNN (with the stochastic time effective function) is effective for forecasting stock prices.

5. Conclusion

The aim of this research is to develop a predictive model to forecast financial time series. In this study we have developed a predictive model by using an Elman recurrent neural network with the stochastic time effective function to forecast the indices of SSE, TWSE, KOSPI, and Nikkei225. The linear regression analysis implies that the predictive values do not deviate too much from the real values. We then compared the proposed model with the BPNN, STNN, and ERNN forecasting models. Empirical examinations of predicting precision for the price time series (by comparisons of the predicting measures MAE, RMSE, MAPE, and MAPE(100)) show that the proposed neural network model improves the precision of forecasting and that its forecasts closely approach the real financial market movements. Furthermore, from the curves of the relative errors, it can be concluded that large fluctuations lead to large relative errors. In addition, the calculated CID and MCID distances illustrate this conclusion more clearly. The study and the proposed model contribute significantly to the time series literature on forecasting.


Table 4: Comparisons of indices' predictions for different forecasting models.

Index       Errors      BPNN       STNN       ERNN       ST-ERNN
SSE         MAE         45.3701    24.9687    37.2626    12.7390
            RMSE        54.4564    40.5437    49.3907    37.0693
            MAPE        20.1994    11.8947    18.2110    4.1353
            MAPE(100)   5.0644     3.6868     4.3176     2.6809
TWSE        MAE         252.7225   140.5971   151.2830   105.6377
            RMSE        316.8197   186.8309   205.4236   136.1674
            MAPE        3.2017     1.7303     1.8449     1.3468
            MAPE(100)   2.2135     1.1494     1.3349     1.2601
KOSPI       MAE         74.3073    56.3309    47.9296    18.2421
            RMSE        77.1528    58.2944    50.8174    21.0479
            MAPE        16.6084    12.4461    10.9608    4.2257
            MAPE(100)   7.4379     5.9664     4.9176     2.1788
Nikkei225   MAE         203.8034   138.1857   166.2480   68.5458
            RMSE        238.5933   169.7061   207.3395   89.0378
            MAPE        1.8556     1.2580     1.5398     0.6010
            MAPE(100)   0.7674     0.5191     0.4962     0.4261

Table 5: CID distances for the four network models.

Index       BPNN     STNN     ERNN     ST-ERNN
SSE         3052.1   1764.7   2320.2   1659.9
TWSE        27805    9876.3   10830    6158.1
KOSPI       3350.4   2312.8   2551.0   1006.0
Nikkei225   44541    23726    32895    25421

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors were supported in part by the National Natural Science Foundation of China under Grant nos. 71271026 and 10971010.

References

[1] Y. Kajitani, K. W. Hipel, and A. I. McLeod, "Forecasting nonlinear time series with feed-forward neural networks: a case study of Canadian lynx data," Journal of Forecasting, vol. 24, no. 2, pp. 105–117, 2005.
[2] T. Takahama, S. Sakai, A. Hara, and N. Iwane, "Predicting stock price using neural networks optimized by differential evolution with degeneration," International Journal of Innovative Computing, Information and Control, vol. 5, no. 12, pp. 5021–5031, 2009.
[3] K. Huarng and T. H.-K. Yu, "The application of neural networks to forecast fuzzy time series," Physica A, vol. 363, no. 2, pp. 481–491, 2006.
[4] F. Wang and J. Wang, "Statistical analysis and forecasting of return interval for SSE and model by lattice percolation system and neural network," Computers and Industrial Engineering, vol. 62, no. 1, pp. 198–205, 2012.
[5] H.-J. Kim and K.-S. Shin, "A hybrid approach based on neural networks and genetic algorithms for detecting temporal patterns in stock markets," Applied Soft Computing, vol. 7, no. 2, pp. 569–576, 2007.
[6] M. Ghiassi, H. Saidane, and D. K. Zimbra, "A dynamic artificial neural network model for forecasting time series events," International Journal of Forecasting, vol. 21, no. 2, pp. 341–362, 2005.
[7] M. R. Hassan, B. Nath, and M. Kirley, "A fusion model of HMM, ANN and GA for stock market forecasting," Expert Systems with Applications, vol. 33, no. 1, pp. 171–180, 2007.
[8] D. Devaraj, B. Yegnanarayana, and K. Ramar, "Radial basis function networks for fast contingency ranking," International Journal of Electrical Power and Energy Systems, vol. 24, no. 5, pp. 387–395, 2002.
[9] B. A. Garro and R. A. Vazquez, "Designing artificial neural networks using particle swarm optimization algorithms," Computational Intelligence and Neuroscience, vol. 2015, Article ID 369298, 20 pages, 2015.
[10] Q. Gan, "Exponential synchronization of stochastic Cohen-Grossberg neural networks with mixed time-varying delays and reaction-diffusion via periodically intermittent control," Neural Networks, vol. 31, pp. 12–21, 2012.
[11] D. Xiao and J. Wang, "Modeling stock price dynamics by continuum percolation system and relevant complex systems analysis," Physica A: Statistical Mechanics and its Applications, vol. 391, no. 20, pp. 4827–4838, 2012.
[12] D. Enke and N. Mehdiyev, "Stock market prediction using a combination of stepwise regression analysis, differential evolution-based fuzzy clustering, and a fuzzy inference neural network," Intelligent Automation and Soft Computing, vol. 19, no. 4, pp. 636–648, 2013.
[13] G. Sermpinis, C. Stasinakis, and C. Dunis, "Stochastic and genetic neural network combinations in trading and hybrid time-varying leverage effects," Journal of International Financial Markets, Institutions & Money, vol. 30, pp. 21–54, 2014.
[14] R. Ebrahimpour, H. Nikoo, S. Masoudnia, M. R. Yousefi, and M. S. Ghaemi, "Mixture of MLP-experts for trend forecasting of time series: a case study of the Tehran stock exchange," International Journal of Forecasting, vol. 27, no. 3, pp. 804–816, 2011.
[15] A. Bahrammirzaee, "A comparative survey of artificial intelligence applications in finance: artificial neural networks, expert system and hybrid intelligent systems," Neural Computing and Applications, vol. 19, no. 8, pp. 1165–1195, 2010.
[16] A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, and J. Schmidhuber, "A novel connectionist system for unconstrained handwriting recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, pp. 855–868, 2009.
[17] M. Ardalani-Farsa and S. Zolfaghari, "Chaotic time series prediction with residual analysis method using hybrid Elman-NARX neural networks," Neurocomputing, vol. 73, no. 13–15, pp. 2540–2553, 2010.
[18] M. Paliwal and U. A. Kumar, "Neural networks and statistical techniques: a review of applications," Expert Systems with Applications, vol. 36, no. 1, pp. 2–17, 2009.
[19] Z. Liao and J. Wang, "Forecasting model of global stock index by stochastic time effective neural network," Expert Systems with Applications, vol. 37, no. 1, pp. 834–841, 2010.
[20] L. Pan and J. Cao, "Robust stability for uncertain stochastic neural network with delay and impulses," Neurocomputing, vol. 94, pp. 102–110, 2012.
[21] H. F. Liu and J. Wang, "Integrating independent component analysis and principal component analysis with neural network to predict Chinese stock market," Mathematical Problems in Engineering, vol. 2011, Article ID 382659, 15 pages, 2011.
[22] Z. Q. Guo, H. Q. Wang, and Q. Liu, "Financial time series forecasting using LPP and SVM optimized by PSO," Soft Computing, vol. 17, no. 5, pp. 805–818, 2013.
[23] H. L. Niu and J. Wang, "Financial time series prediction by a random data-time effective RBF neural network," Soft Computing, vol. 18, no. 3, pp. 497–508, 2014.
[24] J. L. Elman, "Finding structure in time," Cognitive Science, vol. 14, no. 2, pp. 179–211, 1990.
[25] M. Cacciola, G. Megali, D. Pellicano, and F. C. Morabito, "Elman neural networks for characterizing voids in welded strips: a study," Neural Computing and Applications, vol. 21, no. 5, pp. 869–875, 2012.
[26] R. Chandra and M. Zhang, "Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction," Neurocomputing, vol. 86, pp. 116–123, 2012.
[27] D. K. Chaturvedi, P. S. Satsangi, and P. K. Kalra, "Effect of different mappings and normalization of neural network models," in Proceedings of the National Power Systems Conference, vol. 9, pp. 377–386, Indian Institute of Technology, Kanpur, India, 1996.
[28] S. Makridakis, "Accuracy measures: theoretical and practical concerns," International Journal of Forecasting, vol. 9, no. 4, pp. 527–529, 1993.
[29] L. Q. Han, Design and Application of Artificial Neural Network, Chemical Industry Press, 2002.
[30] M. Zounemat-Kermani, "Principal component analysis (PCA) for estimating chlorophyll concentration using forward and generalized regression neural networks," Applied Artificial Intelligence, vol. 28, no. 1, pp. 16–29, 2014.
[31] D. Olson and C. Mossman, "Neural network forecasts of Canadian stock returns using accounting ratios," International Journal of Forecasting, vol. 19, no. 3, pp. 453–465, 2003.
[32] H. Demuth and M. Beale, Neural Network Toolbox for Use with MATLAB, The MathWorks, Natick, Mass, USA, 5th edition, 1998.
[33] A. P. Plumb, R. C. Rowe, P. York, and M. Brown, "Optimisation of the predictive ability of artificial neural network (ANN) models: a comparison of three ANN programs and four classes of training algorithm," European Journal of Pharmaceutical Sciences, vol. 25, no. 4-5, pp. 395–405, 2005.
[34] D. O. Faruk, "A hybrid neural network and ARIMA model for water quality time series prediction," Engineering Applications of Artificial Intelligence, vol. 23, no. 4, pp. 586–594, 2010.
[35] M. Tripathy, "Power transformer differential protection using neural network principal component analysis and radial basis function neural network," Simulation Modelling Practice and Theory, vol. 18, no. 5, pp. 600–611, 2010.
[36] C. E. Martin and J. A. Reggia, "Fusing swarm intelligence and self-assembly for optimizing echo state networks," Computational Intelligence and Neuroscience, vol. 2015, Article ID 642429, 15 pages, 2015.
[37] R. S. Tsay, Analysis of Financial Time Series, John Wiley & Sons, Hoboken, NJ, USA, 2005.
[38] P.-C. Chang, D.-D. Wang, and C.-L. Zhou, "A novel model by evolving partially connected neural network for stock price trend forecasting," Expert Systems with Applications, vol. 39, no. 1, pp. 611–620, 2012.
[39] P. Roy, G. S. Mahapatra, P. Rani, S. K. Pandey, and K. N. Dey, "Robust feed forward and recurrent neural network based dynamic weighted combination models for software reliability prediction," Applied Soft Computing, vol. 22, pp. 629–637, 2014.
[40] X. Gabaix, P. Gopikrishnan, V. Plerou, and H. E. Stanley, "A theory of power-law distributions in financial market fluctuations," Nature, vol. 423, no. 6937, pp. 267–270, 2003.
[41] R. N. Mantegna and H. E. Stanley, An Introduction to Econophysics: Correlations and Complexity in Finance, Cambridge University Press, Cambridge, UK, 2000.
[42] T. H. Roh, "Forecasting the volatility of stock price index," Expert Systems with Applications, vol. 33, no. 4, pp. 916–922, 2007.
[43] G. E. Batista, E. J. Keogh, O. M. Tataw, and V. M. de Souza, "CID: an efficient complexity-invariant distance for time series," Data Mining and Knowledge Discovery, vol. 28, no. 3, pp. 634–669, 2014.

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 6: Research Article Financial Time Series Prediction …downloads.hindawi.com/journals/cin/2016/4742515.pdfResearch Article Financial Time Series Prediction Using Elman Recurrent Random

6 Computational Intelligence and Neuroscience

set is totally 1500 data for the SSE is from 16032006 to22022012 while that for the TWSE is from 09022006 to08032012 for KOSPI is from 20022006 to 09032012 andfor Nikkei is from 27012006 to 09032012 The numberof the rest is 500 defined as the testing set We preset thelearning rate and the maximum training cycle by referring to[21 29 30] then we have done the experiment repeatedly onthe training set of different index data different numbers ofneural nodes in the hidden layer are chosen as the optimal seeTable 1 The maximum training iterations number 119870 is 300different dataset has different learning rate 120578 and here aftermany times experiments of the training data we choose 00010001 005 and 001 for SSE TWSE KOSPI and Nikkei225respectively And the predefined minimum training thresh-old 120585 = 10

minus5 When using the ER-STNNmodel to predict thedaily closing price of stock index we assume that 120583(119905) (thedrift function) and120590(119905) (the volatility function) are as follows

120583 (119905) =

1

(119888 minus 119905)2

120590 (119905) = [

1

119873 minus 1

119873

sum

119894=1

(119909 minus 119909)2]

12

(13)

where 119888 is the parameter which is equal to the number ofsamples in the datasets and 119909 is the mean of the sample dataThen the corresponding cost function can be written by

119864 =

1

119873

119873

sum

119899=1

119864 (119905119899) =

1

2119873

119873

sum

119899=1

1

120573

exp

int

119905119899

1199050

1

(119888 minus 119905)2119889119905

+ int

119905119899

1199050

[

1

119873 minus 1

119873

sum

119894=1

(119909 minus 119909)2]

12

119889119861 (119905)

(119889119905119899

minus 119910119905119899

)

2

(14)

Figure 3 shows the predicting results of training and test-ing data for SSE TWSE KOSPI and Nikkei225 with the ST-ERNN model correspondingly The curves of the actual dataand the predictive data are intuitively very approximatingIt means that with many times experiments the financialtime series have been well trained the forecasting results aredesired by ST-ERNNmodel

The plots of the real and the predictive data for thesefour price series are respectively shown in Figure 4Throughthe linear regression analysis we make a comparison of thepredictive value of the ST-ERNN model with the real pricedata It is known that the linear regression can be used to fita predictive model to an observed data set of 119884 and 119883 Thelinear equations of SSE TWSE KOSPI and Nikkei225 areexhibited respectively in Figures 4(a)ndash4(d) We can observethat all the slopes of the linear equations for them are drawnnear to 1 which implies that the predictive values and the realvalues are not deviating too much

Table 2 Linear regression parameters of market indices

Parameter SSE TWSE KOSPI Nikkei225119886 09873 09763 09614 09661119887 7582 173 3804 6535119877 0992 09952 09963 09971

A valuable numericalmeasure of association between twovariables is the correlation coefficient 119877 Table 2 shows thevalues of 119886 119887 and119877 for the above indices119877 is given as follows

119877 =

sum119873

119894=1(119889119905minus 119889119905) (119910119905minus 119910119905)

radicsum119873

119894=1(119889119905minus 119889119905)

2

sum119873

119894=1(119910119905minus 119910119905)2

(15)

where 119889119905is the actual value 119910

119905is the predicting value 119889

119905is

the mean of the actual value 119910119905is the mean of the predicting

value and119873 is the total number of the data

33 Comparisons of Forecasting Results We compare theproposed and conventional forecasting approaches (BPNNSTNN and ERNN model) on the four indices mentionedabove where STNN is based on the BPNN and combinedwith the stochastic effective function [19] For these fourdifferent models we set the same inputs of the networksincluding four kinds of series daily open price daily closingprice daily highest price and daily lowest priceThe networkoutput is the closing price of the next trading day In the stockmarkets the practical experience shows us that the abovefour kinds of data of the last trading day are very importantindicators when predicting the closing price of the nexttrading day To choose better parameters we have carried outmany experiments on these four different indices In order toachieve the optimal networks of each forecasting approachthe most appropriate numbers of neural nodes in the hiddenlayer are different the learning rates are also varying by train-ing different models see Table 3 In Table 3 ldquoHiddenrdquo standsfor the number of neural nodes in the hidden layer and ldquoLrrdquo stands for learning rateThe hidden number is also chosenby referring to [21 29 30] The experiments have been donerepeatedly to determine hidden nodes and training cycle inthe training processThedetails of principles of how to choosethe hidden number are as follows If the number of neuralnodes in the input layer is119873 the number of neural nodes inthe hidden layer is set to be nearly 2119873+1 and the number ofneural nodes in the output layer is 1 Since the ERNN modeland the ST-ERNN model have similar topology structuresin Table 3 the number of neural nodes in hidden layer andthe learning rate are chosen approximatly in the two modelsof training process Also the BPNN model and the STNNmodel are similar so the chosen parameters are basically thesame Figures 5(a)ndash5(d) show the predicting values of the fourindexes on the test set From these plots the predicting valuesof the ST-ERNNmodel are closer to the actual values than theothermodels curves To compare the training and forecasting

Computational Intelligence and Neuroscience 7

500 1000 1500 20000

500

1000

1500

Real valueST-ERNN

Test set

Training set

(a) SSEReal valueST-ERNN

500 1000 1500 20002000

4000

6000

8000

10000

12000

Training set

Test set

(b) TWSE

500 1000 1500 2000

400

600

800

1000

Training set

Test set

Real valueST-ERNN

(c) KOSPI

500 1000 1500 2000

1

15

2

25

3

35

4

Test setTraining set

Real valueST-ERNN

times104

(d) Nikkei225

Figure 3 Comparisons of the predictive data and the actual data for the forecasting models

results more clearly the performance measures RMSE MAEMAPE and MAPE(100) are uncovered in the next part

To analyze the forecasting performance of four consid-ered forecasting models deeply we use the following errorevaluation criteria [31ndash35] the mean absolute error (MAE)the root mean square error (RMSE) and the correlationcoefficient (MAPE) the corresponding definitions are givenas follows

$$\text{MAE} = \frac{1}{N}\sum_{t=1}^{N}\left|d_t - y_t\right|,\qquad
\text{RMSE} = \sqrt{\frac{1}{N}\sum_{t=1}^{N}\left(d_t - y_t\right)^2},\qquad
\text{MAPE} = 100 \times \frac{1}{N}\sum_{t=1}^{N}\left|\frac{d_t - y_t}{d_t}\right|. \tag{16}$$

where $d_t$ and $y_t$ are the real value and the predicting value at time $t$, respectively, and $N$ is the total number of the data. Note that MAE, RMSE, and MAPE are measures of the deviation between the prediction values and the actual values, so the prediction performance is better when the values of these evaluation criteria are smaller. However, if the results are not consistent among these criteria, we choose the MAPE as the benchmark, since MAPE is relatively more stable than the other criteria [16].
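As a concrete illustration, the three criteria in (16) translate directly into code; the following minimal NumPy sketch (the function and array names are ours, not the paper's) also indicates how MAPE(100), the MAPE over the latest 100 test days, can be obtained by slicing the series.

```python
import numpy as np

def mae(d, y):
    """Mean absolute error, Eq. (16)."""
    return np.mean(np.abs(d - y))

def rmse(d, y):
    """Root mean square error, Eq. (16)."""
    return np.sqrt(np.mean((d - y) ** 2))

def mape(d, y):
    """Mean absolute percentage error (in %), Eq. (16)."""
    return 100.0 * np.mean(np.abs((d - y) / d))

# d: actual closing prices on the test set, y: model predictions (toy values)
d = np.array([1050.0, 1062.0, 1041.0, 1078.0])
y = np.array([1045.0, 1070.0, 1035.0, 1082.0])
print(mae(d, y), rmse(d, y), mape(d, y))
# MAPE(100) is the same criterion restricted to the latest 100 trading days:
# mape(d[-100:], y[-100:])
```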

Figures 6(a)-6(d) show the forecasting results of SSE, TWSE, KOSPI, and Nikkei225 for the four forecasting models. The empirical research shows that the proposed ST-ERNN model has the best performance, and the ERNN and the STNN both outperform the common BPNN model. The stock markets showed large fluctuations, which are reflected in Figure 6; we can see that the forecasting in the large fluctuation periods is relatively inaccurate for all four models.

Figure 4: Comparisons and linear regressions of the actual data and the predictive values for SSE, TWSE, KOSPI, and Nikkei225. Each panel plots the predictive value points against the real values together with a linear fit; the fitted slopes are 0.9873 (SSE), 0.9763 (TWSE), 0.9614 (KOSPI), and 0.9661 (Nikkei225).
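The linear fits reported in Figure 4 can be reproduced by regressing the predicted values on the actual ones; below is a hedged sketch using np.polyfit on toy stand-in arrays (the real index series are not reproduced here).

```python
import numpy as np

# Regress predictions on actual values: Y = slope * X + intercept,
# as in the linear fits of Figure 4 (toy data stands in for the indices).
actual = np.array([400.0, 650.0, 900.0, 1150.0, 1400.0])
predicted = np.array([410.0, 640.0, 880.0, 1160.0, 1380.0])

slope, intercept = np.polyfit(actual, predicted, 1)
print(f"Y = {slope:.4f}X + {intercept:.4f}")  # a slope near 1 indicates small deviation
```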

Table 3: Different parameters for different models ("Hidden" is the number of neural nodes in the hidden layer; "L.r" is the learning rate).

Index data   BPNN (Hidden, L.r)   STNN (Hidden, L.r)   ERNN (Hidden, L.r)   ST-ERNN (Hidden, L.r)
SSE          8, 0.01              8, 0.01              10, 0.001            9, 0.001
TWSE         10, 0.01             10, 0.01             12, 0.001            12, 0.001
KOSPI        8, 0.02              8, 0.02              10, 0.03             10, 0.05
Nikkei225    10, 0.05             10, 0.05             10, 0.01             10, 0.01
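For orientation only, the following sketch shows how an Elman-style network with the SSE ST-ERNN settings of Table 3 (9 hidden nodes, learning rate 0.001, four inputs, one output) might be assembled. Keras's SimpleRNN is used here as a stand-in for the Elman hidden layer, and the paper's stochastic time effective weighting of the cost function is not reproduced, so this is an assumption-laden approximation rather than the authors' implementation.

```python
import numpy as np
from tensorflow import keras

# Inputs: open, close, high, low of the last trading day (4 features);
# output: the next day's closing price.
model = keras.Sequential([
    keras.layers.SimpleRNN(9, activation="tanh", input_shape=(1, 4)),  # Elman-style hidden layer
    keras.layers.Dense(1),                                             # single output node
])
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.001), loss="mse")

X = np.random.rand(100, 1, 4)   # placeholder training data, not the real SSE series
y = np.random.rand(100, 1)
model.fit(X, y, epochs=10, verbose=0)
```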

When the stock market is relatively stable, the forecasting result is nearer to the actual value. The forecasting results, compared among the BPNN, the STNN, and the ERNN models, are also presented in Table 4, where MAPE(100) stands for the MAPE over the latest 100 days of the testing data. Table 4 shows that the evaluation criteria by the ST-ERNN model are almost all smaller than those by the other models. From Table 4 and Figure 6, we can conclude that the proposed ST-ERNN model is better than the other three models. In Table 4, the evaluation criteria by the STNN model and the ERNN model are also almost all smaller than those by the BPNN for the four considered indices.

Figure 5: Predictive values on the test set for SSE, TWSE, KOSPI, and Nikkei225, comparing the BPNN, STNN, ERNN, and ST-ERNN outputs with the real values.

This illustrates that the financial time series forecasting effect of the STNN model is superior to that of the BPNN model, and that the dynamic neural network is more effective, robust, and precise than the original BPNN for these four indices. Besides, most values of MAPE(100) are smaller than those of MAPE for all stock indices; therefore, the short-term prediction outperforms the long-term prediction. Overall, the training and testing results are consistent with the measured data, which demonstrates that the ST-ERNN predictor has higher forecast accuracy.

In Figures 7(a), 7(b), 7(c), and 7(d), we consider the relative errors of the ST-ERNN forecasting results. Figure 7 depicts that most of the predicting relative errors for these four price series are between -0.1 and 0.1. Moreover, there are some points with large relative errors of the forecasting results in the four models, especially on the SSE index, which can be attributed to the large fluctuations that lead to the large relative errors. The definition of the relative error is given as follows:

$$e(t) = \frac{d_t - y_t}{d_t}, \tag{17}$$

where $d_t$ and $y_t$ denote the actual value and the predicting value at time $t$, respectively, $t = 1, 2, \ldots$.
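A minimal sketch of (17), and of the check behind Figure 7 (the fraction of days whose relative error falls inside (-0.1, 0.1)), follows; the array names and toy values are ours.

```python
import numpy as np

def relative_error(d, y):
    """Relative error series e(t) = (d_t - y_t) / d_t, Eq. (17)."""
    return (d - y) / d

# Fraction of test days whose relative error lies in (-0.1, 0.1),
# as inspected in Figure 7 (d, y are toy stand-ins for actual/predicted prices).
d = np.array([1050.0, 1062.0, 1041.0, 1078.0])
y = np.array([1045.0, 1070.0, 1035.0, 1082.0])
e = relative_error(d, y)
print(np.mean(np.abs(e) < 0.1))
```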

4. CID and MCID Analysis

The analysis and forecasting of time series have long been a focus of economic research, aiming at a clearer understanding of the mechanism and characteristics of financial markets [36-42]. In this section, we employ an efficient complexity invariant distance (CID) for time series.

Figure 6: Comparisons of the actual data and the predictive data for SSE, TWSE, KOSPI, and Nikkei225 under the BPNN, STNN, ERNN, and ST-ERNN models, with insets magnifying parts of the test set.

Reference [43] shows that the complexity invariant distance measure can produce improvements in classification and clustering in the vast majority of cases.

Complexity invariance uses information about complexity differences between two time series as a correction factor for existing distance measures. We begin by introducing the Euclidean distance and use it as a starting point to bring in the definition of CID. Suppose we have two time series $P$ and $Q$ of length $n$:

$$P = p_1, p_2, \ldots, p_i, \ldots, p_n,\qquad
Q = q_1, q_2, \ldots, q_i, \ldots, q_n. \tag{18}$$

The ubiquitous Euclidean distance is

$$\text{ED}(P, Q) = \sqrt{\sum_{i=1}^{n}\left(p_i - q_i\right)^2}. \tag{19}$$

The Euclidean distance $\text{ED}(P, Q)$ between two time series $P$ and $Q$ can be made complexity invariant by introducing a correction factor:

$$\text{CID}(P, Q) = \text{ED}(P, Q) \times \text{CF}(P, Q), \tag{20}$$

where CF is a complexity correction factor defined as

$$\text{CF}(P, Q) = \frac{\max\left(\text{CE}(P), \text{CE}(Q)\right)}{\min\left(\text{CE}(P), \text{CE}(Q)\right)}, \tag{21}$$

and $\text{CE}(T)$ is a complexity estimate of a time series $T$, which can be computed as follows:

$$\text{CE}(T) = \sqrt{\sum_{i=1}^{n-1}\left(t_{i+1} - t_i\right)^2}. \tag{22}$$
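Equations (19)-(22) combine into a short routine; the sketch below (function names are ours) mirrors the definitions, with CE computed from the first differences of the series.

```python
import numpy as np

def complexity_estimate(t):
    """CE(T) of Eq. (22): root of the summed squared first differences."""
    return np.sqrt(np.sum(np.diff(t) ** 2))

def cid(p, q):
    """Complexity invariant distance CID(P, Q) of Eqs. (19)-(21)."""
    ed = np.sqrt(np.sum((p - q) ** 2))        # Euclidean distance, Eq. (19)
    ce_p, ce_q = complexity_estimate(p), complexity_estimate(q)
    cf = max(ce_p, ce_q) / min(ce_p, ce_q)    # correction factor, Eq. (21)
    return ed * cf                            # Eq. (20)

# P: actual index values, Q: a model's predictions (toy numbers)
p = np.array([100.0, 102.0, 101.0, 105.0, 104.0])
q = np.array([ 99.0, 103.0, 100.0, 104.0, 106.0])
print(cid(p, q))  # equals the plain Euclidean distance when CE(P) == CE(Q)
```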

Figure 7: ((a), (b), (c), and (d)) Relative errors of the forecasting results from the ST-ERNN model for SSE, TWSE, KOSPI, and Nikkei225.

It is worth noticing that CF accounts for differences in the complexities of the time series being compared; it forces time series with very different complexities to be further apart. In the case that all time series have the same complexity, CID simply degenerates to the Euclidean distance. The prediction performance is better when the CID distance is smaller; that is to say, the curve of the predictive data is closer to that of the actual data. Here the actual values can be seen as the series $P$ and the predicting results as the series $Q$. Table 5 shows the CID distances between the real index values of SSE, TWSE, KOSPI, and Nikkei225 and the corresponding predictions from each network model. It is clear that the CID distance between the real index values and the prediction by the ST-ERNN model is the smallest one; moreover, the distances by the STNN model and the ERNN model are smaller than those by the BPNN for all four considered indices.

In general, the complexity of a real system is not constrained to a sole scale. In this part, we consider a developed CID analysis, that is, the multiscale CID (MCID). The MCID analysis takes the multiple time scales into account while measuring the predicting results, and in this work it is applied to the analysis of the actual data and the predicting data of the stock prices. The MCID analysis comprises two steps. (i) Considering a one-dimensional discrete time series $x_1, x_2, \ldots, x_i, \ldots, x_N$, we construct consecutive coarse-grained time series $y^{(\tau)}$ corresponding to the scale factor $\tau$ according to the following formula:

$$y_j^{(\tau)} = \frac{1}{\tau} \sum_{i=(j-1)\tau+1}^{j\tau} x_i, \qquad 1 \le j \le \frac{N}{\tau}. \tag{23}$$

For scale one, the time series $y^{(1)}$ is simply the original time series, and the length of each coarse-grained time series equals the length of the original time series divided by the scale factor $\tau$. (ii) Calculate the CID for each coarse-grained time series and then plot it as a function of the scale factor. Figure 8 shows the MCID values between the forecasting results and the real market prices for the BPNN, ERNN, STNN, and ST-ERNN models.
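A sketch of the two MCID steps, under the same naming assumptions as the previous snippet, is given below: block-averaging implements the coarse-graining of (23), and the CID of each coarse-grained pair is collected across scale factors as plotted in Figure 8.

```python
import numpy as np

def cid(p, q):
    """Complexity invariant distance (Eqs. (19)-(21)), as in the previous sketch."""
    ce = lambda t: np.sqrt(np.sum(np.diff(t) ** 2))
    cf = max(ce(p), ce(q)) / min(ce(p), ce(q))
    return np.sqrt(np.sum((p - q) ** 2)) * cf

def coarse_grain(x, tau):
    """Coarse-grained series y^(tau) of Eq. (23): non-overlapping block averages."""
    n = len(x) // tau
    return x[: n * tau].reshape(n, tau).mean(axis=1)

def mcid(actual, predicted, scales=range(1, 41)):
    """CID between coarse-grained actual and predicted series at each scale (Figure 8)."""
    return [cid(coarse_grain(actual, tau), coarse_grain(predicted, tau)) for tau in scales]

# Toy example: a noisy prediction of a random-walk "price" series
rng = np.random.default_rng(0)
actual = np.cumsum(rng.normal(size=2000)) + 100.0
predicted = actual + rng.normal(scale=0.5, size=2000)
print(mcid(actual, predicted, scales=[1, 5, 10, 20, 40]))
```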

Figure 8: ((a), (b), (c), and (d)) MCID values between the forecasting results and the real market prices from the BPNN, ERNN, STNN, and ST-ERNN models, plotted against the scale factor $\tau$ (from 1 to 40) for SSE, TWSE, KOSPI, and Nikkei225.

In Figure 8, it is obvious that the MCID between the ST-ERNN forecasts and the actual values is the smallest one at every scale; that is, the ST-ERNN (with the stochastic time effective function) is effective for forecasting stock prices.

5. Conclusion

The aim of this research is to develop a predictive model to forecast financial time series. In this study, we have developed a predictive model by using an Elman recurrent neural network with the stochastic time effective function to forecast the indices of SSE, TWSE, KOSPI, and Nikkei225. The linear regression analysis implies that the predictive values and the real values do not deviate too much. We then compared the proposed model with the BPNN, STNN, and ERNN forecasting models. Empirical examinations of the predicting precision for the price time series (by comparisons of the predicting measures MAE, RMSE, MAPE, and MAPE(100)) show that the proposed neural network model has the advantage of improving the precision of forecasting, and that the forecasts of the proposed model closely approach the real financial market movements. Furthermore, from the curve of the relative error, we can conclude that large fluctuations lead to large relative errors. In addition, by calculating the CID and MCID distances, this conclusion was illustrated more clearly. The study and the proposed model contribute significantly to the time series literature on forecasting.


Table 4: Comparisons of the indices' predictions for different forecasting models.

Index / errors   BPNN       STNN       ERNN        ST-ERNN
SSE
  MAE            45.3701    24.9687    37.262647   12.7390
  RMSE           54.4564    40.5437    49.3907     37.0693
  MAPE           20.1994    11.8947    18.2110     4.1353
  MAPE(100)      5.0644     3.6868     4.3176      2.6809
TWSE
  MAE            252.7225   140.5971   151.2830    105.6377
  RMSE           316.8197   186.8309   205.4236    136.1674
  MAPE           3.2017     1.7303     1.8449      1.3468
  MAPE(100)      2.2135     1.1494     1.3349      1.2601
KOSPI
  MAE            74.3073    56.3309    47.9296     18.2421
  RMSE           77.1528    58.2944    50.8174     21.0479
  MAPE           16.6084    12.4461    10.9608     4.2257
  MAPE(100)      7.4379     5.9664     4.9176      2.1788
Nikkei225
  MAE            203.8034   138.1857   166.2480    68.5458
  RMSE           238.5933   169.7061   207.3395    89.0378
  MAPE           1.8556     1.2580     1.5398      0.6010
  MAPE(100)      0.7674     0.5191     0.4962      0.4261

Table 5: CID distances for the four network models.

Index       BPNN     STNN     ERNN     ST-ERNN
SSE         3052.1   1764.7   2320.2   1659.9
TWSE        27805    9876.3   10830    6158.1
KOSPI       3350.4   2312.8   2551.0   1006.0
Nikkei225   44541    23726    32895    25421

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors were supported in part by the National Natural Science Foundation of China under Grant nos. 71271026 and 10971010.

References

[1] Y. Kajitani, K. W. Hipel, and A. I. McLeod, "Forecasting nonlinear time series with feed-forward neural networks: a case study of Canadian lynx data," Journal of Forecasting, vol. 24, no. 2, pp. 105-117, 2005.

[2] T. Takahama, S. Sakai, A. Hara, and N. Iwane, "Predicting stock price using neural networks optimized by differential evolution with degeneration," International Journal of Innovative Computing, Information and Control, vol. 5, no. 12, pp. 5021-5031, 2009.

[3] K. Huarng and T. H.-K. Yu, "The application of neural networks to forecast fuzzy time series," Physica A, vol. 363, no. 2, pp. 481-491, 2006.

[4] F. Wang and J. Wang, "Statistical analysis and forecasting of return interval for SSE and model by lattice percolation system and neural network," Computers and Industrial Engineering, vol. 62, no. 1, pp. 198-205, 2012.

[5] H.-J. Kim and K.-S. Shin, "A hybrid approach based on neural networks and genetic algorithms for detecting temporal patterns in stock markets," Applied Soft Computing, vol. 7, no. 2, pp. 569-576, 2007.

[6] M. Ghiassi, H. Saidane, and D. K. Zimbra, "A dynamic artificial neural network model for forecasting time series events," International Journal of Forecasting, vol. 21, no. 2, pp. 341-362, 2005.

[7] M. R. Hassan, B. Nath, and M. Kirley, "A fusion model of HMM, ANN and GA for stock market forecasting," Expert Systems with Applications, vol. 33, no. 1, pp. 171-180, 2007.

[8] D. Devaraj, B. Yegnanarayana, and K. Ramar, "Radial basis function networks for fast contingency ranking," International Journal of Electrical Power and Energy Systems, vol. 24, no. 5, pp. 387-395, 2002.

[9] B. A. Garroa and R. A. Vazquez, "Designing artificial neural networks using particle swarm optimization algorithms," Computational Intelligence and Neuroscience, vol. 2015, Article ID 369298, 20 pages, 2015.

[10] Q. Gan, "Exponential synchronization of stochastic Cohen-Grossberg neural networks with mixed time-varying delays and reaction-diffusion via periodically intermittent control," Neural Networks, vol. 31, pp. 12-21, 2012.

[11] D. Xiao and J. Wang, "Modeling stock price dynamics by continuum percolation system and relevant complex systems analysis," Physica A: Statistical Mechanics and its Applications, vol. 391, no. 20, pp. 4827-4838, 2012.

[12] D. Enke and N. Mehdiyev, "Stock market prediction using a combination of stepwise regression analysis, differential evolution-based fuzzy clustering, and a fuzzy inference Neural Network," Intelligent Automation and Soft Computing, vol. 19, no. 4, pp. 636-648, 2013.

[13] G. Sermpinisa, C. Stasinakisa, and C. Dunisb, "Stochastic and genetic neural network combinations in trading and hybrid time-varying leverage effects," Journal of International Financial Markets, Institutions & Money, vol. 30, pp. 21-54, 2014.

[14] R. Ebrahimpour, H. Nikoo, S. Masoudnia, M. R. Yousefi, and M. S. Ghaemi, "Mixture of MLP-experts for trend forecasting of time series: a case study of the Tehran stock exchange," International Journal of Forecasting, vol. 27, no. 3, pp. 804-816, 2011.

[15] A. Bahrammirzaee, "A comparative survey of artificial intelligence applications in finance: artificial neural networks, expert system and hybrid intelligent systems," Neural Computing and Applications, vol. 19, no. 8, pp. 1165-1195, 2010.

[16] A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, and J. Schmidhuber, "A novel connectionist system for unconstrained handwriting recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, pp. 855-868, 2009.

[17] M. Ardalani-Farsa and S. Zolfaghari, "Chaotic time series prediction with residual analysis method using hybrid Elman-NARX neural networks," Neurocomputing, vol. 73, no. 13-15, pp. 2540-2553, 2010.

[18] M. Paliwal and U. A. Kumar, "Neural networks and statistical techniques: a review of applications," Expert Systems with Applications, vol. 36, no. 1, pp. 2-17, 2009.

[19] Z. Liao and J. Wang, "Forecasting model of global stock index by stochastic time effective neural network," Expert Systems with Applications, vol. 37, no. 1, pp. 834-841, 2010.

[20] L. Pan and J. Cao, "Robust stability for uncertain stochastic neural network with delay and impulses," Neurocomputing, vol. 94, pp. 102-110, 2012.

[21] H. F. Liu and J. Wang, "Integrating independent component analysis and principal component analysis with neural network to predict Chinese stock market," Mathematical Problems in Engineering, vol. 2011, Article ID 382659, 15 pages, 2011.

[22] Z. Q. Guo, H. Q. Wang, and Q. Liu, "Financial time series forecasting using LPP and SVM optimized by PSO," Soft Computing, vol. 17, no. 5, pp. 805-818, 2013.

[23] H. L. Niu and J. Wang, "Financial time series prediction by a random data-time effective RBF neural network," Soft Computing, vol. 18, no. 3, pp. 497-508, 2014.

[24] J. L. Elman, "Finding structure in time," Cognitive Science, vol. 14, no. 2, pp. 179-211, 1990.

[25] M. Cacciola, G. Megali, D. Pellicano, and F. C. Morabito, "Elman neural networks for characterizing voids in welded strips: a study," Neural Computing and Applications, vol. 21, no. 5, pp. 869-875, 2012.

[26] R. Chandra and M. Zhang, "Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction," Neurocomputing, vol. 86, pp. 116-123, 2012.

[27] D. K. Chaturvedi, P. S. Satsangi, and P. K. Kalra, "Effect of different mappings and normalization of neural network models," in Proceedings of the National Power Systems Conference, vol. 9, pp. 377-386, Indian Institute of Technology, Kanpur, India, 1996.

[28] S. Makridakis, "Accuracy measures: theoretical and practical concerns," International Journal of Forecasting, vol. 9, no. 4, pp. 527-529, 1993.

[29] L. Q. Han, Design and Application of Artificial Neural Network, Chemical Industry Press, 2002.

[30] M. Zounemat-Kermani, "Principal component analysis (PCA) for estimating chlorophyll concentration using forward and generalized regression neural networks," Applied Artificial Intelligence, vol. 28, no. 1, pp. 16-29, 2014.

[31] D. Olson and C. Mossman, "Neural network forecasts of Canadian stock returns using accounting ratios," International Journal of Forecasting, vol. 19, no. 3, pp. 453-465, 2003.

[32] H. Demuth and M. Beale, Network Toolbox: For Use with MATLAB, The MathWorks, Natick, Mass, USA, 5th edition, 1998.

[33] A. P. Plumb, R. C. Rowe, P. York, and M. Brown, "Optimisation of the predictive ability of artificial neural network (ANN) models: a comparison of three ANN programs and four classes of training algorithm," European Journal of Pharmaceutical Sciences, vol. 25, no. 4-5, pp. 395-405, 2005.

[34] D. O. Faruk, "A hybrid neural network and ARIMA model for water quality time series prediction," Engineering Applications of Artificial Intelligence, vol. 23, no. 4, pp. 586-594, 2010.

[35] M. Tripathy, "Power transformer differential protection using neural network principal component analysis and radial basis function neural network," Simulation Modelling Practice and Theory, vol. 18, no. 5, pp. 600-611, 2010.

[36] C. E. Martin and J. A. Reggia, "Fusing swarm intelligence and self-assembly for optimizing echo state networks," Computational Intelligence and Neuroscience, vol. 2015, Article ID 642429, 15 pages, 2015.

[37] R. S. Tsay, Analysis of Financial Time Series, John Wiley & Sons, Hoboken, NJ, USA, 2005.

[38] P.-C. Chang, D.-D. Wang, and C.-L. Zhou, "A novel model by evolving partially connected neural network for stock price trend forecasting," Expert Systems with Applications, vol. 39, no. 1, pp. 611-620, 2012.

[39] P. Roy, G. S. Mahapatra, P. Rani, S. K. Pandey, and K. N. Dey, "Robust feed forward and recurrent neural network based dynamic weighted combination models for software reliability prediction," Applied Soft Computing, vol. 22, pp. 629-637, 2014.

[40] X. Gabaix, P. Gopikrishnan, V. Plerou, and H. E. Stanley, "A theory of power-law distributions in financial market fluctuations," Nature, vol. 423, no. 6937, pp. 267-270, 2003.

[41] R. N. Mantegna and H. E. Stanley, An Introduction to Econophysics: Correlations and Complexity in Finance, Cambridge University Press, Cambridge, UK, 2000.

[42] T. H. Roh, "Forecasting the volatility of stock price index," Expert Systems with Applications, vol. 33, no. 4, pp. 916-922, 2007.

[43] G. E. Batista, E. J. Keogh, O. M. Tataw, and V. M. de Souza, "CID: an efficient complexity-invariant distance for time series," Data Mining and Knowledge Discovery, vol. 28, no. 3, pp. 634-669, 2014.

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 7: Research Article Financial Time Series Prediction …downloads.hindawi.com/journals/cin/2016/4742515.pdfResearch Article Financial Time Series Prediction Using Elman Recurrent Random

Computational Intelligence and Neuroscience 7

500 1000 1500 20000

500

1000

1500

Real valueST-ERNN

Test set

Training set

(a) SSEReal valueST-ERNN

500 1000 1500 20002000

4000

6000

8000

10000

12000

Training set

Test set

(b) TWSE

500 1000 1500 2000

400

600

800

1000

Training set

Test set

Real valueST-ERNN

(c) KOSPI

500 1000 1500 2000

1

15

2

25

3

35

4

Test setTraining set

Real valueST-ERNN

times104

(d) Nikkei225

Figure 3 Comparisons of the predictive data and the actual data for the forecasting models

results more clearly the performance measures RMSE MAEMAPE and MAPE(100) are uncovered in the next part

To analyze the forecasting performance of four consid-ered forecasting models deeply we use the following errorevaluation criteria [31ndash35] the mean absolute error (MAE)the root mean square error (RMSE) and the correlationcoefficient (MAPE) the corresponding definitions are givenas follows

MAE =

1

119873

119873

sum

119905=1

1003816100381610038161003816119889119905minus 119910119905

1003816100381610038161003816

RMSE = radic1

119873

119873

sum

119905=1

(119889119905minus 119910119905)2

MAPE = 100 times

1

119873

119873

sum

119905=1

10038161003816100381610038161003816100381610038161003816

119889119905minus 119910119905

119889119905

10038161003816100381610038161003816100381610038161003816

(16)

where 119889119905and 119910

119905are the real value and the predicting value

at time 119905 respectively 119873 is the total number of the dataNoting that MAE RMSE and MAPE are measures of thedeviation between the prediction values and the actual valuesthe prediction performance is better when the values of theseevaluation criteria are smaller However if the results are notconsistent among these criteria we choose the MAPE as thebenchmark since MAPE is relatively more stable than othercriteria [16]

Figures 6(a)ndash6(d) show the forecasting results of SSETWSE KOSPI and Nikkei225 for four forecasting mod-els The empirical research shows that the proposed ST-ERNN model has the best performance the ERNN and theSTNN both outperform the common BPNN model Thestock markets showed large fluctuations which are reflectedin Figure 6 we can see that the large fluctuation periodforecasting is relatively not accurate from these four models

8 Computational Intelligence and Neuroscience

200 400 600 800 1000 1200 1400

200

400

600

800

1000

1200

1400

Real value of SSE

Pred

ictiv

e val

ue o

f SSE

Predictive value pointsLinear fit

Y = 09873X + 7582

(a) SSE

4000 5000 6000 7000 8000 9000 10000

4000

5000

6000

7000

8000

9000

10000

Real value of TWSE

Pred

ictiv

e val

ue o

f TW

SE

Predictive value pointsLinear fit

Y = 09763X + 173

(b) TWSE

300 400 500 600 700 800 900 1000 1100

400

600

800

1000

Real value of KOSPI

Pred

ictiv

e val

ue o

f KO

SPI

Predictive value pointsLinear fit

Y = 09614X + 3804

(c) KOSPI

Predictive value pointsLinear fit

1 15 2 25 3 351

15

2

25

3

35

Real value of Nikkei225

Pred

ictiv

e val

ue o

f Nik

kei2

25times104

times104

Y = 09661X + 6535

(d) Nikkei225

Figure 4 Comparisons and linear regressions of the actual data and the predictive values for SSE TWSE KOSPI and Nikkei225

Table 3 Different parameters for different models

Index data BPNN STNN ERNN ST-ERNNHidden L r Hidden L r Hidden L r Hidden L r

SSE 8 001 8 001 10 0001 9 0001TWSE 10 001 10 001 12 0001 12 0001KOSPI 8 002 8 002 10 003 10 005Nikkei225 10 005 10 005 10 001 10 001

When the stock market is relatively stable the forecastingresult is nearer to the actual value Compared with theBPNN the STNN and the ERNN models the forecastingresults are also presented in Table 4 where the MAPE(100)stands for the latest 100 days of MAPE in the testing dataTable 4 shows that the evaluation criteria by the ST-ERNN

model are almost smaller than those by other models FromTable 4 and Figure 6 we can conclude that the proposedST-ERNN model is better than the other three models InTable 4 the evaluation criteria by the STNN model andERNNmodel are almost smaller than those by BPNN for fourconsidered indices It illustrates that the effect of financial

Computational Intelligence and Neuroscience 9

1500 1600 1700 1800 1900 2000

200

400

600

800

1000

1200

1400

STNNBPNN

ERNN

ST-ERNNReal value

(a) SSE

1500 1600 1700 1800 1900 20005000

6000

7000

8000

9000

10000

11000

STNNBPNN

ERNN

ST-ERNNReal value

(b) TWSE

1500 1600 1700 1800 1900 2000

300

400

500

600

700

800

900

1000

STNNBPNN

ERNN

ST-ERNNReal value

(c) KOSPI

STNNBPNN

ERNN

ST-ERNNReal value

1500 1600 1700 1800 1900 2000

095

1

105

11

115

12

125

13

135times104

(d) Nikkei225

Figure 5 Predictive values on the test set for SSE TWSE KOSPI and Nikkei225

time series forecasting of STNN model is superior to that ofBPNN model and the dynamic neural network is effectiverobust and precise than original BPNN for these four indicesBesides most values of MAPE(100) are smaller than thoseof MAPE in all stock indexes Therefore the short-termprediction outperforms the long-term prediction Overalltraining and testing results are consistent with the measureddata which demonstrates that the ST-ERNN predictor hashigher forecast accuracy

In Figures 7(a) 7(b) 7(c) and 7(d) we considered therelative errors of the ST-ERNN forecasting results Figure 7depicts that most of the predicting relative errors for thesefour price series are betweenminus01 and 01Moreover there aresome points with large relative errors of forecasting results infour models especially on the SSE index which can attribute

to the large fluctuation that leads to the large relative errorsThe definition of relative error is given as follows

119890 (119905) =

119889119905minus 119910119905

119889119905

(17)

where 119889119905and 119910

119905denote the actual value and the predicting

value respectively at time 119905 119905 = 1 2

4 CID and MCID Analysis

The analysis and forecast of time series have long been afocus of economic research for a more clear understanding ofmechanism and characteristics of financial markets [36ndash42]In this section we employ an efficient complexity invariantdistance (CID) for time series Reference [43] shows that

10 Computational Intelligence and Neuroscience

200 400 600 800 1000 1200 1400 1600 1800 20000

200

400

600

800

1000

1200

1400

1600

BPNNSTNNERNN

ST-ERNNReal value

1560 1580 1600 1620 1640

800

1000

1200

(a) SSE

BPNNSTNNERNN

ST-ERNNReal value

200 400 600 800 1000 1200 1400 1600 1800 20003000

4000

5000

6000

7000

8000

9000

10000

11000

1600 1620 1640 1660 1680 17005500

6000

6500

7000

7500

(b) TWSE

BPNNSTNNERNN

ST-ERNNReal value

200 400 600 800 1000 1200 1400 1600 1800 2000200

400

600

800

1000

1200

1800 1850 1900 1950

400500600700

(c) KOSPI

BPNNSTNNERNN

ST-ERNNReal value

200 400 600 800 1000 1200 1400 1600 1800 2000

1

15

2

25

3

35

4

1800 1850 1900 1950 2000

095

1

105

11

115times104

times104

(d) Nikkei225

Figure 6 Comparisons of the actual data and the predictive data for SSE TWSE KOSPI and Nikkei225

complexity invariant distancemeasure can produce improve-ments in classification and clustering in the vast majority ofcases

Complexity invariance uses information about complex-ity differences between two time series as a correction factorfor existing distance measures We begin by introducingEuclidean distance and use this as a starting point to bringin the definition of CID Suppose we have two time series 119875and 119876 of length 119899 Consider

119875 = 1199011 1199012 119901

119894 119901

119899

119876 = 1199021 1199022 119902

119894 119902

119899

(18)

The ubiquitous Euclidean distance is

ED (119875 119876) = radic

119899

sum

119894=1

(119901119894minus 119902119894)2 (19)

The Euclidean distance ED(119875 119876) between two time series 119875and 119876 can be made complexity invariant by introducing acorrection factor

CID (119875 119876) = ED (119875 119876) times CF (119875 119876) (20)

where CF is a complexity correction factor defined as

CF (119875 119876) = max (CE (119875) CE (119876))min (CE (119875) CE (119876))

(21)

and CE(119879) is a complexity estimate of a time series 119879 whichcan be computed as follows

CE (119879) = radic

119899minus1

sum

119894=1

(119905119894+1

minus 119905119894)2 (22)

Computational Intelligence and Neuroscience 11

200 400 600 800 1000 1200 1400 1600 1800 2000

minus02

minus01

0

01

02

Relat

ive e

rror

of S

SE

Relative error(a) SSE

Relative error

200 400 600 800 1000 1200 1400 1600 1800 2000

minus01

minus005

0

005

01

Rela

tive e

rror

of T

WSE

(b) TWSE

Relative error

200 400 600 800 1000 1200 1400 1600 1800 2000

minus02

minus015

minus01

minus005

0

005

01

015

Relat

ive e

rror

of K

OSP

I

(c) KOSPIRelative error

200 400 600 800 1000 1200 1400 1600 1800 2000minus01

minus005

0

005

01

Relat

ive e

rror

of N

ikke

i225

(d) Nikkei225

Figure 7 ((a) (b) (c) and (d)) Relative errors of forecasting results from the ST-ERNNmodel

It is worth noticing that CF accounts for differences inthe complexities of the time series being compared CF forcestime series with very different complexities to be furtherapart In the case that all time series have the same complexityCID simply degenerates to Euclidean distanceThepredictionperformance is better when the CID distance is smallerthat is to say the curve of the predictive data is closer tothe actual data The actual values can be seen as the series119875 and the predicting results as the series 119876 Table 5 showsCID distance between the real indices values of SSE TWSEKOSPI and Nikkei225 and the corresponding predictionsfrom each network model It is clear that the CID distancebetween the real index values and the prediction by ST-ERNNmodel is the smallest one moreover the distances by theSTNN model and the ERNN model are smaller than thoseby the BPNN for all the four considered indices

In general the complexity of a real system is not con-strained to a sole scale In this part we consider a developedCID analysis that is the multiscale CID (MCID) TheMCID analysis takes into account the multiple time scales

while measuring the predicting results and it is appliedto the stock prices analysis for the actual data and thepredicting data in this work The MCID analysis shouldcomprise two steps (i) Considering one-dimensional discretetime series 119909

1 1199092 119909

119894 119909

119873 we construct consecutive

coarse-grained time series 119910(120591) corresponding to the scalefactor 120591 according to the following formula

119910(120591)

119895=

1

120591

119895120591

sum

119894=(119895minus1)120591+1

119909119894 1 le 119895 le

119873

120591

(23)

For scale one the time series 119910(1) is simply the originaltime series The length of each coarse-grained time series isequal to the original time series divided by the scale factor120591 (ii) Calculate the CID for each coarse-grained time seriesand then plot as a function of the scale factor Figure 8shows the MCID values between the forecasting results andthe real market prices from BPNN ERNN STNN and ST-ERNNmodels In Figure 8 it is obvious that the MCID from

12 Computational Intelligence and Neuroscience

0 5 10 15 20 25 30 35 40

500

1000

1500

2000

2500

3000

Scale120591

Actual value versus BPNNActual value versus ERNN

Actual value versus STNNActual value versus ST-ERNN

(a) SSE

0 5 10 15 20 25 30 35 400

05

1

15

2

25

3times104

Scale120591

Actual value versus BPNNActual value versus ERNN

Actual value versus STNNActual value versus ST-ERNN

(b) TWSE

0 5 10 15 20 25 30 35 400

500

1000

1500

2000

2500

3000

3500

Scale120591

Actual value versus BPNNActual value versus ERNN

Actual value versus STNNActual value versus ST-ERNN

(c) KOSPI

0 5 10 15 20 25 30 35 40

05

1

15

2

25

3

35

4

45

5times104

Scale120591

Actual value versus BPNNActual value versus ERNN

Actual value versus STNNActual value versus ST-ERNN

(d) Nikkei225

Figure 8 ((a) (b) (c) and (d)) MCID values between the forecasting results and the real market prices from BPNN ERNN STNN andST-ERNNmodels

ST-ERNN with the actual value is the smallest one in anyscale that is the ST-ERNN (with the stochastic time effectivefunction) for forecasting stock prices is effective

5 Conclusion

The aim of this research is to develop a predictive modelto forecast the financial time series In this study we havedeveloped a predictive model by using an Elman recurrentneural network with the stochastic time effective function toforecast the indices of SSE TWSE KOSPI and Nikkei225Through the linear regression analysis it implies that thepredictive values and the real values are not deviating too

much Then we take the proposed model compared withBPNN STNN and ERNN forecasting models Empiricalexaminations of predicting precision for the price time series(by the comparisons of predicting measures as MAE RMSEMAPE and MAPE(100)) show that the proposed neuralnetwork model has the advantage of improving the precisionof forecasting and the forecasting of this proposed modelmuch approaches to the real financial market movementsFurthermore from the curve of the relative error it canmake a conclusion that the large fluctuation leads to thelarge relative errors In addition by calculating CID andMCID distance the conclusion was illustrated more clearlyThe study and the proposed model contribute significantly tothe time series literature on forecasting

Computational Intelligence and Neuroscience 13

Table 4 Comparisons of indicesrsquo predictions for different forecasting models

Index errors BPNN STNN ERNN ST-ERNNSSE

MAE 453701 249687 37262647 127390RMSE 544564 405437 493907 370693MAPE 201994 118947 182110 41353MAPE(100) 50644 36868 43176 26809

TWSEMAE 2527225 1405971 1512830 1056377RMSE 3168197 1868309 2054236 1361674MAPE 32017 17303 18449 13468MAPE(100) 22135 11494 13349 12601

KOSPIMAE 743073 563309 479296 182421RMSE 771528 582944 508174 210479MAPE 166084 124461 109608 42257MAPE(100) 74379 59664 49176 21788

Nikkei225MAE 2038034 1381857 1662480 685458RMSE 2385933 1697061 2073395 890378MAPE 18556 12580 15398 06010MAPE(100) 07674 05191 04962 04261

Table 5 CID distances for four network models

Index BPNN STNN ERNN ST-ERNNSSE 30521 17647 23202 16599TWSE 27805 98763 10830 61581KOSPI 33504 23128 25510 10060Nikkei225 44541 23726 32895 25421

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgment

The authors were supported in part by National NaturalScience Foundation of China under Grant nos 71271026 and10971010

References

[1] Y Kajitani K W Hipel and A I McLeod ldquoForecastingnonlinear time series with feed-forward neural networks a casestudy of Canadian lynx datardquo Journal of Forecasting vol 24 no2 pp 105ndash117 2005

[2] T Takahama S Sakai A Hara and N Iwane ldquoPredicting stockprice using neural networks optimized by differential evolutionwith degenerationrdquo International Journal of Innovative Comput-ing Information and Control vol 5 no 12 pp 5021ndash5031 2009

[3] K Huarng and T H-K Yu ldquoThe application of neural networksto forecast fuzzy time seriesrdquo Physica A vol 363 no 2 pp 481ndash491 2006

[4] F Wang and J Wang ldquoStatistical analysis and forecasting ofreturn interval for SSE and model by lattice percolation systemand neural networkrdquoComputers and Industrial Engineering vol62 no 1 pp 198ndash205 2012

[5] H-J Kim and K-S Shin ldquoA hybrid approach based on neuralnetworks and genetic algorithms for detecting temporal pat-terns in stock marketsrdquo Applied Soft Computing vol 7 no 2pp 569ndash576 2007

[6] M Ghiassi H Saidane and D K Zimbra ldquoA dynamic artificialneural network model for forecasting time series eventsrdquoInternational Journal of Forecasting vol 21 no 2 pp 341ndash3622005

[7] M R Hassan B Nath andM Kirley ldquoA fusionmodel of HMMANNandGA for stockmarket forecastingrdquo Expert Systems withApplications vol 33 no 1 pp 171ndash180 2007

[8] D Devaraj B Yegnanarayana and K Ramar ldquoRadial basisfunction networks for fast contingency rankingrdquo InternationalJournal of Electrical Power and Energy Systems vol 24 no 5 pp387ndash395 2002

[9] B A Garroa and R A Vazquez ldquoDesigning artificial neuralnetworks using particle swarm optimization algorithmsrdquo Com-putational Intelligence and Neuroscience vol 2015 Article ID369298 20 pages 2015

[10] Q Gan ldquoExponential synchronization of stochastic Cohen-Grossberg neural networks withmixed time-varying delays andreaction-diffusion via periodically intermittent controlrdquoNeuralNetworks vol 31 pp 12ndash21 2012

[11] D Xiao and J Wang ldquoModeling stock price dynamics bycontinuum percolation system and relevant complex systemsanalysisrdquo Physica A Statistical Mechanics and its Applicationsvol 391 no 20 pp 4827ndash4838 2012

[12] D Enke and N Mehdiyev ldquoStock market prediction usinga combination of stepwise regression analysis differentialevolution-based fuzzy clustering and a fuzzy inference Neural

14 Computational Intelligence and Neuroscience

Networkrdquo Intelligent Automation and Soft Computing vol 19no 4 pp 636ndash648 2013

[13] G Sermpinisa C Stasinakisa and C Dunisb ldquoStochastic andgenetic neural network combinations in trading and hybridtime-varying leverage effectsrdquo Journal of International FinancialMarkets Institutions amp Money vol 30 pp 21ndash54 2014

[14] R Ebrahimpour H Nikoo S Masoudnia M R Yousefi andM S Ghaemi ldquoMixture of mlp-experts for trend forecastingof time series a case study of the tehran stock exchangerdquoInternational Journal of Forecasting vol 27 no 3 pp 804ndash8162011

[15] A Bahrammirzaee ldquoA comparative survey of artificial intelli-gence applications in finance artificial neural networks expertsystem and hybrid intelligent systemsrdquo Neural Computing andApplications vol 19 no 8 pp 1165ndash1195 2010

[16] A Graves M Liwicki S Fernandez R Bertolami H Bunkeand J Schmidhuber ldquoA novel connectionist system for uncon-strained handwriting recognitionrdquo IEEE Transactions on Pat-tern Analysis and Machine Intelligence vol 31 no 5 pp 855ndash868 2009

[17] M Ardalani-Farsa and S Zolfaghari ldquoChaotic time seriesprediction with residual analysis method using hybrid Elman-NARX neural networksrdquoNeurocomputing vol 73 no 13ndash15 pp2540ndash2553 2010

[18] M Paliwal and U A Kumar ldquoNeural networks and statisticaltechniques a review of applicationsrdquo Expert Systems withApplications vol 36 no 1 pp 2ndash17 2009

[19] Z Liao and J Wang ldquoForecasting model of global stock indexby stochastic time effective neural networkrdquoExpert SystemswithApplications vol 37 no 1 pp 834ndash841 2010

[20] L Pan and J Cao ldquoRobust stability for uncertain stochasticneural network with delay and impulsesrdquo Neurocomputing vol94 pp 102ndash110 2012

[21] H F Liu and J Wang ldquoIntegrating independent componentanalysis and principal component analysis with neural networkto predict Chinese stock marketrdquo Mathematical Problems inEngineering vol 2011 Article ID 382659 15 pages 2011

[22] Z Q Guo H Q Wang and Q Liu ldquoFinancial time seriesforecasting using LPP and SVM optimized by PSOrdquo SoftComputing vol 17 no 5 pp 805ndash818 2013

[23] H L Niu and J Wang ldquoFinancial time series predictionby a random data-time effective RBF neural networkrdquo SoftComputing vol 18 no 3 pp 497ndash508 2014

[24] J L Elman ldquoFinding structure in timerdquo Cognitive Science vol14 no 2 pp 179ndash211 1990

[25] MCacciola GMegali D Pellicano and F CMorabito ldquoElmanneural networks for characterizing voids in welded strips astudyrdquo Neural Computing and Applications vol 21 no 5 pp869ndash875 2012

[26] R Chandra and M Zhang ldquoCooperative coevolution of Elmanrecurrent neural networks for chaotic time series predictionrdquoNeurocomputing vol 86 pp 116ndash123 2012

[27] D K Chaturvedi P S Satsangi and P K Kalra ldquoEffect of differ-ent mappings and normalization of neural network modelsrdquo inProceedings of the National Power Systems Conference vol 9 pp377ndash386 Indian Institute of Technology Kanpur India 1996

[28] S Makridakis ldquoAccuracy measures theoretical and practicalconcernsrdquo International Journal of Forecasting vol 9 no 4 pp527ndash529 1993

[29] L Q Han Design and Application of Artificial Neural NetworkChemical Industry Press 2002

[30] M Zounemat-Kermani ldquoPrincipal component analysis (PCA)for estimating chlorophyll concentration using forward andgeneralized regression neural networksrdquoApplied Artificial Intel-ligence vol 28 no 1 pp 16ndash29 2014

[31] D Olson and C Mossman ldquoNeural network forecasts ofCanadian stock returns using accounting ratiosrdquo InternationalJournal of Forecasting vol 19 no 3 pp 453ndash465 2003

[32] H Demuth and M Beale Network Toolbox For Use withMATLAB The Math Works Natick Mass USA 5th edition1998

[33] A P Plumb R C Rowe P York and M Brown ldquoOptimisationof the predictive ability of artificial neural network (ANN)models a comparison of three ANN programs and four classesof training algorithmrdquo European Journal of PharmaceuticalSciences vol 25 no 4-5 pp 395ndash405 2005

[34] D O Faruk ldquoA hybrid neural network and ARIMA model forwater quality time series predictionrdquo Engineering Applicationsof Artificial Intelligence vol 23 no 4 pp 586ndash594 2010

[35] M Tripathy ldquoPower transformer differential protection usingneural network principal component analysis and radial basisfunction neural networkrdquo Simulation Modelling Practice andTheory vol 18 no 5 pp 600ndash611 2010

[36] C E Martin and J A Reggia ldquoFusing swarm intelligenceand self-assembly for optimizing echo state networksrdquo Com-putational Intelligence and Neuroscience vol 2015 Article ID642429 15 pages 2015

[37] R S Tsay Analysis of Financial Time Series JohnWiley amp SonsHoboken NJ USA 2005

[38] P-C Chang D-D Wang and C-L Zhou ldquoA novel modelby evolving partially connected neural network for stock pricetrend forecastingrdquo Expert Systems with Applications vol 39 no1 pp 611ndash620 2012

[39] P Roy G S Mahapatra P Rani S K Pandey and K NDey ldquoRobust feed forward and recurrent neural network baseddynamic weighted combination models for software reliabilitypredictionrdquo Applied Soft Computing vol 22 pp 629ndash637 2014

[40] X Gabaix P Gopikrishnan V Plerou andH E Stanley ldquoA the-ory of power-law distributions in financial market fluctuationsrdquoNature vol 423 no 6937 pp 267ndash270 2003

[41] R N Mantegna and H E Stanley An Introduction to Econo-physics Correlations and Complexity in Finance CambridgeUniversity Press Cambridge UK 2000

[42] T H Roh ldquoForecasting the volatility of stock price indexrdquoExpert Systems with Applications vol 33 no 4 pp 916ndash9222007

[43] G E Batista E J KeoghOM Tataw andVM de Souza ldquoCIDan efficient complexity-invariant distance for time seriesrdquo DataMining and Knowledge Discovery vol 28 no 3 pp 634ndash6692014

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 8: Research Article Financial Time Series Prediction …downloads.hindawi.com/journals/cin/2016/4742515.pdfResearch Article Financial Time Series Prediction Using Elman Recurrent Random

8 Computational Intelligence and Neuroscience

200 400 600 800 1000 1200 1400

200

400

600

800

1000

1200

1400

Real value of SSE

Pred

ictiv

e val

ue o

f SSE

Predictive value pointsLinear fit

Y = 09873X + 7582

(a) SSE

4000 5000 6000 7000 8000 9000 10000

4000

5000

6000

7000

8000

9000

10000

Real value of TWSE

Pred

ictiv

e val

ue o

f TW

SE

Predictive value pointsLinear fit

Y = 09763X + 173

(b) TWSE

300 400 500 600 700 800 900 1000 1100

400

600

800

1000

Real value of KOSPI

Pred

ictiv

e val

ue o

f KO

SPI

Predictive value pointsLinear fit

Y = 09614X + 3804

(c) KOSPI

Predictive value pointsLinear fit

1 15 2 25 3 351

15

2

25

3

35

Real value of Nikkei225

Pred

ictiv

e val

ue o

f Nik

kei2

25times104

times104

Y = 09661X + 6535

(d) Nikkei225

Figure 4 Comparisons and linear regressions of the actual data and the predictive values for SSE TWSE KOSPI and Nikkei225

Table 3 Different parameters for different models

Index data BPNN STNN ERNN ST-ERNNHidden L r Hidden L r Hidden L r Hidden L r

SSE 8 001 8 001 10 0001 9 0001TWSE 10 001 10 001 12 0001 12 0001KOSPI 8 002 8 002 10 003 10 005Nikkei225 10 005 10 005 10 001 10 001

When the stock market is relatively stable the forecastingresult is nearer to the actual value Compared with theBPNN the STNN and the ERNN models the forecastingresults are also presented in Table 4 where the MAPE(100)stands for the latest 100 days of MAPE in the testing dataTable 4 shows that the evaluation criteria by the ST-ERNN

model are almost smaller than those by other models FromTable 4 and Figure 6 we can conclude that the proposedST-ERNN model is better than the other three models InTable 4 the evaluation criteria by the STNN model andERNNmodel are almost smaller than those by BPNN for fourconsidered indices It illustrates that the effect of financial

Computational Intelligence and Neuroscience 9

1500 1600 1700 1800 1900 2000

200

400

600

800

1000

1200

1400

STNNBPNN

ERNN

ST-ERNNReal value

(a) SSE

1500 1600 1700 1800 1900 20005000

6000

7000

8000

9000

10000

11000

STNNBPNN

ERNN

ST-ERNNReal value

(b) TWSE

1500 1600 1700 1800 1900 2000

300

400

500

600

700

800

900

1000

STNNBPNN

ERNN

ST-ERNNReal value

(c) KOSPI

STNNBPNN

ERNN

ST-ERNNReal value

1500 1600 1700 1800 1900 2000

095

1

105

11

115

12

125

13

135times104

(d) Nikkei225

Figure 5 Predictive values on the test set for SSE TWSE KOSPI and Nikkei225

time series forecasting of STNN model is superior to that ofBPNN model and the dynamic neural network is effectiverobust and precise than original BPNN for these four indicesBesides most values of MAPE(100) are smaller than thoseof MAPE in all stock indexes Therefore the short-termprediction outperforms the long-term prediction Overalltraining and testing results are consistent with the measureddata which demonstrates that the ST-ERNN predictor hashigher forecast accuracy

In Figures 7(a) 7(b) 7(c) and 7(d) we considered therelative errors of the ST-ERNN forecasting results Figure 7depicts that most of the predicting relative errors for thesefour price series are betweenminus01 and 01Moreover there aresome points with large relative errors of forecasting results infour models especially on the SSE index which can attribute

to the large fluctuation that leads to the large relative errorsThe definition of relative error is given as follows

119890 (119905) =

119889119905minus 119910119905

119889119905

(17)

where 119889119905and 119910

119905denote the actual value and the predicting

value respectively at time 119905 119905 = 1 2

4 CID and MCID Analysis

The analysis and forecast of time series have long been afocus of economic research for a more clear understanding ofmechanism and characteristics of financial markets [36ndash42]In this section we employ an efficient complexity invariantdistance (CID) for time series Reference [43] shows that

10 Computational Intelligence and Neuroscience

200 400 600 800 1000 1200 1400 1600 1800 20000

200

400

600

800

1000

1200

1400

1600

BPNNSTNNERNN

ST-ERNNReal value

1560 1580 1600 1620 1640

800

1000

1200

(a) SSE

BPNNSTNNERNN

ST-ERNNReal value

200 400 600 800 1000 1200 1400 1600 1800 20003000

4000

5000

6000

7000

8000

9000

10000

11000

1600 1620 1640 1660 1680 17005500

6000

6500

7000

7500

(b) TWSE

BPNNSTNNERNN

ST-ERNNReal value

200 400 600 800 1000 1200 1400 1600 1800 2000200

400

600

800

1000

1200

1800 1850 1900 1950

400500600700

(c) KOSPI

BPNNSTNNERNN

ST-ERNNReal value

200 400 600 800 1000 1200 1400 1600 1800 2000

1

15

2

25

3

35

4

1800 1850 1900 1950 2000

095

1

105

11

115times104

times104

(d) Nikkei225

Figure 6 Comparisons of the actual data and the predictive data for SSE TWSE KOSPI and Nikkei225

complexity invariant distancemeasure can produce improve-ments in classification and clustering in the vast majority ofcases

Complexity invariance uses information about complex-ity differences between two time series as a correction factorfor existing distance measures We begin by introducingEuclidean distance and use this as a starting point to bringin the definition of CID Suppose we have two time series 119875and 119876 of length 119899 Consider

119875 = 1199011 1199012 119901

119894 119901

119899

119876 = 1199021 1199022 119902

119894 119902

119899

(18)

The ubiquitous Euclidean distance is

ED (119875 119876) = radic

119899

sum

119894=1

(119901119894minus 119902119894)2 (19)

The Euclidean distance ED(119875 119876) between two time series 119875and 119876 can be made complexity invariant by introducing acorrection factor

CID (119875 119876) = ED (119875 119876) times CF (119875 119876) (20)

where CF is a complexity correction factor defined as

CF (119875 119876) = max (CE (119875) CE (119876))min (CE (119875) CE (119876))

(21)

and CE(119879) is a complexity estimate of a time series 119879 whichcan be computed as follows

CE (119879) = radic

119899minus1

sum

119894=1

(119905119894+1

minus 119905119894)2 (22)

Computational Intelligence and Neuroscience 11

200 400 600 800 1000 1200 1400 1600 1800 2000

minus02

minus01

0

01

02

Relat

ive e

rror

of S

SE

Relative error(a) SSE

Relative error

200 400 600 800 1000 1200 1400 1600 1800 2000

minus01

minus005

0

005

01

Rela

tive e

rror

of T

WSE

(b) TWSE

Relative error

200 400 600 800 1000 1200 1400 1600 1800 2000

minus02

minus015

minus01

minus005

0

005

01

015

Relat

ive e

rror

of K

OSP

I

(c) KOSPIRelative error

200 400 600 800 1000 1200 1400 1600 1800 2000minus01

minus005

0

005

01

Relat

ive e

rror

of N

ikke

i225

(d) Nikkei225

Figure 7 ((a) (b) (c) and (d)) Relative errors of forecasting results from the ST-ERNNmodel

It is worth noticing that CF accounts for differences inthe complexities of the time series being compared CF forcestime series with very different complexities to be furtherapart In the case that all time series have the same complexityCID simply degenerates to Euclidean distanceThepredictionperformance is better when the CID distance is smallerthat is to say the curve of the predictive data is closer tothe actual data The actual values can be seen as the series119875 and the predicting results as the series 119876 Table 5 showsCID distance between the real indices values of SSE TWSEKOSPI and Nikkei225 and the corresponding predictionsfrom each network model It is clear that the CID distancebetween the real index values and the prediction by ST-ERNNmodel is the smallest one moreover the distances by theSTNN model and the ERNN model are smaller than thoseby the BPNN for all the four considered indices

In general, the complexity of a real system is not constrained to a sole scale. In this part, we consider a developed CID analysis, that is, the multiscale CID (MCID). The MCID analysis takes multiple time scales into account while measuring the predicting results, and in this work it is applied to the stock price analysis of the actual data and the predicting data. The MCID analysis comprises two steps. (i) Considering a one-dimensional discrete time series $x_1, x_2, \ldots, x_i, \ldots, x_N$, we construct consecutive coarse-grained time series $y^{(\tau)}$ corresponding to the scale factor $\tau$ according to the following formula:

$$y_j^{(\tau)} = \frac{1}{\tau} \sum_{i=(j-1)\tau + 1}^{j\tau} x_i, \quad 1 \le j \le \frac{N}{\tau}. \quad (23)$$

For scale one, the time series $y^{(1)}$ is simply the original time series; the length of each coarse-grained time series equals the length of the original series divided by the scale factor $\tau$. (ii) Calculate the CID for each coarse-grained time series and then plot it as a function of the scale factor.
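As an illustration of this two-step procedure, the sketch below (again our own, not the authors' code) coarse-grains a series according to equation (23) and evaluates the CID of equation (20) at each scale factor, reusing the cid function sketched after equation (22):

```python
import numpy as np

def coarse_grain(x, tau):
    """Equation (23): average consecutive, non-overlapping windows of length tau."""
    x = np.asarray(x, dtype=float)
    n_windows = len(x) // tau          # coarse-grained length is about N/tau
    return x[: n_windows * tau].reshape(n_windows, tau).mean(axis=1)

def mcid(actual, predicted, max_scale=40):
    """Step (ii): CID between the coarse-grained series at each scale factor.

    Relies on the cid() function sketched after equation (22).
    """
    return [cid(coarse_grain(actual, tau), coarse_grain(predicted, tau))
            for tau in range(1, max_scale + 1)]
```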

Figure 8: ((a), (b), (c), and (d)) MCID values between the forecasting results and the real market prices from the BPNN, ERNN, STNN, and ST-ERNN models, plotted against the scale factor $\tau$ for SSE, TWSE, KOSPI, and Nikkei225.

Figure 8 shows the MCID values between the forecasting results and the real market prices from the BPNN, ERNN, STNN, and ST-ERNN models. It is obvious in Figure 8 that the MCID between the ST-ERNN forecasts and the actual values is the smallest at every scale; that is, the ST-ERNN model (with the stochastic time effective function) is effective for forecasting stock prices.

5. Conclusion

The aim of this research is to develop a predictive model to forecast financial time series. In this study, we have developed a predictive model by using an Elman recurrent neural network with the stochastic time effective function to forecast the indices of SSE, TWSE, KOSPI, and Nikkei225. The linear regression analysis implies that the predictive values do not deviate too much from the real values. Then we compared the proposed model with the BPNN, STNN, and ERNN forecasting models. Empirical examinations of predicting precision for the price time series (by comparisons of the predicting measures MAE, RMSE, MAPE, and MAPE(100)) show that the proposed neural network model has the advantage of improving the precision of forecasting, and the forecasts of this proposed model closely approach the real financial market movements. Furthermore, from the curves of the relative errors, it can be concluded that large fluctuations lead to large relative errors. In addition, calculating the CID and MCID distances illustrates this conclusion more clearly. The study and the proposed model contribute significantly to the time series literature on forecasting.


Table 4: Comparisons of indices' predictions for different forecasting models.

Index      Errors      BPNN       STNN       ERNN       ST-ERNN
SSE        MAE         453701     249687     37262647   127390
           RMSE        544564     405437     493907     370693
           MAPE        201994     118947     182110     41353
           MAPE(100)   50644      36868      43176      26809
TWSE       MAE         2527225    1405971    1512830    1056377
           RMSE        3168197    1868309    2054236    1361674
           MAPE        32017      17303      18449      13468
           MAPE(100)   22135      11494      13349      12601
KOSPI      MAE         743073     563309     479296     182421
           RMSE        771528     582944     508174     210479
           MAPE        166084     124461     109608     42257
           MAPE(100)   74379      59664      49176      21788
Nikkei225  MAE         2038034    1381857    1662480    685458
           RMSE        2385933    1697061    2073395    890378
           MAPE        18556      12580      15398      06010
           MAPE(100)   07674      05191      04962      04261
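For reference, the four accuracy measures compared in Table 4 admit short implementations. This sketch assumes the standard definitions (with MAPE expressed in percent); the mape_100 reading of MAPE(100), as the measure restricted to the first 100 predicted points, is our assumption based on the paper's short-term interpretation:

```python
import numpy as np

def mae(d, y):
    """Mean absolute error between actual values d and predictions y."""
    d, y = np.asarray(d, dtype=float), np.asarray(y, dtype=float)
    return np.mean(np.abs(d - y))

def rmse(d, y):
    """Root mean square error."""
    d, y = np.asarray(d, dtype=float), np.asarray(y, dtype=float)
    return np.sqrt(np.mean((d - y) ** 2))

def mape(d, y):
    """Mean absolute percentage error, in percent."""
    d, y = np.asarray(d, dtype=float), np.asarray(y, dtype=float)
    return 100.0 * np.mean(np.abs((d - y) / d))

def mape_100(d, y):
    """MAPE over the first 100 predicted points (our assumed reading of MAPE(100))."""
    return mape(d[:100], y[:100])
```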

Table 5: CID distances for four network models.

Index      BPNN     STNN     ERNN     ST-ERNN
SSE        30521    17647    23202    16599
TWSE       27805    98763    10830    61581
KOSPI      33504    23128    25510    10060
Nikkei225  44541    23726    32895    25421

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors were supported in part by the National Natural Science Foundation of China under Grant nos. 71271026 and 10971010.

References

[1] Y. Kajitani, K. W. Hipel, and A. I. McLeod, "Forecasting nonlinear time series with feed-forward neural networks: a case study of Canadian lynx data," Journal of Forecasting, vol. 24, no. 2, pp. 105–117, 2005.
[2] T. Takahama, S. Sakai, A. Hara, and N. Iwane, "Predicting stock price using neural networks optimized by differential evolution with degeneration," International Journal of Innovative Computing, Information and Control, vol. 5, no. 12, pp. 5021–5031, 2009.
[3] K. Huarng and T. H.-K. Yu, "The application of neural networks to forecast fuzzy time series," Physica A, vol. 363, no. 2, pp. 481–491, 2006.
[4] F. Wang and J. Wang, "Statistical analysis and forecasting of return interval for SSE and model by lattice percolation system and neural network," Computers and Industrial Engineering, vol. 62, no. 1, pp. 198–205, 2012.
[5] H.-J. Kim and K.-S. Shin, "A hybrid approach based on neural networks and genetic algorithms for detecting temporal patterns in stock markets," Applied Soft Computing, vol. 7, no. 2, pp. 569–576, 2007.
[6] M. Ghiassi, H. Saidane, and D. K. Zimbra, "A dynamic artificial neural network model for forecasting time series events," International Journal of Forecasting, vol. 21, no. 2, pp. 341–362, 2005.
[7] M. R. Hassan, B. Nath, and M. Kirley, "A fusion model of HMM, ANN and GA for stock market forecasting," Expert Systems with Applications, vol. 33, no. 1, pp. 171–180, 2007.
[8] D. Devaraj, B. Yegnanarayana, and K. Ramar, "Radial basis function networks for fast contingency ranking," International Journal of Electrical Power and Energy Systems, vol. 24, no. 5, pp. 387–395, 2002.
[9] B. A. Garroa and R. A. Vazquez, "Designing artificial neural networks using particle swarm optimization algorithms," Computational Intelligence and Neuroscience, vol. 2015, Article ID 369298, 20 pages, 2015.
[10] Q. Gan, "Exponential synchronization of stochastic Cohen-Grossberg neural networks with mixed time-varying delays and reaction-diffusion via periodically intermittent control," Neural Networks, vol. 31, pp. 12–21, 2012.
[11] D. Xiao and J. Wang, "Modeling stock price dynamics by continuum percolation system and relevant complex systems analysis," Physica A: Statistical Mechanics and its Applications, vol. 391, no. 20, pp. 4827–4838, 2012.
[12] D. Enke and N. Mehdiyev, "Stock market prediction using a combination of stepwise regression analysis, differential evolution-based fuzzy clustering, and a fuzzy inference Neural Network," Intelligent Automation and Soft Computing, vol. 19, no. 4, pp. 636–648, 2013.
[13] G. Sermpinisa, C. Stasinakisa, and C. Dunisb, "Stochastic and genetic neural network combinations in trading and hybrid time-varying leverage effects," Journal of International Financial Markets, Institutions & Money, vol. 30, pp. 21–54, 2014.
[14] R. Ebrahimpour, H. Nikoo, S. Masoudnia, M. R. Yousefi, and M. S. Ghaemi, "Mixture of MLP-experts for trend forecasting of time series: a case study of the Tehran stock exchange," International Journal of Forecasting, vol. 27, no. 3, pp. 804–816, 2011.
[15] A. Bahrammirzaee, "A comparative survey of artificial intelligence applications in finance: artificial neural networks, expert system and hybrid intelligent systems," Neural Computing and Applications, vol. 19, no. 8, pp. 1165–1195, 2010.
[16] A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, and J. Schmidhuber, "A novel connectionist system for unconstrained handwriting recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, pp. 855–868, 2009.
[17] M. Ardalani-Farsa and S. Zolfaghari, "Chaotic time series prediction with residual analysis method using hybrid Elman-NARX neural networks," Neurocomputing, vol. 73, no. 13–15, pp. 2540–2553, 2010.
[18] M. Paliwal and U. A. Kumar, "Neural networks and statistical techniques: a review of applications," Expert Systems with Applications, vol. 36, no. 1, pp. 2–17, 2009.
[19] Z. Liao and J. Wang, "Forecasting model of global stock index by stochastic time effective neural network," Expert Systems with Applications, vol. 37, no. 1, pp. 834–841, 2010.
[20] L. Pan and J. Cao, "Robust stability for uncertain stochastic neural network with delay and impulses," Neurocomputing, vol. 94, pp. 102–110, 2012.
[21] H. F. Liu and J. Wang, "Integrating independent component analysis and principal component analysis with neural network to predict Chinese stock market," Mathematical Problems in Engineering, vol. 2011, Article ID 382659, 15 pages, 2011.
[22] Z. Q. Guo, H. Q. Wang, and Q. Liu, "Financial time series forecasting using LPP and SVM optimized by PSO," Soft Computing, vol. 17, no. 5, pp. 805–818, 2013.
[23] H. L. Niu and J. Wang, "Financial time series prediction by a random data-time effective RBF neural network," Soft Computing, vol. 18, no. 3, pp. 497–508, 2014.
[24] J. L. Elman, "Finding structure in time," Cognitive Science, vol. 14, no. 2, pp. 179–211, 1990.
[25] M. Cacciola, G. Megali, D. Pellicano, and F. C. Morabito, "Elman neural networks for characterizing voids in welded strips: a study," Neural Computing and Applications, vol. 21, no. 5, pp. 869–875, 2012.
[26] R. Chandra and M. Zhang, "Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction," Neurocomputing, vol. 86, pp. 116–123, 2012.
[27] D. K. Chaturvedi, P. S. Satsangi, and P. K. Kalra, "Effect of different mappings and normalization of neural network models," in Proceedings of the National Power Systems Conference, vol. 9, pp. 377–386, Indian Institute of Technology, Kanpur, India, 1996.
[28] S. Makridakis, "Accuracy measures: theoretical and practical concerns," International Journal of Forecasting, vol. 9, no. 4, pp. 527–529, 1993.
[29] L. Q. Han, Design and Application of Artificial Neural Network, Chemical Industry Press, 2002.
[30] M. Zounemat-Kermani, "Principal component analysis (PCA) for estimating chlorophyll concentration using forward and generalized regression neural networks," Applied Artificial Intelligence, vol. 28, no. 1, pp. 16–29, 2014.
[31] D. Olson and C. Mossman, "Neural network forecasts of Canadian stock returns using accounting ratios," International Journal of Forecasting, vol. 19, no. 3, pp. 453–465, 2003.
[32] H. Demuth and M. Beale, Neural Network Toolbox: For Use with MATLAB, The Math Works, Natick, Mass, USA, 5th edition, 1998.
[33] A. P. Plumb, R. C. Rowe, P. York, and M. Brown, "Optimisation of the predictive ability of artificial neural network (ANN) models: a comparison of three ANN programs and four classes of training algorithm," European Journal of Pharmaceutical Sciences, vol. 25, no. 4-5, pp. 395–405, 2005.
[34] D. O. Faruk, "A hybrid neural network and ARIMA model for water quality time series prediction," Engineering Applications of Artificial Intelligence, vol. 23, no. 4, pp. 586–594, 2010.
[35] M. Tripathy, "Power transformer differential protection using neural network principal component analysis and radial basis function neural network," Simulation Modelling Practice and Theory, vol. 18, no. 5, pp. 600–611, 2010.
[36] C. E. Martin and J. A. Reggia, "Fusing swarm intelligence and self-assembly for optimizing echo state networks," Computational Intelligence and Neuroscience, vol. 2015, Article ID 642429, 15 pages, 2015.
[37] R. S. Tsay, Analysis of Financial Time Series, John Wiley & Sons, Hoboken, NJ, USA, 2005.
[38] P.-C. Chang, D.-D. Wang, and C.-L. Zhou, "A novel model by evolving partially connected neural network for stock price trend forecasting," Expert Systems with Applications, vol. 39, no. 1, pp. 611–620, 2012.
[39] P. Roy, G. S. Mahapatra, P. Rani, S. K. Pandey, and K. N. Dey, "Robust feed forward and recurrent neural network based dynamic weighted combination models for software reliability prediction," Applied Soft Computing, vol. 22, pp. 629–637, 2014.
[40] X. Gabaix, P. Gopikrishnan, V. Plerou, and H. E. Stanley, "A theory of power-law distributions in financial market fluctuations," Nature, vol. 423, no. 6937, pp. 267–270, 2003.
[41] R. N. Mantegna and H. E. Stanley, An Introduction to Econophysics: Correlations and Complexity in Finance, Cambridge University Press, Cambridge, UK, 2000.
[42] T. H. Roh, "Forecasting the volatility of stock price index," Expert Systems with Applications, vol. 33, no. 4, pp. 916–922, 2007.
[43] G. E. Batista, E. J. Keogh, O. M. Tataw, and V. M. de Souza, "CID: an efficient complexity-invariant distance for time series," Data Mining and Knowledge Discovery, vol. 28, no. 3, pp. 634–669, 2014.



10 Computational Intelligence and Neuroscience

200 400 600 800 1000 1200 1400 1600 1800 20000

200

400

600

800

1000

1200

1400

1600

BPNNSTNNERNN

ST-ERNNReal value

1560 1580 1600 1620 1640

800

1000

1200

(a) SSE

BPNNSTNNERNN

ST-ERNNReal value

200 400 600 800 1000 1200 1400 1600 1800 20003000

4000

5000

6000

7000

8000

9000

10000

11000

1600 1620 1640 1660 1680 17005500

6000

6500

7000

7500

(b) TWSE

BPNNSTNNERNN

ST-ERNNReal value

200 400 600 800 1000 1200 1400 1600 1800 2000200

400

600

800

1000

1200

1800 1850 1900 1950

400500600700

(c) KOSPI

BPNNSTNNERNN

ST-ERNNReal value

200 400 600 800 1000 1200 1400 1600 1800 2000

1

15

2

25

3

35

4

1800 1850 1900 1950 2000

095

1

105

11

115times104

times104

(d) Nikkei225

Figure 6 Comparisons of the actual data and the predictive data for SSE TWSE KOSPI and Nikkei225

complexity invariant distancemeasure can produce improve-ments in classification and clustering in the vast majority ofcases

Complexity invariance uses information about complex-ity differences between two time series as a correction factorfor existing distance measures We begin by introducingEuclidean distance and use this as a starting point to bringin the definition of CID Suppose we have two time series 119875and 119876 of length 119899 Consider

119875 = 1199011 1199012 119901

119894 119901

119899

119876 = 1199021 1199022 119902

119894 119902

119899

(18)

The ubiquitous Euclidean distance is

ED (119875 119876) = radic

119899

sum

119894=1

(119901119894minus 119902119894)2 (19)

The Euclidean distance ED(119875 119876) between two time series 119875and 119876 can be made complexity invariant by introducing acorrection factor

CID (119875 119876) = ED (119875 119876) times CF (119875 119876) (20)

where CF is a complexity correction factor defined as

CF (119875 119876) = max (CE (119875) CE (119876))min (CE (119875) CE (119876))

(21)

and CE(119879) is a complexity estimate of a time series 119879 whichcan be computed as follows

CE (119879) = radic

119899minus1

sum

119894=1

(119905119894+1

minus 119905119894)2 (22)

Computational Intelligence and Neuroscience 11

200 400 600 800 1000 1200 1400 1600 1800 2000

minus02

minus01

0

01

02

Relat

ive e

rror

of S

SE

Relative error(a) SSE

Relative error

200 400 600 800 1000 1200 1400 1600 1800 2000

minus01

minus005

0

005

01

Rela

tive e

rror

of T

WSE

(b) TWSE

Relative error

200 400 600 800 1000 1200 1400 1600 1800 2000

minus02

minus015

minus01

minus005

0

005

01

015

Relat

ive e

rror

of K

OSP

I

(c) KOSPIRelative error

200 400 600 800 1000 1200 1400 1600 1800 2000minus01

minus005

0

005

01

Relat

ive e

rror

of N

ikke

i225

(d) Nikkei225

Figure 7 ((a) (b) (c) and (d)) Relative errors of forecasting results from the ST-ERNNmodel

It is worth noticing that CF accounts for differences inthe complexities of the time series being compared CF forcestime series with very different complexities to be furtherapart In the case that all time series have the same complexityCID simply degenerates to Euclidean distanceThepredictionperformance is better when the CID distance is smallerthat is to say the curve of the predictive data is closer tothe actual data The actual values can be seen as the series119875 and the predicting results as the series 119876 Table 5 showsCID distance between the real indices values of SSE TWSEKOSPI and Nikkei225 and the corresponding predictionsfrom each network model It is clear that the CID distancebetween the real index values and the prediction by ST-ERNNmodel is the smallest one moreover the distances by theSTNN model and the ERNN model are smaller than thoseby the BPNN for all the four considered indices

In general the complexity of a real system is not con-strained to a sole scale In this part we consider a developedCID analysis that is the multiscale CID (MCID) TheMCID analysis takes into account the multiple time scales

while measuring the predicting results and it is appliedto the stock prices analysis for the actual data and thepredicting data in this work The MCID analysis shouldcomprise two steps (i) Considering one-dimensional discretetime series 119909

1 1199092 119909

119894 119909

119873 we construct consecutive

coarse-grained time series 119910(120591) corresponding to the scalefactor 120591 according to the following formula

119910(120591)

119895=

1

120591

119895120591

sum

119894=(119895minus1)120591+1

119909119894 1 le 119895 le

119873

120591

(23)

For scale one the time series 119910(1) is simply the originaltime series The length of each coarse-grained time series isequal to the original time series divided by the scale factor120591 (ii) Calculate the CID for each coarse-grained time seriesand then plot as a function of the scale factor Figure 8shows the MCID values between the forecasting results andthe real market prices from BPNN ERNN STNN and ST-ERNNmodels In Figure 8 it is obvious that the MCID from

12 Computational Intelligence and Neuroscience

0 5 10 15 20 25 30 35 40

500

1000

1500

2000

2500

3000

Scale120591

Actual value versus BPNNActual value versus ERNN

Actual value versus STNNActual value versus ST-ERNN

(a) SSE

0 5 10 15 20 25 30 35 400

05

1

15

2

25

3times104

Scale120591

Actual value versus BPNNActual value versus ERNN

Actual value versus STNNActual value versus ST-ERNN

(b) TWSE

0 5 10 15 20 25 30 35 400

500

1000

1500

2000

2500

3000

3500

Scale120591

Actual value versus BPNNActual value versus ERNN

Actual value versus STNNActual value versus ST-ERNN

(c) KOSPI

0 5 10 15 20 25 30 35 40

05

1

15

2

25

3

35

4

45

5times104

Scale120591

Actual value versus BPNNActual value versus ERNN

Actual value versus STNNActual value versus ST-ERNN

(d) Nikkei225

Figure 8 ((a) (b) (c) and (d)) MCID values between the forecasting results and the real market prices from BPNN ERNN STNN andST-ERNNmodels

ST-ERNN with the actual value is the smallest one in anyscale that is the ST-ERNN (with the stochastic time effectivefunction) for forecasting stock prices is effective

5 Conclusion

The aim of this research is to develop a predictive modelto forecast the financial time series In this study we havedeveloped a predictive model by using an Elman recurrentneural network with the stochastic time effective function toforecast the indices of SSE TWSE KOSPI and Nikkei225Through the linear regression analysis it implies that thepredictive values and the real values are not deviating too

much Then we take the proposed model compared withBPNN STNN and ERNN forecasting models Empiricalexaminations of predicting precision for the price time series(by the comparisons of predicting measures as MAE RMSEMAPE and MAPE(100)) show that the proposed neuralnetwork model has the advantage of improving the precisionof forecasting and the forecasting of this proposed modelmuch approaches to the real financial market movementsFurthermore from the curve of the relative error it canmake a conclusion that the large fluctuation leads to thelarge relative errors In addition by calculating CID andMCID distance the conclusion was illustrated more clearlyThe study and the proposed model contribute significantly tothe time series literature on forecasting

Computational Intelligence and Neuroscience 13

Table 4 Comparisons of indicesrsquo predictions for different forecasting models

Index errors BPNN STNN ERNN ST-ERNNSSE

MAE 453701 249687 37262647 127390RMSE 544564 405437 493907 370693MAPE 201994 118947 182110 41353MAPE(100) 50644 36868 43176 26809

TWSEMAE 2527225 1405971 1512830 1056377RMSE 3168197 1868309 2054236 1361674MAPE 32017 17303 18449 13468MAPE(100) 22135 11494 13349 12601

KOSPIMAE 743073 563309 479296 182421RMSE 771528 582944 508174 210479MAPE 166084 124461 109608 42257MAPE(100) 74379 59664 49176 21788

Nikkei225MAE 2038034 1381857 1662480 685458RMSE 2385933 1697061 2073395 890378MAPE 18556 12580 15398 06010MAPE(100) 07674 05191 04962 04261

Table 5 CID distances for four network models

Index BPNN STNN ERNN ST-ERNNSSE 30521 17647 23202 16599TWSE 27805 98763 10830 61581KOSPI 33504 23128 25510 10060Nikkei225 44541 23726 32895 25421

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgment

The authors were supported in part by National NaturalScience Foundation of China under Grant nos 71271026 and10971010

References

[1] Y Kajitani K W Hipel and A I McLeod ldquoForecastingnonlinear time series with feed-forward neural networks a casestudy of Canadian lynx datardquo Journal of Forecasting vol 24 no2 pp 105ndash117 2005

[2] T Takahama S Sakai A Hara and N Iwane ldquoPredicting stockprice using neural networks optimized by differential evolutionwith degenerationrdquo International Journal of Innovative Comput-ing Information and Control vol 5 no 12 pp 5021ndash5031 2009

[3] K Huarng and T H-K Yu ldquoThe application of neural networksto forecast fuzzy time seriesrdquo Physica A vol 363 no 2 pp 481ndash491 2006

[4] F Wang and J Wang ldquoStatistical analysis and forecasting ofreturn interval for SSE and model by lattice percolation systemand neural networkrdquoComputers and Industrial Engineering vol62 no 1 pp 198ndash205 2012

[5] H-J Kim and K-S Shin ldquoA hybrid approach based on neuralnetworks and genetic algorithms for detecting temporal pat-terns in stock marketsrdquo Applied Soft Computing vol 7 no 2pp 569ndash576 2007

[6] M Ghiassi H Saidane and D K Zimbra ldquoA dynamic artificialneural network model for forecasting time series eventsrdquoInternational Journal of Forecasting vol 21 no 2 pp 341ndash3622005

[7] M R Hassan B Nath andM Kirley ldquoA fusionmodel of HMMANNandGA for stockmarket forecastingrdquo Expert Systems withApplications vol 33 no 1 pp 171ndash180 2007

[8] D Devaraj B Yegnanarayana and K Ramar ldquoRadial basisfunction networks for fast contingency rankingrdquo InternationalJournal of Electrical Power and Energy Systems vol 24 no 5 pp387ndash395 2002

[9] B A Garroa and R A Vazquez ldquoDesigning artificial neuralnetworks using particle swarm optimization algorithmsrdquo Com-putational Intelligence and Neuroscience vol 2015 Article ID369298 20 pages 2015

[10] Q Gan ldquoExponential synchronization of stochastic Cohen-Grossberg neural networks withmixed time-varying delays andreaction-diffusion via periodically intermittent controlrdquoNeuralNetworks vol 31 pp 12ndash21 2012

[11] D Xiao and J Wang ldquoModeling stock price dynamics bycontinuum percolation system and relevant complex systemsanalysisrdquo Physica A Statistical Mechanics and its Applicationsvol 391 no 20 pp 4827ndash4838 2012

[12] D Enke and N Mehdiyev ldquoStock market prediction usinga combination of stepwise regression analysis differentialevolution-based fuzzy clustering and a fuzzy inference Neural

14 Computational Intelligence and Neuroscience

Networkrdquo Intelligent Automation and Soft Computing vol 19no 4 pp 636ndash648 2013

[13] G Sermpinisa C Stasinakisa and C Dunisb ldquoStochastic andgenetic neural network combinations in trading and hybridtime-varying leverage effectsrdquo Journal of International FinancialMarkets Institutions amp Money vol 30 pp 21ndash54 2014

[14] R Ebrahimpour H Nikoo S Masoudnia M R Yousefi andM S Ghaemi ldquoMixture of mlp-experts for trend forecastingof time series a case study of the tehran stock exchangerdquoInternational Journal of Forecasting vol 27 no 3 pp 804ndash8162011

[15] A Bahrammirzaee ldquoA comparative survey of artificial intelli-gence applications in finance artificial neural networks expertsystem and hybrid intelligent systemsrdquo Neural Computing andApplications vol 19 no 8 pp 1165ndash1195 2010

[16] A Graves M Liwicki S Fernandez R Bertolami H Bunkeand J Schmidhuber ldquoA novel connectionist system for uncon-strained handwriting recognitionrdquo IEEE Transactions on Pat-tern Analysis and Machine Intelligence vol 31 no 5 pp 855ndash868 2009

[17] M Ardalani-Farsa and S Zolfaghari ldquoChaotic time seriesprediction with residual analysis method using hybrid Elman-NARX neural networksrdquoNeurocomputing vol 73 no 13ndash15 pp2540ndash2553 2010

[18] M Paliwal and U A Kumar ldquoNeural networks and statisticaltechniques a review of applicationsrdquo Expert Systems withApplications vol 36 no 1 pp 2ndash17 2009

[19] Z Liao and J Wang ldquoForecasting model of global stock indexby stochastic time effective neural networkrdquoExpert SystemswithApplications vol 37 no 1 pp 834ndash841 2010

[20] L Pan and J Cao ldquoRobust stability for uncertain stochasticneural network with delay and impulsesrdquo Neurocomputing vol94 pp 102ndash110 2012

[21] H F Liu and J Wang ldquoIntegrating independent componentanalysis and principal component analysis with neural networkto predict Chinese stock marketrdquo Mathematical Problems inEngineering vol 2011 Article ID 382659 15 pages 2011

[22] Z Q Guo H Q Wang and Q Liu ldquoFinancial time seriesforecasting using LPP and SVM optimized by PSOrdquo SoftComputing vol 17 no 5 pp 805ndash818 2013

[23] H L Niu and J Wang ldquoFinancial time series predictionby a random data-time effective RBF neural networkrdquo SoftComputing vol 18 no 3 pp 497ndash508 2014

[24] J L Elman ldquoFinding structure in timerdquo Cognitive Science vol14 no 2 pp 179ndash211 1990

[25] MCacciola GMegali D Pellicano and F CMorabito ldquoElmanneural networks for characterizing voids in welded strips astudyrdquo Neural Computing and Applications vol 21 no 5 pp869ndash875 2012

[26] R Chandra and M Zhang ldquoCooperative coevolution of Elmanrecurrent neural networks for chaotic time series predictionrdquoNeurocomputing vol 86 pp 116ndash123 2012

[27] D K Chaturvedi P S Satsangi and P K Kalra ldquoEffect of differ-ent mappings and normalization of neural network modelsrdquo inProceedings of the National Power Systems Conference vol 9 pp377ndash386 Indian Institute of Technology Kanpur India 1996

[28] S Makridakis ldquoAccuracy measures theoretical and practicalconcernsrdquo International Journal of Forecasting vol 9 no 4 pp527ndash529 1993

[29] L Q Han Design and Application of Artificial Neural NetworkChemical Industry Press 2002

[30] M Zounemat-Kermani ldquoPrincipal component analysis (PCA)for estimating chlorophyll concentration using forward andgeneralized regression neural networksrdquoApplied Artificial Intel-ligence vol 28 no 1 pp 16ndash29 2014

[31] D Olson and C Mossman ldquoNeural network forecasts ofCanadian stock returns using accounting ratiosrdquo InternationalJournal of Forecasting vol 19 no 3 pp 453ndash465 2003

[32] H Demuth and M Beale Network Toolbox For Use withMATLAB The Math Works Natick Mass USA 5th edition1998

[33] A P Plumb R C Rowe P York and M Brown ldquoOptimisationof the predictive ability of artificial neural network (ANN)models a comparison of three ANN programs and four classesof training algorithmrdquo European Journal of PharmaceuticalSciences vol 25 no 4-5 pp 395ndash405 2005

[34] D O Faruk ldquoA hybrid neural network and ARIMA model forwater quality time series predictionrdquo Engineering Applicationsof Artificial Intelligence vol 23 no 4 pp 586ndash594 2010

[35] M Tripathy ldquoPower transformer differential protection usingneural network principal component analysis and radial basisfunction neural networkrdquo Simulation Modelling Practice andTheory vol 18 no 5 pp 600ndash611 2010

[36] C E Martin and J A Reggia ldquoFusing swarm intelligenceand self-assembly for optimizing echo state networksrdquo Com-putational Intelligence and Neuroscience vol 2015 Article ID642429 15 pages 2015

[37] R S Tsay Analysis of Financial Time Series JohnWiley amp SonsHoboken NJ USA 2005

[38] P-C Chang D-D Wang and C-L Zhou ldquoA novel modelby evolving partially connected neural network for stock pricetrend forecastingrdquo Expert Systems with Applications vol 39 no1 pp 611ndash620 2012

[39] P Roy G S Mahapatra P Rani S K Pandey and K NDey ldquoRobust feed forward and recurrent neural network baseddynamic weighted combination models for software reliabilitypredictionrdquo Applied Soft Computing vol 22 pp 629ndash637 2014

[40] X Gabaix P Gopikrishnan V Plerou andH E Stanley ldquoA the-ory of power-law distributions in financial market fluctuationsrdquoNature vol 423 no 6937 pp 267ndash270 2003

[41] R N Mantegna and H E Stanley An Introduction to Econo-physics Correlations and Complexity in Finance CambridgeUniversity Press Cambridge UK 2000

[42] T H Roh ldquoForecasting the volatility of stock price indexrdquoExpert Systems with Applications vol 33 no 4 pp 916ndash9222007

[43] G E Batista E J KeoghOM Tataw andVM de Souza ldquoCIDan efficient complexity-invariant distance for time seriesrdquo DataMining and Knowledge Discovery vol 28 no 3 pp 634ndash6692014

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 10: Research Article Financial Time Series Prediction …downloads.hindawi.com/journals/cin/2016/4742515.pdfResearch Article Financial Time Series Prediction Using Elman Recurrent Random

10 Computational Intelligence and Neuroscience

200 400 600 800 1000 1200 1400 1600 1800 20000

200

400

600

800

1000

1200

1400

1600

BPNNSTNNERNN

ST-ERNNReal value

1560 1580 1600 1620 1640

800

1000

1200

(a) SSE

BPNNSTNNERNN

ST-ERNNReal value

200 400 600 800 1000 1200 1400 1600 1800 20003000

4000

5000

6000

7000

8000

9000

10000

11000

1600 1620 1640 1660 1680 17005500

6000

6500

7000

7500

(b) TWSE

BPNNSTNNERNN

ST-ERNNReal value

200 400 600 800 1000 1200 1400 1600 1800 2000200

400

600

800

1000

1200

1800 1850 1900 1950

400500600700

(c) KOSPI

BPNNSTNNERNN

ST-ERNNReal value

200 400 600 800 1000 1200 1400 1600 1800 2000

1

15

2

25

3

35

4

1800 1850 1900 1950 2000

095

1

105

11

115times104

times104

(d) Nikkei225

Figure 6 Comparisons of the actual data and the predictive data for SSE TWSE KOSPI and Nikkei225

complexity invariant distancemeasure can produce improve-ments in classification and clustering in the vast majority ofcases

Complexity invariance uses information about complex-ity differences between two time series as a correction factorfor existing distance measures We begin by introducingEuclidean distance and use this as a starting point to bringin the definition of CID Suppose we have two time series 119875and 119876 of length 119899 Consider

119875 = 1199011 1199012 119901

119894 119901

119899

119876 = 1199021 1199022 119902

119894 119902

119899

(18)

The ubiquitous Euclidean distance is

ED (119875 119876) = radic

119899

sum

119894=1

(119901119894minus 119902119894)2 (19)

The Euclidean distance ED(119875 119876) between two time series 119875and 119876 can be made complexity invariant by introducing acorrection factor

CID (119875 119876) = ED (119875 119876) times CF (119875 119876) (20)

where CF is a complexity correction factor defined as

CF (119875 119876) = max (CE (119875) CE (119876))min (CE (119875) CE (119876))

(21)

and CE(119879) is a complexity estimate of a time series 119879 whichcan be computed as follows

CE (119879) = radic

119899minus1

sum

119894=1

(119905119894+1

minus 119905119894)2 (22)

Computational Intelligence and Neuroscience 11

200 400 600 800 1000 1200 1400 1600 1800 2000

minus02

minus01

0

01

02

Relat

ive e

rror

of S

SE

Relative error(a) SSE

Relative error

200 400 600 800 1000 1200 1400 1600 1800 2000

minus01

minus005

0

005

01

Rela

tive e

rror

of T

WSE

(b) TWSE

Relative error

200 400 600 800 1000 1200 1400 1600 1800 2000

minus02

minus015

minus01

minus005

0

005

01

015

Relat

ive e

rror

of K

OSP

I

(c) KOSPIRelative error

200 400 600 800 1000 1200 1400 1600 1800 2000minus01

minus005

0

005

01

Relat

ive e

rror

of N

ikke

i225

(d) Nikkei225

Figure 7 ((a) (b) (c) and (d)) Relative errors of forecasting results from the ST-ERNNmodel

It is worth noticing that CF accounts for differences inthe complexities of the time series being compared CF forcestime series with very different complexities to be furtherapart In the case that all time series have the same complexityCID simply degenerates to Euclidean distanceThepredictionperformance is better when the CID distance is smallerthat is to say the curve of the predictive data is closer tothe actual data The actual values can be seen as the series119875 and the predicting results as the series 119876 Table 5 showsCID distance between the real indices values of SSE TWSEKOSPI and Nikkei225 and the corresponding predictionsfrom each network model It is clear that the CID distancebetween the real index values and the prediction by ST-ERNNmodel is the smallest one moreover the distances by theSTNN model and the ERNN model are smaller than thoseby the BPNN for all the four considered indices

In general the complexity of a real system is not con-strained to a sole scale In this part we consider a developedCID analysis that is the multiscale CID (MCID) TheMCID analysis takes into account the multiple time scales

while measuring the predicting results and it is appliedto the stock prices analysis for the actual data and thepredicting data in this work The MCID analysis shouldcomprise two steps (i) Considering one-dimensional discretetime series 119909

1 1199092 119909

119894 119909

119873 we construct consecutive

coarse-grained time series 119910(120591) corresponding to the scalefactor 120591 according to the following formula

119910(120591)

119895=

1

120591

119895120591

sum

119894=(119895minus1)120591+1

119909119894 1 le 119895 le

119873

120591

(23)

For scale one the time series 119910(1) is simply the originaltime series The length of each coarse-grained time series isequal to the original time series divided by the scale factor120591 (ii) Calculate the CID for each coarse-grained time seriesand then plot as a function of the scale factor Figure 8shows the MCID values between the forecasting results andthe real market prices from BPNN ERNN STNN and ST-ERNNmodels In Figure 8 it is obvious that the MCID from

12 Computational Intelligence and Neuroscience

0 5 10 15 20 25 30 35 40

500

1000

1500

2000

2500

3000

Scale120591

Actual value versus BPNNActual value versus ERNN

Actual value versus STNNActual value versus ST-ERNN

(a) SSE

0 5 10 15 20 25 30 35 400

05

1

15

2

25

3times104

Scale120591

Actual value versus BPNNActual value versus ERNN

Actual value versus STNNActual value versus ST-ERNN

(b) TWSE

0 5 10 15 20 25 30 35 400

500

1000

1500

2000

2500

3000

3500

Scale120591

Actual value versus BPNNActual value versus ERNN

Actual value versus STNNActual value versus ST-ERNN

(c) KOSPI

0 5 10 15 20 25 30 35 40

05

1

15

2

25

3

35

4

45

5times104

Scale120591

Actual value versus BPNNActual value versus ERNN

Actual value versus STNNActual value versus ST-ERNN

(d) Nikkei225

Figure 8 ((a) (b) (c) and (d)) MCID values between the forecasting results and the real market prices from BPNN ERNN STNN andST-ERNNmodels

ST-ERNN with the actual value is the smallest one in anyscale that is the ST-ERNN (with the stochastic time effectivefunction) for forecasting stock prices is effective

5 Conclusion

The aim of this research is to develop a predictive modelto forecast the financial time series In this study we havedeveloped a predictive model by using an Elman recurrentneural network with the stochastic time effective function toforecast the indices of SSE TWSE KOSPI and Nikkei225Through the linear regression analysis it implies that thepredictive values and the real values are not deviating too

much Then we take the proposed model compared withBPNN STNN and ERNN forecasting models Empiricalexaminations of predicting precision for the price time series(by the comparisons of predicting measures as MAE RMSEMAPE and MAPE(100)) show that the proposed neuralnetwork model has the advantage of improving the precisionof forecasting and the forecasting of this proposed modelmuch approaches to the real financial market movementsFurthermore from the curve of the relative error it canmake a conclusion that the large fluctuation leads to thelarge relative errors In addition by calculating CID andMCID distance the conclusion was illustrated more clearlyThe study and the proposed model contribute significantly tothe time series literature on forecasting

Computational Intelligence and Neuroscience 13

Table 4 Comparisons of indicesrsquo predictions for different forecasting models

Index errors BPNN STNN ERNN ST-ERNNSSE

MAE 453701 249687 37262647 127390RMSE 544564 405437 493907 370693MAPE 201994 118947 182110 41353MAPE(100) 50644 36868 43176 26809

TWSEMAE 2527225 1405971 1512830 1056377RMSE 3168197 1868309 2054236 1361674MAPE 32017 17303 18449 13468MAPE(100) 22135 11494 13349 12601

KOSPIMAE 743073 563309 479296 182421RMSE 771528 582944 508174 210479MAPE 166084 124461 109608 42257MAPE(100) 74379 59664 49176 21788

Nikkei225MAE 2038034 1381857 1662480 685458RMSE 2385933 1697061 2073395 890378MAPE 18556 12580 15398 06010MAPE(100) 07674 05191 04962 04261

Table 5 CID distances for four network models

Index BPNN STNN ERNN ST-ERNNSSE 30521 17647 23202 16599TWSE 27805 98763 10830 61581KOSPI 33504 23128 25510 10060Nikkei225 44541 23726 32895 25421

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgment

The authors were supported in part by National NaturalScience Foundation of China under Grant nos 71271026 and10971010

References

[1] Y Kajitani K W Hipel and A I McLeod ldquoForecastingnonlinear time series with feed-forward neural networks a casestudy of Canadian lynx datardquo Journal of Forecasting vol 24 no2 pp 105ndash117 2005

[2] T Takahama S Sakai A Hara and N Iwane ldquoPredicting stockprice using neural networks optimized by differential evolutionwith degenerationrdquo International Journal of Innovative Comput-ing Information and Control vol 5 no 12 pp 5021ndash5031 2009

[3] K Huarng and T H-K Yu ldquoThe application of neural networksto forecast fuzzy time seriesrdquo Physica A vol 363 no 2 pp 481ndash491 2006

[4] F Wang and J Wang ldquoStatistical analysis and forecasting ofreturn interval for SSE and model by lattice percolation systemand neural networkrdquoComputers and Industrial Engineering vol62 no 1 pp 198ndash205 2012

[5] H-J Kim and K-S Shin ldquoA hybrid approach based on neuralnetworks and genetic algorithms for detecting temporal pat-terns in stock marketsrdquo Applied Soft Computing vol 7 no 2pp 569ndash576 2007

[6] M Ghiassi H Saidane and D K Zimbra ldquoA dynamic artificialneural network model for forecasting time series eventsrdquoInternational Journal of Forecasting vol 21 no 2 pp 341ndash3622005

[7] M R Hassan B Nath andM Kirley ldquoA fusionmodel of HMMANNandGA for stockmarket forecastingrdquo Expert Systems withApplications vol 33 no 1 pp 171ndash180 2007

[8] D Devaraj B Yegnanarayana and K Ramar ldquoRadial basisfunction networks for fast contingency rankingrdquo InternationalJournal of Electrical Power and Energy Systems vol 24 no 5 pp387ndash395 2002

[9] B A Garroa and R A Vazquez ldquoDesigning artificial neuralnetworks using particle swarm optimization algorithmsrdquo Com-putational Intelligence and Neuroscience vol 2015 Article ID369298 20 pages 2015

[10] Q Gan ldquoExponential synchronization of stochastic Cohen-Grossberg neural networks withmixed time-varying delays andreaction-diffusion via periodically intermittent controlrdquoNeuralNetworks vol 31 pp 12ndash21 2012

[11] D Xiao and J Wang ldquoModeling stock price dynamics bycontinuum percolation system and relevant complex systemsanalysisrdquo Physica A Statistical Mechanics and its Applicationsvol 391 no 20 pp 4827ndash4838 2012

[12] D Enke and N Mehdiyev ldquoStock market prediction usinga combination of stepwise regression analysis differentialevolution-based fuzzy clustering and a fuzzy inference Neural

14 Computational Intelligence and Neuroscience

Networkrdquo Intelligent Automation and Soft Computing vol 19no 4 pp 636ndash648 2013

[13] G Sermpinisa C Stasinakisa and C Dunisb ldquoStochastic andgenetic neural network combinations in trading and hybridtime-varying leverage effectsrdquo Journal of International FinancialMarkets Institutions amp Money vol 30 pp 21ndash54 2014

[14] R Ebrahimpour H Nikoo S Masoudnia M R Yousefi andM S Ghaemi ldquoMixture of mlp-experts for trend forecastingof time series a case study of the tehran stock exchangerdquoInternational Journal of Forecasting vol 27 no 3 pp 804ndash8162011

[15] A Bahrammirzaee ldquoA comparative survey of artificial intelli-gence applications in finance artificial neural networks expertsystem and hybrid intelligent systemsrdquo Neural Computing andApplications vol 19 no 8 pp 1165ndash1195 2010

[16] A Graves M Liwicki S Fernandez R Bertolami H Bunkeand J Schmidhuber ldquoA novel connectionist system for uncon-strained handwriting recognitionrdquo IEEE Transactions on Pat-tern Analysis and Machine Intelligence vol 31 no 5 pp 855ndash868 2009

[17] M Ardalani-Farsa and S Zolfaghari ldquoChaotic time seriesprediction with residual analysis method using hybrid Elman-NARX neural networksrdquoNeurocomputing vol 73 no 13ndash15 pp2540ndash2553 2010

[18] M Paliwal and U A Kumar ldquoNeural networks and statisticaltechniques a review of applicationsrdquo Expert Systems withApplications vol 36 no 1 pp 2ndash17 2009

[19] Z Liao and J Wang ldquoForecasting model of global stock indexby stochastic time effective neural networkrdquoExpert SystemswithApplications vol 37 no 1 pp 834ndash841 2010

[20] L Pan and J Cao ldquoRobust stability for uncertain stochasticneural network with delay and impulsesrdquo Neurocomputing vol94 pp 102ndash110 2012

[21] H F Liu and J Wang ldquoIntegrating independent componentanalysis and principal component analysis with neural networkto predict Chinese stock marketrdquo Mathematical Problems inEngineering vol 2011 Article ID 382659 15 pages 2011

[22] Z Q Guo H Q Wang and Q Liu ldquoFinancial time seriesforecasting using LPP and SVM optimized by PSOrdquo SoftComputing vol 17 no 5 pp 805ndash818 2013

[23] H L Niu and J Wang ldquoFinancial time series predictionby a random data-time effective RBF neural networkrdquo SoftComputing vol 18 no 3 pp 497ndash508 2014

[24] J L Elman ldquoFinding structure in timerdquo Cognitive Science vol14 no 2 pp 179ndash211 1990

[25] MCacciola GMegali D Pellicano and F CMorabito ldquoElmanneural networks for characterizing voids in welded strips astudyrdquo Neural Computing and Applications vol 21 no 5 pp869ndash875 2012

[26] R Chandra and M Zhang ldquoCooperative coevolution of Elmanrecurrent neural networks for chaotic time series predictionrdquoNeurocomputing vol 86 pp 116ndash123 2012

[27] D K Chaturvedi P S Satsangi and P K Kalra ldquoEffect of differ-ent mappings and normalization of neural network modelsrdquo inProceedings of the National Power Systems Conference vol 9 pp377ndash386 Indian Institute of Technology Kanpur India 1996

[28] S Makridakis ldquoAccuracy measures theoretical and practicalconcernsrdquo International Journal of Forecasting vol 9 no 4 pp527ndash529 1993

[29] L Q Han Design and Application of Artificial Neural NetworkChemical Industry Press 2002

[30] M Zounemat-Kermani ldquoPrincipal component analysis (PCA)for estimating chlorophyll concentration using forward andgeneralized regression neural networksrdquoApplied Artificial Intel-ligence vol 28 no 1 pp 16ndash29 2014

[31] D Olson and C Mossman ldquoNeural network forecasts ofCanadian stock returns using accounting ratiosrdquo InternationalJournal of Forecasting vol 19 no 3 pp 453ndash465 2003

[32] H Demuth and M Beale Network Toolbox For Use withMATLAB The Math Works Natick Mass USA 5th edition1998

[33] A P Plumb R C Rowe P York and M Brown ldquoOptimisationof the predictive ability of artificial neural network (ANN)models a comparison of three ANN programs and four classesof training algorithmrdquo European Journal of PharmaceuticalSciences vol 25 no 4-5 pp 395ndash405 2005

[34] D O Faruk ldquoA hybrid neural network and ARIMA model forwater quality time series predictionrdquo Engineering Applicationsof Artificial Intelligence vol 23 no 4 pp 586ndash594 2010

[35] M Tripathy ldquoPower transformer differential protection usingneural network principal component analysis and radial basisfunction neural networkrdquo Simulation Modelling Practice andTheory vol 18 no 5 pp 600ndash611 2010

[36] C E Martin and J A Reggia ldquoFusing swarm intelligenceand self-assembly for optimizing echo state networksrdquo Com-putational Intelligence and Neuroscience vol 2015 Article ID642429 15 pages 2015

[37] R S Tsay Analysis of Financial Time Series JohnWiley amp SonsHoboken NJ USA 2005

[38] P-C Chang D-D Wang and C-L Zhou ldquoA novel modelby evolving partially connected neural network for stock pricetrend forecastingrdquo Expert Systems with Applications vol 39 no1 pp 611ndash620 2012

[39] P Roy G S Mahapatra P Rani S K Pandey and K NDey ldquoRobust feed forward and recurrent neural network baseddynamic weighted combination models for software reliabilitypredictionrdquo Applied Soft Computing vol 22 pp 629ndash637 2014

[40] X Gabaix P Gopikrishnan V Plerou andH E Stanley ldquoA the-ory of power-law distributions in financial market fluctuationsrdquoNature vol 423 no 6937 pp 267ndash270 2003

[41] R N Mantegna and H E Stanley An Introduction to Econo-physics Correlations and Complexity in Finance CambridgeUniversity Press Cambridge UK 2000

[42] T H Roh ldquoForecasting the volatility of stock price indexrdquoExpert Systems with Applications vol 33 no 4 pp 916ndash9222007

[43] G E Batista E J KeoghOM Tataw andVM de Souza ldquoCIDan efficient complexity-invariant distance for time seriesrdquo DataMining and Knowledge Discovery vol 28 no 3 pp 634ndash6692014

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 11: Research Article Financial Time Series Prediction …downloads.hindawi.com/journals/cin/2016/4742515.pdfResearch Article Financial Time Series Prediction Using Elman Recurrent Random

Computational Intelligence and Neuroscience 11

200 400 600 800 1000 1200 1400 1600 1800 2000

minus02

minus01

0

01

02

Relat

ive e

rror

of S

SE

Relative error(a) SSE

Relative error

200 400 600 800 1000 1200 1400 1600 1800 2000

minus01

minus005

0

005

01

Rela

tive e

rror

of T

WSE

(b) TWSE

Relative error

200 400 600 800 1000 1200 1400 1600 1800 2000

minus02

minus015

minus01

minus005

0

005

01

015

Relat

ive e

rror

of K

OSP

I

(c) KOSPIRelative error

200 400 600 800 1000 1200 1400 1600 1800 2000minus01

minus005

0

005

01

Relat

ive e

rror

of N

ikke

i225

(d) Nikkei225

Figure 7 ((a) (b) (c) and (d)) Relative errors of forecasting results from the ST-ERNNmodel

It is worth noticing that CF accounts for differences inthe complexities of the time series being compared CF forcestime series with very different complexities to be furtherapart In the case that all time series have the same complexityCID simply degenerates to Euclidean distanceThepredictionperformance is better when the CID distance is smallerthat is to say the curve of the predictive data is closer tothe actual data The actual values can be seen as the series119875 and the predicting results as the series 119876 Table 5 showsCID distance between the real indices values of SSE TWSEKOSPI and Nikkei225 and the corresponding predictionsfrom each network model It is clear that the CID distancebetween the real index values and the prediction by ST-ERNNmodel is the smallest one moreover the distances by theSTNN model and the ERNN model are smaller than thoseby the BPNN for all the four considered indices

In general, the complexity of a real system is not constrained to a sole scale. In this part we consider a development of the CID analysis, the multiscale CID (MCID). The MCID analysis takes multiple time scales into account while measuring the predicting results, and in this work it is applied to the actual and the predicted stock price data. The MCID analysis comprises two steps. (i) Considering a one-dimensional discrete time series $x_1, x_2, \ldots, x_i, \ldots, x_N$, construct consecutive coarse-grained time series $y^{(\tau)}$ corresponding to the scale factor $\tau$ according to the following formula:
$$
y_j^{(\tau)} = \frac{1}{\tau} \sum_{i=(j-1)\tau+1}^{j\tau} x_i, \qquad 1 \le j \le \frac{N}{\tau}. \tag{23}
$$
For scale one, the time series $y^{(1)}$ is simply the original time series; the length of each coarse-grained time series equals the length of the original time series divided by the scale factor $\tau$. (ii) Calculate the CID for each coarse-grained time series and plot it as a function of the scale factor.
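The two MCID steps can be sketched as follows, reusing the `cid` helper from the previous snippet; `max_scale = 40` simply mirrors the scale range plotted in Figure 8 and is otherwise arbitrary:

```python
import numpy as np

def coarse_grain(x, tau):
    # Equation (23): average non-overlapping windows of length tau;
    # the coarse-grained series has length floor(N / tau)
    x = np.asarray(x, dtype=float)
    n = len(x) // tau
    return x[: n * tau].reshape(n, tau).mean(axis=1)

def mcid_curve(actual, predicted, max_scale=40):
    # Step (ii): CID between the coarse-grained actual and predicted series
    # for each scale factor tau = 1, ..., max_scale
    return [cid(coarse_grain(actual, tau), coarse_grain(predicted, tau))
            for tau in range(1, max_scale + 1)]
```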


Figure 8: ((a), (b), (c), and (d)) MCID values between the forecasting results and the real market prices from the BPNN, ERNN, STNN, and ST-ERNN models, plotted against the scale factor τ (0-40): (a) SSE, (b) TWSE, (c) KOSPI, and (d) Nikkei225.

Figure 8 shows the MCID values between the forecasting results and the real market prices for the BPNN, ERNN, STNN, and ST-ERNN models. It is obvious that the MCID between the ST-ERNN forecasts and the actual values is the smallest at every scale; that is, the ST-ERNN model (with the stochastic time effective function) is effective for forecasting stock prices.

5. Conclusion

The aim of this research is to develop a predictive model for forecasting financial time series. In this study we have built such a model by combining an Elman recurrent neural network with a stochastic time effective function and used it to forecast the SSE, TWSE, KOSPI, and Nikkei225 indices. The linear regression analysis implies that the predicted values do not deviate much from the real values. We then compared the proposed model with the BPNN, STNN, and ERNN forecasting models. Empirical examinations of the predicting precision for the price time series (by comparison of the predicting measures MAE, RMSE, MAPE, and MAPE(100)) show that the proposed neural network improves the precision of forecasting and that its forecasts closely track the real financial market movements. Furthermore, the relative error curves show that large fluctuations lead to large relative errors. In addition, the CID and MCID distance calculations confirm these findings still more clearly. The study and the proposed model contribute to the time series forecasting literature.
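As a companion to the comparisons above, the four accuracy measures can be computed along the following lines; this is a minimal sketch, and treating MAPE(100) as the MAPE restricted to the final 100 predictions is our assumption here:

```python
import numpy as np

def forecast_errors(actual, predicted, tail=100):
    # MAE, RMSE, and MAPE over the whole evaluation set, plus the MAPE over
    # the last `tail` points (our reading of MAPE(100)); MAPEs are in percent
    a, p = np.asarray(actual, dtype=float), np.asarray(predicted, dtype=float)
    e = a - p
    return {
        "MAE": np.mean(np.abs(e)),
        "RMSE": np.sqrt(np.mean(e ** 2)),
        "MAPE": 100.0 * np.mean(np.abs(e / a)),
        f"MAPE({tail})": 100.0 * np.mean(np.abs(e[-tail:] / a[-tail:])),
    }
```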


Table 4: Comparisons of indices' predictions for different forecasting models.

Index      Error       BPNN       STNN       ERNN        ST-ERNN
SSE        MAE         453701     249687     37262647    127390
           RMSE        544564     405437     493907      370693
           MAPE        201994     118947     182110      41353
           MAPE(100)   50644      36868      43176       26809
TWSE       MAE         2527225    1405971    1512830     1056377
           RMSE        3168197    1868309    2054236     1361674
           MAPE        32017      17303      18449       13468
           MAPE(100)   22135      11494      13349       12601
KOSPI      MAE         743073     563309     479296      182421
           RMSE        771528     582944     508174      210479
           MAPE        166084     124461     109608      42257
           MAPE(100)   74379      59664      49176       21788
Nikkei225  MAE         2038034    1381857    1662480     685458
           RMSE        2385933    1697061    2073395     890378
           MAPE        18556      12580      15398       06010
           MAPE(100)   07674      05191      04962       04261

Table 5: CID distances for the four network models.

Index      BPNN     STNN     ERNN     ST-ERNN
SSE        30521    17647    23202    16599
TWSE       27805    98763    10830    61581
KOSPI      33504    23128    25510    10060
Nikkei225  44541    23726    32895    25421

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors were supported in part by the National Natural Science Foundation of China under Grant nos. 71271026 and 10971010.

