
Learning-to-Forecast with Genetic Algorithms

    Working paper

    Version February 2013 with appendices

Mikhail Anufriev, Cars Hommes and Tomasz Makarewicz

University of Technology, Sydney; Center for Nonlinear Dynamics in Economics and Finance, University of Amsterdam; Tinbergen Institute

Amsterdam, February 27, 2013


    Abstract

In this paper we study a model in which agents independently optimize a first-order price forecasting rule with Genetic Algorithms. This agent-based model (inspired by Hommes and Lux, 2011) allows for explicit individual heterogeneity and learning. We show that it replicates individual behavior in various Learning-to-Forecast experiments. In these, the subjects are asked to predict prices, which in turn depend on the predictions. We use the data from Heemeijer et al. (2009) to fine-tune our GA model. Furthermore, we investigate three other LtF experiments: with shocks to the fundamental price (Bao et al., 2012), a cobweb economy (van de Velden, 2001; Hommes et al., 2007) and a two-period-ahead nonlinear asset pricing market (Hommes et al., 2005). We perform a Monte Carlo exercise with 50-period-ahead simulations and use the Auxiliary Particle Filter to study the one-period-ahead forecasting performance of the model on the individual level, a novelty in the literature on agent-based models. Our model is robust against these complicated settings and outperforms many homogeneous models, including Rational Expectations.

    1 Introduction

Price expectations are a cornerstone of many economic models, because economic agents often operate in a dynamic context. Consumers have to organize their life-time work and consumption paths, while companies decide on how to build up future production capabilities. In either case, the agents want to know how the uncertain future may unfold. What makes modeling predictions difficult is that they typically form a feedback with the realizations through the agents' decisions. For instance, if everybody expects an increased price of a consumption good, consumers are likely to save less, while firms raise production. This implies a lower future market-clearing price, a scenario not anticipated by the agents. It is therefore likely that they would alter their predictions, leading to a new realized price.

Even if the agents know the structure of the economy, the price-expectation feedback can lead to non-trivial dynamics (Tuinstra and Weddepohl, 1999). This picture becomes more complicated if the agents furthermore have to learn this structure (Grandmont, 1998). Agents do want to form fine price expectations, but how would they cope with this complexity?


The traditional literature (after Muth, 1961) emphasizes the Rational Expectations (RE) hypothesis, which states that in equilibrium the predictions have to be model-consistent. Most economists would interpret RE as an as-if approximation: real markets behave as if their representative agents were perfectly rational, because real people are rational enough to learn to avoid systematic, correlated errors.¹ However, this is not confirmed by the data. A recent important example comes from the housing market in the US before the latest economic crisis, where people systematically misjudged the long-term value of their houses (Benítez-Silva et al., 2008; Case and Shiller, 2003; Goodman Jr. and Ittner, 1992). In a broader context, the inflation expectations formed by the Joneses are far from the RE predictions (Charness et al., 2007) and can be subject to cognitive biases (Malmendier and Nagel, 2009). Many firms similarly fail to use RE (see Nunes, 2010, for an example on the Phillips Curve and survey expectations).

The failure of RE made many economists look for an alternative model with explicit learning. They faced the so-called "wilderness of bounded rationality" problem: there is a myriad of possible learning mechanisms with varied restrictions on human memory and computational capabilities. These range from simple linear heuristic models (see Evans and Ramey, 2006, for a discussion of adaptive expectations), through econometric learning (Evans and Honkapohja, 2001) and heuristic-switching models (Brock and Hommes, 1997), to evolutionary learning mechanisms (Arifovic et al., 2012). Moreover, these mechanisms can lead to different dynamics. For example, Bullard (1994) and Tuinstra and Wagener (2007) show that for a standard OLG economy, where the agents use OLS learning for price forecasting, the choice of the learned variable (level of prices or inflation) makes the difference between stable and chaotic dynamics.

Learning-to-Forecast (LtF) experiments (Hommes, 2011) offer a simple testing ground for learning mechanisms. In these, the controlled experimental economies are simple and have a straightforward fundamental (RE) equilibrium. Just as in the case of real markets, the subjects observe the realized prices and their past individual predictions, but not the history of other subjects' predictions, and are not directly informed about the quantitative law of motion of the economy. Many LtF experiments contradict the RE hypothesis. The subjects can coordinate on oscillating and serially correlated time series. Convergence to the fundamental equilibrium happens only under severe restrictions on the experimental economy (Hommes, 2011). Another important finding in the experiments is heterogeneity: within the same experimental group, subjects tend to give different predictions with different dynamic structure and reliance on the past prices, which cannot be fully explained by the type of the experimental economy.

The most successful attempt to explain the LtF experiments comes with the so-called Heuristic Switching Model (HSM; Brock and Hommes, 1997). The basic idea of the model is that the agents have a set of simple forecasting heuristics (rules of thumb like adaptive or

¹ One interesting and straightforward explication of this approach can be found in the concluding section of Blundell and Stoker (2005).


trend extrapolating expectations) and choose those that had a better past performance. The model replicates the stylized, aggregate difference between the treatments in Heemeijer et al. (2009) (henceforth HHST09). Nevertheless, the authors consider a limited set of heuristics and so cannot fully account for the individual heterogeneity. Moreover, they are unable to explain the mechanism with which the subjects would learn those heuristics. For instance, Anufriev and Hommes (2012) use HSM to explain the experimental data from a non-linear asset-pricing experiment (Hommes et al., 2005) (henceforth HSTV05), but only with a broader set of forecasting rules.

In our paper we would like to reinforce the original HSM so that it is able to replicate the individual learning and heterogeneity in various experimental settings. To do so, we will use Genetic Algorithms (GA). GA are a flexible optimization procedure, so the GA-based model retains a basic economic interpretation. Agents who use GA have to rely on second-best forecasting rules. Nevertheless, they learn to use them efficiently. For example, if there is a significant trend in the data, the agents may want to harvest speculative trade revenues. To do so, they will update their forecasting rule's parameters with GA, making it more trend-extrapolative.

GA were already used, in their social-learning form, to explore stylized facts from experimental data, outperforming the RE hypothesis (Arifovic, 1995), with the examples of exchange rate volatility (Arifovic, 1996; Lux and Schornstein, 2005) or quantitative choices in a cobweb producers' economy (Dawid and Kopel, 1998). A more mature use of GA can be found in Hommes and Lux (2011). In their setting the agents use GA to optimize a forecasting heuristic (instead of directly optimizing a prediction) and, much like the actual subjects in LtF experiments, cannot observe each other's behavior or strategies. With this versatile model, Hommes and Lux (2011) replicate the distribution of the predictions and prices of the cobweb experiments by Hommes et al. (2007) and van de Velden (2001) (henceforth HSTV07 and V01 respectively).

In our paper, we want to build upon the GA-based individual learning introduced by Hommes and Lux (2011) to explain the LtF experiments. The novelty of our paper is threefold. The first is that we use a heuristic space different from that of Hommes and Lux (2011), based on the so-called first-order rule, which is a mixture of adaptive and trend-extrapolating heuristics. This gives the model better micro-foundations, as HHST09 find that this rule describes well the individual expectations in their experiment.

The second novelty is that our model allows for a simultaneous explanation of different LtF experiments, based on markets with breaks in the fundamental price or a highly non-linear price-expectations feedback. This challenge will prove the generality of our model. Finally, the third novelty is that we explain the individual, not just the aggregate, results of the LtF experiments. We will use the Auxiliary Particle Filter (Johansen and Doucet, 2008), a technique based on Sequential Importance Sampling, to show that our model replicates the behavior of the individual subjects. This is a major contribution to the literature, which usually focuses on a model's fit to the aggregate stylized facts, even if that model is agent-based by design.

The structure of the paper is the following. In the second section, we explain in detail the Learning-to-Forecast experiments and also comment on the insight brought by the Heuristic Switching Model. In the third section, we introduce the Genetic Algorithm model. We calibrate it with the simple, linear feedback system from HHST09. We also comment on how to use the Auxiliary Particle Filter to evaluate the fit of the model to the individual predictions. In the fourth section, we use our model to describe the following experimental economies: a linear price-expectations feedback system with unexpected shifts to the fundamental price (Bao et al., 2012) (henceforth BHST12); a cobweb producers' economy (HSTV07; V01) used also by Hommes and Lux (2011); and finally a highly non-linear positive-feedback asset-pricing economy, where the subjects are asked for two-period-ahead predictions (HSTV05). The last section summarizes our paper. For the sake of brevity and clarity, we decided not to include too many technical details in the main text, including the full formal definition of GA, or many robustness checks of the model specification. These can be found in the appendix.

    2 Learning to Forecast and Heuristic Switching

Consider a market with a set of subjects i ∈ {1, . . . , I}, who are asked at each period t to forecast the price of a certain good. The subjects are explicitly informed that they are asked only for predictions and rewarded only for the accuracy of those predictions. Their role is that of forecasting consultants for firms. These firms in turn will use the subjects' predictions to optimize their behavior, which determines the next market price. The subjects have no other influence on the realized price. The feedback relationship between the prices and predictions is summarized by a reduced-form law of motion of the form

(1)  p_t = F(p^e_{1,t}, . . . , p^e_{I,t}),

where p_t denotes the price, p^e_{i,t} is agent i's expectation of p_t, and F(·) results from aggregating firms' optimal choices. This is a one-period-ahead type of feedback, in the sense that the price at period t depends on the predictions which were formulated after the previous price p_{t−1} became known to the subjects. Define the fundamental price p^f as the RE outcome, the self-consistent prediction: p^f = E[F(p^f, . . . , p^f)].

Unlike the RE agents, subjects in the experiment have limited information about the market. They are told that their predictions influence the average market mood, which in turn determines the realized price, but they are given only a qualitative story about this feedback. Moreover, they are not explicitly informed about the fundamental price.²

One important example was investigated by HHST09, who use a linear version of (1):

(2)  p_t = A + B ( (1/I) Σ_{i=1}^{I} p^e_{i,t} − A ) = A + B (p̄^e_t − A),

² Usually it is possible to infer it from the experiment instructions. Anecdotal evidence suggests that even economics students, including graduate students, fail to realize it.


where p̄^e_t = (1/I) Σ_{i=1}^{I} p^e_{i,t} is the average prediction at period t and A = p^f is the fundamental

price. The two cases are B > 0 (positive feedback) and B < 0 (negative feedback). A typical example of a positive-feedback market is a stock exchange. Investors will buy a certain stock if they are optimistic and expect it to become more expensive. But this increased demand means that the stock's price will indeed go up. In this way the investors' sentiments are self-fulfilling. The contrary case is a producers' economy with a production lag. If the producers expect a high price in the future, they will increase production, and so the future price must be low for the market to clear. Here, there is a negative feedback between predictions and prices.

In the experiment, the authors used two specific treatments for I = 6 subjects:

(3)  Positive feedback:  p_t = (20/21) (3 + p̄^e_t) + ε_t;

(4)  Negative feedback:  p_t = (20/21) (123 − p̄^e_t) + ε_t,

where ε_t ∼ NID(0, 0.25) is a small disturbance, and the experiment ran for 50 periods for each group in both treatments.

The two treatments are symmetric. They have the same unique fundamental price p^f = 60. Also, the damping factor |B| = 20/21 is the same in absolute terms. It was chosen so that under naive expectations (p̄^e_t = p_{t−1}), the fundamental price is for both treatments a unique and stable steady state, but the system would still require some time to converge.
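As a quick check of this claim, one can iterate the two feedback maps under naive expectations. This is our own deterministic sketch, not the paper's code; the noise term ε_t is dropped to isolate the dynamics:

```python
# Iterate the two feedback maps under naive expectations, i.e. every agent
# predicts p_{t-1}, with the disturbance eps_t set to zero.

def simulate(feedback, p0=50.0, periods=50):
    prices = [p0]
    for _ in range(periods):
        prices.append(feedback(prices[-1]))
    return prices

positive = lambda p: (20 / 21) * (3 + p)     # equation (3) without noise
negative = lambda p: (20 / 21) * (123 - p)   # equation (4) without noise

pos = simulate(positive)
neg = simulate(negative)

# Both maps have the unique fixed point p = 60 and a slope of modulus
# 20/21 < 1, so they converge to the fundamental, but only slowly.
assert abs(pos[-1] - 60) < 1   # still almost one point away after 50 periods
assert abs(neg[-1] - 60) < 1
assert abs(pos[1] - 60) < abs(pos[0] - 60)   # monotone approach under (3)
```

Because the slope is so close to one in modulus, the deviation from 60 shrinks by only about 5% per period, which is the "some time to converge" property the design aimed for.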

The two feedback treatments resulted in very different dynamics; see Figures 1a and 1b for typical time series of the negative and positive feedback treatments, respectively. The price under the negative feedback would jump around the fundamental for a handful of periods and then converge to it. Only after that would the subjects converge to the fundamental as well; before that their behavior was volatile. Most of the groups under the positive feedback resulted in systematic oscillations, as seen in Figure 1b, where the price twice overshoots and once undershoots p^f; if the price actually converged to the fundamental, it did so only towards the end of the experiment (which happened for two out of seven cases). In spite of that, the subjects would coordinate within around three periods and remain coordinated until the end of the experiment. This means that the price oscillations were caused by systematic prediction errors (with respect to the RE outcome), which were highly correlated between the subjects.

To describe the subjects' behavior, HHST09 focus on the first-order rule (FOR):

(5)  p^e_{i,t} = α₁ p_{t−1} + α₂ p^e_{i,t−1} + α₃ · 60 + γ (p_{t−1} − p_{t−2}),

for α₁, α₂, α₃ ≥ 0, α₁ + α₂ + α₃ = 1 and γ ∈ [−1, 1]. The authors estimated this rule separately for each subject, based on their predictions from the last 40 periods. Notice that the third term, the fundamental price associated with the α₃ coefficient, was used for the sake of


stationarity of the estimation, and to test the RE hypothesis.³

HHST09 find that the individual forecasting rules vary between the subjects, even within the same treatment. The authors also report a significant difference between the two treatments. Under the positive feedback, subjects tended to focus on trend extrapolation, and the estimated α₃ fundamental-price coefficients were insignificant. Under the negative feedback, the reverse holds: trend extrapolation is barely used, while the weight on the fundamental price is significant. This shows that a model with a homogeneous forecasting rule (RE, but also linear heuristics like trend extrapolation or naive expectations) may explain one of the two treatments, but not both at the same time. Moreover, it cannot explain the significant differences between the subjects within each treatment.

This led Anufriev et al. (2012) to investigate the Heuristic Switching Model (HSM), in which the subjects are endowed with two prediction heuristics:

adaptive:  p^e_{i,t} = λ p_{t−1} + (1 − λ) p^e_{i,t−1}  for λ ∈ [0, 1],

trend extrapolation:  p^e_{i,t} = p_{t−1} + γ (p_{t−1} − p_{t−2})  for γ ∈ [−1, 1],

where the authors have used λ = 0.75 and γ = 1. Both heuristics are special cases of the first-order rule: the adaptive heuristic is the FOR with α₃ = γ = 0, while trend extrapolation is the FOR with α₂ = α₃ = 0. The idea of the model is that people adapt their behavior to the circumstances. Subjects can at any time use either of the two heuristics, but will focus on the one with the higher relative past performance. Under the positive feedback, the agents easily coordinate their predictions below the fundamental, but (by the construction of the feedback equation) the realized price is slightly higher than the average prediction. The trend extrapolation heuristic captures this gradual increase of the initial prices and so becomes more popular among the agents. This reinforces the trend, as well as the performance of the heuristic itself. The reverse story holds for the negative feedback: there is no possibility of coordination unless the agents converge to the fundamental price, since otherwise the realized price contrasts with the average market expectation. In this case adaptive expectations can perform better, as they help the agents converge to the fundamental.

HSM captures the essence of the aggregate prediction behavior and successfully replicates the results of HHST09 in a stylized fashion. The drawback of the model is that the authors assume a limited space of heuristics. At first glance this is not a problem, since the model can easily be extended to include a broader portfolio of predicting rules (Anufriev and Hommes, 2012). Nevertheless, the model cannot explain two things. The first issue is where those heuristics come from, that is, how the subjects are able to learn the two heuristics in the first place. The second issue with HSM is that it cannot account for the heterogeneity of rules between the subjects and explain the experiment on the individual level. To answer these two issues, we will introduce a model with explicit individual learning through Genetic Algorithms.

³ Under RE, the FOR in (5) should be specified with α₁ = α₂ = γ = 0, which implies that the subjects always predict the fundamental price, p^e_{i,t} = 60.


    3 The model

    3.1 Genetic Algorithms

Genetic Algorithms (GA) are a class of stochastic numerical maximization procedures which mimic the evolutionary operations with which the DNA of biological organisms adapts to the environment. GA were introduced to solve hard optimization problems, which may involve discontinuities or high dimensionality with complicated interrelations between the arguments. They are flexible and efficient, and have therefore found many successful applications in computer science and engineering (Haupt and Haupt, 2004).

A GA routine starts with a population of random arguments, that is, possible trial solutions to the problem. Individual trial arguments are encoded as binary strings (strings of ones and zeros), or chromosomes. They are retained into the next iteration with a probability that increases with their relative performance, which is defined in terms of the functional value of the arguments. This so-called procreation operator means that with each iteration, the population of trial arguments is likely to have a higher functional value, i.e. to be fitter. On top of procreation, GA use three evolutionary operators that allow for an efficient search through the problem space: mutation, crossover and election, where the last operator was introduced by the economic literature (Arifovic, 1995).

Mutation At each iteration, every entry in each chromosome has a small probability to mutate, in which case it changes its value from zero to one and vice versa. The mutation operator utilizes the binary representation of the arguments. A single change of one bit at the end of the chromosome leads to a minor, numerically insignificant change of the argument. But with the same probability a mutation of a bit at the beginning of the chromosome can occur, which changes the argument drastically. With this experimentation, GA can easily search through the whole parameter space and have a good chance of shifting from a local maximum towards the region containing the global one.

Crossover Pairs of arguments can, with a predefined probability, exchange predefined parts of their respective binary strings. In practice, the crossover is set to exchange subsets of the arguments. For example, if the objective function has two arguments, crossover would swap the first argument between pairs of trial arguments. This allows for experimentation with different mixtures of arguments.

Election The election operator is meant to screen out inefficient outcomes of the experimentation phase. This operator transmits the new chromosomes (selected from the old generation and treated with mutation and crossover) into the new generation only if their functional value is greater than that of the original old argument. This operator ensures that once the routine finds the global solution, it will not diverge from it due to unnecessary experimentation.


The procreation routine and the three evolutionary operators have a straightforward economic interpretation for a situation in which the agents want to optimize their behavioral rules, e.g. price forecasting heuristics. Procreation means that, like in the case of HSM, people focus on better solutions (or heuristics). Mutation and crossover are experimentation with the heuristics' specifications, and finally the election ensures that the experimentation does not lead to suboptimal heuristics.

For the sake of presentation, we give the specific formulation of our GA in Appendix A. For a technical discussion and examples of GA applications see Haupt and Haupt (2004).

3.2 Model specification

Consider the price-expectation feedback economy, which was introduced in the previous section and which is captured by the reduced-form law of motion (1). We consider a set of I agents, which we will call GA agents. GA agents use a general forecasting rule which requires an exact parameter specification, and each agent is endowed with H such specifications. For the whole following analysis, we take H = 20. Following the estimations by HHST09, we deploy a modified first-order rule (FOR) in order to give our model robust micro-foundations.

To be specific, agent i ∈ {1, . . . , I} at time t focuses on H linear prediction rules to predict the price at that period, p_t, given by

(6)  p^e_{i,h,t} = α_{i,h,t} p_{t−1} + (1 − α_{i,h,t}) p^e_{i,t−1} + β_{i,h,t} (p_{t−1} − p_{t−2}),

where p^e_{i,h,t} is the prediction of price p_t, formulated by agent i at time t conditional on using rule h, and p^e_{i,t−1} is the final prediction by the agent of the price at time t−1, which the agent submitted to the market.

Notice that this specification (6) is a special case of the general FOR from equation (5), with the anchor, or fundamental price, weight α₃ = 0. We experimented with specifications with some possible anchors, but they could not properly account for the dynamics in the positive feedback market.⁴ We will refer to the α_{i,h,t} ∈ [0, 1] parameter as the price weight, in contrast to the past predictions: the higher it is, the less conservative the agent is in her predictions. The trend extrapolation parameter β_{i,h,t} shows the extent to which the agent wants to follow the recent price change. For convenience we will drop the subscripts of these two parameters whenever we refer to them in general.

Contrary to the price weight α, the choice of the allowed interval for the trend extrapolation β is not immediately clear. Experimentation has led us to take the upper bound of β to be equal to 1.1, as this specification seems to fit the positive feedback well.⁵ There are then two equally intuitive choices for the lower bound. At first glance, the interval should be symmetric around zero, so that the agents can learn to contrast the trend to the same degree as they

⁴ To be specific, we looked at the average price so far, and at the fundamental price itself; see Appendix C for a discussion. Note also that (6) is a combination of the two heuristics (trend extrapolation and adaptive expectations) used by Anufriev et al. (2012), who also had no need for an anchor for their HSM.

⁵ Please refer to Appendix C for a discussion.


can extrapolate it. We refer to the model with such a specified β ∈ [−1.1, 1.1] as GA FORT+C (trend and contrarian). On the other hand, it is not immediately clear that people perceive trend-contrarian and trend-extrapolative rules in the same way. In fact, HHST09 report only two subjects using contrarian strategies, so one may find it more appropriate to focus on a model with β ∈ [0, 1.1], which we will refer to as GA FORT (trend only). For the HHST09 experiment, the two specifications behave for all practical purposes in the same way, so for the sake of brevity we will report results only for the model with unrestricted contrarian rules, GA FORT+C. However, this will not be the case for other experimental data, to which we will come back in the next section.

Define H_{i,t} as the set of such heuristics of agent i at time t. We emphasize that these sets can be heterogeneous between the agents. Furthermore, GA agents do not want to have a static set of heuristics. Instead, they want to learn: to optimize the heuristics so that they can adapt to the changing landscape of the economy. For example, in some circumstances it may pay off to focus on the trend in the data, and agents would like to find the optimal degree of trend following by experimenting with different trend extrapolation coefficients β_{i,h,t}. Agents do so by updating the set of heuristics with GA.

To be specific, the agents evaluate their heuristics based on the hypothetical forecasting performance of these heuristics in the previous period, where their objective is the predictions' mean squared error (MSE). Formally, at the beginning of each period t, they focus on the following hypothetical performance measure:

(7)  U_{i,h,t} = exp( −(p^e_{i,h,t−1} − p_{t−1})² ).

Thus define the normalized performance measure as:

(8)  π_{i,h,t} = U_{i,h,t} / Σ_{k=1}^{H} U_{i,k,t},

which is a logit transformation of the MSE. The normalized performance measure (8) can be directly interpreted as the weight attached to each heuristic h by agent i at time t: the probability with which the agent wants to use that heuristic.⁶
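The pair (7)–(8) can be written down in a few lines. This sketch uses our own function names, not the paper's:

```python
import math
import random

def performance(prediction, realized):
    # equation (7): U = exp(-(forecast error)^2)
    return math.exp(-(prediction - realized) ** 2)

def heuristic_weights(predictions, realized):
    # equation (8): normalize the performances into probabilities
    u = [performance(p, realized) for p in predictions]
    total = sum(u)
    return [x / total for x in u]

# Three heuristics predicted 59, 60 and 63, and the realized price was 60:
weights = heuristic_weights([59.0, 60.0, 63.0], 60.0)
assert abs(sum(weights) - 1.0) < 1e-12   # the weights are probabilities
assert weights.index(max(weights)) == 1  # the exact forecast gets the most weight

# The agent then samples one heuristic index with these probabilities:
chosen = random.choices(range(3), weights=weights)[0]
assert chosen in (0, 1, 2)
```

The exponential in (7) makes the weights fall off very quickly with the squared error, so a heuristic a few price points off is almost never sampled against an accurate one.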

Agents compute the weights U_{i,h,t} twice each period. The first time happens during the learning phase. Once p_{t−1} becomes available to the subjects, the heuristic parameters from H_{i,t−1} are encoded as GA binary strings and undergo one iteration of the GA operators to form H_{i,t}, where U_{i,h,t} is used as the GA objective function. The important thing is that the agents run independent GA iterations; more formally, the model consists of I independent GA procedures. Intuitively, the agents cannot observe each other's heuristic sets; in particular, they cannot exchange successful specifications. The GA procedure utilizes procreation and three

⁶ Notice that (8) is different from the actual experimental payoff received by the subjects, which was used by Hommes and Lux (2011) in their GA model. We decided to use this performance measure for two reasons. First, our model becomes general: it can be directly applied to other experimental data sets without a change to its core ingredient. Second, in this way we obtain a clear link to the literature on HSM models, which often uses the logit transformation of MSE in a similar fashion.


Parameter                              Notation      Value
Number of agents                       I             6
Number of heuristics per agent         H             20
Number of parameters                   N             2
Allowed price weight                   [α_L, α_H]    [0, 1]
Allowed trend extrapolation, FORT+C    [β_L, β_H]    [−1.1, 1.1]
Allowed trend extrapolation, FORT      [β_L, β_H]    [0, 1.1]
Number of bits per parameter           {L_1, L_2}    {20, 20}
Mutation rate                          m             0.01
Crossover rate                         c             0.9
Lower crossover cutoff point           C_L           20
Higher crossover cutoff point          C_H           1 (none)
Performance measure                    U(·)          exp(−MSE(·))

Table 1: Parameter specification used by the Genetic Algorithms agents.

evolutionary operators, with each agent using the same specification of parameters, shown in Table 1. U_{i,h,t} is taken directly as the objective function; crossover exchanges the parameters α_{i,h,t} and β_{i,h,t} between heuristics; and each parameter is encoded with 20 bits, meaning 40 bits per heuristic.⁷
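A plausible reading of this encoding is that each 20-bit string maps linearly onto the parameter's allowed interval. The paper's exact scheme is defined in its Appendix A, so the sketch below is an assumption for illustration only:

```python
# Linear 20-bit encoding of one heuristic parameter onto an interval [lo, hi].

BITS = 20

def decode(chrom, lo, hi):
    # read the bit list as an unsigned integer, then rescale to [lo, hi]
    value = int("".join(map(str, chrom)), 2)
    return lo + (hi - lo) * value / (2 ** BITS - 1)

def encode(x, lo, hi):
    value = round((x - lo) / (hi - lo) * (2 ** BITS - 1))
    return [int(b) for b in format(value, f"0{BITS}b")]

# Round-trip the price weight alpha = 0.75 on its interval [0, 1]:
alpha_bits = encode(0.75, 0.0, 1.0)
assert len(alpha_bits) == BITS
assert abs(decode(alpha_bits, 0.0, 1.0) - 0.75) < 1e-5

# A heuristic (alpha, beta) is then a 40-bit concatenation; beta on [-1.1, 1.1]:
beta_bits = encode(-0.5, -1.1, 1.1)
assert len(alpha_bits + beta_bits) == 40
```

With 20 bits the grid step is (hi − lo)/(2²⁰ − 1), i.e. about 10⁻⁶ of the interval, which is why a flipped trailing bit is numerically insignificant while a flipped leading bit moves the parameter by roughly half the interval.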

After the agents have learned, that is, after the agents have updated their heuristic sets, they have to pick one specific heuristic in order to define their prediction for period t, p^e_{i,t}. For this task they sample one heuristic h ∈ H_{i,t} with the normalized performance measures (8) as probabilities, computed after the set of heuristics was updated (because of the optimization step, the set H_{i,t} can in principle be different from H_{i,t−1}). This means that the actual probabilities (8) have to be recalculated.

The timing of the model is the following. Until some moment the agents cannot learn: the rule itself requires past prices and predictions. In those periods, the agents sample random predictions from a predefined distribution which we take as exogenous. The heuristics are initialized at random from a uniform distribution: each agent i has 20 strings of 40 bits representing H_{i,t} (Table 1), and each bit is initially 0 or 1 with equal probability. Once the initial periods are over and the agents can start to learn, each period consists of three steps:

1. Agents independently update their heuristics using one GA iteration, where the GA criterion function is U_{i,h,t};

2. Agents pick particular heuristics and generate their predictions, where the probability that heuristic h is chosen by agent i is given by the recalculated fitness probability π_{i,h,t};

3. The market price is realized according to (1) and the agents observe the new price.

⁷ Our model seems to be robust against reasonable parameter changes.


We would like to emphasize the way in which we have chosen the specification of the model. Given its complexity, estimation of its parameters is for all practical purposes infeasible. As we will discuss later, there is a way to obtain likelihood measures for the model with the Auxiliary Particle Filter (APF), but it remains computationally demanding, and so can be used for calculating a small set of comparative statistics at most.⁸

On the other hand, GA parameters have no direct economic interpretation, and we are not interested in identifying them with uncanny precision. On the contrary, we prefer to rely on a predefined specification that has well-known properties and is widely used in the literature. To be specific, we use exactly the same set of parameters as Hommes and Lux (2011).⁹ As mentioned before, the variable that requires calibrating is the allowed trend extrapolation β. We used the experimental data reported by HHST09 to fine-tune our model in this respect.¹⁰

Despite the model not being directly estimated, we find it able to replicate the stylized results of the HHST09 experimental data, both on the aggregate and the individual level.

    3.3 50-period ahead simulations for HHST09

The first test of the fit of our model to the experimental data is 50-period ahead simulations for the HHST09 experiment (as was the number of periods for all experimental groups).11 We take the feedback equations (4) and (3) for the negative and positive feedback respectively and simulate our model for 50 periods in total, without any information from the experiment after (and including) period 2, specifically the realized prices and predictions.12

Such a 50-period ahead simulation depends on the random numbers used by the model, since GA is a stochastic procedure. To understand the 50-period ahead behavior of the model, we resample it until we obtain a satisfactory distribution of its dynamics. We use this Monte Carlo (MC) experiment to compare the model with the experimental data (with an MC sample equal to 1 000). We emphasize that this is a difficult test, since it requires the model to stay close to the data for 50 periods.13
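The resampling exercise amounts to simulating many independent 50-period paths and summarizing their distribution; a minimal sketch (`simulate` is an assumed user-supplied function returning one 50-period price path, not the paper's code):

```python
import numpy as np

def mc_bands(simulate, n_runs=1000, seed=0):
    # Draw n_runs independent simulated price paths and summarize their
    # distribution by the median and the 2.5%/97.5% quantiles per period.
    rng = np.random.default_rng(seed)
    paths = np.array([simulate(rng) for _ in range(n_runs)])
    lo, med, hi = np.percentile(paths, [2.5, 50.0, 97.5], axis=0)
    return lo, med, hi
```

The same bands can then be plotted against the experimental observations, as in Figure 2.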

For initialization, the model requires exogenous initial predictions.14 In the experiment, the initial predictions varied between and within the groups, whereas the group average seemed to affect the later dynamics under the positive feedback treatment (Anufriev et al., 2012). Therefore, we do not want to use the same initial predictions for all the 50-period ahead

8 For a dual-core Pentium with a 2.7GHz clock and 3.21 GB of RAM, one run of the APF for one experimental group takes approximately 25 minutes for a relatively small number of 32 particles.

9 The exception is that we take the mutation rate m = 0.9, instead of 0.6 as in Hommes and Lux (2011). Here we follow the suggestion of Haupt and Haupt (2004). This has no significant influence on our results.

10 Please refer to Appendix C for a discussion.

11 All simulations, as well as the GA and APF libraries, were written in the Ox matrix algebra language (Doornik, 2007) and are available on request.

12 We include the realizations of the shocks to the feedback equation.

13 In one of the positive feedback treatment groups, one of the subjects out of the blue predicted a price ten times higher than both his previous forecast and the realized market price. This destabilized the whole market for a number of periods. In the following analysis, we follow Anufriev et al. (2012) and omit this group, and hence focus on six positive feedback and six negative feedback treatment groups.

14 Recall from the previous discussion that we initialize the heuristics with a uniform distribution. We do not change that for the 50-period ahead simulations, leaving only the issue of the initial predictions.



simulations. We follow the suggestion of Diks and Makarewicz (2013) that a simple bootstrap of the experimental initial predictions may be inefficient for a large MC. Instead we sample these from a distribution calibrated by Diks and Makarewicz (2013).

[Figure 1 panels: (a) Experimental Group 1 and (c) sample GA FORT+C under the negative feedback; (b) Experimental Group 1 and (d) sample GA FORT+C under the positive feedback.]

Figure 1: HHST09: typical results for the experimental groups and sample 50-period ahead simulations of the GA FORT+C model. The black line represents the price and the green dashed lines are the individual predictions.

For a first impression, consider Figure 1 with sample paths from the experiment and simulations of GA FORT+C. Notice that each group or simulation has different initial predictions. Despite that, for both types of feedback, the experimental data and the 50-period ahead simulations are similar. Under the positive feedback, the GA agents coordinate in terms of predictions in around three to four periods (Figure 1d). Moreover, the distance between their predictions is smaller in the second period than in the first. Despite this coordination, the agents oscillate around the fundamental and the price overshoots and undershoots the fundamental price p^f = 60, in a similar vein to the experimental groups under this treatment (Figure 1b). Under the negative feedback (Figure 1c), the price is pushed to the fundamental outcome in around 5 periods. Only then can the GA agents actually converge as well. Before that, their behavior is volatile, much like in the experimental groups (Figure 1a).

These sample time paths are representative of the MC study for GA FORT+C.15 Like the authors of the experiment, we focus on the distance of the realized price from the fundamental (aggregate behavior) and the standard deviation of the six individual predictions at each period (degree of coordination; individual behavior).

15 Results for GA FORT are comparable.



[Figure 2 panels: (a) realized price and (c) predictions standard deviation under the negative feedback; (b) realized price and (d) predictions standard deviation under the positive feedback.]

Figure 2: HHST09: Monte Carlo for the GA FORT+C 50-period ahead simulations. Realized price and coordination (standard deviation of the individual predictions) over time. The green dashed line represents the experimental median, black pluses are real observations; blue dotted lines are the 95% confidence interval and the red line is the median for GA FORT+C. The left column displays the negative feedback, the right the positive feedback. 1 000 simulated markets for each feedback.

We report the results in Figure 2. As for the prices, Figure 2a shows the median and the 95% Confidence Interval (CI) of the model prices across time for the negative feedback. For 95% of the simulations, the price is within the [50, 70] interval after roughly 5 periods, while after the 10th period it clearly converges to the fundamental, as happened in the experiment. A different pattern emerges for the positive feedback treatment (Figure 2b). Here, the distribution of the prices does not collapse into a small region even after 50 periods, when 95% of the prices stay in a wide [55, 75] interval. Oscillations are a clear pattern, with the MC median (and the 95% CI) price going up until around the 20th period, then down for the next twenty periods and up again. The fundamental price p^f = 60 is not an unlikely outcome throughout the last 35 periods, but the price can easily reach 80 from below and 45 from above. This is a similar pattern to the experimental prices: the 95% CI of our model contains the bulk of the experimental prices, almost all the groups until period 40 and still roughly half of the observations afterward.

GA FORT+C replicates the subjects' coordination (standard deviation of the six individual predictions within each period) as well. For the positive feedback (Figure 2d), the six experimental groups coordinated after at most 5 periods. The same holds for the model, in which the 95% CI are narrow and within the same 5 periods fall close to zero and remain there. For the negative feedback (Figure 2c), the experimental subjects can have varied predictions even until the 15th period, the time required for the model's 95% CI to converge as well.



positive feedback. The specific performance measure was the mean squared error of the one-period ahead price predictions for the last 47 periods, defined for each group as

(9)    MSE^M_X = (1/47) \sum_{t=4}^{50} ( p^{Gr X}_t - p^M_t )^2,

where p^{Gr X}_t denotes the realized price at period t in an experimental group X and p^M_t is the price p_t predicted by the model M conditional on the behavior of the group X until period t-1. For the Rational Expectations model, p^{e,RE}_{i,t} = 60 and p^{RE}_t = 60 + ε_t regardless of the realized prices or predictions until t and the type of the feedback.
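For a deterministic benchmark, the MSE in (9) is a direct computation; a minimal sketch (the list-index convention, with element t-1 holding period t, is an assumption of this illustration):

```python
def mse_last_47(group_prices, model_prices):
    # Equation (9): average squared one-period ahead price error
    # over periods t = 4, ..., 50 (47 terms).
    terms = [(group_prices[t - 1] - model_prices[t - 1]) ** 2
             for t in range(4, 51)]
    return sum(terms) / 47.0
```

For RE, `model_prices` would simply be the constant fundamental, 60, in every period.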

A similar exercise can be done for our GA model, but it requires more involved statistical measures in comparison with the HSM. The only stochastic element in the HSM is the shocks to the feedback equation (1). This makes the HSM a deterministic model conditional on the price-predictions feedback. The same holds for RE and many other homogeneous expectations models, including simple naive expectations, trend extrapolation and adaptive expectations. Therefore, the MSE measure (9) can be computed directly for these models.

This is not the case for our GA model. GA agents will update their heuristic sets conditional on the past prices, and this learning is stochastic and highly nonlinear in nature, based on a non-smooth period-to-period transition distribution. To address this issue, we will use the Sequential Importance Sampling with Resampling (SISR) technique.16

We emphasize that our model is agent-based by design, which means that we can attempt to trace the individual behavior in the experiment. This is in contrast with RE or the HSM, which serve as stylized tools of aggregate market description. Therefore, unlike Anufriev et al. (2012), we will focus on an alternative performance measure, which is the one-period ahead forecast of individual decisions. As a result, we obtain a statistical measure of how well our agent-based model explains the data on the agent (or individual) level. We will also comment on how to make a crude test of different models with our approach.

We introduce the following notation. Let a_t denote the state of the model at time t, by which we mean the set H_t of the six sets of chromosomes that correspond to the six sets of heuristics H_{i,t}. Notice that in our model for the HHST09 experiment, a_t changes 49 times: from period t = 2, when the chromosomes are randomly initialized, until period t = 50, when the chromosomes are updated for the last time conditional on the realized prices and predictions in period t = 49. We can observe the chromosomes only indirectly, through the realized prices and the predictions picked by the agents (observational variables). Both the state and the observed variables evolve according to a distribution q(·). Denote also p^e_t = {p^e_{1,t}, ..., p^e_{6,t}} the set of six individual predictions from period t in an experimental group. Here and later, t in the superscript denotes the history of the variable, so p^{t-1} = {p_1, ..., p_{t-1}} and p^{e,t-1} = {p^e_1, ..., p^e_{t-1}}.

Our problem is to define the baseline distribution q(p^e_t | a_{t-1}), that is, to evaluate the distribution of the real predictions p^e_{i,t} given the predictions from period t-1 and what they signal the chromosomes H_{t-1} from period t-1 could have been. This is a typical state-space model problem. Essentially, q(p^e_t | a_{t-1}) can be decomposed as

(10)    q(p^e_t | a_{t-1}) = q(p^e_t | a_t) q(a_t | a_{t-1}).

16 In order to avoid a technical discussion which is not important for our paper, in the following we assume that the reader is familiar with SISR and sequential MC techniques. If that is not the case, a general introduction can be found in Doucet et al. (2000).

Following Anufriev et al. (2012), we assume that conditional on the history until t-1 and the chromosome set H_t, the distribution of the six realized predictions is given by the standard normal distribution.17 Given the information structure of the experiment, we can assume that in each period the individual predictions are independent between the agents, and their joint density is a simple product of the marginal densities of the individual forecasts. To be specific, we represent it as

(11)    p^e_t ~ q(p^e_t | a_t, p^{t-1}, p^{e,t-1}) = \prod_{i=1}^{6} N( p^e_{i,t} - p^{e,GA}_{i,t}, 1 ).
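Evaluated at the observed forecasts, the density in (11) is a product of six standard normal densities of the differences between observed and model-implied forecasts; a minimal sketch:

```python
import math

def baseline_density(pe_obs, pe_ga):
    # Equation (11): product over the six agents of the standard
    # normal density evaluated at p^e_{i,t} - p^{e,GA}_{i,t}.
    def phi(x):
        return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    q = 1.0
    for obs, model in zip(pe_obs, pe_ga):
        q *= phi(obs - model)
    return q
```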

We simplify the notation by suppressing p^e_t ~ q(p^e_t | a_t). By p^{e,GA}_t we mean the individual price forecasts predicted by the GA model. This is based not on the price actually picked by each agent, but rather on their expected pick (Anufriev et al., 2012). Define

(12)    p^{e,GA}_{i,t} = \sum_{h=1}^{H} ( π_{i,h,t} p^{e,GA}_{i,h,t} )

(see formula (8)) and hence define

(13)    p^{GA}_t = F( p^{e,GA}_{1,t}, ..., p^{e,GA}_{6,t} )

to be the price predicted by the GA model for the feedback structure (1). In the general case, the density (11) of the experimental predictions p^e_t is just a product of standard normal densities centered around the forecasts predicted by a model.
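Equations (12) and (13) amount to a fitness-weighted average followed by the feedback map; a minimal sketch (the feedback function F is passed in as an assumption, since its form depends on the treatment):

```python
def expected_forecast(probs, forecasts):
    # Equation (12): fitness-probability-weighted average of the
    # forecasts of agent i's H heuristics.
    return sum(p * f for p, f in zip(probs, forecasts))

def model_price(feedback, agent_forecasts):
    # Equation (13): price implied by the feedback map F applied
    # to the six expected individual forecasts.
    return feedback(agent_forecasts)
```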

Unfortunately, q(a_t | a_{t-1}) is not that simple to work with. As explained earlier, it is not feasible to represent this problem analytically or to linearize it. Nevertheless, it is fairly simple to simulate a_t conditional on a_{t-1}. Therefore, we focus on the SISR technique known as the Auxiliary Particle Filter (APF) (Johansen and Doucet, 2008).18

Denote the importance distribution as g(·) and assume that g(a_t | a_{t-1}) = q(a_t | a_{t-1}) (which is a standard assumption for the APF). It follows that g(p^e_t | a_{t-1}) can be decomposed in the same manner as the baseline distribution in equation (10). For g(p^e_t | a_t) we use (a product of six) Student-t densities with one degree of freedom. This density is again analytically straightforward and compares p^e_t with the p^{e,GA}_t price forecasts predicted by the GA model as in equation (12).

17 For the APF, the choice of the variance of the distributions is not important. We use the standard normal for the sake of computational efficiency.

18 It is extremely difficult, also in conceptual terms, to define a reverse distribution of the model at period t conditional on period t+1, given the complexity of the GA operators. As a result, we leave open the question whether econometrically more efficient filtering-smoothing techniques can be used for the case of our model.



                    Negative feedback         Positive feedback
MSE                 Prices     Predictions    Prices     Predictions

Trend extr.         21.101     35.648          0.926      4.196
Adaptive             2.3       14.912          2.999      6.482
Contrarian           2.249     14.856          3.864      7.436
Naive                3.09      15.782          1.822      5.184
RE                   2.571     15.21          46.835     54.811
HSM                  2.999     17.106          0.889      4.156
GA: FORT+C           3.623     16.913          1.496      6.943
                    (0.395)    (1.496)        (0.01)     (0.219)
GA: FORT             3.134     17.951          0.881      6.252
                    (2.077)    (8.055)        (0.08)     (0.183)

Table 2: Mean squared error (MSE) of the Trend Extrapolation, Adaptive, Contrarian, Naive and Rational Expectations models, the Heuristic Switching Model, and the Genetic Algorithms models FORT+C (with contrarian rules) and FORT (without contrarian rules): one-period ahead predictions of the experimental prices and predictions, averaged over six negative feedback and six positive feedback groups. In parentheses the standard deviations of the respective measures of the GA models are provided.

The specific APF algorithm is discussed by Johansen and Doucet (2008). We use 1024 particles a^b_t for b ∈ {1, ..., 1024} (1024 sets of six heuristic sets for the six agents), with full resampling. The core problem of our investigation lies with the g(p^e_t | a_{t-1}) distribution, which cannot be tracked analytically. We approximate it with an MC integral in the following fashion.

At the beginning of each iteration t > 1 of the APF, for each particle a^b_{t-1} (that is, for each set of six agents and their chromosomes), we simulate 256 counter-factual realizations of the next period predictions p̃^{e,s}_t for each particle b, where s ∈ {1, ..., 256}. To be specific, for the particle b, for each simulation s, given H_{t-1} we draw one prediction p̃^e_{i,t-1} for each agent. This also generates the counter-factual price p̃_{t-1}, conditional on which the agents use the GA evolutionary operators to update H_{t-1} into the counter-factual H̃_t. We use these to compute the price p̃_t and the p̃^{e,s}_t individual price forecasts predicted by the model as in equation (13). We therefore define

(14)    g(p^e_t | a_{t-1}) = (1/256) \sum_{s=1}^{256} \prod_{i=1}^{6} T_1( p̃^{e,s}_{i,t} - p^e_{i,t} ),

where T_1 denotes the Student-t distribution with one degree of freedom.
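The MC approximation in (14) averages, over the counter-factual draws, the product of six Student-t densities with one degree of freedom (i.e., standard Cauchy densities); a minimal sketch:

```python
import math

def t1_pdf(x):
    # Density of the Student-t distribution with 1 degree of freedom
    # (the standard Cauchy density).
    return 1.0 / (math.pi * (1.0 + x * x))

def importance_density(pe_obs, counterfactual_draws):
    # Equation (14): average over the S simulated draws of the product
    # of the six T_1 densities of the forecast differences.
    total = 0.0
    for draw in counterfactual_draws:     # draw = six simulated forecasts
        prod = 1.0
        for sim, obs in zip(draw, pe_obs):
            prod *= t1_pdf(sim - obs)
        total += prod
    return total / len(counterfactual_draws)
```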

We use the baseline (11) and the importance (14) distributions for the standard APF particle weighting and updating.19 We run a separate APF for each of the twelve investigated experimental groups. As in the 50-period ahead simulations, the chromosomes (or the particles) are initialized at random from the uniform distribution defined above. Notice, however,

19 Both the baseline and the importance densities are a product of six independent densities, which can take very low values in the first periods for some experimental groups. To ensure numerical stability, we multiply both joint densities by 10^60 (or each of the six marginal densities by 10^10) and truncate them at 10^-100.
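Footnote 19's stabilization device can be sketched as follows; the exponent signs are garbled in this copy, so reading the scaling as an upward factor of 10^60 and the truncation as a lower bound at 10^-100 is an assumption of this illustration:

```python
def stabilized_joint(q):
    # Scale the joint density (a product of six very small marginals)
    # by 10^60 and bound it away from zero at 10^-100.
    return max(q * 1e60, 1e-100)
```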



that in this case we do not have the problem of the initial predictions or prices, since the APF works on a period-to-period basis, independently for each experimental group. Interestingly, for deterministic models like RE or the HSM or homogeneous heuristic models, the APF effectively reduces to the procedure reported by Anufriev et al. (2012), since all the particles would be the same and have the same weights.
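The overall filtering loop follows the generic SISR pattern: propagate every particle forward, weight it by the density of the observed forecasts, and resample. A simplified bootstrap-style sketch with scalar states (not the APF variant used in the paper; all callbacks are assumptions):

```python
import random

def sisr_filter(n_particles, observations, init_state, propagate, weight,
                seed=0):
    # Generic sequential importance sampling with full resampling.
    rng = random.Random(seed)
    particles = [init_state(rng) for _ in range(n_particles)]
    filtered_means = []
    for obs in observations:
        # Simulate a_t | a_{t-1} for every particle.
        particles = [propagate(p, rng) for p in particles]
        # Weight each particle by the density of the observation.
        w = [weight(p, obs) for p in particles]
        s = sum(w)
        w = [x / s for x in w] if s > 0 else [1.0 / n_particles] * n_particles
        # Weighted mean of the (scalar) particle states.
        filtered_means.append(sum(wi * p for wi, p in zip(w, particles)))
        # Full multinomial resampling.
        particles = rng.choices(particles, weights=w, k=n_particles)
    return filtered_means
```

In the paper the state is the full set of chromosomes rather than a scalar, and the weighting uses the densities (11) and (14).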

[Figure 4 panels: negative feedback group 1 — (a) price, (b) price forecasts of subject 1; positive feedback group 1 — (c) price, (d) price forecasts of subject 1.]

Figure 4: Sample results of the Auxiliary Particle Filter for the HHST09 experiment: one-period ahead predictions of the GA FORT+C model for prices and for the price forecasts of subject 1 from sample groups from each treatment. The black line denotes the experimental variable and the red boxes display the APF one-period ahead predictions.

For each experimental group, we focus on fourteen variables in total, which we obtain by using the APF weighting of the particles. For each of the last 47 periods in each group, we look at the (mean) one-period ahead prediction of the price, as well as at the (mean) one-period ahead predictions of the individual price forecasts. Next, for the prices and the six individual predictions from that period, we compute the MSE against the original data. Notice that we compute the expected APF MSEs, instead of the MSEs for the expected prices or predictions. We compute these variables for each group, and average them separately over the two treatments to obtain the average MSE of the model's prediction of the prices and the individual forecasts.

In the same manner, we can also compute the mean variance of the MSE of the predicted prices and price forecasts for the two treatments. A crude measure of the 95% CI (99% CI) of these four MSEs is simply the average MSE plus and minus twice (thrice) the square root of its respective variance. It can be used as the basis of a simple test: if these CI coincide with a CI of a different stochastic model, or if they include the (deterministic) MSE of a deterministic



model, the two models cannot be distinguished; otherwise the one with the lower MSE is better. Sample results of the one-period ahead forecasts for the GA FORT+C specification are presented in Figure 4. For both treatments, the model clearly follows both the prices and the individual predictions. We report the average MSE of the one-period ahead predictions of the prices and the individual price forecasts in Table 2. We also compare our model with benchmark rules: the HSM, RE and a small number of homogeneous linear heuristics.20 Most models (with the exception of the trend extrapolation heuristic) are indistinguishable for the case of the negative feedback. This is not surprising, since under this type of feedback almost any reasonable learning mechanism will quickly converge to the fundamental price, just like the subjects in the experiment.
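The crude interval comparison described earlier can be sketched as follows (a deterministic model is handled by a degenerate interval with zero variance):

```python
import math

def crude_ci(mean_mse, var_mse, z=2.0):
    # Average MSE plus/minus z standard deviations
    # (z = 2 for the 95% CI, z = 3 for the 99% CI).
    half = z * math.sqrt(var_mse)
    return (mean_mse - half, mean_mse + half)

def distinguishable(ci_a, ci_b):
    # Two models can be told apart only if their intervals are disjoint;
    # then the one with the lower MSE is preferred.
    return ci_a[1] < ci_b[0] or ci_b[1] < ci_a[0]
```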

On the other hand, the positive treatment, with its oscillations, offers a real test for the models. This proves a disaster for RE, but not for the GA model, which outperforms RE by a factor of 10 in terms of individual forecasts. Interestingly, the specification GA FORT without contrarian rules seems to have a slightly better fit for both types of feedback, a difference significant especially for the positive one. Moreover, both specifications (and more so GA FORT) are at least as good as any other model for each treatment.

Together with the 50-period ahead MC exercise, this shows that our model captures both the aggregate and the individual behavior in the LtF experiment reported by HHST09, in terms of both short- and long-run dynamics. We emphasize again that the APF measure is important evidence, since it evaluates the agent-based structure of our model against the behavior of the individuals in the experiment. Moreover, our methodology can easily be adapted to other stochastic models, including GA models with different forecasting rules.

    4 Evidence from other experiments

The results of our Genetic Algorithms model for HHST09 are promising. Nevertheless, the experiment is based on a simple linear feedback. We argue that our model can be used to explain more complicated experimental settings. To be specific, we look at three other experiments that offer a hierarchy of challenges for the GA model:

    1. BHST12: linear feedback with large and unanticipated shocks to the fundamental price;

2. HSTV07; V01: nonlinear (cobweb) negative feedback economy investigated by Hommes and Lux (2011);

3. HSTV05: non-linear positive feedback economy, with two-period ahead predictions.

    4.1 Shocks to the fundamental price

BHST12 report an LtF experiment with almost the same structure as HHST09: positive and negative feedback of the linear structure given by (2). They use the same damping factor |B| = 20/21

20 For the definition of the benchmark rules, please refer to Table 7, Appendix D.



for the two treatments (positive and negative feedback), but there are two large and unanticipated shocks to the fundamental price A. Regardless of the feedback, the fundamental price changes from p^f = 56 to p^f = 41 starting from period t = 21 and then to p^f = 62 starting from period t = 44 until the last period t = 65.
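The fundamental price schedule of BHST12 is thus a simple step function of the period:

```python
def fundamental_price(t):
    # Periods 1..65: p^f = 56 before the first shock, 41 from t = 21,
    # and 62 from t = 44 until the last period t = 65.
    if t < 21:
        return 56
    if t < 44:
        return 41
    return 62
```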

[Figure 5 panels: (a) experimental group 1 and (c) sample GA FORT+C under the negative feedback; (b) experimental group 8 and (d) sample GA FORT+C under the positive feedback.]

Figure 5: BHST12: typical results for the experimental groups and sample 65-period ahead simulations of the GA FORT+C model. The black line represents the price and the green dashed lines are the individual predictions.

The results of this experiment are similar to those of HHST09, and sample time paths are shown in Figure 5. Under the negative feedback (Figure 5a), a shock to the fundamental disrupts the subjects' coordination and is followed by a small number of volatile forecasts, which are pushed to the new fundamental only when the price itself has converged. Under the positive feedback (Figure 5b), shocks do not break the coordination, and the predictions and prices move smoothly towards the new fundamental, eventually over- or undershooting it.

We run the same 65-period ahead Monte Carlo (MC) study as for the HHST09 experiment. Each simulation is initialized with a uniform distribution of heuristics (see the previous section). As for the initial forecasts, we follow Diks and Makarewicz (2013) by using their procedure to estimate the distribution of the initial predictions for BHST12 and sample directly from it.21 We again look at the prices and the standard deviation of the individual predictions. The two model specifications GA FORT+C and GA FORT yield comparable results, so we report only the ones for the unrestricted GA FORT+C.

The lower part of Figure 5 presents sample, representative simulations for GA FORT+C. Notice that the two experimental groups and the two GA simulations reported in that figure

21 See Appendix B for the estimated distribution.



[Figure 6 panels: (a)/(b) realized price, (c)/(d) distance from the fundamental, (e)/(f) predictions standard deviation, for the negative and positive feedback respectively.]

Figure 6: BHST12: Monte Carlo for the GA FORT+C 65-period ahead simulations. Realized price, distance of the realized price from the fundamental, and coordination (standard deviation of the individual predictions) over time. The green dashed line is the experimental median, black pluses are real observations; blue dotted lines are the 95% confidence interval and the red line is the median of GA FORT+C. The left column displays the negative feedback, the right the positive feedback. 1 000 simulated markets for each feedback.

have different initial predictions. Figure 5c displays a sample time path for the negative feedback. As in the case of the real group under this feedback, the two shocks to the fundamental cause a break in the coordination and volatile behavior. The individual predictions are pushed to the fundamental price once the price itself has converged. The opposite happens in the case of the positive feedback (Figure 5d). The agents quickly coordinate, and the breaks in the fundamental are unable to spoil it. Instead, the GA agents move smoothly and over- or under-shoot the new fundamental.

Figure 6 reports the Monte Carlo (MC) experiment for GA FORT+C with an MC sample of 1 000, conducted in the same way as for the HHST09 experiment. Under the negative feedback (Figure 6a), the median price of the model with its 95% CI requires around 5 periods to converge to the fundamental at the beginning of the experiment, and again twice after the breaks to the



[Figure 7 panels: (a)/(b) price weight and (c)/(d) trend extrapolation coefficients, for the negative and positive feedback respectively.]

Figure 7: BHST12: Monte Carlo for the GA FORT+C 65-period ahead simulations. Chosen coefficients of the price weight and the trend extrapolation. Blue dotted lines are the 95% confidence interval, purple dashed lines are the 90% CI and the red line is the median for the GA model. The left column displays the negative feedback, the right the positive feedback. 1 000 simulated markets for each feedback.

                    Negative feedback         Positive feedback
MSE                 Prices     Predictions    Prices     Predictions

Trend extr.        114.061    121.329          1.183      2.165
Adaptive             3.689     10.332          3.776      4.618
Contrarian           5.92      12.534          4.737      5.559
Naive                9.979     16.81           2.411      3.286
RE                  13.871     20.923         55.133     60.859
HSM                 38.309     45.679          0.9996     2.024
GA: FORT+C          85.144    110.28           1.893      4.487
                    (0.172)    (0.44)         (0.102)    (0.124)
GA: FORT            10.228     26.317          1.177      3.532
                    (0.071)    (0.384)        (0.113)    (0.147)

Table 3: BHST12 experiment: Mean squared error (MSE) of the Trend Extrapolation, Adaptive, Contrarian, Naive and Rational Expectations models, the Heuristic Switching Model and the Genetic Algorithms models: one-period ahead predictions of the experimental prices and predictions, averaged over eight negative feedback and eight positive feedback groups. In parentheses the standard deviations of the respective measures of the GA model are provided.

fundamental. The shape of this convergence is similar to the one visible in the behavior of the experimental groups under this treatment. Moreover, if the new fundamental is lower/higher than the old one, the GA agents will under-/over-shoot it respectively, much like the subjects



in the experiment. This is seen in Figure 6c, which gives the distance of the price from the fundamental, for the experimental groups and GA FORT+C. Under the positive feedback, the prices in the GA model move more smoothly and there are no sharp breaks in the market predictions as the fundamental changes (Figure 6b). The realized prices move slowly towards the new fundamental (Figure 6d), to eventually pass it (under- or over-shoot it).

[Figure 8 panels: negative feedback group 1 — (a) price, (b) price forecasts of subject 1; positive feedback group 8 — (c) price, (d) price forecasts of subject 1.]

Figure 8: Sample results of the Auxiliary Particle Filter for the BHST12 experiment: one-period ahead predictions of the GA FORT+C model for prices and for the price forecasts of subject 1 from sample groups from each treatment. The black line denotes the experimental variable and the red boxes display the APF one-period ahead predictions.

The patterns of the coordination between the experimental subjects and the GA agents are seen in Figures 6e and 6f. Under the negative feedback, a break in the fundamental causes both the model and the experimental groups to have more varied predictions for up to ten periods. Under the positive feedback treatment, the predictions of both the real subjects and the GA agents retain a small standard deviation within each period, after no more than five periods and towards the end of the experiment, despite the breaks in the fundamental.

We also look at the coefficients used by our GA agents; see Figure 7. These are very similar to those from the 2009 experiment: trend extrapolation heuristics with a high price weight are much more important for the positive feedback market, where the whole 95% CI for the trend extrapolation becomes significantly positive towards the end of the experiment.

We also computed the MSE of the price and the individual forecasts predicted by our model one period ahead. For this task we apply the APF specified as in the previous section. Sample time paths are shown for GA FORT+C and FORT in Figures 8 and 9 respectively. Both



[Figure 9 panels: negative feedback group 1 — (a) price, (b) price forecasts of subject 1; positive feedback group 8 — (c) price, (d) price forecasts of subject 1.]

Figure 9: Sample results of the Auxiliary Particle Filter for the BHST12 experiment: one-period ahead predictions of the GA FORT model for prices and for the price forecasts of subject 1 from sample groups from each treatment. The black line denotes the experimental variable and the red boxes display the APF one-period ahead predictions.

model specifications replicate the prices and the individual predictions for the positive feedback groups, including the turning points of the oscillations. Under the negative treatment, the models' one-period ahead price forecasts are in general consistent with the data. However, GA FORT+C has problems in replicating the data in the first one or two periods immediately after the breaks in the fundamental. The 65-period ahead simulations show that for this treatment, the prices and predictions quickly converge to the fundamental, and so it does not matter which heuristics the GA agents choose. This leaves space for contrarian rules, with a conservative high weight on the past predictions. With such rules, once the agents observe a significant decrease of the price (as happens after the first break of the fundamental), they would forecast the next price to be close to or higher than the old fundamental, and so the model (with the negative expectations-price feedback) predicts an even further drop of the price (cf. Figures 8a and 8b). On the other hand, such an outcome is not possible once the contrarian rules are constrained out, as in the case of GA FORT (Figure 9).

We argue that this explains the results for the average MSE of the predicted prices and individual forecasts for the two treatments, which are reported in Table 3. GA FORT outperforms GA FORT+C significantly for the positive and dramatically for the negative feedback, and beats the HSM for the negative feedback. In either case, it does significantly better than RE and is the only reported model to have a decent fit for both feedback treatments. Together with the 65-period ahead simulations, this shows that the GA model is a good explanation of



                 Mean(p)    Var(p)    Mean(p^e)   Var(p^e)

Stable
Experiments       5.64*†     0.36*†    5.56*†      0.087*
GA: AR1           5.565      0.326     5.576       0.1
GA: FORT+C        5.628      0.372     5.571       0.082
  95% CI      [5.613, 5.643]  [0.359, 0.389]  [5.553, 5.59]  [0.065, 0.101]
GA: FORT          5.649      0.353     5.548       0.0565
  95% CI      [5.631, 5.667]  [0.341, 0.371]  [5.527, 5.57]  [0.043, 0.077]

Unstable
Experiments       5.85†      0.63*†    5.67*†      0.101*†
GA: AR1           5.817      0.647     5.645       0.16
GA: FORT+C        5.792      0.598     5.705       0.103
  95% CI      [5.744, 5.841]  [0.525, 0.746]  [5.667, 5.739]  [0.067, 0.171]
GA: FORT          5.825      0.557     5.694       0.079
  95% CI      [5.786, 5.863]  [0.487, 0.658]  [5.67, 5.719]   [0.052, 0.122]

Strongly unstable
Experiments       5.93†      2.62*     5.73        0.429*
GA: AR1           6.2        2.161     5.434       0.769
GA: FORT+C        5.809      2.172     5.832       0.345
  95% CI      [5.693, 5.908]  [1.626, 2.875]  [5.735, 5.918]  [0.182, 0.598]
GA: FORT          5.962      1.487     5.807       0.206
  95% CI      [5.876, 6.045]  [1.188, 1.834]  [5.75, 5.858]   [0.113, 0.347]

Strongly unstable, large group
Experiments       5.937†     1.783*    5.781*†     0.204*†
GA: AR1           6.183      1.571     5.515       0.5
GA: FORT+C        5.812      1.699     5.852       0.194
  95% CI      [5.731, 5.892]  [1.368, 2.157]  [5.779, 5.918]  [0.122, 0.338]
GA: FORT          5.972      1.316     5.804       0.173
  95% CI      [5.918, 6.026]  [1.118, 1.553]  [5.768, 5.843]  [0.111, 0.253]

Table 4: HSTV07 experiment under four treatments: stable, unstable and strongly unstable with 6 or 12 subjects. Average price and prediction, and their variances. Mean experimental statistics; GA simulations with the AR1 prediction rule for mutation rate m = 0.01 (mean statistics); GA simulations with the first order rule with or without contrarian rules (FORT+C or FORT; median statistics with 95% confidence intervals). Asterisk and dagger denote an experimental statistic which falls into the 95% CI of GA FORT+C and FORT respectively. Source for the experimental and AR1 entries: Hommes and Lux (2011).

    the dynamics from BHST12, especially the GA FORT specification.

    4.2 Cobweb economy

    HSTV07 and V01 report an LtF experiment in a cobweb economy setting. As discussed, Hommes and Lux (2011) investigate these data sets with a GA model based on an AR1 forecasting rule. It is thus important to check whether our model, with the FOR (6) instead, can also account for the difficult, non-linear price-expectations feedback of the cobweb economy.


    Treatments        Stable              Unstable            Strongly unstable
    MSE           Prices  Predictions  Prices  Predictions  Prices  Predictions

    Trend extr.    1.176     1.997      2.122     3.719      5.856    14.39
    Adaptive       0.108     0.328      0.434     0.549      2.784     2.863
    Contrarian     0.102     0.318      0.414     0.497      2.929     2.729
    Naive          0.196     0.448      0.577     0.788      3.095     3.731
    RE             0.048     0.248      0.364     0.385      2.257     1.844
    HSM            0.212     0.474      0.52      0.732      3.065     3.691
    GA: FORT+C     0.247     0.585      0.828     0.801      4.558     2.909
                  (0.086)   (0.35)     (0.238)   (0.392)    (0.76)    (0.53)
    GA: FORT       0.109     0.42       0.516     0.659      3.872     2.755
                  (0.069)   (0.303)    (0.198)   (0.31)     (0.607)   (0.405)

    Table 5: HSTV07 experiment: mean squared error (MSE) of the Trend Extrapolation, Adaptive, Contrarian, Naive and Rational Expectations models, the Heuristic Switching Model and the Genetic Algorithms model one-period ahead predictions of the experimental prices and predictions, averaged over six groups for each treatment (stable, unstable, strongly unstable). In parentheses the standard deviations of the respective measures of the GA model are provided.

    Following Hommes and Lux (2011), we simulate our model in the setting of the cobweb experiment (see HSTV07 for a formal definition of the feedback structure, for which we also use the experimental errors), with 6 independent agents and three parameter treatments: stable, unstable (on the verge of stability) and strongly unstable. We also look at the strongly unstable specification with 12 agents, an experiment reported by V01.

    As a first test for our model, we conduct an MC exercise in the vein of Hommes and Lux (2011). For each treatment, we compute six 50-period ahead simulations (six was the number of groups in each treatment) with different random numbers. 22 Next we compute the mean and standard deviation of the realized prices p and the predictions p^e. The variances correspond to the volatility of the market, which in the experiment was higher for the unstable cases (and which cannot be explained by RE). To obtain a proper Monte Carlo distribution, we repeat this procedure 1 000 times. This allows us to generate 95% confidence intervals. 23 We report the results in Table 4 for the two GA model specifications.
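The Monte Carlo procedure described above amounts to repeating the six-simulation exercise many times and taking percentiles of the resulting statistics. A minimal sketch follows; the `simulate_group` callable is a hypothetical stand-in for one 50-period run of the GA model, and the 2.5%/97.5% percentile interval is one common way to form the 95% CI:

```python
import numpy as np

def mc_confidence_intervals(simulate_group, n_groups=6, n_mc=1000, seed=0):
    # Monte Carlo distribution of treatment-level statistics: repeat the
    # n_groups-simulation exercise n_mc times, then report the median and a
    # 95% percentile interval of the mean and variance of realized prices.
    rng = np.random.default_rng(seed)
    means, variances = [], []
    for _ in range(n_mc):
        prices = np.concatenate([simulate_group(rng) for _ in range(n_groups)])
        means.append(prices.mean())
        variances.append(prices.var())
    summary = lambda x: (float(np.median(x)),
                         float(np.percentile(x, 2.5)),
                         float(np.percentile(x, 97.5)))
    return summary(means), summary(variances)
```

Any price-path generator can be plugged in for `simulate_group`; the function only sees the simulated prices.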

    Our 50-period ahead simulations explain the experimental data well, yield results similar to the ones reported in Hommes and Lux (2011) and perform significantly better than RE. The CIs of our model replicate 12 and 11 out of 16 experimental statistics for GA FORT+C and GA FORT respectively. Furthermore, the unexplained statistics are usually missed by a small error that can be attributed to rounding issues of the simulations and the experiment.

    The next exercise is the one-period ahead forecasting of the model. Here we look only at the 18 groups from HSTV07. We use the APF exactly as specified in the previous section and focus on the same set of variables: predicted prices and individual forecasts, as well as

    22 As in the two previous cases, the initial heuristics are uniform. Following Diks and Makarewicz (2013),we have also estimated the distribution of the initial predictions, see Appendix B.

    23 Hommes and Lux (2011) look only at the expected outcomes of their simulations.


    [Figure 10: six panels of one-period ahead price predictions, group 3 of the stable, unstable and strongly unstable treatments; (a, c, e): GA FORT+C, (b, d, f): GA FORT]

    Figure 10: Sample results of the Auxiliary Particle Filter for the HSTV07 experiment: one-period ahead predictions of prices from the third group of each treatment, for GA FORT+C and GA FORT. The black line denotes the experimental variable and the red boxes display the APF one-period ahead predictions.

    MSE of these measures. Sample price time paths for both model specifications for each of the three treatments are shown in Figure 10. For the stable and unstable treatments, the GA model closely follows the experimental prices and replicates the reversals of their volatile, short-period oscillations. The model has a worse fit for the strongly unstable treatment.
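For readers unfamiliar with particle filtering, the following stripped-down sequential importance resampling (bootstrap) filter conveys the idea behind the one-period ahead exercise; the Auxiliary Particle Filter used in the paper adds a look-ahead resampling stage that we omit here, and the particle state, priors and noise levels below are illustrative assumptions rather than the paper's calibration:

```python
import numpy as np

def bootstrap_filter(prices, n_particles=1000, obs_noise=0.5, seed=0):
    # Minimal sequential importance resampling filter. Each particle carries
    # (alpha, beta) coefficients of the first-order rule plus its last forecast.
    rng = np.random.default_rng(seed)
    alpha = rng.uniform(0.0, 1.0, n_particles)
    beta = rng.uniform(-1.0, 1.0, n_particles)
    pe = np.full(n_particles, prices[0])       # lagged individual forecasts
    one_step = []
    for t in range(2, len(prices)):
        # propagate: small random-walk perturbation of the coefficients
        alpha = np.clip(alpha + 0.05 * rng.standard_normal(n_particles), 0.0, 1.0)
        beta = beta + 0.05 * rng.standard_normal(n_particles)
        # each particle's price forecast via the first-order rule
        forecast = (alpha * prices[t - 1] + (1 - alpha) * pe
                    + beta * (prices[t - 1] - prices[t - 2]))
        one_step.append(float(np.mean(forecast)))  # filter's one-step prediction
        # weight by the likelihood of the realized price, then resample
        w = np.exp(-0.5 * ((prices[t] - forecast) / obs_noise) ** 2)
        w = w / w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        alpha, beta, pe = alpha[idx], beta[idx], forecast[idx]
    return np.array(one_step)
```

The returned sequence of one-step predictions is what is compared against the realized experimental prices when computing the MSE measures.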

    We report the average MSE of our model for the three treatments in Table 5. It seems that GA FORT does better than GA FORT+C, but the differences are on the edge of significance. The less stable the treatment, the worse the fit of any model. Moreover, all the models (with the exception of trend extrapolation) have similar performance regardless of the treatment. 24

    This is similar to the negative feedback treatments from the two previous experiments, but

    24 Notice that the scale of the prices in this experiment is [0, 10], in contrast with the two previous settings, where the prices belonged to the [0, 100] interval. The highest possible MSE in the linear experiments is 100 times higher than in the cobweb experiment.


    we speculate that for a different reason. Especially for the non-stable treatments, the subjects' behavior can be close to chaotic, and so predicting it one period ahead is problematic. However, 50-period ahead simulations show that our model replicates this behavior in terms of its long run distribution.

    4.3 Two-period ahead asset pricing

    HSTV05 report an experiment in which the subjects participated in a non-linear positive feedback economy (an asset-pricing model with robotic fundamental traders), in which the current price depends on the expectations about the price in the next period. This means that the subjects had to predict prices two periods ahead.

    There is no one definite way in which the basic FOR rule (6) together with the GA model can be translated into the two-period ahead setting. Some experimentation led us to the following (and indeed the simplest) specification. The agents at time t predict the next price p_{t+1} based on the first order rule and the last available period. Define the prediction of the price from period t + 1, made at period t by agent i and her rule h, as

    (15)  p^e_{h,i,t} = α_{h,i,t} p_{t−1} + (1 − α_{h,i,t}) p^e_{i,t−1} + β_{h,i,t} (p_{t−1} − p_{t−2}).

    Then, at the beginning of period t, each agent focuses on p^e_{h,i,t} (the prediction of p_{t+1}). On the other hand, once p_t is realized, the agents can evaluate their rules, and to do so they look at their hypothetical performance two periods ago. To be specific, they focus on (p_t − p^e_{h,i,t−1})².
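In code, assuming the notation of equation (15), the two-period ahead rule and its delayed evaluation are simply the following (the function and parameter names are ours):

```python
def forecast_two_ahead(alpha, beta, p_lag1, p_lag2, pe_prev):
    # Prediction of p_{t+1} formed at time t from the last available price
    # p_{t-1} (eq. 15): an adaptive part plus a trend-extrapolation part.
    return alpha * p_lag1 + (1.0 - alpha) * pe_prev + beta * (p_lag1 - p_lag2)

def rule_error(p_t, forecast_of_p_t):
    # Hypothetical performance of a rule once p_t is realized: the squared
    # error of the forecast of p_t that the rule produced earlier.
    return (p_t - forecast_of_p_t) ** 2
```

With alpha = 1 and beta = 0 the rule reduces to naive expectations on the last available price; with beta > 0 it extrapolates the last observed price change.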

    [Figure 11: three panels — (a) convergence, (b) unclear, (c) oscillations]

    Figure 11: HSTV05: sample 50-period ahead simulations for GA FORT+C with different initial predictions and learning. The green lines are individual predictions, the black line is the realized price and the purple dashed line is the fundamental price.

    Contrary to the HHST09 experiment, HSTV05 obtain results which cannot be easily classified into clear-cut stylized facts. In the seven treatment groups with the fundamental price pf = 60, they observe groups which converged to this fundamental, as well as groups with oscillations of different amplitudes and periods. For this reason we abstain from an MC experiment for the 50-period ahead performance of the model. Instead, we report sample simulations of the experimental economy (with the fundamental price pf = 60) with our GA agents, in which the initial predictions for the first two periods are draws from the distribution calibrated by Diks and Makarewicz (2013).


    [Figure 12: two panels — (a) the first 500 periods, (b) all 2 000 periods of one long simulation]

    Figure 12: HSTV05: sample long run behavior of the GA FORT+C model with fundamental price pf = 60. A 2 000-period ahead simulation (b) and its first 500 periods (a). The green lines are individual predictions, the black line is the realized price and the purple dashed line is the fundamental price.

    MSE           Prices    Predictions

    Trend extr.   17.4527    55.0898
    Adaptive      44.125     25.3157
    Contrarian    59.3905    30.8646
    Naive         31.6864    20.8416
    RE            96.0328   145.998
    GA: FORT+C    48.51      48.56
                 (0.266)    (0.496)
    GA: FORT      39.236     43.8
                 (0.211)    (0.529)

    Table 6: HSTV05 experiment: mean squared error (MSE) of the Trend Extrapolation, Adaptive, Contrarian, Naive and Rational Expectations models, and of the Genetic Algorithms FORT+C and FORT models' one-period ahead predictions of the experimental prices and predictions, averaged over eight negative feedback and eight positive feedback groups. In parentheses the standard deviations of the respective measures of the GA models are provided.

    Many 50-period ahead sample time paths look much like the experimental ones. Figure 11 displays three typical time paths of the simulated markets (for different realizations of the random number generator) for GA FORT+C. We use the experimental errors to the price-expectations law of motion, hence the differences are purely due to different realizations of the learning and of the initial individual predictions. In terms of the simulated prices, both convergence to the fundamental price (Figure 11a) and oscillations (Figure 11c) are common. However, sometimes the agents seem to diverge from the fundamental after being close to it for a few periods (Figure 11b). The model specification without the contrarian rules, FORT, has similar time paths.

    To further stress the volatile behavior of this market structure, we report one long run simulation for GA FORT+C. Figure 12 displays its first 500 (Figure 12a) and all 2 000 (Figure 12b) periods. Volatile, unruly oscillations are persistent and can also reappear even if the


    [Figure 13: four panels for group 8 — GA FORT+C: (a) price, (b) price forecasts of subject 1; GA FORT: (c) price, (d) price forecasts of subject 1]

    Figure 13: Sample results of the Auxiliary Particle Filter for the HSTV05 experiment: one-period ahead predictions of the GA FORT+C and GA FORT models for the prices and the price forecasts of subject 1 from group 8. The black line denotes the experimental variable and the red boxes display the APF one-period ahead predictions.

    market settles on the fundamental price for some time, as seen in Figure 12a around period 170. This means that in this system the fundamental price is a stable steady state, but not the unique attractor. This result is intuitive: individual agents cannot impose the fundamental price, but will rather try to follow the current trend, which through the non-linear feedback easily amplifies the oscillations. That is the reason why the experimental groups under the same conditions could converge to the fundamental, diverge from it, or oscillate in a varied fashion.

    Table 6 reports the one-period ahead forecasting performance of our model. RE is the worst model, both on the aggregate and the individual level. Interestingly, the best model is naive expectations, whereas trend extrapolation has a good fit only at the aggregate level. We think this is because the oscillations differed between the groups, and so one particular trend extrapolation specification can explain one or two groups, but not all of them. Naive expectations seem to work fine mostly because the period-to-period changes in predictions are often relatively small. This model would not be able to explain a lasting trend in the data. The GA models work moderately well, again with the restricted GA FORT model outperforming GA FORT+C. The constrained specification is either very close to or slightly better than most of the homogenous models. This, together with the dynamics present in the long-run simulations, shows that our model is able to capture a good measure of these experimental dynamics, although there is probably space for improvement, which we leave


    for future research.

    5 Conclusions

    In this paper we discuss a model in which the agents independently use Genetic Algorithms to optimize a simple prediction heuristic. Our investigation derives from the intuitions of the Heuristic Switching Model (Anufriev and Hommes, 2012; Anufriev et al., 2012) and the Genetic Algorithm model introduced by Hommes and Lux (2011). Following the experimental results of Heemeijer et al. (2009), we model our agents as learning how to use a simple first order rule (a mixture of adaptive and trend extrapolation expectations) in different economic environments. We argue that our model is able to replicate many experimental findings from different Learning-to-Forecast experiments. We show this by means of 50-period ahead simulations. Furthermore, we use the Auxiliary Particle Filter technique to evaluate the model's one-period ahead power to predict individual behavior, a novelty in the behavioral literature.

    In Learning-to-Forecast experiments, subjects are asked to predict prices, while the realized price depends on their predictions. This mimics many well studied economic environments, such as an asset pricing market or a cobweb economy. As a result, Learning-to-Forecast experiments are an ideal controlled setting to study how human subjects try to adapt to the price-predictions feedback. Our model retains the basic intuition of the Heuristic Switching Model: among different prediction heuristics, the agents focus on those that have relatively good hypothetical past performance. On the other hand, following Hommes and Lux (2011), we show how the agents' flexibility can be enhanced by explicit learning through the individual use of Genetic Algorithms. Therefore the heterogeneity of heuristics between the agents emerges endogenously and resembles that among the experimental subjects.
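As an illustration of one such learning step, a stylized real-coded GA generation over candidate (alpha, beta) pairs of the first-order rule could look as follows; the selection, crossover and mutation operators and all parameter values here are illustrative assumptions and need not match the paper's exact implementation:

```python
import numpy as np

def ga_step(population, fitness, rng, p_mut=0.1, sigma=0.05):
    # One stylized GA generation over (alpha, beta) pairs:
    # fitness-proportional selection, coefficient-wise crossover,
    # and Gaussian mutation.
    n = len(population)
    f = fitness - fitness.min() + 1e-9       # shift so selection weights are positive
    prob = f / f.sum()
    # draw two parents per child, with probability proportional to fitness
    parents = population[rng.choice(n, size=(n, 2), p=prob)]
    # crossover: child inherits alpha from parent 0 and beta from parent 1
    children = np.column_stack([parents[:, 0, 0], parents[:, 1, 1]])
    mutate = rng.random(n) < p_mut           # mutate a fraction of children
    children[mutate] += sigma * rng.standard_normal((int(mutate.sum()), 2))
    children[:, 0] = np.clip(children[:, 0], 0.0, 1.0)  # keep alpha in [0, 1]
    return children
```

Iterating `ga_step` with fitness set to the negative squared forecast error of each candidate rule plays the role of an agent's learning: the population of coefficient pairs drifts toward rules that would have forecast recent prices well.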

    This contrasts with the dominant framework of perfectly rational expectations. Traditionally, economists assumed that people use sophisticated concepts such as a fundamental price or a long run equilibrium. They disregarded the fact that in market practice, agents face constraints on their rationality and may be forced to use second-best prediction rules. As a result, rational expectations fail to describe experimental dynamics unless these are extremely simple. To counter this approach, we propose a model in which the subjects use simple forecasting rules, but adapt them to the current environment with a smart optimization procedure. This allows for a realistic description of human behavior, which also explains the experimental data to a degree that was unattainable for the traditional literature.

    We use the simple linear setting of the experiment reported by Heemeijer et al. (2009) to set up our Genetic Algorithms model. In a Monte Carlo experiment based on 50-period ahead simulations, the model replicates the dynamics of the experiment, both on the aggregate and the individual level and for both treatments. We also replicate the major insight of the Heuristic Switching Model, namely that trend extrapolation is relatively more important for the positive feedback, in which it reinforces the oscillating behavior. Our results therefore


    validate the stylized investigation by Anufriev et al. (2012). We also use the Auxiliary Particle Filter technique to confirm that the one-period ahead predictions of our model closely follow the experimental individual predictions and prices. This is a novelty in the literature, since we are able to conduct a direct test of how an agent-based model fits an experimental data set on the individual level.

    We further investigate three more complicated experimental settings with our Genetic Algorithms model. The experiment reported by Bao et al. (2012) adds large and unanticipated shocks to the basic linear structure of the original Heemeijer et al. (2009) experiment. The second experiment, reported by van de Velden (2001) and Hommes et al. (2007) and investigated by Hommes and Lux (2011), focuses on a cobweb economy. Finally, the asset pricing experiment reported by Hommes et al. (2005) introduces a two-period ahead feedback between the predictions and the realized prices. For all three experiments, we use the Auxiliary Particle Filter technique to demonstrate that our model can successfully predict the subjects' behavior one period ahead. Moreover, 50-period ahead simulations of the model show that it is able to replicate the long-run distribution of the individual predictions and prices for the three experiments.

    The strength of our model is its generality and agent-based structure. We emphasize that it replicates the individual behavior from Learning-to-Forecast experiments which were based on very different experimental economies. Moreover, the model allows for realistic heterogeneity and learning. We therefore argue that it can be used to investigate settings with more complicated interactions between individual agents. These can include economies with heterogeneous preferences, unequal market power, information networks or decentralized price setting. In any of these cases, heterogeneous price expectations may have important consequences for market efficiency or dynamics. Our model can be directly used to explore these phenomena.

    References

    Anufriev, M. and Hommes, C. (2012). Evolutionary selection of individual expectations and aggregate outcomes in asset pricing experiments. American Economic Journal: Microeconomics, forthcoming.

    Anufriev, M., Hommes, C., and Philipse, R. (2012). Evolutionary selection of expectations in positive and negative feedback markets. Journal of Evolutionary Economics, pages 1–26. doi:10.1007/s00191-011-0242-4.

    Arifovic, J. (1995). Genetic algorithms and inflationary economies. Journal of Monetary Economics, 36(1):219–243.

    Arifovic, J. (1996). The behavior of the exchange rate in the genetic algorithm and experimental economies. Journal of Political Economy, 104(3):510–541.


    Arifovic, J., Bullard, J., and Kostyshyna, O. (2012). Social learning and monetary policy rules. The Economic Journal.

    Bao, T., Hommes, C., Sonnemans, J., and Tuinstra, J. (2012). Individual expectations, limited rationality and aggregate outcomes. Journal of Economic Dynamics and Control, 36(8):1101–1120.

    Benítez-Silva, H., Eren, S., Heiland, F., and Jiménez-Martín, S. (2008). How well do individuals predict the selling prices of their homes?

    Blundell, R. and Stoker, T. M. (2005). Heterogeneity and aggregation. Journal of Economic Literature, 43(2):347–391.

    Brock, W. A. and Hommes, C. H. (1997). A rational route to randomness. Econometrica, 65(5):1059–1095.

    Bullard, J. (1994). Learning equilibria. Journal of Economic Theory, 64(2):468–485.

    Case, K. E. and Shiller, R. J. (2003). Is there a bubble in the housing market? Brookings Papers on Economic Activity, 2003(2):299–342.

    Charness, G., Karni, E., and Levin, D. (2007). Individual and group decision making under risk: An experimental study of Bayesian updating and violations of first-order stochastic dominance. Journal of Risk and Uncertainty, 35:129–148.

    Dawid, H. and Kopel, M. (1998). On economic applications of the genetic algorithm: a model of the cobweb type. Journal of Evolutionary Economics, 8:297–315. doi:10.1007/s001910050066.

    Diks, C. and Makarewicz, T. (2013). Initial predictions in learning-to-forecast experiment. In Managing Market Complexity, volume 662 of Lecture Notes in Economics and Mathematical Systems, pages 223–235. Springer Berlin Heidelberg. doi:10.1007/978-3-642-31301-1_18.

    Doornik, J. (2007). Object-Oriented Matrix Programming Using Ox. Timberlake Consultants Press, London, 3rd edition.

    Doucet, A., Godsill, S., and Andrieu, C. (2000). On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10:197–208.

    Evans, G. and Honkapohja, S. (2001). Learning and Expectations in Macroeconomics. Princeton University Press.

    Evans, G. and Ramey, G. (2006). Adaptive expectations, underparameterization and the Lucas critique. Journal of Monetary Economics, 53(2):249–264.

    Goodman Jr., J. L. and Ittner, J. B. (1992). The accuracy of home owners' estimates of house value. Journal of Housing Economics, 2(4):339–357.


    Grandmont, J. (1998). Expectations formation and stability of large socioeconomic systems. Econometrica, pages 741–781.

    Haupt, R. and Haupt, S. (2004). Practical Genetic Algorithms. John Wiley & Sons, Inc., New Jersey, 2nd edition.

    Heemeijer, P., Hommes, C., Sonnemans, J., and Tuinstra, J. (2009). Price stability and volatility in markets with positive and negative expectations feedback: An experimental investigation. Journal of Economic Dynamics and Control, 33(5):1052–1072.

    Hommes, C. (2011). The heterogeneous expectations hypothesis: Some evidence from the lab. Journal of Economic Dynamics and Control, 35(1):1–24.

    Hommes, C. and Lux, T. (2011). Individual expectations and aggregate behavior in learning-to-forecast experiments. Macroeconomic Dynamics, 1(1):1–29.

    Hommes, C., Sonnemans, J., Tuinstra, J., and van de Velden, H. (2007). Learning in cobweb experiments. Macroeconomic Dynamics, 11(Supplement S1):8–33.

    Hommes, C., Sonnemans, J., Tuinstra, J., and van de Velden, H. (2005). Coordination of expectations in asset pricing experiments. The Review of Financial Studies, 18(3):955–980.

    Johansen, A. M. and Doucet, A. (2008). A note on auxiliary particle filters. Statistics & Probability Letters, 78(12):1498–1504.

    Lux, T. and Schornstein, S. (2005). Genetic learning as an explanation of stylized facts of foreign exchange markets. Journal of Mathematical Economics, 41(1–2):169–196. Special Issue on Evolutionary Finance.

    Malmendier, U. and Nagel, S. (2009). Learning from inflation experiences. Available online at http://faculty-gsb.stanford.edu/nagel/documents/InflExp.pdf.

    Mut