
Periodic Points and Iteration in

Sequential Dynamical Systems

Taom Sakal

Advised by Padraic Bartlett

Senior Thesis

CCS Mathematics

University of California Santa Barbara

May 2016


CONTENTS

Introduction
1. Predicting Periodic Points in Cycle Graphs
   1.0.1 What is a Simple Cyclic SDS?
   1.0.2 The Phase Space
   1.1 The Compatibility Graph
   1.2 The Transition Graph
       1.2.1 Construction and Definitions
       1.2.2 Calculating SDS with the Transition Graph: an Algorithm
   1.3 Main Results
       1.3.1 Shifting States
       1.3.2 Finding other graphs with points of the same period
   1.4 Future Directions
2. The Iterated Phase Space
   2.1 Generalized SDS
   2.2 The Iterated Phase Space
       2.2.1 Definitions for IPS
   2.3 Classification of IPS
       2.3.1 Behavior of Or and And
       2.3.2 Behavior of Parity and Parity+1
       2.3.3 Behavior of Majority and Majority+1
       2.3.4 Behavior of Nor and Nand
   2.4 Other results
3. Color SDS
   3.1 What is a Color SDS?
   3.2 Color Distance
   3.3 Other Coloring SDSs, Other Color Distances
   3.4 Lonely Colorings
Future Directions
Acknowledgments
Bibliography


INTRODUCTION

Graph dynamical systems capture a phenomenon found in nearly all of science: emergence. Individuals which follow simple rules can, when viewed collectively, create complex behavior. Graph dynamical systems have modeled everything from ants to traffic to economies. In all these models, individuals look to their immediate environment and the actions of those around them to decide their next action.

The classic mathematical example of such a system is Conway's Game of Life. Moving creations, pulsing galaxies, and even the complexity of self-reproducing organisms can emerge from the game's simple and mechanical rules [2, Chapter 7.3] [3].

Sequential Dynamical Systems (SDS) are time- and space-discrete systems, similar to the Game of Life. However, they have the added dimension of a vertex-by-vertex update order (compared to the Game of Life's simultaneous update order). The order in which a system updates matters, and this is what gives SDS their applications, some examples being traffic simulations, genetic models, queuing theory, and even a mathematically precise foundation for computer simulation [4, Chapter 1.3, Chapter 8].

We begin in Chapter 1 by describing results on a special class of SDS called cyclic SDS. We then consider the Iterated Phase Space and more general SDS in Chapter 2. Finally, in Chapter 3 we consider Color SDS and some results on 2-cycles. We assume basic knowledge of graph theory throughout (i.e., what a graph is, what vertices and edges are, and the basic types of graphs).


1. PREDICTING PERIODIC POINTS IN CYCLE GRAPHS

Often we wish to know when SDS are periodic. Given a starting state, an SDS may update back to this starting state after n updates. A state that does this is called a point of period n. Knowing which points are periodic is important because it tells us which inputs cause our systems to loop.

Mortveit and Reidys characterized points of period one for a special class of SDS (called simple cyclic SDS) on cycle graphs [4, Chapter 5.1]. In this chapter we generalize their technique to find points of arbitrary period on C_n for such SDS.

To do this we first introduce basic terminology and simple cyclic SDS in Section 1.0.1. Next we review the compatibility graph, the tool with which Mortveit and Reidys studied fixed points. From there we introduce the transition graph as a generalization of the compatibility graph and show how to transform the problem of calculating an SDS into a problem of finding walks on the transition graph. To finish, we prove two theorems which allow us, when given a point of period k on C_n, to find points of period k on the graphs C_n and C_{n+ank}, where a is a natural number.

1.0.1 What is a Simple Cyclic SDS?

A Sequential Dynamical System (SDS) is a discrete dynamical system on a graph; a formal construction can be found in [4, Chapter 4.1]. The results in this paper are limited to simple cyclic SDS, a special class of SDS on C_n.

We label the vertices of C_n in the natural way: v_i is adjacent to v_{i+1}, and v_n is adjacent to v_1. A simple cyclic SDS consists of

1. An undirected cycle graph C_n with vertices {v_1, ..., v_n}, where n ≥ 3.

2. A set of possible vertex states K. (For this paper we let K = F_2.)

3. A local vertex function f : K^3 → K.

4. An ordering π = (σ(v_1), σ(v_2), ..., σ(v_n)) of the vertices in C_n, where σ is a permutation in S_n. For this paper, σ will be the identity.


The local vertex function f is defined as

f : K^3 → K,  (x_{i-1}, x_i, x_{i+1}) ↦ x'_i,

where x'_i ∈ K and the subscripts are taken modulo n. Given a vertex v_i in state x_i with neighbors v_{i-1}, v_{i+1} in states x_{i-1}, x_{i+1}, we can update v_i by replacing x_i with the state given by f(x_{i-1}, x_i, x_{i+1}). An update function is symmetric if it does not depend on the order of the x's in the input (x_{i-1}, x_i, x_{i+1}).

These local functions can be composed to give the SDS-map. This map, denoted [f_{C_n}, π], applies the local vertex function to each vertex in our graph, in the order specified by π. We call an application of the SDS-map a system update.

A system state is an assignment of an element of K to each vertex of our graph. We let the tuple (x_1, ..., x_n) ∈ K^n denote the system state in which vertex v_i has state x_i. A system update changes the system state (x_1, ..., x_n) to a new system state.

Definition 1. Let F = [f_{C_n}, π] be an SDS-map and let X and X' be system states. If F(X) = X' we write

X ↦_F X',

read as "X updates to X'," and we say that we have applied a system update.

Generally, C_n and π are assumed to be fixed and we write F in place of [f_{C_n}, π].

Example: Consider the SDS on C_3, the cycle graph on three vertices. Let the vertex set be {v_1, v_2, v_3} and let the vertex state set be F_2. Set π = (v_1, v_2, v_3), the identity update order. Define the vertex update function to be

Parity(v_{i-1}, v_i, v_{i+1}) := (v_{i-1} + v_i + v_{i+1}) mod 2.

Take the system state (1, 0, 0), as shown in Figure 1.1, and apply our local vertex functions in the order given by π. We begin with v_1. It sees that its left neighbor is 0, it itself is 1, and its right neighbor is 0. Thus f_{v_1}(0, 1, 0) = 0 + 1 + 0 mod 2 = 1, so v_1 updates to 1.

We do the same for v_2. It sees the updated v_1 to its left, a 0 for itself, and a 0 to its right. Thus it updates to f_{v_2}(1, 0, 0) = 1 + 0 + 0 mod 2 = 1. Finally we do this for v_3 and find that it updates to f_{v_3}(1, 0, 1) = 0. The resulting system state is (1, 1, 0). As we have updated all the vertices in the order specified, we have finished an entire system update, and we have that (1, 0, 0) ↦_F (1, 1, 0).

Fig. 1.1: A single system update using Parity
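As an illustration only (this sketch is ours, not the thesis's, and the function names are hypothetical), the system update just described can be written in a few lines of Python; it reproduces the update (1, 0, 0) ↦_F (1, 1, 0) from Figure 1.1.

```python
def parity(left, center, right):
    # local vertex function from the example above
    return (left + center + right) % 2

def system_update(f, state):
    # one system update of a simple cyclic SDS on C_n with the identity order:
    # vertices update one at a time, each seeing the *current* neighbor states
    s = list(state)
    n = len(s)
    for i in range(n):
        s[i] = f(s[(i - 1) % n], s[i], s[(i + 1) % n])
    return tuple(s)

print(system_update(parity, (1, 0, 0)))  # (1, 1, 0), as in Figure 1.1
```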

1.0.2 The Phase Space

The phase space of an SDS is a directed graph that represents how an SDS updates. The vertex set of this graph is the collection of all possible system states for our SDS. For an SDS-map F, we draw an edge from system state X to system state X' if X ↦_F X'.

Fig. 1.2: The phase space of Parity over C3.

The phase space gives us a complete view of an SDS's behavior. A special structure in the phase space is the periodic point: a system state that returns to itself after a certain number of system updates. For example, the phase space illustrated above contains four points of period four, two points of period two, and two points of period one. A point of period one is called a fixed point.


Observe that finding periodic points in an SDS is equivalent to finding directed cycles in the phase space.
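As a sketch of this observation (our own code, not the thesis's, with hypothetical helper names), one can enumerate all 2^n system states, iterate the SDS-map, and record exactly the states that lie on directed cycles together with their periods.

```python
from itertools import product

def parity(left, center, right):
    return (left + center + right) % 2

def system_update(f, state):
    s, n = list(state), len(state)
    for i in range(n):
        s[i] = f(s[(i - 1) % n], s[i], s[(i + 1) % n])
    return tuple(s)

def periodic_points(f, n):
    # a state is periodic iff iterating the SDS-map returns to it;
    # its period is the length of the directed cycle it lies on
    periods = {}
    for start in product((0, 1), repeat=n):
        seen = [start]
        x = system_update(f, start)
        while x != start and x not in seen:
            seen.append(x)
            x = system_update(f, x)
        if x == start:
            periods[start] = len(seen)
    return periods

print(periodic_points(parity, 3))
# for Parity on C_3 this lists the 4-cycle, the 2-cycle, and the two fixed points
```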

1.1 The Compatibility Graph

In this section we review the compatibility graph, first introduced in [4, Chapter 5.1]. The compatibility graph allows us to find all fixed points in a simple cyclic SDS. It does this by "stitching together" local fixed points, which we define below.

Definition 2. Let f be a local vertex function. A local fixed point for f is a vertex state (x_{i-1}, x_i, x_{i+1}) such that f(x_{i-1}, x_i, x_{i+1}) = x_i.

Example: Consider the function Majority : F_2^3 → F_2, defined as follows:

Majority(a, b, c) = 1 if a + b + c > 1, and 0 if a + b + c ≤ 1.

The local fixed points for Majority are labeled in the table below.

(x_{i-1}, x_i, x_{i+1})   Majority(x_{i-1}, x_i, x_{i+1})   Fixed Point?
(0, 0, 0)                 0                                 Yes
(0, 0, 1)                 0                                 Yes
(0, 1, 0)                 0                                 No
(0, 1, 1)                 1                                 Yes
(1, 0, 0)                 0                                 Yes
(1, 0, 1)                 1                                 No
(1, 1, 0)                 1                                 Yes
(1, 1, 1)                 1                                 Yes

The compatibility graph for a given simple cyclic SDS describes a way to fit local fixed points together into a global fixed point. We do this by finding which local fixed points are compatible with each other, as defined below.

Definition 3. A triple (a, b, c) is compatible with a triple (x, y, z) if and only if b = x and c = y.

Example: For Majority, the triple (0, 0, 1) is compatible with the triples (0, 1, 1) and (0, 1, 0), but not with any other triples.


Definition 4. The compatibility graph of an SDS is a directed graph whose vertex set is the set of all local fixed points. There is an edge from (a, b, c) to (x, y, z) if and only if (a, b, c) is compatible with (x, y, z).

Example: The compatibility graph for Majority is pictured below. Its vertices are the six local fixed points 000, 001, 011, 100, 110, and 111.

[Figure: the compatibility graph for Majority]

To denote a walk that goes from v_1 to v_2 to ... to v_n we write v_1 → v_2 → · · · → v_n. The compatibility graph encodes all the information about fixed points on C_n, as follows.

Proposition 1. Let [f_{C_n}, π] be a simple cyclic SDS where π is an arbitrary permutation of our vertices, and let G be the corresponding compatibility graph. Then if

W = (a_1, b_1, c_1) → (a_2, b_2, c_2) → · · · → (a_k, b_k, c_k) → (a_1, b_1, c_1)

is a closed walk in G such that k divides n, we have that

B = (b_1, ..., b_k, b_1, ..., b_k, ..., b_1, ..., b_k)   (b_1, ..., b_k repeated n/k times)

is a fixed point for [f_{C_n}, π]; that is, [f_{C_n}, π](B) = B. Likewise, every fixed point corresponds to a closed walk on the compatibility graph.

Furthermore, the number of fixed points of our SDS-map [f_{C_n}, π] is equal to the trace of A^n, where A is any adjacency matrix of the compatibility graph.

Proof. As described above, take any simple cyclic SDS [f_{C_n}, π] and any walk W that gives the state B = (b_1, ..., b_k, b_1, ..., b_k, ..., b_1, ..., b_k). Whenever we update the vertex b_{π(i)}, our function takes in (b_{π(i)-1}, b_{π(i)}, b_{π(i)+1}), which is a local fixed point, meaning the vertex state stays the same. As the vertex state never changes for any vertex, B is a fixed point.

If instead we are given a fixed point B, then whenever we update the vertex b_{π(i)} the vertex's state does not change, meaning that (b_{π(i)-1}, b_{π(i)}, b_{π(i)+1}) is a local fixed point. Likewise, when we update the adjacent vertex b_{π(i)+1}, its state also does not change, meaning that (b_{π(i)}, b_{π(i)+1}, b_{π(i)+2}) is also a local fixed point. This corresponds to the edge from (b_{π(i)-1}, b_{π(i)}, b_{π(i)+1}) to (b_{π(i)}, b_{π(i)+1}, b_{π(i)+2}) in the compatibility graph. In this way we create a path in the compatibility graph. Since our SDS is over C_n, this path is a closed cycle of length k, where k divides n.

Finally, let A denote the adjacency matrix of our compatibility graph. It is well known that the entry (i, j) of the n-th power of an adjacency matrix equals the number of distinct walks of length n from the i-th vertex to the j-th vertex. Hence an entry on the diagonal of A^n gives the number of closed walks of length n based at that vertex, and the trace of A^n gives the total number of fixed points of our SDS-map.

Example: We find that (000) → (001) → (011) → (111) → (110) → (100) → (000) is a closed walk of length 6 in the compatibility graph for Majority. Therefore the state (0, 0, 1, 1, 1, 0) is a fixed point on C_6.
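A minimal sketch (our own code, not part of the thesis) of Proposition 1's counting formula for Majority: build the compatibility graph's adjacency matrix and take traces of its powers.

```python
from itertools import product
import numpy as np

def majority(a, b, c):
    return 1 if a + b + c > 1 else 0

# vertices of the compatibility graph: the local fixed points of f
triples = [t for t in product((0, 1), repeat=3) if majority(*t) == t[1]]

# edge (a,b,c) -> (x,y,z) iff the triples are compatible: b = x and c = y
A = np.array([[1 if u[1] == v[0] and u[2] == v[1] else 0
               for v in triples] for u in triples])

for n in (3, 4, 5, 6):
    print(n, int(np.trace(np.linalg.matrix_power(A, n))))
# trace of A^n = number of fixed points of Majority on C_n;
# e.g. n = 3 gives 2 (the all-0's and all-1's states)
```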

1.2 The Transition Graph

The transition graph is a generalization of the compatibility graph. While the compatibility graph gives information about points of period one on simple cyclic SDS, the transition graph gives information about points of any period on simple cyclic SDS, provided we limit the update order to the identity permutation.

For the remainder of this paper we limit our study to simple cyclic SDS with the SDS-map [f_{C_n}, id]. For notational convenience, we will denote [f_{C_n}, id] simply by F whenever the map f and the cycle C_n are understood.


1.2.1 Construction and Definitions

The transition graph is similar to the compatibility graph in that it is a directed graph on the eight triples in F_2^3. However, instead of compatibility, we draw an edge from one triple to another if, when we update the first triple and move to the next vertex, we see the second triple. More formally:

Definition 5. Let our local update function be f and our SDS-map be [f_{C_n}, id]. The transition graph of [f_{C_n}, id] has F_2^3 as its vertex set, and for any two triples (a, b, c), (x, y, z) ∈ F_2^3, we have (a, b, c) → (x, y, z) if and only if x = f(a, b, c) and y = c.


Fig. 1.3: The transition graph for Parity+1
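Definition 5 is easy to realize in code. The following sketch (our own, with hypothetical names) builds the transition graph of Parity+1 as an adjacency map; every triple has exactly two direct successors.

```python
from itertools import product

def parity_plus_1(a, b, c):
    return (a + b + c + 1) % 2

def transition_graph(f):
    # Definition 5: edge (a,b,c) -> (x,y,z) iff x = f(a,b,c) and y = c
    return {v: [(f(*v), v[2], z) for z in (0, 1)]
            for v in product((0, 1), repeat=3)}

for v, successors in transition_graph(parity_plus_1).items():
    print(v, "->", successors)
```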

The following notation is helpful for understanding the transition graph.

Definition 6. Let X = (x_1, ..., x_n) be a system state. Then

x'_i = f(x_n, x_1, x_2)            if i = 1,
x'_i = f(x'_{i-1}, x_i, x_{i+1})   if 1 < i < n,
x'_i = f(x'_{n-1}, x_n, x'_1)      if i = n.

Observe that, as long as π = id, we have F(x_1, ..., x_n) = (x'_1, ..., x'_n). (This follows directly from the definition of our SDS-map.) Likewise F^2(x_1, ..., x_n) = (x''_1, ..., x''_n). In general, we apply the following notation when needed:

F^a(x_1, ..., x_n) = (x^(a)_1, ..., x^(a)_n).

What x^(a)_i says is that, after X has been updated a times, the i-th vertex has the state x^(a)_i.

Definition 7. Let X = (x_1, ..., x_n) be a system state. Updating X corresponds to the following walk on our transition graph:

W = (x_n, x_1, x_2) → (x'_1, x_2, x_3) → (x'_2, x_3, x_4) → · · · → (x'_{n-2}, x_{n-1}, x_n) → (x'_{n-1}, x_n, x'_1).

If a walk corresponds to a state then we call the walk a state-walk. We denote the i-th vertex in W by w_i.

Intuitively, the state-walk tells us what the SDS "sees" as it updates the associated state. We illustrate this with the following example. Given only a state and f, we can find the corresponding state-walk by directly calculating each x'_i. Each state has a unique walk; however, not every walk is a state-walk.

Example: Let f = Parity+1. We claim that the state X = (1, 0, 1, 1) corresponds to the walk W = (1, 1, 0) → (1, 0, 1) → (1, 1, 1) → (0, 1, 1).

This is not too hard to see. When we calculate F(X), we update the first vertex, then the second, then the third, and finally the fourth.

Step i   Vertex updated   Triple seen                      Updates to   New system state
1        v_1              (x_4, x_1, x_2) = (1, 1, 0)      1            (1, 0, 1, 1)
2        v_2              (x'_1, x_2, x_3) = (1, 0, 1)     1            (1, 1, 1, 1)
3        v_3              (x'_2, x_3, x_4) = (1, 1, 1)     0            (1, 1, 0, 1)
4        v_4              (x'_3, x_4, x'_1) = (0, 1, 1)    1            (1, 1, 0, 1)

Observe that the triples in the third column coincide exactly with those of W. This is because the triples record what the SDS sees at each vertex just before updating it. The state-walk captures these perspectives.

Notice that on any given state-walk we may find vertices v_i for which the triple seen is a local fixed point, and vertices for which it is not. If a triple is not a local fixed point we call it a local swap. In the previous example, (1, 0, 1) and (1, 1, 1) are local swaps.

The local fixed points and local swaps let us associate our state-walks to binary strings. These strings encode the positions of the local swaps and local fixed points.


Definition 8. Let W = w_1 → w_2 → · · · → w_n be a state-walk. Then the string associated with W is

S = (s_1, s_2, ..., s_n),

where s_i = 1 if w_i is a local swap and s_i = 0 if w_i is a local fixed point.

Example: Let f = Parity+1 and take the state-walk W = (1, 1, 0) → (1, 0, 1) → (1, 1, 1) → (0, 1, 1) from the previous example. As (1, 1, 0) and (0, 1, 1) are local fixed points, the associated string is S = (0, 1, 1, 0).

At this point we have that each system state corresponds to a unique walk on the transition graph, and each walk corresponds to a string; hence each state is associated with a string. Note that the strings are not necessarily unique, while the walks are.

The strings, with their encoded information, give us a way to update a system state without using the SDS function. This is done through string addition, which is simply vector addition between a string and a state. That is, for a system state X = (x_1, ..., x_n) and a string S = (s_1, ..., s_n) we have

X + S = (x_1 + s_1, ..., x_n + s_n),

where each x_i + s_i is taken modulo 2. Similarly we can add strings to strings.

Lemma 1. Let X = (x_1, ..., x_n) be a system state and let S = (s_1, ..., s_n) be the associated string. Then

F(X) = X + S,

where we add these strings as vectors modulo 2.

Proof. Notice that for any x_i we have

x'_i = f(a, x_i, b)

for some a and b. Say that (a, x_i, b) is a local swap. Then

x'_i = f(a, x_i, b) = x_i + 1 mod 2.

Since (a, x_i, b) is a local swap, s_i = 1. Hence

x_i + s_i = x_i + 1 = x'_i mod 2.

The reasoning is similar if (a, x_i, b) is a local fixed point. We have shown that x_i + s_i = x'_i, and thus (x_1, ..., x_n) ↦_F (x_1 + s_1, ..., x_n + s_n) = X + S.


Example: Let our SDS function be Parity+1 and let the state be (1, 0, 1, 1). The associated string is S = (0, 1, 1, 0). Then

(1, 0, 1, 1) + (0, 1, 1, 0) = (1 + 0, 0 + 1, 1 + 1, 1 + 0) = (1, 1, 0, 1),

and so (1, 0, 1, 1) ↦_F (1, 1, 0, 1) under Parity+1.

1.2.2 Calculating SDS with the Transition Graph: an Algorithm

The following algorithm transforms the problem of updating an SDS into that of finding paths on the transition graph. The algorithm takes in F = [f_{C_n}, π] and a system state X_1, and outputs a sequence of states X_1, X_2, X_3, ... such that

X_1 ↦_F X_2 ↦_F X_3 ↦_F · · ·

Notation-wise, let the j-th element of X_i be denoted x_{i,j}. We use similar notation for walks and strings.

Furthermore, if on a directed graph there is an edge from v_1 to v_2, then we call v_2 a direct successor of v_1. Likewise, v_1 is a direct predecessor of v_2.

Algorithm

1. Set i = 1 and input X_1 and F. Find the walk corresponding to X_1 by directly calculating it through the SDS-map. Call this walk W_1.

2. Find the string corresponding to W_i. Call this string S_i.

3. Set X_{i+1} equal to X_i + S_i.

4. Find the walk W_{i+1} corresponding to X_{i+1}.

   (a) First we find w_{i+1,1}. Consider w_{i,n}, whose entries are (x_{i+1,n-1}, x_{i,n}, x_{i+1,1}). Set w_{i+1,1} = (x_{i+1,n}, x_{i+1,1}, x_{i+1,2}); this is the direct successor of w_{i,n} whose third entry is x_{i+1,2}, which we know from Step 3.

   (b) Now we find w_{i+1,j} for 1 < j < n. Consider w_{i+1,j-1}, whose last two entries are x_{i+1,j-1} and x_{i+1,j}. Set w_{i+1,j} to be the direct successor of w_{i+1,j-1} whose third entry is x_{i+1,j+1}; its first entry is read off the transition graph.

   (c) Finally we find w_{i+1,n}. Consider w_{i+1,n-1}. Set w_{i+1,n} to be the direct successor of w_{i+1,n-1} whose third entry is x_{i+1,1} + s_{i+1,1}, where s_{i+1,1} is 1 or 0 according to whether w_{i+1,1}, found in step (a), is a local swap or a local fixed point.

5. Increment i by one and return to Step 2.

In this way the algorithm updates the SDS without ever applying the SDS function (except in Step 1, where we need to directly calculate W_1). We will prove that running this algorithm generates the same system states as repeatedly applying our SDS-map. But first we give a lemma which shows that, in Step 4, W_{i+1} is indeed the walk corresponding to X_{i+1}.

Lemma 2. Let X_i be the system state (x_1, ..., x_n), let W_i be the corresponding walk, and let S_i be the corresponding string. Let X_{i+1} = X_i + S_i. Then W_{i+1} can always be created through the process given in Step 4, and it is the walk corresponding to X_{i+1}.

Proof. By construction W_{i+1} is a state-walk, and it corresponds to the state X_{i+1}, again by construction.

Proposition 2. Input the system state X_1 and the SDS-map F into our algorithm. The algorithm generates X_1, X_2, X_3, ... such that

X_1 ↦_F X_2 ↦_F X_3 ↦_F · · ·

Proof. We know W_1 corresponds to X_1. For i > 1, Lemma 2 guarantees that W_i is the walk corresponding to X_i. Thus the string S_i generated from W_i is the string associated with X_i. Whenever we reach Step 3 the algorithm outputs X_{i+1}, and by Lemma 1 we have that X_i ↦_F X_{i+1}.

An example of the algorithm follows.

Example: For Step 1, input F = Parity+1 and X_1 = (0, 0, 0, 0). By direct calculation we find that the walk corresponding to X_1 is

W_1 = (0, 0, 0) → (1, 0, 0) → (0, 0, 0) → (1, 0, 1).

This is the red walk pictured below. Moving on to Step 2, we find that S_1 = (1, 0, 1, 1). (The local swaps are colored gray.) In Step 3 we calculate that

X_2 = X_1 + S_1 = (0 + 1, 0 + 0, 0 + 1, 0 + 1) = (1, 0, 1, 1).

In Step 4 we find W_2. This walk is colored green.


[Figure: the transition graph for Parity+1, with the state-walks of this example highlighted in red, green, and blue.]

Step 4(a): We find w_{2,1}. First look at w_{1,4}, which equals (1, 0, 1). Its direct successors are (1, 1, 0) and (1, 1, 1). As x_{2,2} = 0, we see that w_{2,1} = (1, 1, 0).

Step 4(b): We now find w_{2,2} and w_{2,3}. For w_{2,2} we first look at w_{2,1} = (1, 1, 0). Its direct successors are (1, 0, 1) and (1, 0, 0). Since x_{2,3} = 1 we have that w_{2,2} = (1, 0, 1).

We repeat the process to find w_{2,3}. We see that w_{2,2} = (1, 0, 1) and that its direct successors are (1, 1, 0) and (1, 1, 1). Since x_{2,4} = 1 we have that w_{2,3} = (1, 1, 1).

Step 4(c): We now find the last vertex, w_{2,4}. First we see that s_{2,1} = 0, because w_{2,1} is a local fixed point. Next we look at w_{2,3} = (1, 1, 1) and see that its direct successors are (0, 1, 1) and (0, 1, 0). Since x_{2,1} + s_{2,1} = 1 + 0 = 1, we have that w_{2,4} = (0, 1, 1).

Thus we have found that

W_2 = (1, 1, 0) → (1, 0, 1) → (1, 1, 1) → (0, 1, 1),

which corresponds to the green walk.

Notice that, if W_1 is the red walk and W_2 is the green walk, then the combined walk W_1 → W_2 = (0, 0, 0) → (1, 0, 0) → (0, 0, 0) → (1, 0, 1) → (1, 1, 0) → (1, 0, 1) → (1, 1, 1) → (0, 1, 1) has the edge colors red-red-red-black-green-green-green.

For Step 5 we increment i from 1 to 2, and begin again at Step 2. Here we see that the string corresponding to W_2 is S_2 = (0, 1, 1, 0). Step 3 then gives us

X_3 = X_2 + S_2 = (1, 0, 1, 1) + (0, 1, 1, 0) = (1 + 0, 0 + 1, 1 + 1, 1 + 0) = (1, 1, 0, 1).

At Step 4 we repeat (a), (b) and (c) to find that

W_3 = (1, 1, 1) → (0, 1, 0) → (0, 0, 1) → (0, 1, 0),

which corresponds to the blue walk. Step 5 increments i to 3 and sends us back to Step 2. If we continue, the algorithm will loop and we will find that X_4 = X_1 and W_4 = W_1.

Observe that the only place we need to calculate Parity+1(x_{i-1}, x_i, x_{i+1}) directly is in Step 1, to construct W_1. After that we simply read information from the transition graph.
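As an illustration (our own code, with hypothetical helper names; a sketch under the assumption that only Step 1 applies the local function directly, everything afterwards being read off the transition graph), the entire procedure can be written as follows. It reproduces the worked example above.

```python
from itertools import product

def parity_plus_1(a, b, c):
    return (a + b + c + 1) % 2

def transition_graph(f):
    # edge (a,b,c) -> (x,y,z) iff x = f(a,b,c) and y = c (Definition 5)
    return {v: [(f(*v), v[2], z) for z in (0, 1)]
            for v in product((0, 1), repeat=3)}

def first_walk(f, x):
    # Step 1: the state-walk of x, computed directly from the SDS-map
    n, cur, walk = len(x), list(x), []
    for i in range(n):
        view = (cur[(i - 1) % n], cur[i], cur[(i + 1) % n])
        walk.append(view)
        cur[i] = f(*view)
    return walk

def run_algorithm(f, x1, updates):
    T = transition_graph(f)
    swap = lambda v: T[v][0][0] != v[1]      # read f(v) off the graph, compare to middle entry
    x, walk = list(x1), first_walk(f, x1)
    states, n = [tuple(x)], len(x1)
    for _ in range(updates):
        s = [1 if swap(w) else 0 for w in walk]            # Step 2: the string
        x = [(a + b) % 2 for a, b in zip(x, s)]            # Step 3: X_{i+1} = X_i + S_i
        # Step 4: rebuild the walk by choosing successors on the transition graph
        new = [next(v for v in T[walk[-1]] if v[2] == x[1])]               # 4(a)
        for j in range(1, n - 1):                                          # 4(b)
            new.append(next(v for v in T[new[-1]] if v[2] == x[j + 1]))
        s1 = 1 if swap(new[0]) else 0                                      # 4(c)
        new.append(next(v for v in T[new[-1]] if v[2] == (x[0] + s1) % 2))
        walk = new
        states.append(tuple(x))                                            # Step 5: repeat
    return states

print(run_algorithm(parity_plus_1, (0, 0, 0, 0), 3))
# [(0,0,0,0), (1,0,1,1), (1,1,0,1), (0,0,0,0)], matching the worked example
```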

1.3 Main Results

Through the algorithm we gain powerful information about cycles. Our two main results rest on the following proposition. Before we give it, we must define a small piece of notation. For what follows, let F be our SDS-map.

Definition 9. Let W = w_1 → · · · → w_n and V = v_1 → · · · → v_n be two walks. Then W leads into V if there is an edge from w_n to v_1 on the transition graph.

If W leads into V, then W → V denotes the walk w_1 → · · · → w_n → v_1 → · · · → v_n.

Proposition 3. Let F be our SDS-map, and let X_1, ..., X_k be a sequence of system states generated by inputting X_1 into our algorithm. Say that W_1, ..., W_k is the corresponding sequence of walks, that S_1, ..., S_k is the corresponding sequence of strings, and that the W_i form the closed walk W_1 → W_2 → · · · → W_k → w_{1,1}. Then

X_1 ↦_F X_2 ↦_F · · · ↦_F X_k ↦_F X_1  if and only if  S_1 + S_2 + · · · + S_k = (0, 0, ..., 0).

Proof. Let ~0 denote (0, 0, ..., 0). For the "⟸" direction: we have that S_1 + S_2 + · · · + S_k = ~0. By Proposition 2,

X_1 ↦_F X_2 ↦_F · · · ↦_F X_k.

All we have to prove is that X_k ↦_F X_1.

Observe that X_k = X_1 + S_1 + S_2 + · · · + S_{k-1}. By Lemma 1 we know that X_k ↦_F X_k + S_k. Then

X_k + S_k = (X_1 + S_1 + S_2 + · · · + S_{k-1}) + S_k,

and since S_1 + S_2 + · · · + S_k = ~0, we have

X_k + S_k = X_1 + ~0 = X_1.

Thus X_k ↦_F X_1, and hence X_1 ↦_F X_2 ↦_F · · · ↦_F X_k ↦_F X_1.

For the "⟹" direction: we have X_1 ↦_F X_2 ↦_F · · · ↦_F X_k ↦_F X_1. Since X_k ↦_F X_1, we must have X_k + S_k = X_1. Make the following two observations:

X_1 = X_1 + ~0,
X_k = X_1 + S_1 + S_2 + · · · + S_{k-1}.

Combining these with the fact that X_k + S_k = X_1 tells us

X_1 + S_1 + S_2 + · · · + S_k = X_1 = X_1 + ~0,

and hence S_1 + S_2 + · · · + S_k = ~0.

Proposition 4. Let F be our SDS-map and let X_1, ..., X_k be a sequence of system states such that X_1 ↦_F X_2 ↦_F · · · ↦_F X_k ↦_F X_1. Let W_1, ..., W_k be the walks corresponding to this sequence of system states. Then the following is a closed walk on the transition graph:

W = W_1 → W_2 → · · · → W_k → w_{1,1}.

Proof. Let X_1 = (x_1, ..., x_n). From Step 4(a) we see that w_{i,1} is a direct successor of w_{i-1,n} for any i > 1. This means the walk W_1 → W_2 → · · · → W_k → W_{k+1} exists.

As X_k ↦_F X_1, we have X_{k+1} = X_1. Since states correspond to unique walks, W_1 must equal W_{k+1}, which means w_{k+1,1} = w_{1,1}. Combining this with our observation about Step 4(a), we see that w_{1,1} is a direct successor of w_{k,n}. Hence W exists and is a closed walk on the transition graph.

Note that the converse is not true. We cannot choose any closed walk and find phase space cycles: the walk must be one that our algorithm could generate. This implies certain conditions on the walk's structure (in particular, it divides cleanly into some number of state-walks, all of which are the same size).


1.3.1 Shifting States

Here we reach the first of our theorems.

Theorem 1. Suppose that X_1 ↦_F X_2 ↦_F · · · ↦_F X_k ↦_F X_1. Let T = (t_1, ..., t_{kn}) denote the string of length kn formed by concatenating these states:

T = (x_{1,1}, x_{1,2}, ..., x_{1,n}, x_{2,1}, ..., x_{2,n}, ..., x_{k,n}).

Let σ_m denote rotation of a length-kn string by m positions, and let Y_i denote the i-th block of n entries of the rotated string, that is,

Y_i = (σ_m(T)_{(i-1)n+1}, σ_m(T)_{(i-1)n+2}, ..., σ_m(T)_{(i-1)n+n}).

Then

Y_1 ↦_F Y_2 ↦_F · · · ↦_F Y_k ↦_F Y_1.

Example: In Parity+1 over C_4 we have that (0, 0, 0, 0) ↦_F (1, 0, 1, 1) ↦_F (1, 1, 0, 1) ↦_F (0, 0, 0, 0). This gives us the string T = 000010111101. If we shift this string by one we get 000101111010. Therefore Theorem 1 tells us that (0, 0, 0, 1) ↦_F (0, 1, 1, 1) ↦_F (1, 0, 1, 0) ↦_F (0, 0, 0, 1).

Proof. We now prove the theorem. Let W_1, ..., W_k be the walks corresponding to X_1, ..., X_k and let V_1, ..., V_k be the walks corresponding to Y_1, ..., Y_k. Furthermore, let W = W_1 → · · · → W_k.

Proving the theorem amounts to proving it for a rotation by one: if a rotation by one works, then we can repeatedly rotate by one to reach a rotation of any size.

With this in mind, let the rotation size be one. Given i, say that X_i = (x_1, ..., x_n). Due to the rotation by one,

Y_i = (y_1, ..., y_n) = (x_2, ..., x_n, x'_1).

As this holds for any i, notice that Y_{i+1} = (y'_1, y'_2, ..., y'_n) = σ_1(X_{i+1}) = (x'_2, x'_3, ..., x'_n, x''_1).

The walk corresponding to Y_i is

V_i = (y_n, y_1, y_2) → (y'_1, y_2, y_3) → · · · → (y'_{n-2}, y_{n-1}, y_n) → (y'_{n-1}, y_n, y'_1).

Substituting the x's in for the y's, we find that

V_i = (x'_1, x_2, x_3) → (x'_2, x_3, x_4) → · · · → (x'_{n-1}, x_n, x'_1) → (x'_n, x'_1, x'_2)
    = w_{i,2} → w_{i,3} → · · · → w_{i,n} → w_{i+1,1}.

Let the string associated with W_i be (s_1, s_2, ..., s_n) and the string associated with W_{i+1} be (s'_1, s'_2, ..., s'_n). The string corresponding to V_i is then S = (s_2, s_3, ..., s_n, s'_1). By Lemma 1,

Y_{i+1} = Y_i + S
        = (x_2, x_3, ..., x_n, x'_1) + (s_2, s_3, ..., s_n, s'_1)
        = (x_2 + s_2, x_3 + s_3, ..., x_n + s_n, x'_1 + s'_1)
        = (x'_2, x'_3, ..., x'_n, x''_1).

A direct calculation of the walk corresponding to Y_{i+1} gives

V_{i+1} = (x''_1, x'_2, x'_3) → (x''_2, x'_3, x'_4) → · · · → (x''_{n-1}, x'_n, x''_1) → (x''_n, x''_1, x''_2)
        = w_{i+1,2} → w_{i+1,3} → · · · → w_{i+1,n} → w_{i+2,1}.

Thus V_i leads into V_{i+1}, and each is a subwalk of W. As this is true for every i, the walk V_1 → · · · → V_k → v_{1,1} is simply a rotated version of the closed walk W. By Lemma 2,

Y_1 ↦_F Y_2 ↦_F · · · ↦_F Y_k ↦_F Y_1.

This proves the theorem for rotation by one. Repetition of the single rotation gives the theorem for arbitrary rotations.

Given one phase space cycle, this theorem allows us to generate more. It tells us why, when we have a cycle of size k in the phase space, we usually have many cycles of that size.

1.3.2 Finding other graphs with points of the same period

Here we prove that if an SDS over C_n has a k-cycle in its phase space, then that SDS has a k-cycle in the phase space over C_{n+ank}, for any natural number a.

To do this we require some definitions. For what follows, say that X_1, ..., X_k are states such that X_1 ↦_F · · · ↦_F X_k ↦_F X_1, and that W_1, ..., W_k and S_1, ..., S_k are the corresponding walks and strings.

Definition 10. For any a ∈ ℕ, define the following objects:

W^a_i := W_i → W_{i+1} → · · · → W_{i+ak},
S^a_i := the string associated with W^a_i,
X^a_i := (x_{i,1}, ..., x_{i,n}, x_{i+1,1}, ..., x_{i+1,n}, ..., x_{i+ak,1}, ..., x_{i+ak,n}),

where the subscripts and superscripts are calculated mod k.

Observe that

S^a_i = (s_{i,1}, ..., s_{i,n}, s_{i+1,1}, ..., s_{i+1,n}, ..., s_{i+ak,1}, ..., s_{i+ak,n}).

Lemma 3. The walk W^a_i is a state-walk. In particular, it is the walk corresponding to the state X^a_i.

Proof. Since X_1 ↦_F · · · ↦_F X_k ↦_F X_1 we know that x_{i,j} = x_{I,j}, where I = i mod k. Likewise, W_i = W_I. A direct calculation then verifies that the walk corresponding to X^a_i is W^a_i.

Lemma 4. X^a_i ↦_F X^a_{i+1}.

Proof. Input X^a_i into the algorithm and work through the first three steps.

Step 1: Find the walk corresponding to X^a_i. By Lemma 3 it is W^a_i.

Step 2: Find the string corresponding to W^a_i. We see that it is

S^a_i = (s_{i,1}, ..., s_{i,n}, s_{i+1,1}, ..., s_{i+1,n}, ..., s_{i+ak,1}, ..., s_{i+ak,n}).

Step 3: Calculate that

X^a_i + S^a_i = (x_{i,1}, ..., x_{i,n}, ..., x_{i+ak,1}, ..., x_{i+ak,n})
             + (s_{i,1}, ..., s_{i,n}, ..., s_{i+ak,1}, ..., s_{i+ak,n})
             = (x_{i,1} + s_{i,1}, ..., x_{i+ak,n} + s_{i+ak,n})
             = (x_{i+1,1}, ..., x_{i+1,n}, x_{i+2,1}, ..., x_{i+2,n}, ..., x_{i+ak+1,1}, ..., x_{i+ak+1,n})
             = X^a_{i+1}.

By Lemma 1 we have that X^a_i ↦_F X^a_{i+1}.

Theorem 2. Say there is a point of period k for [f_{C_n}, id]. Then there is a point of period k for [f_{C_{n+ank}}, id], for any natural number a.


Proof. Let X_1, ..., X_k be states of C_n such that X_1 ↦_F · · · ↦_F X_k ↦_F X_1, where each X_i is a point of period k. (Observe that this means there are no duplicate states among X_1, ..., X_k.)

By Lemma 4, we have X^a_i ↦_F X^a_{i+1}. Since X^a_{k+1} = X^a_1, we have

X^a_1 ↦_F · · · ↦_F X^a_k ↦_F X^a_1.

Furthermore, because there are no duplicate states among X_1, ..., X_k, it follows that there are no duplicate states among X^a_1, ..., X^a_k. (Indeed, each X^a_i begins with X_i.) Thus each X^a_i is a point of period k for [f_{C_{n+ank}}, id].

Corollary 1. If X_1, ..., X_k forms a k-cycle in the phase space of C_n, then X^a_1, ..., X^a_k forms a k-cycle in the phase space of C_{n+ank}.

Example: Recall that (0000) is a point of period 3 in C_4 for Parity+1: one can directly calculate that (0000) ↦_F (1011) ↦_F (1101) ↦_F (0000). By Corollary 1, concatenating these states into the state (0000101111010000) yields a point of period 3 in C_16. Similarly, in C_28 the state

(0000101111010000101111010000)

is a point of period 3, and in C_40 the state

(0000101111010000101111010000101111010000)

is a point of period 3.

Recall that (0, 0, 0, 0) ↦_F (1, 0, 1, 1) ↦_F (1, 1, 0, 1) ↦_F (0, 0, 0, 0). Notice that the larger states are made up of X_1 followed by repeating chunks of X_2, ..., X_k, X_1. Below is the state for the C_40 example, with dots inserted to visually highlight this:

(0000 · 101111010000 · 101111010000 · 101111010000).

Breaking down the large chunks, we have

(0000 · 1011 · 1101 · 0000 · 1011 · 1101 · 0000 · 1011 · 1101 · 0000).
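This example can be checked by direct iteration; the sketch below (our own code, with hypothetical helper names) computes the period of the concatenated C_16 state.

```python
def parity_plus_1(a, b, c):
    return (a + b + c + 1) % 2

def system_update(f, state):
    s, n = list(state), len(state)
    for i in range(n):
        s[i] = f(s[(i - 1) % n], s[i], s[(i + 1) % n])
    return tuple(s)

def period(f, state, cap=10_000):
    x, t = system_update(f, state), 1
    while x != state and t < cap:
        x, t = system_update(f, x), t + 1
    return t if x == state else None

big = tuple(int(c) for c in "0000101111010000")   # the C_16 state above
print(period(parity_plus_1, big))                 # expect 3, by Corollary 1
```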


1.4 Future Directions.

Theorem 1's statement was phrased in terms of

X_1 ↦_F X_2 ↦_F · · · ↦_F X_k ↦_F X_1;

however, such sequences do not necessarily correspond to k-cycles within the phase space: they may correspond to cycles of a length that divides k. For example, if k = 6, then the following would be of the above form but correspond to a 3-cycle:

X_1 ↦_F X_2 ↦_F X_3 ↦_F X_1 ↦_F X_2 ↦_F X_3 ↦_F X_1.

Characterizing when this happens is a future goal.

Relatedly, the closed walk corresponding to a k-cycle in the phase space is not always made up of k unique state-walks: there may be repeats. For example, the walk W_1 → W_2 → W_3 → W_4 → W_5 → W_6 may correspond to a 6-cycle in the phase space, as it meets the criteria of Lemma 3. It is possible, however, that W_1 = W_4, W_2 = W_5, and W_3 = W_6. Whether there are any special properties of such walks is an open question.

On another note, a fully developed theory of the transition graph may be able to completely characterize periodic points with the same ease with which the compatibility graph characterizes fixed points. This is possible for points of period 2 for certain functions. We can mimic the construction of the compatibility graph, except we let our vertices be local swaps rather than local fixed points. This creates an "anti-compatibility graph." Through the same methods used for the compatibility graph we can analyze a special type of state: the inverse pair.

Definition 11. The states X and X' are an inverse pair if X ↦_F X' and X = X' + S, where S = (1, 1, 1, ..., 1). (In other words, X' is X with the 0's and 1's flipped.)

We wonder what kinds of functions have only inverse pairs, because it is those functions that we can completely characterize with this "anti-compatibility graph." There do exist functions which have points of period 2 that are not inverse pairs. However, we have the following conjecture.

Conjecture 1. Points of period 2 for symmetric functions on C_n are always inverse pairs.


We have confirmed the conjecture for some basic symmetric functions, including the All 0, Nand+1, Majority, Parity, Nor+Nand, Nor+1, Parity+1, Minority, Nand, and All 1 functions.

Finally, generalizing the transition graph to general SDS and arbitrary permutations would be quite fruitful and would help characterize the periodic points of many other structures.


2. THE ITERATED PHASE SPACE

What happens when we "apply an SDS to its own phase space"? This question leads us to the iterated phase space. To understand this structure, we must first extend our notion of SDS.

2.1 Generalized SDS

General SDS extend simple cyclic SDS to arbitrary simple graphs. Our construction mimics that of [4, Chapter 4.1].

A general SDS (or, from now on, simply SDS) has the same construction as a simple cyclic SDS. The difference is that, instead of a single local vertex function f : K^3 → K, we have a family of functions

f_n : K^n → K.

This gives us a local vertex function for an arbitrary number of neighbors (rather than exactly three inputs). In light of this, instead of applying the local vertex function to a vertex, we apply the local function F to the vertex v: if v has d(v) neighbors, then

F(v) = f_{d(v)+1}(v),

where the input is the tuple of states of v and its neighbors. (Note the change in notation: F is no longer shorthand for an SDS-map, but is rather a local vertex function.)

2.2 The Iterated Phase Space

In this section we define the iterated phase space (IPS). Let [F_G, π] be the SDS-map with local vertex function F, graph G, and permutation π. Consider the phase space of this SDS. By ignoring loops, state labels, and edge directions we can transform the phase space into a graph G'. From G' we take its connected components to form the set of connected graphs T_π.


Now, given F and G as above, consider a set of permutations S = {π_1, π_2, ..., π_k} for G. Then G generates the set of graphs T = T_{π_1} ∪ T_{π_2} ∪ · · · ∪ T_{π_k} under the function F and the permutation set S. (If F and S are understood, we just say that G generates T.) We call T the children of G.

Definition 12. Let IPS_n[F_G, S] denote the n-th iteration of the iterated phase space (IPS) formed by a seed graph G, the local update function F, and the permutation set S.

IPS_n[F_G, S] is a graph whose vertices are themselves graphs. We call these vertices the elements of our graph. In particular:

1. IPS_0[F_G, S] has a single element, which is the graph G.

2. The elements of IPS_1[F_G, S] are G and the children of G. We draw a directed edge from G to each child. If two elements are the same we identify them.

3. For IPS_n[F_G, S], we consider all the elements in IPS_{n-1}[F_G, S] but not in IPS_{n-2}[F_G, S]. (That is, we consider all the new children created in the previous iteration.) Call this set M.

Then IPS_n[F_G, S] consists of IPS_{n-1}[F_G, S] along with all the children of each element of M, where each element has an edge to its corresponding children. We identify any elements which are the same.

If we let n go to infinity we create the IPS, which we denote IPS[F_G, S]. Taking S to be the set of all possible permutations of G gives us all possible phase spaces obtainable from the seed graph.

It is worth emphasizing that an element of an IPS is a graph representing a phase space.

Example: Let K_2 be our seed graph. Then IPS_0[Parity_{K_2}, {id}] consists of a single element, which is K_2.

The phase space that the graph K_2 gives under Parity and the identity permutation is a directed 3-cycle and a single point. If we remove the loop, the directions, and the states, the phase space appears as follows.


Fig. 2.1: The children of K2 under Parity and {id}.

Thus K_2 generates C_3 and a K_1. Hence IPS_1[Parity_{K_2}, {id}] is as follows.

, {id}] is as follows

Fig. 2.2: IPS1[ParityK2, {id}]

For the second iteration we can apply Parity to these two new graphs. The single point gives a trivial phase space consisting of two points, both looping to themselves (which we see as the loop in the IPS). The C_3 gives the phase space described in Figure 1.2 (i.e., a two-cycle and a four-cycle), so we draw an arrow from C_3 to K_2 and to a C_4 in the second iteration.

Fig. 2.3: IPS2[ParityK2, {id}]

The only new graph is C_4. This generates C_3 and a single point under the identity permutation. Thus the fourth iteration of our IPS, when considering only the identity permutation, looks like


Fig. 2.4: IPS4[ParityK2, {id}], which is the same as IPS[ParityK2

, {id}]

Any further iterations will not change this graph, as no new children are created. In particular, this means that IPS_4[Parity_{K_2}, {id}] = IPS[Parity_{K_2}, {id}].

2.2.1 Definitions for IPS

We wish to be able to easily refer to the location of elements in an IPS. For the following, let L and K be elements of IPS[F_G, S].

Definition 13. The generation of L is the smallest n such that IPS_n[F_G, S] contains L.

Definition 14. K is an ancestor of L (and L is a descendant of K) if K has lower generation than L and there is a directed path from K to L in IPS[F_G, S].

Definition 15. K is a parent of L (and L is a child of K) if K is an ancestor of L and adjacent to L in IPS[F_G, S].


2.3 Classification of IPS

We wish to classify the behavior of IPSs for symmetric functions. The rest of this section is devoted to this goal. We will say a vertex is white if its state is 0, and black if its state is 1, and we will freely interchange "white" with "0" and "black" with "1." (Likewise, we interchange a graph with a black-and-white coloring and a graph with a state assigned to each vertex.)

2.3.1 Behavior of Or and And

Or and And are dynamically equivalent. We will focus on Or, but all resultsalso apply to And.

Or takes any vertex next to a 1 and turns it into a 1. In particular, this means that any state containing a 1 eventually updates into the all-1's state; in this way Or causes the 1's to "spread." The only starting state that does not eventually become the all-1's state is the all-0's state, which always remains the all-0's state.

This means that, for any graph, the phase space of Or is a single unconnected fixed point (which corresponds to the all-0's state) and a large directed tree with all states leading towards the root, which is the fixed all-1's state.

The IPS in this case would be infinite, as after each iteration we get two graphs: a single point and a tree with 2^n − 1 elements. Since each iteration gives a larger tree graph, we never get a graph we had previously.

For example, IPS_2[Or_{K_2}] is pictured below, where each color represents a separate permutation and black represents all permutations.

[Figure: IPS_2[Or_{K_2}]]

One question is whether, in an arbitrary element of IPS[Or_{K_2}], there is branching.

It is possible for the paths to branch. Take the following graph of ourOr IPS with the permutation (1, 2, 3, 4, 5, 6, 7).

A subgraph of the above’s phase space under Or is

Thus Or can branch, which suggests that the Or IPS is not entirelysimple. We further analyze this branching now.


Lemma 5. Consider a graph with vertices p and q at distance n from each other. Say that p is black and all other vertices are white. Then it takes n system updates for q to become black under any permutation in which a vertex v updates before a vertex v' whenever the distance from v to p is greater than or equal to the distance from v' to p.

Proof. Let k be the distance from q to the nearest black vertex. Because of our choice of permutation, each system update reduces k by exactly 1. Thus it takes n system updates.

We can find the length of branches in the elements of the IPS. Let H be an element of the IPS. These elements resemble star graphs, and have a natural center at the vertex that, in the phase space, corresponds to the all-ones state. A branch is then a path graph with one end at the center.

Proposition 5. Let G be an arbitrary graph. Then IPS_n[Or_G] has an element with a branch of length 2^{n-1}. Furthermore, there are no other elements of IPS_n[Or_G] with strictly larger branches.

Proof. Proceed by induction. The base case is immediately true. For the induction step, take an element of IPS_n[Or_G] whose longest branch has maximal length; by induction, this length is 2^{n-1}. Call this chosen element H. Consider the longest path graph in H and place the natural identity permutation on it (i.e., the left-to-right permutation). Make this path and the rest of the graph all white, except for one black vertex at the end of the path that is not the center. By Lemma 5 it takes 2^n system updates to make this path all black.

This corresponds to a directed branch of length 2^n in the corresponding phase space. Hence we have an element with a branch of length 2^n in IPS_{n+1}[Or_G].

Observe that branching is common and easy to make. We expect phasespaces of Or to be littered with branches. For example, if we have a graphwith a coloring that creates branches and we reverse the coloring of thegraph, the inverse state also branches.

Next we give two definitions in preparation for Theorem 3.

Definition 16. Say L is a graph such that there exists a vertex which, if deleted, splits the graph into two disjoint parts, G and K. We call this vertex a connector vertex.

Under Or, if L has a vertex order π such that the connector vertex updating to 1 on a system update implies that all of K updates to 1 on the same system update, then we say that L extends G under π.

Note that the connector vertex is considered part of G.

Example: We can extend any graph G by adding a line to any vertex v.Here v forms the connector. This graph G with the line is an extendergraph if we give the line the identity permutation and update it after weupdate v.

Definition 17. Let G be a graph. Given a local update function F and a permutation π of G, we let Γ_G denote the phase space of G under [F_G, π]. We say a phase space Γ_G is embedded in the phase space Γ_L if, when viewed as digraphs, Γ_G is a subgraph of Γ_L. We denote this by Γ_G ⊂ Γ_L.

Theorem 3. Let our local update function be Or and let our graphs be G and L, such that L extends G under some vertex order π. Then Γ_G ⊂ Γ_L, where Γ_L is the phase space of [Or_L, π] and Γ_G is the phase space of [Or_G, σ] for any vertex order σ of G.

Proof. Consider Γ_G. For now we ignore the all-0's state and consider Γ_G', which is Γ_G with the all-zeros state removed. Notice that Γ_G' is a tree. (Indeed, the phase space of Or on any graph is a tree together with a single point.)

Consider one of the leaves in Γ_G'. From this state we can follow a path down to the all-1's state.

Now consider Γ_L. Any vertex in Γ_L has a part of its state corresponding to G. Denote the G-part by g and the rest by k. Observe that g and k can be seen as states that together give a state of L.

Set g to the state of one of the outer leaves of Γ_G, and set k to the all-0's state.

Notice that, when applying the SDS-map to L, the k-part never influences how the g-part updates. That is, if G in state g updates to g', then in L the g-part of the state also updates to g'.

Why is this true? Consider any update of L from g with k all zeros. Because k is all zero, any vertex that updates ignores vertices in the k-part; Or only cares about vertices that are 1. Thus the k-part, as long as it is 0, never influences how the g-part updates.

After the next update, the g-part has been updated and k is either all 0 or all 1. If the former, we repeat the same logic to find that the g-part updates independently of k. If k is all 1's, then the same happens. This is because, for k to have become all 1, the connector vertex must have become 1. Once the connector is 1 it cannot change. From the perspective of the g-part, it only sees that the connector is 1; the rest of k does not matter to it. As the connector becoming 1 is exactly what would have happened to G without k, the g-part continues updating as it would in G.

Thus g inside a state of L updates the same way that g would as a state of G. That is, g in L follows the same path that g would follow in Γ_G.

Note that at the end the G-part is in the all-1's state. The k-part is also all 1's, because the connector vertex must have updated to 1.

This tells us that, given a leaf in Γ_G, we get a path down to the all-1's state, and this path is embedded in Γ_L. However, we do not yet know whether the branching in Γ_G is consistent with that of Γ_L. That is, two paths that intersect in Γ_G might not intersect in Γ_L. We show through two cases that the intersections are in fact preserved.

Case 1: Let v be the connector vertex. Recall that v is in G. Say that the two paths intersect at a point in Γ_G at which v has vertex state 1.

Let (a, 1), (b, 1) and (c, 1) denote states of G, where the 1 is the state of the connector vertex and a, b and c are the states of the remaining vertices. Note that these states correspond to the g-part of L.

Say that (a, 1) and (b, 1) both update to (c, 1). Then (c, 1) is the intersection in Γ_G. This same structure will also be found in Γ_L. That is, (a, 1, l_a) and (b, 1, l_b) both update to (c, 1, l_c), where (a, 1, l_a), (b, 1, l_b), (c, 1, l_c) are states of L corresponding to the states (a, 1), (b, 1) and (c, 1), with the l's being the extra entries that correspond to the k-part of L.

If this structure did not exist, then the two updates would have to differ in their k-parts, so one of them would update to a state of the form (c, 1, l) with l not all ones. But this cannot happen: the connector vertex v has state 1, and since L extends G, the whole k-part becomes all ones on that system update. This is a contradiction.

Thus this branching in Γ_G must also be seen in Γ_L.

Case 2: Say that the connector vertex has state 0. This case is proved similarly to Case 1: construct the same structure in Γ_G and observe that if the corresponding structure is not found in Γ_L, then we again have a contradiction.

Thus each path intersects where it should. Since we know that, starting from a leaf, each path is in Γ_L, that the paths intersect where they should, and that they all end at the all-1's state, we can conclude that

Γ_G ⊂ Γ_L.

Finally, the all-0's state of Γ_G corresponds to the all-0's state in Γ_L.

Definition 18. Take graphs G and L, L_1, ..., L_n such that L is an extension of L_1, which is an extension of L_2, which is ... an extension of L_n, which is an extension of G, all under appropriate permutations. We then say that L multi-extends G.

Corollary 2. The previous theorem holds if L multi-extends G under a set of permutations.

Proof. By a simple induction on the number of extensions, where the previous theorem forms the base case.

Corollary 3. Say that S is the set of all permutations and K_2 is the seed graph. Let the graph G be an element of IPS[Or_{K_2}, S]. Let C be the set of all children of G; each child c ∈ C is generated from G via a permutation π. Then c contains G as a subgraph if c extends G under π.

Proof. By induction. The base case is immediate, as K_2 goes to K_3 in the IPS. Now consider Γ_G. Let U(Γ_G) denote the undirected graph underlying Γ_G. We must show that if U(Γ_G) extends G, then U(Γ_G) is a subgraph of U(Γ_{U(Γ_G)}).

By induction we know that G is a subgraph of U(Γ_G). Because Or produces trees as its phase spaces, U(Γ_G) extends G for some set of permutations. (This is because U(Γ_G) is just many path graphs attached to G.) Then by Corollary 2 we find that U(Γ_G) is a subgraph of U(Γ_{U(Γ_G)}) for appropriate permutations.

That is, the IPS of Or, when we look only at the correct permutations, consists of growing trees. In particular, the IPS contains a tree of growing trees.


2.3.2 Behavior of Parity and Parity+1

Parity and Parity+1 are the only invertible symmetric SDS, and so they have been well studied by others. The following theorem appears early on in [4].

Theorem 4. For any graph G, [Parity_G, id] and [Parity+1_G, id] are invertible.

Corollary 4. For an arbitrary graph G and permutation π, the phase space of [Parity_G, π] or [Parity+1_G, π] is made up of disjoint cycles.

Proof. Since [Parity_G, π] and [Parity+1_G, π] are invertible, each state has exactly one parent. This is only possible in a cycle.

Theorem 5. Consider C_k with k ≥ 3. The phase space of [Parity_{C_k}, id] is made up of the cycles in S, where

S = {C_x | x is a factor of k−1}      if k is even,
S = {C_x | x is a factor of 2(k−1)}   if k is odd.

Proof. See [4, Section 5.4.2].

Example: Theorem 5 tells us that the phase space of [Parity_{C_3}, id] is made up of cycles whose lengths are factors of 4, while the phase space of [Parity_{C_4}, id] is made up of cycles whose lengths are factors of 3.


Fig. 2.5: The phase space of [ParityC4, id]
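As a quick check of Theorem 5 (our own sketch, not the thesis's; helper names are hypothetical), the following code computes the cycle lengths in the phase spaces of [Parity_{C_3}, id] and [Parity_{C_4}, id] by direct iteration.

```python
from itertools import product

def parity(a, b, c):
    return (a + b + c) % 2

def system_update(f, state):
    s, n = list(state), len(state)
    for i in range(n):
        s[i] = f(s[(i - 1) % n], s[i], s[(i + 1) % n])
    return tuple(s)

def cycle_lengths(f, n):
    lengths = []
    for start in product((0, 1), repeat=n):
        x, t = system_update(f, start), 1
        while x != start and t <= 2 ** n:
            x, t = system_update(f, x), t + 1
        if x == start:
            lengths.append(t)
    return sorted(lengths)

print(cycle_lengths(parity, 3))   # lengths should all divide 2(3-1) = 4 (k odd)
print(cycle_lengths(parity, 4))   # lengths should all divide 4-1 = 3  (k even)
```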

Given any k ∈ ℕ, Defant has completely characterized the number of periodic points of period k of [Parity_{C_n}, π] and [Parity+1_{C_n}, π] in [1].

2.3.3 Behavior of Majority and Majority+1

Majority and Majority+1 are dynamically equivalent. We will focus on Majority, but all results hold for Majority+1.

We can characterize what happens to any star graph as long as the permutation starts with the center vertex.

Theorem 6. If we apply Majority to the star graph under a permutation that starts with the center, then the resulting phase space is two star graphs, and appears as pictured below.


Proof. If all vertices are white, or all are black, then we have a fixed point. If half or more of the vertices are black, we update to the system state where all vertices are black. If fewer than half of the vertices are black, we update to the all-white system state.

In particular, this means that if we start with a star graph as our seed graph, we always get star graphs in our IPS. Because these star graphs are always larger than the one we began with, repeating gives larger and larger star graphs. This means the IPS is infinite.

Theorem 7. If we apply Majority to the star graph under a permutation that ends with the center, then we have the phase space pictured below.


Proof. The proof is a case bash similar to the previous proof.

2.3.4 Behavior of Nor and Nand

Many have studied Nor and Nand in great detail. We have not yet looked at their IPS closely, as they appear complicated. However, we have three theorems that give us a partial picture of their behavior.

Because the functions are dynamically equivalent, all results for Nor apply to Nand.

Theorem 8. Nor and Nand are the only symmetric functions with no fixed points.

Theorem 9. The phase space of Nor is made up of directed cycles and states updating directly into the cycles.
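As a quick sanity check of the fixed-point claim in Theorem 8, the following minimal Python sketch (our own; the closed-neighborhood convention and the identity update order are assumptions) brute-forces every state of Nor on a few small seed graphs and looks for fixed points.

from itertools import product

def nor_pass(state, neighbors, order):
    # One sequential Nor update: a vertex becomes 1 exactly when every
    # vertex in its closed neighborhood is currently 0.
    s = list(state)
    for v in order:
        s[v] = int(not any(s[u] for u in neighbors[v] | {v}))
    return tuple(s)

# A few small seed graphs on four vertices: a path, a cycle and a star.
graphs = {
    "P4":   {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}},
    "C4":   {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}},
    "star": {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}},
}

for name, nbrs in graphs.items():
    order = sorted(nbrs)
    fixed = [s for s in product((0, 1), repeat=len(nbrs))
             if nor_pass(s, nbrs, order) == s]
    print(name, "fixed points of Nor:", fixed)   # expected: none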

[4] contains a deep result about cycles in the Nor phase space. Loosely speaking, it says the following.

Theorem 10. There is always a fixed number of periodic points in a phase space made from Nor, and we can distribute the cycles however we want by choosing a correct update order (which is now no longer limited to a permutation, but is a word).

For example, if we have 8 periodic points, then there exists a word such that the phase space has four 2-cycles. Likewise, there exists a word such that the phase space has one 8-cycle.

This means the IPS is infinite if we use words, as we can always create larger and larger cycles. It would be interesting to investigate whether the IPS is infinite when we limit ourselves to permutations.

2.4 Other results

We have a small lemma and one conjecture about what phase spaces cannot exist. These are useful in that they tell us what phase spaces cannot come out of the seed graph K_2.

Lemma 6. Take a symmetric binary SDS over K_2. Then its phase space cannot be made up of two disjoint 2-cycles.


Proof. By contradiction. Assume that such a phase space exists. Without loss of generality, assume that we update the left vertex first and then the right one. Let "→" denote a single vertex update. Let the string αβ denote the system state, where α is the state of the first vertex and β is the state of the second. Then the state 11 can be paired in a 2-cycle with three different states, giving us three cases.

Case 1: We have the following vertex updates:

11 → 01 → 01
00 → 10 → 10

When we vertex update the second vertex in the state 01 we get 1. However, when we do the same in 10 we get 0. This is a contradiction: we are working with symmetric functions, and 10 and 01 have the same number of 1's and 0's, so they should update to the same value.

Case 2: We have the following vertex updates:

11 → 10 → 10
00 → 01 → 01

As before, we find that 10 and 01 give different updates when they should give the same. Contradiction.

Case 3: The same argument as before, using the following vertex updates:

11 → 01 → 00
00 → 10 → 11
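Lemma 6 can also be confirmed by brute force. In the sketch below (our own; the closed-neighborhood convention is an assumption), a symmetric binary vertex function on K_2 only sees how many of the two relevant states are 1, so there are just eight rules to try; none of them yields a phase space made of two disjoint 2-cycles.

from itertools import product

def sds_map(rule, order=(0, 1)):
    # Full sequential update of a symmetric SDS on K2.  The rule maps the
    # number of 1s in a vertex's closed neighborhood (0, 1 or 2) to a bit.
    def step(state):
        s = list(state)
        for v in order:
            s[v] = rule[s[0] + s[1]]
        return tuple(s)
    return {st: step(st) for st in product((0, 1), repeat=2)}

found = []
for rule in product((0, 1), repeat=3):          # all 8 symmetric rules
    f = sds_map(rule)
    # Two disjoint 2-cycles: no fixed points and every state has period 2.
    if all(f[s] != s and f[f[s]] == s for s in f):
        found.append(rule)

print("rules giving two disjoint 2-cycles:", found)   # expected: []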

Finally, we have the following conjecture.


Conjecture 2. Let (a_1, a_2, . . . , a_n) represent the state of a graph, and let inv be the inverse function that changes any 0 to a 1 and any 1 to a 0. Then, given a symmetric binary SDS, the following subgraph cannot exist in the phase space.


3. COLOR SDS

Our research into Color SDS was motivated by looking at the graph coloring problem in terms of SDSs.

The graph coloring problem asks how many colors one needs to color each vertex of a graph so that no two vertices of the same color are adjacent. We call such a coloring a proper coloring.

3.1 What is a Color SDS?

A color SDS is an SDS that is designed to color a graph, and to "stop" when the graph reaches a proper coloring (or never stop if no proper coloring exists). For an SDS, stopping means we have reached a fixed point.

One example of a color SDS (and the one that we worked on most) is the following:

• Graph: Any graph we want colored.

• States: the colors 1 . . . n

• Vertex Function: Look at the vertex; say it has color i. If it is adjacent to a vertex of the same color, change the color to i + 1 (mod n). Otherwise leave it alone.

When we write out the phase space of this SDS, the fixed points correspond exactly to the proper colorings.
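The following minimal Python sketch (our own illustration; the example graph, starting coloring, and update order are arbitrary choices) implements this vertex rule and runs it on a small path until it reaches a proper coloring.

def color_update(colors, neighbors, order, n_colors):
    # One pass of the color SDS: each vertex, in the given order, advances
    # its color by one (cyclically, staying in 1..n_colors) exactly when
    # some neighbor currently shares its color; otherwise it is left alone.
    c = dict(colors)
    for v in order:
        if any(c[u] == c[v] for u in neighbors[v]):
            c[v] = c[v] % n_colors + 1
    return c

# Example: the path P3 with two colors, starting all one color.
neighbors = {0: {1}, 1: {0, 2}, 2: {1}}
coloring = {0: 1, 1: 1, 2: 1}
for _ in range(4):
    print(coloring)
    coloring = color_update(coloring, neighbors, [0, 1, 2], 2)
# The run settles on a proper coloring, which is a fixed point of the SDS.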

3.2 Color Distance

We begin our investigation with the idea of Color Distance. This is a number assigned to each colored graph that represents how properly colored it is. One way to get such a measure is to calculate the phase space for the color SDS and say the color distance is the number of updates a state needs before reaching a fixed point.


However, this measure requires one to calculate the entire phase space. We would like to find another measure that approximates the same concept. Consider this alternate definition of color distance.

Definition 19. The color distance is defined as the number of vertices that are adjacent to a vertex of the same color.

In this case a lower color distance corresponds to a better coloring.
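A minimal sketch of Definition 19 (again our own code, assuming the graph is given as a neighbor dictionary): count the vertices that currently share a color with some neighbor.

def color_distance(colors, neighbors):
    # Definition 19: the number of vertices adjacent to at least one
    # vertex of the same color; 0 exactly for a proper coloring.
    return sum(any(colors[u] == colors[v] for u in neighbors[v])
               for v in neighbors)

# The path P3 colored 1-1-2 has distance 2 (both ends of the bad edge),
# while the proper coloring 1-2-1 has distance 0.
neighbors = {0: {1}, 1: {0, 2}, 2: {1}}
print(color_distance({0: 1, 1: 1, 2: 2}, neighbors))   # 2
print(color_distance({0: 1, 1: 2, 2: 1}, neighbors))   # 0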

Theorem 11. Take the color SDS and any colored graph G. After an update, the color distance stays the same or decreases.

Proof. Take a coloring on the graph G and apply the coloring function. Consider what happens when an arbitrary vertex v_i updates. We have three cases.

Case 1: v_i is not adjacent to a vertex of the same color and so does not change. Then the color distance does not change either.

Case 2: v_i is the same color as an adjacent vertex v_j and so updates to a new color, say c. Because v_i is no longer the same color as v_j, the color distance goes down by one. Furthermore, if we assume the updated v_i is not adjacent to any vertices of the color c, then the color distance does not change further and we are done.

Case 3: This is Case 2 without the last assumption. After updating, the vertex v_i is next to at least one vertex v_k of the same color. Because v_i is no longer the same color as v_j, the color distance goes down by one. Because v_i is now the same color as v_k, the color distance goes up by one. Thus the net change in the color distance is 0.

We now ask a few questions about the color SDS. Almost all of the following are conjectures that we have proved false.

Conjecture 3. Does running the color SDS on K_n always improve the color distance when possible?

Answer: No. For example, let 322 represent a coloring of K_3 on three colors. Updating this using the identity permutation gives

322
 ↓
332


This single update does not change the color distance. This example also works on arbitrary K_n:

322...2
 ↓
333...2

Conjecture 4 (a weaker version of Conjecture 3). Does running the coloring function on K_n eventually result in a proper coloring?

We think the above is true, but could not find a proof.

Conjecture 5. For any coloring of a graph G, is there a permutation π such that a single run of the coloring SDS improves the coloring when possible? That is, after updating, the graph requires fewer colors to color, unless it is already colored with the minimal number of colors. (Note that we are allowed to choose the permutation with each update.)

This conjecture is also false. Consider the following graph over four colors (where the number corresponds to the color of the vertex).

No permutation of the above graph will decrease the color distance after a single update. However, we can update this graph so that it eventually ends in a fixed point.

We wonder what classes of graphs give counterexamples, as this may help characterize the graphs which are hard to color.

3.3 Other Coloring SDSs, Other Color Distances

We could, of course, modify our coloring SDS. For example, a slightly smarter one would, when it finds a vertex that is adjacent to one of the same color, update the vertex to the first color that is free. If no colors are free, then it adds 1 to the color.

Note that the adding of 1 is important. If we instead did nothing, then we could make the following graph on 3 colors, which is a fixed point but not a proper coloring.

Perhaps more important than modifying our color SDS is modifying our concept of color distance. Our counterexample to Conjecture 5 suggests that our color distance may not be "fine" enough. One possible measure is the following.

Definition 20 (Fixed point coloring distance). Take a colored graph and a color SDS. The color distance is the number of vertices that, during the next update, do not change color.

In this case a higher color distance corresponds to a better coloring.
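Because the update is sequential, whether a vertex moves can depend on vertices updated earlier in the same pass, so the natural way to compute this measure is to simulate the pass. Here is a minimal sketch (our own; it assumes the original color SDS from Section 3.1, a fixed update order, and the sequential reading of "during the next update").

def fixed_point_distance(colors, neighbors, order, n_colors):
    # Definition 20: the number of vertices that keep their color during
    # the next sequential update of the color SDS.
    c = dict(colors)
    unchanged = 0
    for v in order:
        if any(c[u] == c[v] for u in neighbors[v]):
            c[v] = c[v] % n_colors + 1          # this vertex is forced to move
        else:
            unchanged += 1
    return unchanged

# On K3 colored (3, 2, 2) with three colors, only the middle vertex moves,
# so the fixed point coloring distance is 2 (a proper coloring would score 3).
neighbors = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(fixed_point_distance({0: 3, 1: 2, 2: 2}, neighbors, [0, 1, 2], 3))   # 2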

3.4 Lonely Colorings

Definition 21. A lonely coloring is a coloring of a graph that, when represented in the color SDS phase space, is an isolated fixed point disconnected from the rest of the phase space.

In other words, nothing updates to a lonely coloring, and a lonely coloring only updates to itself.

We investigated whether our color SDS had lonely colorings. The answer, somewhat surprisingly, is that they exist, but only if we allow more colors than the graph needs to color itself. These results are summed up in the next few theorems.

Theorem 12. There do not exist lonely colorings if there are only two colors.

Theorem 13. For a graph that needs a minimum of n colors to properly color, there does not exist a lonely coloring if we have n or fewer colors.


The proofs for these are not difficult and go by contradiction. We assume that there is a lonely coloring, then modify that coloring into a new one so that, under a specific permutation, it updates into the lonely coloring.

However, if we have n + 1 or more colors, then there may exist a lonely coloring. The phase space of K_2 on three colors provides an immediate example. This gives us the following conjecture.

Conjecture 6. Let G be a graph that requires a minimum of n colors to properly color. Then the coloring SDS over the graph G with n + 1 or more colors always has a lonely coloring.


FUTURE DIRECTIONS

The world of sequential dynamical systems is vast, with much waiting to be discovered. Our foray into periodic points is only a small improvement in our understanding, but we believe that the transition graph can be generalized to shed further light on periodic points.

Little is known about the IPS of Nor and Nand, reflecting their strange complexity. IPS in general are not well understood, save for a few simple functions. Finally, the color SDS is just one attempt to turn SDS into a tool with which to tackle a pure mathematical problem. Certainly many other applications of SDS exist, both pure and applied.


ACKNOWLEDGMENTS

I thank my colleagues Gwendolyn Claflin, Sophie D'Arcy, Colin Defant and Cory Saunders for their encouragement and mathematical support. I also thank the University of California Santa Barbara Math REU program and NSF award DMS-1358884 for making the research for the first chapter possible, along with Maribel Bueno Cachadina for organizing the REU. I thank the CCS Small Research Grant for funding the second two chapters. Finally, I thank Padraic Bartlett, who (along with helping organize the REU, editing, and fleshing out the algorithm in the first chapter) served as a magnificent mentor and guide.


BIBLIOGRAPHY

[1] Colin Defant. Mixing induced CP asymmetries in inclusive B decays.Phys. Lett., B393:132–142, 1997.

[2] Daniel C. Dennett. Darwin's Dangerous Idea. Simon and Schuster, 1995.

[3] Martin Gardner. Mathematical games: The fantastic combinations of John Conway's new solitaire game "Life". Scientific American, 223(4):120–123, 1970.

[4] Henning Mortveit and Christian Reidys. An Introduction to Sequential Dynamical Systems. Springer Science & Business Media, 2007.