JUNE 1, 2009
HOW TO THINK ABOUT ALGORITHMS THE PATH TO UNIVERSAL COMPUTATION
ELISA ELSHAMY
ABSTRACT. In the early 20th century, the famous mathematician David Hilbert
asked whether all mathematical problems could be solved by a mechanical procedure.
Hilbert believed that the answer to his question would be affirmative, but this turned out
to be very far from the truth. In order to tackle Hilbert’s question it was necessary to
formalize what was meant by a mechanical procedure. In the 1930’s several
mathematicians took up this challenge. Gödel came up with the recursive functions. Alan
Turing, the father of modern computers, came up with Turing Machines. In later
years, following the development of actual computers, new formalizations such as
Unlimited Register Machines were introduced. There is no single formal definition of
what an algorithm is. In this research, we will give an informal intuition for the notion of
an algorithm and discuss Turing machines and unlimited register machines as two
formalizations of this concept. The remarkable fact that came out of the study of
theoretical and practical computation is that diverse models of computation ranging from
Gödel’s recursive functions to Turing machines to modern computers all have equal
computational capabilities: in disguise they all have exactly the same algorithms! In fact,
they can even be programmed to simulate one another.
ELISA ELSHAMY (STUDENT)
DR. VICTORIA GITMAN (MENTOR)
HOW TO THINK ABOUT ALGORITHMS – ELISA ELSHAMY - 2 -
1. INTRODUCTION
We use algorithms to perform our daily tasks without even realizing it. Some of
these routines include using recipes to cook food, doing laundry,
and writing a paper. Pretty much anything where we follow an
ordered pattern of steps to reach a desired outcome qualifies as an
algorithm. The word algorithm originates from the name of the
9th century Persian mathematician, Abu Abdullah Muhammad
ibn Musa al-Khwarizmi whose works introduced the numeric
system of integers and the elementary algebraic concepts we still
use today. His name came to be associated with algorithms
because he introduced the modern algorithms for addition and
multiplication. Surprisingly, there is no formal definition of an
algorithm! However, there are some generally accepted criteria:
• Takes input, gives output
• Carried out mechanically
• Executed in finite time
• More than one algorithm for the same problem can exist
• Algorithms can only make sense if they aim to achieve a desired result
To simply summarize the above, algorithms are an explicit collection of steps for
carrying out a task. In computer lingo, an algorithm is anything that can be translated
into a computer program. Software is made up of many algorithms or computer
programs that work together to tell the computer how to behave in different scenarios. In
this paper, we will introduce and analyze the formal definitions of the algorithm concept
proposed by Alan Turing (1936) and by Shepherdson and Sturgis (1963).
One of the first attempts to develop a machinery to solve problems
algorithmically was made by the mathematician Gottfried
Wilhelm Leibniz. Born 1646, Leibniz became fascinated
with logic and reasoning once he was introduced to
Aristotle’s work in his teenage years. Aristotle (384 BC –
322 BC) was an ancient Greek logician and a pioneer of
formal reasoning. Inspired by these studies, Leibniz began to seek
an alphabet that could represent logical reasoning and
connections. Each symbol of the alphabet would visually
translate some idea and concatenating symbols together
would evaluate the relations between them. Leibniz spoke of
this alphabet as his “wonderful idea” and devoted much of
his life’s journey to perfecting it. Leibniz’s legacy lies in his
contribution to calculus. The integral sign and notation he
invented for derivatives are still the most widely favored
today and simplify concepts such as the substitution rule for
integration.
“Leibniz has a vision of amazing scope and grandeur. The notation he had developed for
the differential and integral calculus, the notation still used today, made it easy to do
complicated calculations with little thought. It was as though the notation did the work.
In Leibniz’s vision, something similar could be done for the whole scope of human
knowledge.” [2]
Because he had to earn a living, he was prevented from pursuing this work further.
Unfortunately, what Leibniz was able to document is vague and incomplete, and to this
day it has us pondering whether his understanding of logic could have been deeper than
any we have yet been able to fathom.
Leibniz’s work captivated the German mathematician
David Hilbert, and in the 1920’s Hilbert developed what came to be
known as Hilbert’s Programme. Recent movements in
science such as those to capture all laws of physics by a few
mathematical formulas played an instrumental role in
Hilbert’s optimism. Just as many found it necessary to
capture all of life’s truth in a collection of pre-defined laws,
Hilbert found it just as essential to mechanize mathematics in
much of the same way. Situations such as Gottlob Frege’s
Set Building Axioms being shown inconsistent by Bertrand
Russell’s Paradox irritated Hilbert enough to seek a
foundational approach which all mathematicians could agree
upon.
“With this new way of providing a foundation for
mathematics, which we may appropriately call a proof theory, I pursue a significant goal,
for I should like to eliminate once and for all the questions regarding the foundations of
mathematics, in the form in which they are now posed, by turning every mathematical
proposition into a formula that can be concretely exhibited and strictly derived, thus
recasting mathematical definitions and inferences in such a way that they are
unshakeable and yet provide an adequate picture of the whole science.” [6]
The first thing that Hilbert really hoped to obtain from his Programme was a way
to tie logical reasoning with mathematics. Hilbert proposed that there should be a general
collection of pre-defined rules as the starting grounds for logical reasoning and then each
branch of mathematics should have a specific collection of axioms to extend the logical
rules. In other words, Hilbert wanted the bare bones of reasoning (be it logical or
mathematical) to be a set of axioms that is given and universally accepted. Discoveries in
mathematics are usually spawned from what has been known and accepted as axioms for
years, then extended by new intuition-based assumptions the mathematician makes.
This is exactly what troubled Hilbert.
Hilbert worried that ideas derived in the midst of trying to prove something
groundbreaking could be faulty and erroneous. Hilbert’s approach to avoid such
situations was to promote what we can think of as an axiom library. Now we must
understand that Hilbert did not want to eliminate the axioms that had already been
established before he put forth his program nor did he attempt to create new axioms.
Actually these established axioms are exactly what Hilbert wanted to use as a starting
point for building proofs. However, Hilbert did not desire to witness any more cases of
“customary mathematics,” as he may have referred to classical proof theory. If his
wish could be accomplished, all possibility of anyone using or inventing incorrect axioms
would be eliminated. This is true because the only axioms mathematicians would be
limited to would be the axioms provided by this pre-defined library. (1.1)
These are just some of the axioms that were used before Hilbert’s time and it is these classical axioms
that Hilbert wanted to use for structuring a standard axiom toolkit.
A & B ⇒ A
A ⇒ A v B
~(~A) ⇒ A (principle of double negation)
Hilbert’s goal of completeness in his Programme suggested that mathematics be
constructed via the use and manipulation of a formal library. This formal system became
known as the language of first-order logic and would consist of formulas and symbols.
Of course, in order to obtain a library of this sort, one needs to somehow collect
every axiom necessary to build every possible new proof. The predicates of this starter
kit should be in their most basic form, so that they can be used to build complex
statements but cannot themselves be reduced further. The mathematician striving to construct a
new proof would just have to understand the inventory of formulas and rules in the
library. They should also be aware of which inferences are allowable, so that the new
proof is guaranteed to be sound. Think of the mathematician
as a computer programmer who will use their programming language together with a set
of inbuilt functions (library) to create an algorithm for solving a problem or have the
computer perform a task. What highlights Hilbert’s philosophy is that he requested a
complete library. To be complete, would be to gather a library consisting of all the
fundamentals of mathematics. New findings would depend on one’s scope of reasoning
logical relationships that exist between the predicates.
“For in my theory contentual inference is replaced by manipulation of signs
according to rules; in this way the axiomatic method attains that reliability, and
perfection that it can and must reach if it is to become the basic instrument of all
theoretical research.” [6]
The other milestone to fulfilling Hilbert’s wish is that he wanted consistency in
the axioms. Consistency means that no matter what you prove with those axioms, you
should never reach a contradiction. If by any unfortunate chance you do reach a
contradiction, you do not ever doubt your axioms but doubt the logic of what you are
trying to prove. This is so because the axioms themselves should reassure us that we
cannot prove nonsensical statements with them. For instance, no beginning axiom should
have the ability to prove a logically incorrect statement such as 0 ≠ 0. It may seem a bit odd that
so much confidence rests on not being able to prove just one impossible statement, but it
follows from the laws of first-order logic (the Principle of Explosion) that if the axioms cannot
prove one contradictory statement, then they cannot prove any other
contradictions. We can then assume that the axioms, along with any proofs obtained from
them, are safe and consistent.
It definitely reassured Hilbert when he was able to apply his proof theory to
several problems for which he obtained positive solutions, and even more so
when those who had been working closely with him had success in proving fragments
for his ultimate goal. For instance, Hilbert’s student Wilhelm Ackermann and John von
Neumann were getting close to proving a consistency theorem for the natural numbers. To
relate this to computer science, it is comparable to asking for a compiler
which will not only check for syntax errors, but also detect logic errors, e.g., whether our
programs will get stuck in infinite loops. To Hilbert this seemed very possible and he
looked forward to the day when he would receive notification of the solution. However,
to work towards a solution for Hilbert’s Programme, the concept of algorithms would
have to be formalized. After Hilbert’s Programme had been proposed, many were asking
themselves what exactly is an algorithm? Fortunately, in efforts to grasp algorithmic
structure, many great discoveries were made and Hilbert’s Programme obtained a
solution.
Attempts to formalize algorithms began when Kurt Gödel, intrigued by the impact
of Hilbert’s Programme, set out to formulate the ultimate
consistency theorem. Gödel hoped that his work would
encapsulate all of mathematics under this consistency proof
and grant Hilbert’s wish. Unfortunately, Gödel managed to
do quite the opposite and ended all possibility for Hilbert’s
quest. In what became known as his Incompleteness Theorems, Gödel
proved that axioms for number theory could never be
complete. One always has to go outside the axioms to cover
an entire mathematical system and this would mean new
axioms would keep being introduced. It also means that
when mathematics is bounded by such a fixed set of rules,
there are no chances for new proofs and discovery. He also
showed that there cannot be an algorithmic proof that the
standard axioms are consistent. As it turns out, the
consistency of the axiom library itself cannot be determined
algorithmically. To formalize algorithms, Gödel introduced his library of recursive
functions. Gödel claimed these recursive N to N (operating on the set of natural
numbers) functions were exactly the functions that could be computed mechanically [4].
It turns out that this is really the case: these are exactly the functions that a computer can
compute.
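Gödel’s recursive functions are built from a few basic functions by composition and primitive recursion. As a rough illustration (our own sketch, not Gödel’s formal notation), addition and multiplication can be defined by primitive recursion in Python, mirroring the equations add(m, 0) = m, add(m, n+1) = add(m, n) + 1 and mul(m, 0) = 0, mul(m, n+1) = add(mul(m, n), m):

```python
def add(m, n):
    """add(m, 0) = m;  add(m, n+1) = successor of add(m, n)."""
    if n == 0:
        return m
    return add(m, n - 1) + 1

def mul(m, n):
    """mul(m, 0) = 0;  mul(m, n+1) = add(mul(m, n), m)."""
    if n == 0:
        return 0
    return add(mul(m, n - 1), m)

print(add(3, 4), mul(3, 4))  # 7 12
```

Every step here is purely mechanical, which is exactly the property Gödel claimed characterizes the computable functions.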
2. TURING MACHINES
Shortly after, in 1936, Alan Mathison Turing released his paper “On Computable
Numbers, with an Application to the Entscheidungsproblem”. This paper introduced to
the world Turing Machines, which were another attempt at formalizing algorithms,
showed the impossibility of Hilbert’s Programme, and became a blueprint for building
computers. The hardware of a Turing machine consists of a tape which may extend
infinitely to the right, divided into cells for storage, and a reading/printing head that moves left and
right along the tape to process programs. Turing machines demonstrate how
computations are carried out mechanically. (2.1)
Depicted below is one perception of what a Turing machine may resemble if it were built:
Turing’s conception of his machine was not exactly as standalone as the
computers we have nowadays. Instead, Turing envisioned a computer to be a person who
obediently, without question follows a set of instructions
and processes them accordingly. For Turing machines,
the person was a mandatory part of the hardware. Turing
thought of the instructions as different states of the
person’s mind. This whole perception might seem
somewhat awkward to us now, but we must understand
that Turing invented his machine about two decades
before the advent of computers. Of course a mechanism
could be used to replace the person, which nowadays is
logic gates on silicon chips, and hence computers were
made possible. At the time, though, a machine
containing the technology required to do such
computations was probably still only a vague idea to Turing.
However, the logic of Turing’s machine grasped the concepts of hardware and software
so well that to date all computers and Turing machines are equivalent in terms of what
can be computed. That is, any algorithm we can carry out on a computer, we can run on
a Turing machine, just not as efficiently. It should be noted that computers and Turing
machines do differ in their memory capacity. Where the memory scheme for a Turing
machine is an infinite tape, a computer uses finite storage that must be upgraded if
needed. This is what clearly distinguishes a Turing machine as an abstraction when
compared to a computer.
To understand how states of mind for a human being can be comparable to
programming instructions for a computer, let us use the example of when we take a
multiple choice test that we have studied very well for. The pattern you would follow to
take such a test would be very straightforward:
• Read the question.
• Read every choice.
• Select the choice you perceive as “right”.
• Move on to the proceeding question.
Similarly, a computer performs actions much in the same way, either derived from user
input or a program’s explicit instructions which tell the computer what actions to take.
When we program a Turing machine, as with any programming language, we
must follow a set of standard rules which are as follows:
1. Each state/command must be uniquely named, so that the Turing machine head
will know what to write in each cell of the tape.
2. The states must be structured as follows:
State name, symbol read in the current cell (0 or 1), name of the state to
enter next, symbol to write in the current cell (0 or 1), and the direction the
head should move along the tape (R or L)
i.e. Start,1,Next,0,R
3. The H state is reserved to halt the machine from doing further computation.
4. The numbers we read off the tape are the natural numbers. Since 0 has
already been used in the symbol scheme, our zero is written on the tape as 1, one
as 11, two as 111, and so on: the number n is written as n+1 ones. After our
computations we must ensure that our output still keeps to this n+1 encoding. The
tape always begins with a 0 filled in each cell that is not used by the input entered.
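Rule 4’s encoding, where the number n is written as n+1 ones, can be sketched in a few lines of Python (the helper names are our own, purely illustrative):

```python
def to_unary(n):
    """Encode the natural number n as n+1 ones, per rule 4."""
    return [1] * (n + 1)

def from_unary(tape):
    """Decode a tape of 1's (ignoring blank 0 cells) back to a number."""
    return sum(1 for cell in tape if cell == 1) - 1

# 2 is written as three 1's; blank cells do not affect the decoded value.
print(to_unary(2), from_unary([1, 1, 1, 0, 0]))
```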
(2.2)
Another perception of a Turing machine showing the programming paradigm explained above:
A Turing machine before carrying out the line Start,1,Next,0,R (2.2a)
After reading a 1 in the Start state, a 0 is written in the same cell (2.2b)
The head goes to the Next state (2.2c)
And then the head moves right (2.2d)
Successor program for a Turing machine – outputs successor of a number inputted
Start 1 > Start 1, R
Start 0 > H 1
The tape starts with an initial value, written in 1’s. The head continues to move right as
it reads the 1’s. Once the head hits a 0 on the tape, the Start state tells it to override this
0 with a 1 and Halt. Thus our output obtained is the increment/successor of the initial
value.
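The successor program above is mechanical enough to execute directly. Below is a minimal Python sketch of a one-tape interpreter (the function layout and names are our own illustration, not Turing’s notation), running the two-line program on the input 3, i.e. the tape 1111:

```python
def run_tm(program, tape):
    """Run a one-tape Turing machine; cells not on the tape hold 0.
    Each rule maps (state, symbol read) to (next state, symbol to write,
    direction), with direction None meaning 'do not move'."""
    cells, state, pos = dict(enumerate(tape)), "Start", 0
    while state != "H":  # the H state halts the machine
        state, write, move = program[(state, cells.get(pos, 0))]
        cells[pos] = write
        pos += {"R": 1, "L": -1, None: 0}[move]
    return [cells[i] for i in range(max(cells) + 1)]

# The two-line successor program: skip 1's, turn the first 0 into a 1.
successor = {
    ("Start", 1): ("Start", 1, "R"),
    ("Start", 0): ("H", 1, None),
}

# 3 is entered as four 1's; the five 1's that come out represent 4.
print(run_tm(successor, [1, 1, 1, 1]))
```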
Addition program for a Turing machine – adds the two original numbers inputted
Start 0 > H 0
Start 1 > Concatenate 1 R
Concatenate 1 > Concatenate 1 R
Concatenate 0 > FindEnd 1 R
FindEnd 1 > FindEnd 1 R
FindEnd 0 > Delete1 0 L
Delete1 1 > Delete2 0 L
Delete2 1 > Finish 0 L
Finish 1 > H 1
In a nutshell, the algorithm operates as follows:
The input entered by the user is two numbers separated by a 0. The head skips 1’s until it
hits the 0 breaking the two inputs and fills it with a 1 to concatenate the two inputs into
one number. Remember, each input n is entered as n+1 ones, and we added an extra 1
when we concatenated the two inputs, so the head needs to delete two extra 1’s. The
head proceeds on skipping 1’s until it hits another 0 signaling it has reached the end of
the user input. It then moves left to delete the extra 1’s.
Multiplication program for a Turing Machine – Multiplies two numbers inputted
As we can see, the multiplication algorithm programmed for the Turing machine becomes
very lengthy! It is computed as follows:
marktape,_ H,_
marktape,1 marktape2,1,>
marktape2,_ zerocase,_,>
zerocase,1 zerocase,_,>
zerocase,_ H,_
marktape2,1 marktape3,_,>
marktape3,1 marktape3,1,>
marktape3,_ marktape4,1,>
marktape4,1 startmultiply,_
marktape4,_ H,_
startmultiply,_ zerocase2,_,<
zerocase2,_ zerocase2,_,<
zerocase2,1 delete,_,<
delete,_ finish,_,<
delete,1 delete,_,<
finish,1 H,_
startmult,1 backtofirstinput,1,<
backtofirstinput,_ backtofirstinput,_,<
backtofirstinput,1 numofcopies,1,<
numofcopies,_ extraone,_,>
extraone,1 extraone,_,>
extraone,_ findend,_,>
findend,1 findend,1,>
findend,_ cleanup,1,<
cleanup,1 cleanup,1,<
cleanup,_ cleanup2,_,<
cleanup2,_ cleanup2,_,<
cleanup2,1 H,_
numofcopies,1 minuscopy,1,<
minuscopy,1 minuscopy,1,<
minuscopy,_ deletecopy,_,>
deletecopy,1 gotomakecopy,_,>
gotomakecopy,1 gotomakecopy,1,>
gotomakecopy,_ readthroughfirstcopy,_,>
readthroughfirstcopy,1 readthroughfirstcopy,1,>
readthroughfirstcopy,_ findfreespace,_,>
findfreespace,_ moveback,1,<
findfreespace,1 readthroughnewcopies,1,>
readthroughnewcopies,1 readthroughnewcopies,1,>
readthroughnewcopies,_ moveback,1,<
moveback,1 moveback,1,<
moveback,_ backtofirstcopy,_,<
backtofirstcopy,1 numofones,1,<
numofones,1 minusone,1,<
minusone,1 minusone,1,<
minusone,_ deleteone,_,>
deleteone,1 readthroughfirstcopy,_,>
numofones,_ refillcopy,1,<
refillcopy,_ refillcopy,1,<
refillcopy,1 forward,1,>
forward,1 backtofirstinput,_,<
*Please note that “_” is being used instead of 0, and <, > instead of L and R
respectively.
The head first checks that the user has put initial values on the tape, if not the
program Halts. Otherwise the head proceeds to mark the beginning of the tape by
skipping the first “1” and then overriding the second “1” on the tape with a “0”.
Now the tape has a “10” mark to flag the front of the tape. Note that if the head
reads a “0” in the second cell, it knows that the multiplication is zero times a
number, and it continues on to delete the rest of the tape and output zero (“1”). If this is
not the case, then continuing from the “10” marker, the head skips the “1”’s until it
hits the next “0”, which splits the two initial values to be multiplied. The head then
writes a “1” in this cell, which momentarily concatenates the two inputs. At the next
cell, which is the start of the second input, the head writes a “0”.
(2.3)
So after these few instructions, our tapes would look like so:
Tape before the multiplication algorithm with the initial values 1110111 (2*2) (2.3a)
Tape after the previously described instructions (2.3b)
Since we already have the initial copy of our second input written on the tape, we
do not need to make a copy of it. Note that this algorithm is computing multiplication
by making m many copies of n. The Turing machine just reads through this initial
copy of n (the second “11” in this case) and then walks back left. Our multiplication
Turing machine then deletes the “1” in the third cell of the tape (the first “1” of the
input m).
The tape at this point in the program would have the following: (2.3c)
The head proceeds to the right, skipping the “0” after the “11” (n) to make
another copy of n. Walking back left using the algorithm, the head realizes that this
was the last copy it needed to make of n, because there is only one “1” for m on the
tape. This “1” gets deleted and another “1” is written in the “0” that is breaking up
the copies of n. Now n is concatenated to one number. The head moves left one more
time to delete the “10” marker, leaving the tape with the desired output.
The tape after the multiplication algorithm (2.3d)
What we have described above can be thought of as a low-level language convention
such as assembly language. However, by generalizing the definition of a Turing machine,
we can shorten the lines of code and upgrade to a higher-level programming scheme.
Just by introducing more symbols to the Turing machine, we can use our states more
flexibly and give the head more options to decipher the action it should take. For
example, we can expand our alphabet to three symbols: a, b, and c. Our new instructions
will now look like:
State1, a State2, a, R
State1, b State2, a, R
State1, c State3, c, L
State2, a State2, b, R
State2, b State3, a, L
We can easily translate back to the low-level language with 0’s and 1’s. Of course
we must be mindful that our tape must be filled with the symbols a, b, or c. (2.4)
Say that instead of just 0 and 1, we have a, b, and c, where:
a = 11
b = 01
c = 10
And where our instructions now look as follows:
State1, a State2, a, R
State1, b State2, a, R
State1, c State3, c, L
State2, a State2, b, R
State2, b State3, a, L
Our original lines of code
would have been:
State1, 1 State2, 1, R
State2, 1 State3, 1, R
State1, 0 State2, 1, R
State2, 0 State2, 0, L
State3, 1 State4, 0, R
State4, 1 State4, 1, R
State4, 0 State5, 1, R
State5, 1 State5, 1, L
So if we wanted to initialize our tape with 1111011000…, we would actually be entering
aabc. Even though this will allow our algorithms to be written more efficiently, the
bounds of computing and complexity of algorithms remain the same. This is due to the
fact that the actual steps of the algorithm do not change based on the syntax used, only
the description of those steps.
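The translation between the three-symbol alphabet and the 0/1 alphabet can be made concrete with a small sketch (the encoding table is the one given above; the function name is our own):

```python
# The symbol codes from the text: each of a, b, c stands for a pair of bits.
codes = {"a": "11", "b": "01", "c": "10"}

def to_bits(symbols):
    """Translate a tape over {a, b, c} into the two-symbol 0/1 alphabet."""
    return "".join(codes[s] for s in symbols)

# Entering aabc corresponds to writing 11 11 01 10 on the 0/1 tape.
print(to_bits("aabc"))  # 11110110
```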
For added efficiency, we may even add more tapes to work with. Previously only
one tape was used to carry out all computations and execute an output. Although this is
very possible for any algorithm, it is cumbersome and sloppy. A task that is carried out
on three lines of code can now be processed in only one line.
(2.5)
If we were using one tape, our program would have resembled something of the sort:
State1, a State1, b, R
State1, b State2, a, R
Whereas with more tapes (2 in this case) we have reduced our code to only one line:
i.e. State1, a, b b, a, State2, R
Once again, this is for clearer and more legible programming and will not expand the
scope of computational limits.
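One plausible reading of the two-tape notation above (our interpretation, since the text does not pin the convention down) is that the head reads one symbol from each tape, writes one symbol to each, changes state, and moves both heads together:

```python
# Hypothetical 2-tape rule, reading "State1, a, b  b, a, State2, R" as:
# in State1, seeing a on tape 1 and b on tape 2, write b and a, enter
# State2, and move right.
program = {
    ("State1", "a", "b"): ("b", "a", "State2", "R"),
}

def step(state, tapes, pos):
    """Perform one step of a 2-tape machine whose heads move together."""
    w1, w2, new_state, move = program[(state, tapes[0][pos], tapes[1][pos])]
    tapes[0][pos], tapes[1][pos] = w1, w2
    return new_state, pos + (1 if move == "R" else -1)

tapes = [list("ab"), list("ba")]
state, pos = step("State1", tapes, 0)
print(state, pos, tapes)
```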
In the same revolutionary work, Turing enlightened the world with the Universal
Turing Machine. A universal Turing machine can be thought of as a universal algorithm
or software for a given Turing machine. Previously, a Turing machine took one set of
instructions and could carry out only that one algorithm; in terms of hardware this
would become very expensive, not to mention tedious, as the machine would have to be
reprogrammed over and over. So we need a way to enter our Turing machine programs
as input and instruct the machine only once so that it can handle any input (program)
it is given. Our universal algorithm does just that. It is a general program, built into
our Turing machine’s memory (head), that can process any Turing machine program and
its initial values that we write on the tape. So whereas before we needed a new Turing
machine to carry out each unique task, we now have only one machine, preprogrammed
to accept any Turing machine and execute
is a way to build a processor/CPU for a single Turing machine and have the head read the
instructions as an understood language to perform actions. This was a breakthrough
advantage from the original Turing machine which required a new computer per new
Turing machine algorithm.
This is where expanding the efficiency of algorithm writing for a Turing machine
comes into play. It was manageable to have only one tape for memory when the only
information we entered on the tape was the initial values for algorithms, read in just
two symbols. The strength of a universal Turing machine, however, comes from the
ability to enter not only our initial values but our algorithm as well on the
tape. Again, as possible as it may be to use our previous way of doing things (one tape,
two symbols, etc.), it is not practical. With a machine that has multiple tapes and symbols,
we can easily implement a system for describing a universal algorithm. Below is one way of
demonstrating a universal algorithm for a machine with 4 tapes: (2.6)
A universal Turing machine running the successor program on value 3
1. User inputs program instructions & initial values on tape 3
2. Copy the initial values to tape 2 because the machine will run the program on this
copy
3. Tape 1 keeps track of the current cell that is being read
4. Line by line record the current state of the program on tape 4 (it is necessary to
mention that instruction 11 on tape 3 is actually State 1 and it is permissible to
also label states as numbers).
To summarize how all this relates to computer science, the read/write head and tapes can
be compared to a computer (processor and memory). The UTM algorithm can be
compared to an operating system for our Turing machine. The Turing machine
algorithms can be thought of as computer programs and the symbols we use to write our
Turing algorithms can be thought of as the programming syntax.
3. AN ALGORITHMICALLY UNSOLVABLE PROBLEM
Along with Turing’s machine came the Halting Problem. The Halting problem is
another example of a problem that cannot be solved algorithmically/mechanically. The
Halting problem asks if a given Turing machine program will halt for every input entered
or run infinitely. Think of asking for a computer program that can take as input another
computer program and determine if it has an infinite loop. We will argue that this
problem does not have an algorithmic solution by assuming that there is a possible
algorithm for it and showing that this leads to a logical contradiction.
Suppose there is in fact such an algorithm and let’s call the program coded for it
infinityCatcher(p, i), where p is a program (Turing machine) and i is the input entered
into the program. Since any program is just a string of symbols, we can code it by a
numerical input, thus we can think of both inputs p and i as numbers. The same holds
true for computers where the programs are translated to binary for the processor. If
infinityCatcher determines that program p halts successfully, the output is 1. Otherwise,
if infinityCatcher detects that running p on input i would get stuck in an infinite loop, the
output is 0.
Now what if we take our infinityCatcher and have it check program p on input p?
That is right: p serves as both the program and the input, so p runs on itself. Now let’s
introduce another program, called the Liar(p) program. The Liar(p) program takes any
number p as input, calls infinityCatcher(p, p), and acts in accordance with
the outcome. If infinityCatcher(p, p) outputs 0, that means
infinityCatcher detected an infinite loop in p, and Liar(p) returns 1 in this case. If
infinityCatcher(p, p) outputs 1, no infinite loop was detected in p, and Liar(p) proceeds to
run an infinite for loop itself.
(3.1) Pseudo code for Liar(p)
Liar(p){
    int a = infinityCatcher(p,p);
    if (a == 0)
    {
        return 1;
    }
    else if (a == 1)
    {
        for (p = 0; p < 5; p++)
        {
            p = 2; //Resets p to 2 on each iteration, so the loop never ends
        }
    }
}
Let n be the number coding the program Liar(p). Run the pseudo code
above again, but this time with input n, so that the Liar(p) program is really running on
its own code. This leads infinityCatcher(n, n) to a horrible contradiction. If the Liar(p)
program runs and returns 1, this means that infinityCatcher(n, n) outputted 0 and just
caught an infinite loop. But wait: the Liar(p) program, which is coded by n, just returned 1
when it ran on input n, meaning it didn’t hang... which one is really lying here?
This leaves us with the possibility that the Liar(p) program will run on itself (n)
and go into the infinite loop. However, for the Liar(p) program to enter the infinite
for loop means that infinityCatcher(n, n) must have output 1. The only way infinityCatcher(n,
n) outputs 1 is when it does not detect an infinite loop, yet Liar(p) just got stuck in an
infinite loop, meaning infinityCatcher(n, n) should have caught that very same infinite
loop as well.
What we have here is an inescapable paradox in which neither side of the
argument can hold, and thus the proposal fails. This proof that no infinityCatcher can
consistently detect the infinite loops of every p demonstrates that there do exist unsolvable
problems. This finding complemented Gödel’s theorems and also contributed to the end
of Hilbert’s Programme.
4. UNLIMITED REGISTER MACHINES
After Turing machines, it was only a matter of time before new and perhaps more
efficient models of computation would follow. In 1963, Shepherdson and Sturgis defined
Unlimited Register Machines.
Unlimited register machines also model a theoretical computer and their
programming is an abstraction of assembly language. For an extended discussion see [4].
URMs use registers for memory which can be compared to the Turing machine’s tape.
However, each register is identified by a unique address, denoted Rn, and the data in the
registers can be accessed at random. In other words, data can be obtained regardless of
its location and does not depend on a reading head walking back along a tape. On the other
hand, register machines do not portray the inner workings of a computer as realistically as a
Turing machine does.
(4.1)
Unlimited Register Machines use registers R1, R2, ... much like RAM to store natural
numbers r1, r2, ..., and have also been known as Random Access Machines (RAM).
While URMs do capture random-access memory, they hide much of the work that a real
modern computer must still carry out. A Turing machine remains the best
demonstration of how the internal hardware of a computer operates. If we were to
analyze the bare bones of chips and logic gates, we would easily see that a lot of work
must be completed before memory locates data at the specified address.
Once again, to program a URM, there are some standards we must follow and a
basic library we use to create more complicated algorithms. (4.2)
Table of the URM basic library provided to the URM programmer

Type        Symbolism   Effect
Zero        Z(n)        rn = 0
Successor   S(n)        rn = rn + 1
Transfer    T(m,n)      rn = rm (copy rm into rn)
Jump        J(m,n,q)    If rm = rn, go to instruction q;
                        otherwise go to the next instruction
The rules for programming a URM are straightforward. The instructions in a URM program
are followed in order, starting from number 1. Jump is the only instruction that may alter
this course, depending on the condition entered into the jump. It is possible to force the
program to loop by comparing the value in a register to itself in a jump
instruction. The program halts when there are no more instructions to follow or when it
jumps to an instruction number that does not exist.
(4.3) Addition algorithm and flowchart of its computation for a URM
The addition algorithm for URMs is as follows. Two initial values are entered into
register 1 and register 2. The first instruction compares the value in register 2 with
register 3; on the first pass, register 3 equals 0. In the second instruction, register 1
is incremented by 1. Instruction three increments register 3 by 1, to keep
count of how many times register 1 has been incremented. The last instruction uses a jump
to compare the value in register 1 with itself, forcing the program to loop every time and
start the process again from the first instruction. This sequence continues over and over
until register 3 has counted that register 1 has been incremented register 2 many
times, which means the value of register 2 has been added to register 1. Once the values of
register 3 and register 2 are equal, the first instruction orders the program to go to
instruction 5. Since there is no instruction 5, the program halts.
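The four-instruction addition program can also be run on a small interpreter. Below is a minimal Python sketch; the tuple encoding of instructions and the step budget are my own conventions, not from the text, but the instruction set is the basic library above.

```python
def run_urm(program, registers, max_steps=100_000):
    """Run a URM program (1-indexed instructions) until it halts."""
    regs = dict(registers)          # register number -> natural number
    pc = 1                          # instruction counter, starts at 1
    for _ in range(max_steps):
        if not (1 <= pc <= len(program)):
            return regs             # no such instruction: the URM halts
        op, *args = program[pc - 1]
        if op == 'Z':               # Z(n): rn = 0
            regs[args[0]] = 0
        elif op == 'S':             # S(n): rn = rn + 1
            regs[args[0]] = regs.get(args[0], 0) + 1
        elif op == 'T':             # T(m, n): copy rm into rn
            regs[args[1]] = regs.get(args[0], 0)
        elif op == 'J':             # J(m, n, q): if rm = rn, go to q
            m, n, q = args
            if regs.get(m, 0) == regs.get(n, 0):
                pc = q
                continue
        pc += 1
    raise RuntimeError("step budget exceeded")

# The addition algorithm of (4.3): r1 becomes r1 + r2.
add = [('J', 3, 2, 5),   # 1: if r3 = r2, go to (nonexistent) instruction 5
       ('S', 1),         # 2: increment r1
       ('S', 3),         # 3: count the increment in r3
       ('J', 1, 1, 1)]   # 4: r1 always equals itself: loop back to 1

print(run_urm(add, {1: 4, 2: 3})[1])   # 4 + 3 -> 7
```

Note how the program halts exactly as described: the jump in instruction 1 eventually targets instruction 5, which does not exist.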
5. EQUIVALENCE OF MODELS OF COMPUTATION
In fact, all models of computation, as different as they may seem, can be simulated
using one another. This holds true all the way from theoretical constructions down to
physical hardware. For example, computers can be programmed in high-level languages
such as Java to simulate Turing machines and URMs. Just Google "Turing machine" or
"unlimited register machine" simulators and you will find some great software
replications [7, 8]. The mechanics of computability are so thoroughly equivalent that even
a Turing machine can simulate a URM!
Now that we have seen the universal Turing machine algorithm and unlimited
register machines, let's implement our URMs on a TM. That is, we will write a universal
register machine algorithm on a Turing machine. We will use the symbols Z, S, T, J, 1,
0, * and have 4 tapes to devise our algorithm.
Tape 1 – To keep track of the current register
Tape 2 – For user input (program and register values)
Tape 3 – Stand-alone register tape, a copy of the initial register values
Tape 4 – To keep track of the URM instruction at work
The first step the universal algorithm carries out, after a user inputs their URM
program and initial "register" values on tape 2, is to skip through the URM instructions and
rewrite the initial values to the register tape, tape 3. The UTM knows that it is copying the
initial values by reading a marker, in this case any input after the three-zero indicator.
(5.1)
Tape 2 of a Turing machine whose universal algorithm is running an unlimited
register machine, after user input
To achieve random access on a Turing tape is definitely a task, but it can be done.
Remember that we are not striving for efficiency here; we are simply
demonstrating the equivalence of the models of computation. One way would be to copy
whatever initial values the user inputs, with each cell just
following the other, as we normally would. The problem with doing this for register
machines is that data is accessed according to the instruction, not in any particular order or
position. Remember that in a Turing machine we can only manipulate data by moving the
head left or right, cell by cell. This means that every time we need to change the value in
one of our registers (a sectioned part of the tape), we must first find the data we are looking
for by counting as many cells as necessary to locate it. Then, once its value has changed,
the rest of our data has to be rewritten to "be in the
right place". For instance, imagine we have the values of 5 different registers written on
our tape, and the value in register 2, say 4, needs to change to 6. In order to
keep the right values in the right registers, we would have to rewrite everything from the
start of register 2 to the end of register 5. As we can see, this is extremely
tedious and very prone to errors. A better approach involves using an
algorithm that splits the tape into registers for us and tells us which cell
on the tape belongs to which register. Using such a formula, we can locate and fill our tape
cells without worrying about interfering with the data in other cells. The formula* is shown
below:
< x, y > = (( x + y )( x + y + 1 ) / 2) + y
*The formula shown is Cantor's Pairing Function; the final term is y so that no two pairs share an ID.
We will have our x be the cell number and our y the register number. For example, if
you want to read/write to what would be in the fifth cell of register 3, x = 5 and y = 3.
The beauty of this algorithm is that no two (cell, register) pairs receive the same ID.
What is in each cell is particular to its register, and the other registers are not tampered
with unless they are accessed. So now, when the head copies the user input from tape 2, it uses this
formula to rewrite the values to the register tape 3. Starting with register 1, let's just say it
holds the value 111 (the natural number 2), for example. Using this function, the first 1
would normally go in the first cell of register 1; with x = 1 and y = 1, the formula
maps it to cell 4 of the tape. The second 1 would normally go in the second cell of
register 1; with x = 2 and y = 1, it corresponds to cell 7 of the tape. The last 1 would be placed
in the third cell of register 1; with x = 3 and y = 1, the formula places it in cell 11
of the tape. Since the locations of the register values never collide, we
can even go as far as saying that the single tape behaves as a separate tape for each register.
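The cell IDs in this example can be checked directly. Below is a minimal Python sketch using the standard form of Cantor's pairing function, which adds y (rather than a constant) so that distinct (x, y) pairs always receive distinct cells; with y = 1 it reproduces the cell numbers 4, 7, 11 worked out above.

```python
def cell_id(x, y):
    # Cantor's pairing function: the tape cell holding
    # cell x of register y, as in the formula above
    return (x + y) * (x + y + 1) // 2 + y

# Register 1 holding the value 111 (unary for the natural number 2):
print([cell_id(x, 1) for x in (1, 2, 3)])   # -> [4, 7, 11]

# No two (cell, register) pairs ever collide:
ids = {cell_id(x, y) for x in range(1, 50) for y in range(1, 50)}
print(len(ids) == 49 * 49)                   # -> True
```

The collision check is exactly the property that lets each register act as its own tape within the one tape.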
6. CONCLUSION
Our industrial, scientific, and everyday world has become crucially
dependent on computational processes. At the same time, computation is gaining an ever
more prominent role in our understanding of the underlying mechanisms of the universe.
This puts the theoretical study of computability at the center of scientific research. In this
research we studied the fundamental fact to come out of a century of study of
computability: the equivalence of all known models of computation!
BIBLIOGRAPHY
1. Cooper, S. Barry. Computability Theory. Boca Raton, London, New York,
Washington, D.C.: Chapman & Hall/CRC, 2004.
2. Davis, Martin. The Universal Computer: The Road from Leibniz to Turing.
London, New York: W.W. Norton & Company, 2000.
3. Elshamy, Elisa. How To Think About Algorithms: In Theory and Practice. New
York: New York City College of Technology, CLAC Seminar, 2 December 2008.
4. Gitman, Victoria, and Elshamy, Elisa. Theoretical Computation and Practical
Computers: Exploring the History of Computability Theory. New York: New
York City College of Technology, 21 August 2008.
5. "Hilbert's Program." Stanford Encyclopedia of Philosophy. Metaphysics Research
Lab, CSLI, Stanford University, 31 July 2003. Accessed 3 April 2009.
http://plato.stanford.edu/entries/hilbert-program/#1.4
6. "The Foundations of Mathematics." Garland Publishing Inc., 1996. Accessed May
2009. http://www.marxists.org/reference/subject/philosophy/works/ge/hilbert.htm
7. Turing Machine Simulator. 29 October 2008. Accessed September 2008.
http://ironphoenix.org/tril/tm/
8. URM Simulator. Ramin Naimi, 2004. Accessed July 2008.
http://faculty.oxy.edu/rnaimi/home/URMsim.htm