
506 Research Paper

Qualcomm’s Project Zeroth: Cognitive Computing in the

Palm of Your Hand

Susan Noh | Isaac Riddle | Yingxue Guo | Shaqueta Pierre

4 May 2015

Neuromorphic, meaning shaped like the brain, is a term coined by Carver Mead in the early 1980s to describe the chip he designed, which functioned similarly to neurons in the brain. Since then, the fields of neuromorphic engineering, cognitive computational neuroscience, data science, and many others have made great strides in getting machines and algorithms to mimic some of the functionality of the brain. Traditional computers operating on a variant of the von Neumann architecture are highly efficient at some tasks, but woefully poor at others that human brains do effortlessly, e.g., visual recognition, pattern recognition, learning behaviors, natural language processing, and optical character recognition. This paper focuses on one example of neuromorphic engineering, Qualcomm’s Zeroth project. Zeroth is a cognitive computing platform modeled after how the brain learns and processes information. The historical context, technical function, and sociotechnical implications of this technology are discussed.


Introduction

As the limits of Moore’s Law loom on the horizon of computational engineering, researchers and

innovative computing firms are beginning to look at new ways of improving our devices through

reorganizing the fundamental architecture of computation. One such architecture is cognitive

computation, which is a profoundly different approach to computing inspired by the biological functions

of the brain. The biological brain is an immensely complex organ that is energy efficient and adept at

performing multiple functions simultaneously. This was the primary inspiration for the neuromorphic

processing unit, which would allow devices to have deep machine learning capabilities as well as

seamlessly adapt to the user’s desires.

Though the technology has yet to achieve the complexity of the brain, the fundamental neural

synaptic structure has been translated into a machine equivalent. In 2014, Qualcomm announced their

Zeroth program, which aims to bring neuromorphic processing to a large-scale commercial platform

(Hof, 2014). Though the technology is still in its infancy, it is an important step forward for cognitive

computing. From functions like visual recognition to auditory data processing, the first iteration of the

neuromorphic processing unit will not be strikingly different in terms of functional capability from what

the average smartphone can already do; however, one can easily recognize this as the logical

predecessor to more complex artificial intelligence inventions. As with the advent of new technologies in the past, there are already concerns over how it will disrupt society.

From the ways it could drastically change the job market to the approaches these technologies will take in confronting big data and privacy issues, such questions will be critical to our future interactions with machines. In this paper, we will discuss the history of cognitive computing and the technical functioning of Zeroth, and explore the sociocultural implications of having the power of

cognitive machines as part of our everyday lives.


History

The architecture underlying most modern electronic computers was described by John von Neumann in the

1940s, and consists of a main processing unit, control unit, memory storage, external mass storage, and

input/output structures. The central processing unit consists of an arithmetic and logic unit, which

interacts with the internal memory system (Eigenmann and Lilja, 1998). Through this simple structure,

von Neumann aimed to create a machine that could execute any function, without the need for physical

modification on the part of the machine (Von Neumann, 1945). A key feature of the Von Neumann

architecture is the stored-program technique, which primarily refers to the architectural framework for

a digital computer that can store a set of modifiable instructions and the consequent data operations

that are called for in those instructions within one memory system (Reilly, 2003).

While the stored-program architecture was revolutionary in the field of computing architecture,

keeping the instructions and data in one location made these machines slower and less energy efficient, because both travelled through the same bus system. Because the instructions and the data that those instructions operate on cannot travel simultaneously through the same bus, a delay is created, which is called the “Von Neumann Bottleneck” (Eigenmann and Lilja, 1998).

This delay reveals some of the limitations of this structure and paves the way for new kinds of

architecture.

A contrasting design, the Harvard architecture, emerged in the same decade, exemplified by the Mark I computer completed in 1944 (Timeline of Computer History: 1944). The Mark I read its instructions from a long punched paper tape and executed the resulting program, initially without any conditional branching, while its data was held in separate storage. This physical separation of instructions and data is the key to the Harvard architecture (Clements, 2013). By providing separate storage and bus systems for instructions and data, the Harvard architecture avoids the von Neumann bottleneck.
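As a purely illustrative sketch (a toy instruction set invented for this paper, not part of either historical machine), the following Python fragment models the Harvard-style separation: instructions live in one store and data in another, so an instruction fetch never competes with a data access for the same bus.

```python
# Toy Harvard-style machine: separate instruction and data stores.
# The instruction set and program here are hypothetical, for illustration only.

instruction_memory = ["LOAD 0", "ADD 1", "STORE 2"]   # program store
data_memory = [5, 7, 0]                                # separate data store

accumulator = 0
for instruction in instruction_memory:                 # fetch from instruction memory
    op, addr = instruction.split()
    addr = int(addr)
    if op == "LOAD":
        accumulator = data_memory[addr]                # data accesses use the data store
    elif op == "ADD":
        accumulator += data_memory[addr]
    elif op == "STORE":
        data_memory[addr] = accumulator

print(data_memory)   # [5, 7, 12]
```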


The father of modern neuromorphic engineering (a term he coined) is Carver Mead. In 1987

Mead published a paper about a silicon chip designed to mimic visual processing in the retina (Simonite,

2013). This technology was operationalized in 1996 when two silicon retina chips were used to keep a

camera on the Flare Genesis observatory oriented towards the sun during an experiment (Simonite,

2013). In 2000, research published in the journal Nature demonstrated that a circuit of 16 “neurons”

can select and amplify input signals much like the cerebral cortex of the mammalian brain (Simonite,

2013).

Recently, several major attempts at modeling computation after brain structures have been

undertaken under DARPA’s SyNAPSE program. SyNAPSE work is primarily contracted to IBM and HRL Laboratories, both working on neuromorphic very-large-scale integration (VLSI) (http://www.artificialbrains.com/darpa-synapse-program). In April 2013, President Obama announced the BRAIN Initiative to advance brain research through innovative neurotechnologies (https://www.whitehouse.gov/BRAIN). Qualcomm is a participant in the BRAIN Initiative, and its research has led to the Zeroth platform, which is attempting to bring neuromorphic engineering to

smartphones.

How Zeroth Works

Understanding Qualcomm’s Zeroth platform is a challenging task due to the highly secretive, proprietary technologies used to produce it. However, Qualcomm has spoken in broad terms

about what the platform will be capable of, and from these statements we can infer some of the theory,

construction, and functionality used to create the technology. Qualcomm has stated that Zeroth is the

company’s “first cognitive computing platform designed to enhance on-device user experience,” and

that it will “have the ability to learn and adapt to the needs of the user” (http://goo.gl/HIggbk).

Additionally, Qualcomm has indicated that the Zeroth platform utilizes spiking neural networks and a

range of other artificial neural networks, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), to achieve deep learning on its devices (http://goo.gl/JQzowU). All of this is combined

to achieve Qualcomm’s goal of “build[ing] deep-learning neural networks on devices you carry with you

instead of in the cloud” (http://goo.gl/CDIkJh). Through these statements it is possible to identify the

main components that make up Zeroth. This section will outline what is occurring inside this technology

in order to better understand how the Zeroth platform is attempting to get transistors in silicon to

operate like the neural networks in our brains.

How to make learning machines

Artificial Neural Networks

To mimic the brain, Zeroth uses algorithms designed to process information in a manner similar to the neural networks in mammalian brains. These are known as artificial neural networks (ANNs), defined in broad terms by Lek and Guégan (1999) as non-linear mapping structures based on the function of the human brain. Put another way, ANNs are processing devices (algorithms or actual hardware) that are loosely modeled after the neuronal structure of the mammalian cerebral cortex, but on much smaller scales (http://pages.cs.wisc.edu/~bolo/shipyard/neural/local.html). There are many types of ANNs (Zeroth incorporates several), but one of the most important is the spiking neural network (SNN). It is through ANNs and machine learning that Zeroth gains its intelligence.

Spiking Neural Networks

Spiking neural networks are biologically plausible networks modeled on the neural structure in

our brains. In the brain, a neuron sends out a short pulse of electrical energy (an action potential, or spike) once it has received enough input from other neurons to cross its firing threshold. This essentially simple mechanism has been molded into a mathematical model for computer use known as spike coding (Vreeken, 2003). When modeled computationally, the threshold of a node is a certain value or sum of values. When this threshold is reached, the node sends its signal to the other nodes it is connected to. Either individual spikes or the frequency of spikes can be used to code digital


information to be processed by computers. SNNs have great potential for solving complicated time-

dependent pattern recognition problems because of their inherent dynamic representation (Ghosh-

Dastidar et al., 2009). This is what allows Zeroth to perform tasks such as visual recognition and optical

character recognition (OCR) in real time.
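To make the spike-coding idea concrete, the following Python sketch models a single leaky integrate-and-fire node: it accumulates input, leaks a little each step, and emits a spike when its threshold is crossed. The model and its parameter values are illustrative only and are not drawn from Qualcomm’s proprietary implementation.

```python
# Minimal sketch of spike coding with a leaky integrate-and-fire node.
# Illustrative model only, not Qualcomm's Zeroth implementation.

def simulate_neuron(input_currents, threshold=1.0, leak=0.9):
    """Accumulate input, leak a little each step, and spike when the
    membrane potential crosses the threshold."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current   # integrate input with leakage
        if potential >= threshold:               # threshold reached: emit a spike
            spikes.append(1)
            potential = 0.0                      # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady weak input produces an occasional spike; a strong burst spikes immediately.
print(simulate_neuron([0.3, 0.3, 0.3, 0.3, 1.2, 0.1]))   # [0, 0, 0, 1, 1, 0]
```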

Machine learning

Machine learning is the broad term for how cognitive platforms such as Zeroth record actions

(input) from the user, and make predictions on what an appropriate output should be. Machine

learning is defined as computational methods using experience to improve performance or make

predictions, and consists of designing efficient and accurate prediction algorithms (Mohri et al., 2012).

Machine learning can be broken down into two broad categories: supervised learning and unsupervised

learning. In supervised learning the learner is provided with a labeled dataset (both the input and

correct output are given). In unsupervised learning the learner is not provided with the correct output

for a given input. The learner must rely on other sources of feedback to determine if it has provided the

correct output. A third category is semi-supervised learning which is a combination of the two broad

categories (Hu and Hao, 2012). Spiking neural networks, like those mentioned above, can be trained

using supervised and unsupervised learning (Ghosh-Dastidar et al., 2009). By training networks in this

manner, Zeroth and other cognitive platforms can produce devices that respond intelligently to user input. At its core, machine learning is based on probabilistic modeling. Therefore, even advanced

algorithms will not be accurate all the time.
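As a deliberately tiny illustration of supervised learning, the following Python sketch trains a single perceptron on a labeled dataset, nudging its weights whenever its prediction disagrees with the known label. It is a textbook-style example built on assumed toy data, not Zeroth’s actual training procedure.

```python
# Toy supervised learning: a perceptron trained on labeled (input, output) pairs.
# Purely illustrative; the data and hyperparameters are invented.

def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with labels 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            prediction = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            error = label - prediction          # feedback from the known answer
            w1 += lr * error * x1               # adjust weights toward the label
            w2 += lr * error * x2
            b += lr * error
    return w1, w2, b

# Labeled data for a simple AND-like rule.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(data))
```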

Figure 1 is a diagram of a basic neural network employing these learning principles (Martins, 2015). It depicts a network with three layers: the first layer is the input layer; the second is a hidden layer, so called because the values of its nodes are computed rather than observed; and the third is the output layer. The process of weighting and training these nodes is what produces the predictive outcome of the network.

Figure 1: An example of a neural network diagram with one output unit for binary classification problems (Martins, 2015).
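A forward pass through such a three-layer network can be sketched in a few lines of Python. The weights below are invented purely for illustration; they are not taken from the Martins (2015) example or from Zeroth.

```python
import math

# Toy forward pass: input layer -> one hidden layer -> single output unit.
# All weights are hypothetical, chosen only to make the example runnable.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, hidden_weights, output_weights):
    # Each hidden node weights every input, sums them, and applies an activation.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in hidden_weights]
    # The single output unit weights the hidden values to yield one probability
    # for binary classification.
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

hidden_weights = [[0.5, -0.4], [0.3, 0.8]]   # two hidden nodes, two inputs each
output_weights = [1.2, -0.7]                 # one output unit
print(forward([1.0, 0.0], hidden_weights, output_weights))
```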

At the hardware level

Modern brain-inspired computer chips can be traced back to the early 1980s, when Carver Mead created the first neuromorphic chip built with transistors in silicon (Simonite, 2013). Building on this foundation, Zeroth is also etched in silicon (Blythe Towal, 2013, video presentation). Qualcomm’s chip that will feature Zeroth, the Snapdragon 820, will employ various kinds of processors, such as a traditional CPU, a GPU, a digital signal processor (DSP), and a neuromorphic processing unit

(NPU) in an architecture called heterogeneous computing. Maheswaran et al. (1999) define

heterogeneous computing as, “the coordinated use of different types of machines, networks, and

interfaces to maximize their combined performance and/or cost-effectiveness.” Structuring chip

architecture in this manner ensures that the right processor is employed for the right job to efficiently

solve computationally difficult problems. Thus, when a user wants to take a photo, the Zeroth platform

in the form of the NPU is activated to perform visual recognition functions and take the appropriate

actions, e.g., adjusting camera settings.
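The dispatching idea behind heterogeneous computing can be illustrated with a short Python sketch. The task names and the mapping table below are hypothetical; the Snapdragon 820’s actual scheduling logic is proprietary and not described by Qualcomm.

```python
# Hedged sketch of heterogeneous computing: route each workload to the processor
# best suited for it. The dispatch table is hypothetical, for illustration only.

DISPATCH_TABLE = {
    "ui_logic": "CPU",             # general-purpose, branchy code
    "video_rendering": "GPU",      # massively parallel pixel work
    "audio_filtering": "DSP",      # continuous low-power signal processing
    "visual_recognition": "NPU",   # spiking/deep network inference
}

def dispatch(task_name):
    processor = DISPATCH_TABLE.get(task_name, "CPU")  # default to the CPU
    return f"{task_name} -> {processor}"

# When the user opens the camera, recognition work lands on the neuromorphic unit.
print(dispatch("visual_recognition"))
```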

What sets it apart from other neuromorphic processors?

By adding the Zeroth software architecture to the Snapdragon 820 chip, Qualcomm is

attempting to push its technology to function as a system on a chip (SoC). Unlike other attempts


at cognitive computing which rely on cloud computing or massive banks of processors for their

computational power, Zeroth is an attempt to harness the existing but often underused power already contained in the average smartphone through its unique implementation of neuromorphic processing and heterogeneous computing. As the technology progresses, a user armed with this technology could enjoy the benefits of these cognitive affordances without needing an internet connection. This is a clear advantage in terms of power savings (the phone’s modem will not have to be constantly on) and financial savings (the device will not have to use mobile data, because it will not need to access the internet for processing and computing).

Applications

Visual recognition

A goal of the Qualcomm Zeroth project is to improve computer vision and allow for advanced

object recognition. Biological vision systems excel at extracting and analyzing visual information

essential for major functional needs, including spotting food, predators, and potential mates, and navigating through complex environments (Kornprobst et al., 2015). While the gap between humans and machines is still quite large, the Zeroth project and neuromorphic processors aim to overcome the limits of current machine learning algorithms. Biological processes such as visual and object recognition appear to adapt themselves dynamically, while computer solutions are most often fixed and static (Kornprobst et al., 2015).

One biological solution that humans have adapted for navigating through complex environments is visual sensing, the process of capturing patterns of light from the environment. The retina transforms incoming light into a set of electrical impulses (spikes), which are sent asynchronously to higher-level brain structures through the optic nerve (Kornprobst et al., 2015). Visually

sensing complex environments is a problem for computers and machines because visual environments

are noisy, cluttered, and diverse. Most computer vision systems are rooted in a sensing device based on

technology that acquires images in a frame-based manner. Each frame represents the environment as a set of pixels whose values indicate the intensity of light (Kornprobst et al., 2015).
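The contrast between frame-based capture and spike-style (event-based) coding can be made concrete with the following toy Python sketch, which emits an “event” only where pixel intensity changes sufficiently between two frames. It is a simplified stand-in for illustration, not the actual Zeroth vision pipeline.

```python
# Illustrative contrast between frame-based capture and spike (event) coding.
# A frame is a full grid of pixel intensities; an event-style encoding emits a
# "spike" only where intensity changes enough between frames. Toy model only.

def frame_to_events(prev_frame, next_frame, threshold=20):
    """Return (row, col, sign) events where brightness changed by >= threshold."""
    events = []
    for r, (prev_row, next_row) in enumerate(zip(prev_frame, next_frame)):
        for c, (p, n) in enumerate(zip(prev_row, next_row)):
            if abs(n - p) >= threshold:
                events.append((r, c, 1 if n > p else -1))
    return events

frame_a = [[10, 10], [200, 10]]
frame_b = [[10, 90], [60, 10]]              # one pixel brightens, one darkens
print(frame_to_events(frame_a, frame_b))    # [(0, 1, 1), (1, 0, -1)]
```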

By mimicking biological vision systems, the Zeroth spike-based neuromorphic platform will

allow mobile devices with processors such as the Snapdragon 820 to recognize the difference between

adults and children, humans and animals, as well as humans and objects in photos and videos.

Processors with neuromorphic vision systems will not only allow devices to take better photos and capture better videos, but will also allow the device to learn and store users’ preferences, recognize recurring objects, images, and patterns, and, with permission, connect to contact lists and social media

accounts.

Implications for the future/Social implications

According to Qualcomm, the overarching goal for the Zeroth platform is to provide brain-

inspired computing. Devices with these processors will be smarter and have the ability to anticipate

users’ needs and enable natural interactions between human and device. The implications are far-

reaching as the “Internet of Things” proliferates and more non-traditional devices are becoming “smart”

and connected to the Internet. Qualcomm has stated that its goals for the Zeroth project are:

Biologically-inspired learning – processors that have biologically inspired functions, such as the ability to learn and to do things more efficiently. The Zeroth platform aims to improve human-computer interaction by reducing the number of steps it takes a device to complete a given task, using past experiences and contextual awareness to inform future actions.

Improved device vision – Deep machine learning will enable devices to recognize and understand

the environments in which they are situated and to better detect and perceive visual objects. The brain

is power efficient and has superior capabilities for image detection.


New standards – With the creation and definition of a Neural Processing Unit (NPU), Qualcomm hopes to standardize neuromorphic architecture in future systems-on-chip to train devices for more

human-like interactions and behaviors.

Other implications for this specific technology beyond the stated goals include autonomous

robots. With the Zeroth project, Qualcomm is also focused on enabling the creation of integrated low

power systems needed to make robots “smarter.” The core technologies that are enabling smarter

robots include computer vision, sensors, machine learning, navigation and wireless communication.

Lastly, neuromorphic processing platforms, like Zeroth, could solve big data problems by allowing

devices to classify and search data more efficiently. All of the annotating, tagging, and digitizing of information creates an enormous corpus for cognitive computing systems to learn from.

Figure 2 maps a sociotechnical example of how this kind of cognitive computing could be applied to the

doctor/patient relationship. The figure depicts a doctor who, after interviewing a patient, consults a cognitive computing program. The program, utilizing cognitive learning algorithms,

accesses stored medical facts and learned corrections from previous interactions with doctors. The

program could ask more questions or make recommendations based on the input.


While Qualcomm has laudable goals for the Zeroth project, there are other implications that

must be considered. Security and privacy are major concerns because cognitive computing requires that

computers learn from massive amounts of data. Collecting that data will mean a significant amount of

invasive sensing, tracking, and recording of user behavior and actions. There are also larger societal issues to be considered, such as the impact on employment. Job displacement and unemployment will become more of a concern as tasks are increasingly automated. But, as noted in a recent report from the Pew Research Center, it is entirely possible that society will adapt by inventing new kinds of work that take

advantage of capabilities that are uniquely human.

Future Research

As the Zeroth platform develops and more engineers and programmers become familiar with it, there are innumerable research possibilities, both practical and philosophical. For

instance, how will better natural language processing change Internet searching? Or how could smart

visual recognition technology change policing and what would be the societal impacts? What will be the

human cognitive impacts of offloading so much of our mental efforts onto a machine? Or what is the

threshold of sentience?

Conclusion

Neuromorphic and cognitive computing mix two distinct goals: to explore biology and to improve technology (Cour et al., 2014). Qualcomm focuses on the latter. Through an innovative combination of the methods described above, Qualcomm is attempting to build intelligent, fully on-device cognitive capabilities that will allow devices to learn and adapt to the user. The company does not intend to build specific applications, but rather to make them possible. By allowing programmers to take advantage of the

new architecture of neuromorphic processors that work alongside traditional processors to bring


cognitive computing to personal devices, the Zeroth platform could pave the way for advances in mobile

technologies, robotics, and biologically-inspired computing.

The name Zeroth comes from Isaac Asimov’s zeroth law of robotics, which states, “A robot may

not harm humanity, or, by inaction, allow humanity to come to harm.” While this technology is

nowhere close to needing such a provision, it is interesting that those who created Zeroth understood

its possible implications. For now, Zeroth’s place in today’s sociotechnical system will more closely resemble a very attentive digital secretary than a sentient robot. But the future for Zeroth is bright, and the possible applications of this technology are wide-ranging.


References

Bataller, Cyrille, and Jeanne Harris. 2015. Turning Cognitive Computing into Business Value Today. http://www.accenture.com/us-en/Pages/insight-turning-cognitive-computing-business-value-today.aspx.

Clements, Alan. 2013. Computer Organization & Architecture: Themes and Variations. Cengage Learning.

Cour, G., Cet, A., and Proje, S. 2014. “Neuromorphic Computing Gets Ready for the (Really) Big Time.” Communications of the ACM 57 (6).

Eigenmann, Rudolf, and David Lilja. 1998. “Von Neumann Computers.” Purdue University, January 30. Accessed May 3, 2015. https://engineering.purdue.edu/~eigenman/reports/vN.pdf.

Ghosh-Dastidar, Samanwoy, and Hojjat Adeli. 2009. “Spiking Neural Networks.” International Journal of Neural Systems 19 (4): 295–308.

Hof, Robert. 2014. “Neuromorphic Chips.” MIT Technology Review, April 23. Accessed May 3, 2015. http://www.technologyreview.com/featuredstory/526506/neuromorphic-chips/.

Hu, Fei, and Qi Hao. 2012. Intelligent Sensor Networks: The Integration of Sensor Networks, Signal Processing and Machine Learning. CRC Press.

Kumar Sr., Samir. 2015. “Qualcomm Zeroth Is Advancing Deep Learning in Devices.” Qualcomm. Accessed April 16, 2015. https://www.qualcomm.com/invention/cognitive-technologies/zeroth.

Lek, Sovan, and J. F. Guégan. 1999. “Artificial Neural Networks as a Tool in Ecological Modelling, an Introduction.” Ecological Modelling 120 (2–3): 65–73.

Maheswaran, M., S. Ali, H. J. Siegel, D. Hensgen, and R. F. Freund. 1999. “Dynamic Matching and Scheduling of a Class of Independent Tasks onto Heterogeneous Computing Systems.” In Proceedings of the Eighth Heterogeneous Computing Workshop (HCW ’99), 30–44.

Martins, Thiago G. 2015. “4th and 5th Week of Coursera’s Machine Learning (Neural Networks).” Accessed May 3, 2015. https://tgmstat.wordpress.com/2013/06/05/coursera-machine-learning-neural-networks/.

Medathati, N. K., H. Neumann, G. Masson, and P. Kornprobst. 2015. Bio-Inspired Computer Vision: Setting the Basis for a New Departure. Research report, INRIA Sophia Antipolis, France; Institut de Neurosciences de la Timone, Marseille, France; University of Ulm, Germany.

Mohri, Mehryar, Afshin Rostamizadeh, and Ameet Talwalkar. 2012. Foundations of Machine Learning. MIT Press.

Neuromorphic Engineering at Qualcomm. 2013. Video presentation. https://www.youtube.com/watch?v=2DQZTzvgxbo&feature=youtube_gdata_player.

Reilly, Edwin D. 2003. “Stored Program Computer.” In Milestones in Computer Science and Information Technology. Westport, Conn.: Greenwood Press.

Simonite, Tom. 2013. “Thinking in Silicon.” MIT Technology Review, December 16. http://www.technologyreview.com/featuredstory/522476/thinking-in-silicon/.

“Timeline Highlights.” Timeline of Computer History: 1944. Computer History Museum. Accessed May 3, 2015. http://www.computerhistory.org/timeline2014/1944/.

Von Neumann, John. 1945. “First Draft of a Report on the EDVAC.” June 30. Accessed May 3, 2015. https://web.archive.org/web/20130314123032/http://qss.stanford.edu/~godfrey/vonNeumann/vnedvac.pdf.

Vreeken, J. 2003. “Spiking Neural Networks, an Introduction.” Technical Report 2003-008. http://dspace.library.uu.nl/handle/1874/24416.


Appendices

Appendix 1: Design Justification

Our project revolved around innovative, future-oriented technology, so we decided to opt for a

sleek and futuristic look in terms of our website design and Twitter account. Our primary color palette

was teal, pale grey, and black for accenting, so our website could maintain a standard mechanical look,

without depending on ominous and unfriendly dark colors. We decided to use customized HTML/CSS to

box in our paragraphs on our website, because it gave the feeling of a segmented motherboard, and

also kept our blog entries from looking like they were free-floating in space. Our primary header image

for both our Twitter account and blog depicts a brain, stylized to look like etchings in a motherboard,

with glowing nodes, which imitate interconnected neurons. This was only fitting for our topic, and while

eye-catching, did not distract from our content.

Our poster continued to incorporate our standard color palette, but used some orange to

highlight parts of our more important aesthetic points. Our poster continued with the motherboard-like

image, and we used stylized images of microchips and bus systems, so that the viewer can immediately

get an idea of what our topic was, before even reading any of the content.

In terms of font for our poster, we used Century Gothic, because it was clear and easy to read, even from afar, without feeling sterile. For our postcard and PowerPoint presentation, we opted for the computer-looking Consolas font, which continued to lend a clean and futuristic look to our brand

image.


Appendix 2: Typed Interview Transcription

Transcript of Interview with Professor Max Reisenhuber

April 13, 2015

Isaac: Would you mind just introducing yourself?

Professor Reisenhuber: I'm Max Reisenhuber, professor here in the department

of neuroscience at Georgetown.

Isaac: Excellent. Thank you. So we'll just start off with the first question

which is kind of a broad intro 101 cognitive science overview of what is

cognitive computing?

Professor Reisenhuber: Uh huh. So cognitive computing is the idea that, let's

look at the brain for how it does complex tasks. So the tasks that computers

currently aren't as good at as people are, like recognizing objects for instance. That's a very basic one, recognizing speech, recognizing visual objects, a problem that AI has been working on for decades, but the solutions they've

produced are still not as good as things we can do, even little kids can do

very easily. And so the question is well can we figure out how the brain does

it and then leverage that to have computers do the task in the same way the

brain does.

Isaac: And then so a follow up question to that is, kind of a tie in, a broad

overview of neuromorphic engineering of processors. Is this a buzzword or I

guess does this have real meaning for you and then, how would you explain it?

Professor Reisenhuber: Yes, so it's a very different way of thinking about

how to do computing. The traditional von Neumann architecture in your iPhone

or whatnot. Right, this idea that you have a few processors that are very

fast and then they have some memory and they can shoot things back and forth,

but that's not how your brain works. Right. Your brain has a hundred billion neurons, hopefully, right, so on that order, and each one of them has connections with ten thousand other neurons, right, so you have this big network of processing units where each one of them is very slow, like a million times slower than a current processor, but the power really comes from all of these simple units working together and breaking down the problem and then this way being able to, in a very different way, do these tasks that

we are interested in, AI for instance, right now, and understanding how the

brain does it.

Isaac: So you mention about how the brain is good at this visual recognition.

Why is it so good? Why is that so intuitively easy and why is that so

difficult for like you said, for traditional architecture for computers?

Professor Reisenhuber: Yeah, I was once riding in a cab and so the cab driver

asked me, so what are you working on. I said, well, I work on vision and how to recognize objects, and he said, oh, vision is easy. The reason it is easy is because one-third of your brain is doing just that, right, and so right now your auditory system is trying to listen to my words, right, same thing. So, the computation is very hard. What people realized in AI when they tried to have computers see the way people do is that you can have the same object look very different on your retina. So, you see my face, but if I change the lighting a bit, if I move around, or if I come closer, on your retina the pattern changes completely. It's different photoreceptors being active to different degrees. And your brain through several stages manages to

generalize over all these transformations and focus on, okay, well what's the

identity, what's the object that I recognize, and not get confused by these

transformations. That's what makes it so hard. A connected challenge is how can you learn then from examples. We, humans, are very good at learning from

some examples. I can show you a few pictures like of, even with kids, a few

examples of dogs, and you can generalize to other dogs. But that's again a

challenge if you have to train a computer and it turns out that computers are

much worse at generalizing than humans are.

Isaac: So, kind of switching gears a little bit, going into the nuts and

bolts of some of how this works. So, what are spiking neural networks in

general?

Professor Reisenhuber: So, spiking neural networks is an attempt to model how

these neurons compute that do the work in the brain and so normally we talk

of the von Neumann architecture and this idea that you have continuous

variables and then you have them in memory and you have millions of them and

you're computing by combining these values, adding then, and then you have a

new value but now that doesn't work as a, very well, as a model of how the brain computes because you have neurons and neurons are connected and the way


neurons communicate is through spikes. So, it's kind of a digital code and

sending it down the wire it's a very robust way of communicating so you

either spike or don't spike and so it's a very effective way, as we know now, everything is digital right because it's so robust, it's either one or zero

and so that way your brain can send impulses over long distances and the idea

then you convert these values, like if a neuron is very excited you have a

lot of spikes per second, if it's not excited you have a few spikes per

second, but you're just passing these digital messages between the neurons.

So, again, it's a very different way of thinking about how to do computing than traditional architectures.

Isaac: So what exactly is the spike?

Professor Reisenhuber: So the spike, at the neural level, you have these, a

little channel, so you have these connections from one neuron to another. And

so these channels let ions in and then if you have enough ions then you

trigger to cascade where you have a sudden spike in voltage that then travels

down the axon to other neurons. So, it's this pulse that basically says these

inputs come into a neuron, the neuron integrates them, and if it reaches a certain level then something comes out of the neuron. And that's this kind of voltage pulse that then travels down and tells other neurons something just

happened.

Isaac: So, moving from that then to this machine learning, or I've seen it

referred to as "deep machine learning," so what exactly is that and why does

cognitive computing architecture support that better or more efficiently than

the von Neumann architecture.

Professor Reisenhuber: The challenge is that if you have all these neurons, let's say you have a hundred million neurons, how are you going to wire them up? So all learning happens in how you connect these neurons, and so one of the questions is how do you set these connections. So if you think about the brain, and vision in particular, from your eyes to where decisions are made

here in the frontal cortex, it's about ten different stages and so now you

have, think about it as a tree with ten different levels and so each one of them has a connection and now the challenge is, and that's the deep learning,

because you have a deep hierarchy. You have lots of different levels and so

now the question is, how do I adjust these weights. In particular, in

recognition, the idea is that I look at an object and when I have a label and

now if my label, if the program or the network has the correct label, it's

all good. But what if it's wrong? You have an error at the top, but now I have connections all the way down ten levels, so the challenge is how do I


know how to adjust this connection down here based on an error that happens

up there. And, that's the deep learning problem.

Isaac: So, in doing that you then effectively have to train the network. How

is that accomplished?

Professor Reisenhuber: Right. So that was a big challenge and that kind of

led to the halt of connectionist approaches to AI in the 60s because people

didn't have a good way of training these deep networks. So they could train

networks with two levels, but they couldn't do a lot of things and then in

the 80s the backprop algorithm was discovered or rediscovered, apparently it

was already done a little bit earlier, and that provided one way of training

more complicated networks that could now do interesting things. But they still couldn't do, the algorithm still didn't work well if you had more than

three levels. And, for instance with the brain, with ten levels it just

wasn't a very good way of training it. So, only a few years ago people have

come up with efficient ways of training these deep networks that now make it

feasible to have networks with ten levels, with hundreds of millions of

units, to train them in a way that only takes a few days or even less, but

then do tasks with almost human-like performance.

Isaac: So we talked about some of the things these networks are good at, the

learning, the visual recognition, what are some of big challenges to

cognitive computing, what's the problem that you're running into right now?

Professor Reisenhuber: So recognition is pretty much solved to some extent.

So if you can train a network to recognize thousands of different classes,

but now you want to get one level higher. Okay now you know the meaning, but

what am I going to do with it? There was interesting work, you might have seen it a couple of months ago, on training computers to play video games. So

now you can recognize the objects and now you want to learn strategy, so how

can I then optimize rewards in the future? There's another body of research

in AI called reinforcement learning that has dealt with this problem in robot

navigation for instance, something that has been used a lot. And so the

accomplishment was to combine reinforcement learning and deep learning, so now you can recognize objects and you can also then based on your recognition

of these objects optimize strategies to then make the robot happy in the

future, so optimize rewards. That's the first step toward now learning

strategies and doing things to autonomously optimize behavior. So you have

the goal, you know I'm going to get a reward, and so the computer can now

figure out by itself what's the best strategy of maximizing this reward.


That's exciting progress. Now we go to the question of intelligence, how can

we have the computer figure out by itself to do certain tasks.

Isaac: So this is the question about future implications of this and

something that just came into my mind while you were talking is so we have these computers that are learning, especially from the reinforcement learning

you were talking about, and so I guess what happens when you have a powerful

computer, so this is not a smartphone, so you have a powerful computer that

has these learning networks built into it and they have to seek out this

particular reinforcing criteria whatever it is and then you hook that up to

the internet. What do you see as possible emergent states? I don't wanna sprint toward this is when skynet comes online but it kind of feels like that

because when you're training these things inside a controlled environment and

you know all of the inputs that was given to it, I don't know, I'm assuming

that you're able to with a little bit more clinical precision control what

happens, but what happens when it's hooked up to a network that we don't

fully understand, even by a little bit, which is the whole internet.

Professor Reisenhuber: Right. So it kind of depends on, so computers, all the

algorithms that we've talked about, it's called supervised learning. So you

have an objective function, you have a goal, and the computer has to figure

out how do I get to that goal, how do I maximize my score in Space Invaders

or how do I recognize these objects, so if you can define a goal function,

then you can just throw deep learning at it and see can we figure out a way

to reliably then do this task. So it's the same architecture whether you want to predict the stock market, right, or recognize objects, it's all this

deep learning architecture. And so, depending on what your goal is then if

the information is somewhere in your data then it depends on your learning

algorithm whether it can find it, but the question still is what's the goal

so it's not going to run away and do something crazy because these algorithms

all depend on you defining, okay, I want you to learn this. So I don't think it's any skynet yet because it's still very well limited by, here, this is

what you're supposed to do, and then the algorithms are great at figuring out

ways of doing that.

Isaac: Perfect. So, almost done. I guess we'll do two more questions. We kind

of alluded to it with some of the challenges, the question about the

challenges in terms of kind of where's cognitive computing go from here. But,

where do you see the field going and some of the potential innovations that will come along the way?


Professor Reisenhuber: Well I think it's, one is just having more and more

efficient ways of training these networks, these very complex networks. It's

going to allow us to effectively learn more and more complex tasks. And so right now, we're somewhat limited by how complex a system you can simulate.

Now these, the brain has a hundred billion neurons and then again each one

ten thousand connections, so you're looking at trillions of values that you

have to adjust and so this is just way beyond what we can learn right now and

we still have to get a little bit better at coming up with good learning

algorithms. If you remember maybe like a year ago, there was this Google team that had millions of CPUs learning to detect cats and nothing against cats,

but it's not so exciting. Then as we want to go to more high level tasks, we

still have to find ways of more efficiently learning from fewer examples for

instance, so to really come up with ways to make do with, again that's what

we humans are great at, but we don't yet understand how the brain does it,

how do we leverage prior learning. Something that we humans are very good at is connecting new things to things we already know. If I tell you that a

German shepherd is a kind of dog, then you have some sort of idea of what it

is, and being able to leverage this type of prior learning, if we can get

computers to be good at that, then we're going to be much more efficient at

learning and have something that can really pick up on things the way we can

pick up on it and that's something that we don't really have a good way of doing yet. So it's still very much focused on individual tasks, but this, now

okay I've learned something, I'm going to build on that for a new task.

That's still in its infancy and that's really where a lot more interesting

developments are going to come from. So, building on this prior knowledge and

combining it in new ways that potentiates what we can currently do.

Isaac: Awesome. Well, I know that you have to run pretty soon, so anything

you think I'm missing in terms of an overview of this or an important thing for your cognitive science 101 student that you need to know this in order to

best understand this field, these technologies.

Professor Reisenhuber: I think this idea of cognitive computing is great, but

you still have to have a good idea of what is the problem that you want to

solve. So, just having a bunch of neurons with connections isn't going to do

the trick. But you have to say, okay, well, what do I want the computer to do

and then how can I then translate that in a way that the computer can learn it. So just having a neuromorphic processor just by itself isn't going to do

much. It's just going to sit there. Then the question is well, now what do I

want to get the system to do and what's the best way of doing that. For

instance, how does the brain do it? So, what's great about the brain is that we

know the brain can do it. We have existing proof that with this kind of

architecture it can work. With von Neumann, we don't have that. So there, we can turn to traditional approaches but ultimately we don't know can this work


on the von Neumann architecture. But here with the brain, we have the

existence proof that the neuromorphic architecture can do it and that's very

encouraging. So anything the brain can do then, well unless we get to lofty things like consciousness or whatnot, to a very high level, the idea is then,

if we have the right architecture, computational architecture, we can get the

computer to do that, too. But we still have to understand how it's all wired

together and that's something that we still have to do. So there's, just

having the hardware isn't going to solve the problem, but you still have to

be smart about how you're going to train this thing.

Isaac: Well, that's awesome. I really appreciate your time.


Appendix 3: Oral Presentation Slides


Appendix 4: Poster and Visual Materials

Poster


Postcard Front


Postcard Back