
AI MATTERS, VOLUME 4, ISSUE 1 4(1) 2018

Artificial Intelligence in 2027
Maria Gini (University of Minnesota; [email protected])
Noa Agmon (Bar-Ilan University; [email protected])
Fausto Giunchiglia (University of Trento; [email protected])
Sven Koenig (University of Southern California; [email protected])
Kevin Leyton-Brown (University of British Columbia; [email protected])
DOI: 10.1145/3203247.3203251

Introduction

Every day we read in the scientific and popular press about advances in AI and how AI is changing our lives. Things are moving at a fast pace, with no obvious end in sight.

What will AI be ten years from now? A technology so pervasive in our daily lives that we will no longer think about it? A dream that has failed to materialize? A mix of successes and failures still far from achieving its promises?

At the 2017 International Joint Conference on Artificial Intelligence (IJCAI), Maria Gini chaired a panel to discuss "AI in 2027." There were four panelists: Noa Agmon (Bar-Ilan University, Israel), Fausto Giunchiglia (University of Trento, Italy), Sven Koenig (University of Southern California, US), and Kevin Leyton-Brown (University of British Columbia, Canada). Each of the panelists specializes in a different part of AI, so their visions span the field, providing an exploration of possible futures.

The panelists were asked to present their views on possible futures, specifically addressing what AI technologies they expected would be in widespread use in 2027, what they thought would still show potential but not have become widely accepted, and what they expected the AI research landscape to look like ten years from now.

This article summarizes the main points that each panelist made and their reflections on the topics. The focus in each contribution is not so much on predicting the future as on bringing up specific open problems in each subarea and discussing how current AI technologies could be steered to address them.

Copyright © 2018 by the author(s).

Noa Agmon, Bar-Ilan University¹

The discussion about the fourth industrial revolution, and the role of AI and robotics within it, is wide-ranging. In the context of this revolution, autonomous cars and other types of robots are expected to gain popularity and, among other things, to take over human labor. While surveys like "When will AI exceed human performance?" (GSD+17) report that some researchers expect robots to be capable of performing human tasks, such as running five kilometers, within ten years, this will probably take much longer given the current state of robotic development.

Today, the use of robots is generally limited to three categories: non-critical tasks; settings in which robots are semi-autonomous, tele-operated, or remote-controlled (namely, not fully autonomous); and highly structured settings in which uncertainties are minimal. Examples include the Amazon Robotics warehouse robots, which work autonomously in a structured environment (the warehouse); semi-autonomous drones operated in military settings (they usually follow a specified route autonomously, though operational decisions are made by human operators); robots that perform cleaning tasks, which are considered non-critical; Mars rovers, which operate semi-autonomously in unstructured environments; and more. When robots are required to operate fully autonomously in unstructured settings, handling unbounded uncertainties or completely unpredictable events, they tend to fail. One of many examples is the Knightscope robot, which drove into a fountain on its first day of deployment as a security guard in Washington, D.C.

Rather than arguing about the ability of robots to outperform humans, and about when this might happen, the following discussion examines the challenges and opportunities that will influence the development of intelligent robotics in the next ten years.

¹Acknowledgments: I would like to thank Gal Kaminka and David Sarne from Bar-Ilan University for their helpful comments.

Dependence on hardware. As opposed to the progress of AI, which relies mainly on algorithmic development and benefits from processing improvements, progress in robotics is also intimately tied to the capabilities of electro-mechanics, physical sensors, and energy storage and management. Whatever apocalyptic or euphoric visions we have for working with robots, their realization is much more dependent on physical components than we, AI researchers and practitioners, tend to consider. For example, most quad-copters, which are considered to be a basis for breakthrough applications (such as home deliveries and emergency services), can only fly for 30 minutes or so. Likewise, vacuum cleaners are limited in the total area that they can cover before they have to be recharged. These energy concerns radically impact the usefulness of robots in applications which are otherwise within reach from a pure software perspective.

The good news is that the intimate connection between software and hardware works both ways. Just as modern SLAM algorithms (e.g., (DNC+01)) were able to overcome intrinsic sensor limitations to create reliable and accurate maps for navigation, advances in software can overcome some of the limitations posed by hardware.
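To make the "software compensating for hardware" point concrete, here is a minimal sketch (my own illustration, not from the article) of a one-dimensional Kalman filter, the classic estimation technique underlying the sensor fusion that SLAM systems perform. The sensor readings and noise parameters below are invented.

```python
# Sketch: a 1-D Kalman filter fusing noisy range readings of a fixed wall.
# All numbers are made-up assumptions for illustration.

def kalman_1d(measurements, meas_var, process_var, init_est=0.0, init_var=100.0):
    """Fuse a stream of noisy scalar measurements into a running estimate.

    init_var is large so the first reading dominates the (arbitrary) prior.
    """
    est, var = init_est, init_var
    estimates = []
    for z in measurements:
        # Predict: uncertainty grows between measurements.
        var += process_var
        # Update: blend the prediction with the new measurement.
        k = var / (var + meas_var)        # Kalman gain
        est = est + k * (z - est)
        var = (1.0 - k) * var
        estimates.append(est)
    return estimates

# A wall at roughly 5.0 m observed through a noisy sensor:
readings = [5.4, 4.7, 5.2, 4.9, 5.1, 5.3, 4.8]
smoothed = kalman_1d(readings, meas_var=0.5, process_var=0.01)
# The final estimate settles close to 5.0 despite readings spanning 4.7-5.4.
```

The same blend-by-uncertainty idea, scaled up to poses and landmark maps, is what lets cheap, noisy sensors yield reliable navigation maps.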

AI influences on robotics. AI algorithms influence robotics not only by compensating for and improving the utilization of existing hardware capabilities, but also by enabling new tasks. Progress in natural language processing (NLP) and machine learning (used for chatbots, personal digital assistants, and surveillance, for instance) enables more natural forms of human-robot interaction with physical robots and autonomous cars. However, such positive influences are somewhat asymmetric: AI will influence robotics more than robotics will influence AI. A personal robot benefits from NLP more than NLP can benefit from the consideration of multi-modal interactions (as in "talking with your hands").

Growing role for multi-robot systems (MRS). The academic research on MRS dates back to the early 1980s, when robots were scarce and not autonomous. Research has progressed far beyond the deployment of such systems outside of labs. Improvements in the reliability of robots will make it easier to deploy MRS in various applications, continuing and accelerating current successful trends (e.g., in warehouses and hospitals). This, in turn, will accelerate research on fully distributed, fully autonomous systems, which are beyond current capabilities. It is obvious that human-robot interaction will be a major focus of research in the next ten years, as robots enter a greater number of unstructured environments in which humans operate. However, given the foreseen growth in the role of multi-robot systems, human-MRS and multiple-operator, single-robot collaborations will likely see increased efforts.

Increasing ties with other disciplines. A good example of large-scale fully distributed, fully autonomous systems also raises an additional trend: that of increasing ties with other disciplines. Swarms of molecular robots (nanobots), whose size is measured in nanometers, are becoming a reality in medical applications (for example, targeted drug delivery). Trillions of such robots will be let loose in a patient's body, the largest-scale MRS in robotics history. The computation of interactions between different types of nanobots has an immense impact on the ease and duration of development of new treatments (WKKH+16; KSSA+17). The capability to plan and reason about the interactions of these robots with each other and with the body requires deep collaboration between AI experts, biologists, and chemists.

Another example is reconfigurable robots, which can transform themselves into different shapes, depending on the environment and task. Research on such systems will benefit from close coordination with chemistry, physics, and biology, to take new findings into account.

Increasing accessibility, lower entry barrier, greater impact potential. A positive trend which I believe will continue to grow is the lowering of the entry barrier into robotics practice and research, at multiple levels. Research-grade robots for labs have seen dramatic decreases in cost, and the common availability of 3D printing and cheap embedded computers (Arduino, Raspberry Pi), as well as the continued push on STEM education, will make the development of robots cheaper and easier than ever. The availability of common robot software middleware, such as ROS, makes it easier for researchers to focus their attention on bringing their expertise to bear on specific components.

Bottom line: More of the same (which is good!) Within the next ten years and beyond, we will not see general-purpose robots. That is, robots will still be dedicated to one task, for example delivery, cleaning, or surveillance. Progress in the development of intelligent robotic systems will continue to focus on excellence in the performance of specific tasks, and on the introduction of new tasks to new types of robots. To some extent, this will make robot use more popular.

Fausto Giunchiglia, University of Trento²

Providing machines with knowledge, e.g., common sense or domain knowledge, has always been one of the core AI issues. There are two main approaches to this problem. The first, deductive, approach, usually categorized under the general heading of "Knowledge Representation and Reasoning" (KRR), dates back to John McCarthy's advice taker proposal (McC60), and is based on the idea of telling machines what is the case, for instance in the form of facts codified as logical axioms. The second, inductive, approach, usually categorized under the general heading of "Machine Learning" (ML), consists of providing machines with a set of examples from which to learn general statements, usually with a certain level of confidence.

A lot of relevant research has been done in KRR, not least the work on the Semantic Web (BLHL01), and much more will be done. However, the success of ML, mainly but not only because of the work in Deep Learning (see, e.g., (LBH15)), has been so overwhelming that, when thinking of what research in KRR could be in the next ten years and where it could lead, a relevant question is the extent to which these two lines of work should integrate in an effort to jointly produce results that neither of them alone could produce.

²Acknowledgments: I would like to thank Kobi Gal, Daniel Gatica-Perez, Loizos Michael, Daniele Miorandi, Andrea Passerini and Carles Sierra for our many useful discussions on this topic.

A lot of successful work in this area has been done; see for instance (GT07; RKN16). However, the extent to which KRR research could be improved by exploiting the research developed in ML, or, dually, the extent to which ML would really need to exploit any of the results developed in KRR, is still unclear, for at least two reasons. The first is that this integration is far from trivial. These two approaches start from somewhat opposite assumptions: the first assumes that knowledge consists of a set of facts which are either true or false, with nothing in between; the second has to deal with the issue that any fact learned via ML will hardly ever be guaranteed to be true or false, admitting instead an infinite number of intermediate levels of confidence. Furthermore, the need for such an integration is far from clear, at least from an ML point of view. Among other things, current ML techniques have proven so powerful that, whenever applicable, they seem to be able to learn virtually unbounded amounts of knowledge, far more knowledge than could be codified by any knowledge engineer.
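The opposition between the two assumptions can be made concrete with a toy contrast (my own illustration, not from the article): a KRR-style knowledge base holds crisp facts that are simply present (true) or absent, while an ML-style model attaches a graded degree of belief. The facts and weights below are invented.

```python
import math

# KRR view: a fact is either in the knowledge base (true) or not.
facts = {("bird", "tweety"), ("can_fly", "tweety")}

def entails(facts, query):
    """Crisp membership test: true or false, nothing in between."""
    return query in facts

# ML view: a stand-in for a trained classifier that returns a
# confidence in [0, 1]. The feature weights are made-up.
def learned_belief(example):
    weights = {"has_wings": 0.6, "has_feathers": 0.5, "is_penguin": -0.8}
    score = sum(weights.get(f, 0.0) for f in example)
    return 1.0 / (1.0 + math.exp(-score))   # sigmoid -> degree of belief

entails(facts, ("can_fly", "tweety"))           # True, with certainty
learned_belief({"has_wings", "has_feathers"})   # about 0.75: probably flies
learned_belief({"has_wings", "is_penguin"})     # about 0.45: genuinely unsure
```

The integration problem the text describes is exactly the mismatch between these two return types: a boolean on one side, a continuum of confidences on the other.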

At the same time, both KRR and ML techniques have their own weaknesses. On one side, the main difficulty of KRR is how to express the inherent complexity and variability of the world, in particular, but not only, when perception is involved. On the other side, the main difficulty of ML is making sense, in human terms, of the knowledge which is learned. In other words, the knowledge generated via ML often does not fit people's "intended semantics", namely how they would describe what is the case, for instance as perceived or as learned from a large amount of text messages.

This difficulty of ML techniques, and data-driven approaches in general, has been known for many years. An explicit reference to this problem comes from the field of Computer Vision, where it is named the Semantic Gap Problem (SGP). The SGP was originally defined in (SWS+00) as follows: "... The semantic gap is the lack of coincidence between the information that one can extract from the visual data and the interpretation that the same data have for a user in a given situation. ..." Notice how this notion is completely general and can be taken to refer to the human-machine misalignment which may arise with any type of information that a machine can extract, i.e., learn, from any type of data. The novelty of these last years is that many more instances of the SGP are showing up, and many of them are also discussed in the news, the main reason being the increased use, power, and popularity of ML systems and AI in general. Thus, we have read of cases where a system learns biased opinions, or learns a language that is not human, or where an autonomous car fails to track another car, thus causing an accident. And this, in turn, is the cause of much public discussion about the interaction between AI and humans, about AI and ethics, and also of an increased fear of AI.

A convincing explanation of how to deal with the SGP seems necessary for AI to be used beyond a set of niche (possibly very large) application areas and to be adopted by the general public. I believe it will be very hard to convince people to use machines that they do not feel they fully control in high-value application domains, e.g., health, mobility, energy, retail. But a convincing explanation will not be enough. A solution of the SGP is also needed for AI to be used in practice. There are at least three mainstream application scenarios where some solution to the SGP seems crucial. The first is the anytime, anywhere delivery of personalized services, as enabled by personal digital devices, e.g., smart phones or smart watches. For this to happen, people will have to be able to make sense of why certain decisions have been taken by the machine, and to agree with them. The second is the empowerment of social relations, for exactly the same reasons mentioned above. Facebook, WhatsApp, and Snapchat are just the beginning, and I foresee the rise of a new generation of social networks empowering more specialized, more personalized, more diversity-aware interactions among people. The third, and maybe the most important, again because of the pervasiveness of digital devices, is that we are increasingly moving towards open-world application scenarios. By this I mean application scenarios where, at design time, it is impossible to anticipate the system's functional and non-functional requirements. In this type of application the effects of the SGP can be devastating, as the divergence between people and machines can only get worse over time.

In my opinion, a general solution of the SGP, and in particular a solution which is viable in open-world application scenarios, can only be achieved via a tight integration of knowledge-based approaches and machine learning. The knock-down argument is that the only way to avoid the SGP is to make sure that machines learn representations of the world which are the same as those of their reference users. But the assumption that knowledge should be presented in human-like terms is exactly the assumption underlying all the work in KRR and also in logic. More specifically, whatever knowledge is learned via a data-driven approach will have to be compared and ultimately aligned to human knowledge. Someone could argue that this is exactly what supervised learning does. But this is not the case, as witnessed by the fact that even human supervision, as implemented up to now, does not make the SGP disappear. The problem is far more complex, and it will require major advances in both KRR and ML, and in AI in general, many of which, I believe, will be disruptive. A list of four open issues is provided below, with the understanding that this list is meant to be neither complete nor correct. It reflects only my current personal understanding of some of the problems which will have to be dealt with when trying to solve the SGP.

1. Since the early days of AI, a fundamental issue has been that of building machines which would exceed human-level intelligence. This goal has been reached in many domains, e.g., chess or Go playing, while it is very far from being reached in others, e.g., robotics, as mentioned above. A solution of the SGP will require building machines which show human-like intelligence, representation, and reasoning, as the basis for mutual human-machine understanding. In this context, exceeding human-level intelligence seems a desired property, but not a strictly necessary one.

2. A fundamental property of life, and of humans in particular, is the ability to adapt to unpredicted events and evolve. Both the research in KRR and that in ML seem very far from achieving this goal. It is however interesting to notice how a particular instance of this inability to adapt was recognized by John McCarthy and named the problem of the lack of generality of current representation formalisms (McC87).

3. The net result of the ability to adapt and evolve as a function of the local context will be that the resulting knowledge will be highly diversified. In turn, the diversity of knowledge will generate the need for further adaptation, in an infinite loop which will result in a process of knowledge evolution, somewhat analogous to the kind of evolution we see in life. Notice how the proposed approach is quite different from that taken by the Semantic Web for the solution of the problem of semantic heterogeneity. The focus is not on representation tools, e.g., ontologies, or formalisms, e.g., Description Logics, but on the process by which knowledge gets generated, stored, manipulated, and used. In this perspective the standard logics, e.g., monotonic and non-monotonic logics, seem to solve, at most, only part of the problem, and certainly not the most important part.

4. A specific, but core, subproblem of the problem of managing knowledge diversity is the integration of the knowledge obtained via perception, e.g., via computer vision, with the knowledge obtained via reasoning or by being told. An implicit assumption made so far is that the mapping between the linguistic representation of an object we talk about, e.g., the word "cat", and the representation of what we perceive as a cat is one-to-one. As discussed in detail in (GF16), this in general is not the case: there is a many-to-many mapping between linguistic representations and perceptual representations. On top of this, these mappings are highly dependent on the culture and on the single person and, even for the same person, change over time as a function of the person's current interests. A full understanding of how these mappings are built, of how linguistic representations influence the construction of perceptual representations, and vice versa, is a largely unexplored research area. Still, some form of solution to this problem will be needed in order to guarantee that the machine will describe what it perceives coherently with what humans do.
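The many-to-many mapping in item 4 can be sketched as a simple data structure (a toy of my own, not from (GF16)): one word can name several perceptual classes, and one perceptual class can be named by several words. All entries below are invented.

```python
# Toy many-to-many mapping between words and (hypothetical) perceptual classes.
word_to_percepts = {
    "cat":   {"house-cat-silhouette", "big-cat-silhouette"},
    "tiger": {"big-cat-silhouette"},
    "mouse": {"rodent-silhouette", "computer-mouse-shape"},
}

def percept_to_words(mapping):
    """Invert the mapping to see which words compete for each percept."""
    inv = {}
    for word, percepts in mapping.items():
        for p in percepts:
            inv.setdefault(p, set()).add(word)
    return inv

inv = percept_to_words(word_to_percepts)
# inv["big-cat-silhouette"] == {"cat", "tiger"}: the mapping is not one-to-one
# in either direction, which is the crux of the alignment problem above.
```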

Sven Koenig, University of Southern California³

AlphaGo (SHM+16) shows that it can be very difficult to judge technical progress, as also noticed by Stuart Russell in his invited IJCAI-17 talk. When it beat Lee Sedol in 2016, many experts thought that such a win was still at least a decade away. The AI techniques behind it already existed in principle; the ingenuity was in figuring out how to put them together in the right way. Progress on AI technology is often steadier than it appears, yet such engineering breakthroughs happen only from time to time, are difficult to predict, and often make AI technology visible in the public eye, creating the perception of waves of progress.

Various recent studies shed light on the expected progress of AI by 2027, such as the study on "AI and Life in 2030" as part of the One Hundred Year Study on AI (ai100.stanford.edu) and a recent survey of all ICML-15 and NIPS-15 authors (GSD+17). This survey, for example, predicts that AI will outperform humans around 2027 on tasks such as writing high school essays, explaining actions in games, generating top-40 pop songs, and driving trucks. Furthermore, humanoid robots will soon afterward beat humans in 5k races. Interestingly, North American researchers predicted that it will take about 74 years to reach high-level machine intelligence across human tasks, while Asian researchers thought it would take only 30 years. Indeed, there is currently lots of excitement and optimism, for example, in China about the potential of AI, with large investments into application-oriented AI research by both the government and the private sector.

In the following, to structure the discussion of which kinds of research topics will be popular in 2027, I view AI as the study of agents. I distinguish rational agents (that make good decisions), believable agents (that interact like humans), and cognitive agents (that think like humans). A large amount of AI research currently focuses on building rational agents on the task level, by studying single AI techniques in isolation and applying them to single tasks, resulting in narrowly intelligent agents. The current excitement about AI is often based on the power of a small set of AI techniques combined with the availability of large amounts of data (due to progress on sensor technologies and the ubiquity of both smart phones and the internet) as well as progress in robotics for the embodiment of AI. For example, the term "big data" is typically used to characterize the current AI era, driven by the capability of machine learning techniques. In fact, 49 percent of submissions to the International Joint Conference on AI (IJCAI) in 2017 listed "machine learning," which is about acquiring good models of the world, as their first keyword. However, these models need to serve a bigger purpose, for example, to make good decisions. While machine learning can sometimes acquire evaluation functions that help with making good decisions (as AlphaZero (SHS+17), the successor of AlphaGo, shows), it often requires lots of data, has limited capability for transfer, and has difficulty integrating prior knowledge (Mar16), and thus is of limited help for making decisions in novel or dynamic environments. Perhaps the term "big decisions" will be used to characterize the AI era around 2027, driven also by the capability of AI planning and similar AI techniques. Current faculty hiring in the US lags in research areas such as AI planning, although the research community is already heading in that direction. For example, the popular textbook by Stuart Russell and Peter Norvig (RN09) views AI as the study of rational agents and thus essentially as a science of making good decisions with respect to given objectives. But many other disciplines could be characterized similarly, including operations research, decision theory, economics, and control theory (Koe12). AI researchers make use of techniques from some of these disciplines already.

³Acknowledgments: I would like to thank Paul Rosenbloom and Wolfgang Honig from the University of Southern California for helpful comments.
For example, the textbook by Stuart Russell and Peter Norvig discusses utility theory (from decision theory), game theory and auctions (from economics), and Markov decision processes (from operations research), yet research collaborations across these and other disciplines are still developing, which is why we should reach out more to researchers in other decision-making disciplines. There already exist some good but narrow interfaces, such as the Conference on the Integration of Constraint Programming, AI, and Operations Research (CPAIOR) or the ACM Conference on Economics and Computation (EC). There also exists an attempt to put a broader interface in place, namely the International Conference on Algorithmic Decision Theory (ADT), which "seeks to bring together researchers and practitioners coming from diverse areas such as AI, Database Systems, Operations Research, Discrete Mathematics, Theoretical Computer Science, Decision Theory, Game Theory, Multiagent Systems, Computational Social Choice, Argumentation Theory, and Multiple Criteria Decision Aiding in order to improve the theory and practice of modern decision support" (sma.uni.lu/adt2017). Such interdisciplinary integration can result in economic success. CPAIOR, for example, started in 2004 (preceded by five workshops) and still thrives. In parallel, ILOG successfully integrated software for constraint programming and linear optimization and was acquired by IBM in 2009. My hope is that by 2027 we will have a thriving conference on intelligent decision making that will be attended by researchers from all decision-making disciplines, including AI. Of course, the different decision-making techniques also need to be integrated into systems. AI can lead the way by developing agent architectures with good theoretical foundations for how different parts should interact, resulting in more broadly intelligent agents on the job level (that is, across tasks). This is no simple feat, as the restricted applications of current robot architectures show. Integrating decision-making techniques from different disciplines is even more difficult, for example, because of their different assumptions (often due to the different application areas studied by different disciplines) and different ideas about what constitutes a good solution (due to disciplinary training), which is why we should start to give students multidisciplinary training in decision making.
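As one concrete instance of a decision-making technique AI shares with operations research, here is a minimal sketch (my example, not the article's) of value iteration on a tiny Markov decision process. The states, actions, probabilities, and rewards are all invented for illustration.

```python
# Sketch: value iteration on a toy MDP (a battery-powered robot that can
# wait or search; searching on a low battery risks a dead state).
# transitions[s][a] = list of (probability, next_state, reward); all made up.
transitions = {
    "low":  {"wait":   [(1.0, "low", 0.0)],
             "search": [(0.6, "low", 1.0), (0.4, "dead", -10.0)]},
    "high": {"wait":   [(1.0, "high", 0.5)],
             "search": [(0.8, "high", 2.0), (0.2, "low", 2.0)]},
    "dead": {"stay":   [(1.0, "dead", 0.0)]},
}

def value_iteration(transitions, gamma=0.9, eps=1e-6):
    """Compute optimal state values by repeatedly applying the Bellman backup."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

V = value_iteration(transitions)
# The greedy policy reads off the maximizing action per state: here,
# "search" is optimal in "high" and "wait" is optimal in "low".
```

The same Bellman machinery, under different names and assumptions, appears in operations research, control theory, and AI planning, which is precisely the overlap the text argues these communities should exploit together.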

While rational agents will continue to be important, human-aware agents will become more and more important and, with them, also believable agents that allow for interaction with gestures, speech, and other human-like modalities, understand human conventions and emotions, predict human behavior, and, in general, appear to be human-like. We already use intelligent assistants on a variety of platforms (such as Apple's Siri or Amazon's Echo) and will soon routinely have conversations, including negotiations, with all kinds of apparatus, perhaps including our elevators and toilets.

The progress on cognitive agents, one of the early dreams of AI, is more difficult to judge. The research community currently works on hybrid approaches that combine ideas from symbolic, statistical, and/or neural processing and on a community-wide "Common Model of Cognition" (KT16).

Finally, AI researchers and practitioners are slowly gaining an understanding that they should not just develop AI techniques but also have some say in how they are used. We need to ask ourselves questions such as:

Do we need to worry about the reliability, robustness, and safety of AI systems and, if so, what should we do about it? How do we guarantee that their behavior is consistent with social norms and human values? Who is liable for incorrect AI decisions? How do we ensure that AI technology impacts the standard of living, the distribution and quality of work, and other social and economic aspects in the best possible way? (BGK+17)

AAAI and ACM recently co-founded the AAAI/ACM Conference on AI, Ethics, and Society (AIES, www.aies-conference.com) to come up with answers to these questions. AIES was filled to capacity. I hope that AIES and its topics will be even more popular in 2027.

Kevin Leyton-Brown, University of British Columbia4

It is a daunting task to predict the direction AI research will take a decade from now, particularly given the checkered history of such prognostication in the past. In an attempt to go beyond idle speculation, I have therefore structured this reflection around three different approaches a forecaster might use in making such predictions. Despite recognizing the likelihood that some of what follows will appear foolish in retrospect, I strive to make bold claims about what the future will hold. I hope that the AI researchers of 2027 will forgive me!

I. Forecasting via prototypes. Ten years sounds like a long time, but in fact it takes about that long for technologies to move from the lab to widespread practice: the transformative technologies of today existed in prototype form a decade ago. One approach to AI forecasting is thus to look at today's prototypes and to imagine their more widespread deployment.

4 Acknowledgments: I would like to thank the members of the AI100 2015–16 Study Panel for helping to shape my thinking about the future of AI: P. Stone, R. Brooks, E. Brynjolfsson, R. Calo, O. Etzioni, G. Hager, J. Hirschberg, S. Kalyanakrishnan, E. Kamar, S. Kraus, D. Parkes, W. Press, A. Saxenian, J. Shah, M. Tambe, A. Teller (SBB+16).

Broadly speaking, today's AI prototypes offer tailored solutions for specific tasks rather than general intelligence. Some AI research topics that I expect to see making considerably broader social impact by 2027 include:

• Non-text input modalities (vision; speech)
• Consumer modeling (recommendation; marketing)
• Cloud services (translation; question answering; AI-mediated outsourcing)
• Transportation (automated trucking; some self-driving cars)
• Industrial robotics (factories; some drone applications)
• AI knowledge work (logistics planning; radiology; legal research; call centers)
• Policing & security (electronic fraud; cameras; predictive policing)

By considering where today's prototypes have achieved less traction, it is also possible to forecast sectors in which AI technologies are less likely to take off quickly. Overall, these are often areas in which major entrenched regulatory regimes need to be navigated; where there exist substantial social or cultural barriers to the adoption of new technologies; and/or where broad impact would depend on nontrivial hardware breakthroughs. Many such sectors are the focus of concerted research today and are likely to remain important in the research landscape in 2027; however, I believe that they are less poised for short-term practical impact. Some key examples are childcare, healthcare, and eldercare; education; consumer robots beyond niche applications; and semantically rich language understanding.

II. Forecasting via consumer desires. A second strategy is to assume that investment, entrepreneurial energy, and industrial R&D will focus on meeting consumer needs that are already apparent today, and hence that these areas will see future breakthroughs.


Labor automation. A fundamental consumer need is for someone else to perform unpleasant, routine tasks. The promise of automating such tasks has been part of the AI story from the beginning (e.g., Shakey the robot delivering coffee in an office setting (Nil84)) and is increasingly becoming a reality (e.g., robot vacuum cleaners in the home; ordering books via Amazon Alexa). However, there is much scope for additional innovation in this space, centering on currently unaddressed tasks to which large numbers of people devote considerable time. Some potential examples are household cleaning, yard work, pet care, shopping, and food preparation. Some needs may be met by directly replacing human with robotic labor; others may be met via "gig economy" platforms that use AI on the back end to allocate human labor more efficiently; and still others may be met in entirely new ways, such as by combining AI-driven logistics platforms with centralized industrial processes (e.g., replacing supermarkets with apps, warehouses, and courier services).

Social connection. We are highly social creatures, and are willing to pay handsomely for technologies that help us make and strengthen connections with others. Current instantiations of such technologies (e.g., social networks; remote work platforms; online dating) are highly valuable but relatively primitive from an AI perspective, relying mainly on micro-blogging, direct messaging, user modeling, and newsfeed curation. There is scope for more AI mediation of social connection, reducing the frictions that prevent people from easily finding others to interact with in the moment and making those interactions richer.

Entertainment. Our research community's focus on solving industrially or socially important problems sometimes may cause us to pay insufficient attention to AI's potential for transforming the entertainment industry, which addresses another fundamental consumer need. Gaming is already bigger than Hollywood (Che17), but the future of AI in entertainment will go far beyond what we now see as computer games. Future AI entertainments will increasingly be interactive and multimodal, and will intersect with sectors we now see as distinct, such as fitness, learning, performing useful tasks, and spending quality time with friends. AI will also play an increasingly critical role in the creation, delivery, and personalization of traditional, broadcast entertainment such as TV.

Education. Education is poised to grow as a consumer sector (ROO18), both as workers respond to the need to reskill and as individuals with extra leisure time follow their passions. I argued above that education will not be transformed by AI in a decade; however, particularly because of the dual role many AI researchers hold as educators, there is nevertheless considerable scope for AI technology to make incremental progress in improving the content and delivery of educational materials. Some examples include tailoring lessons to a student's skill level, making exercises more interactive, facilitating communication with both peers and instructors outside classroom settings, and reducing the drudgery currently entailed by grading student work. Such innovations could improve student outcomes, lower the cost of education, and broaden its reach.

III. Forecasting via extrapolation. A final strategy is to ask what we will be concerned with if current progress in AI continues.

AI beyond ML. Much recent progress in AI has arisen from improved techniques for learning to make predictions: finding a model that is currently built by hand and replacing it with a model that is learned from data (LBH15). We might therefore ask what problems would remain or become important if our capacity to build black-box models from data were to become arbitrarily effective. It is clear that even in such a world, we would be far from having achieved strong AI. Some problems that would remain open are still close to machine learning: explaining why a model made the prediction it did, or certifying fairness or compliance with legal requirements. Others, such as making counterfactual predictions (how would a system perform under a perturbation of the generating distribution?), go beyond the assumptions inherent in most supervised learning methods, requiring instead new, structural assumptions about an underlying setting. Still other problems extend beyond prediction to decision making, both in single-actor settings (e.g., optimization; planning) and multi-agent domains (weighing competing objectives via preference aggregation or mechanism design).
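The counterfactual-prediction point can be made concrete with a small sketch (an editorial illustration, not from the panel; all variable names and the toy data-generating process are invented). A model fit by ordinary least squares on data where the input merely correlates with the true cause predicts well in-distribution, but fails badly once a perturbation of the generating distribution breaks that correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Training distribution: the observed input x is tightly coupled to a
# hidden confounder, and y is driven by the confounder, not by x itself.
x_train = rng.normal(0.0, 1.0, n)
confounder = x_train + rng.normal(0.0, 0.1, n)      # nearly equal to x
y_train = 2.0 * confounder + rng.normal(0.0, 0.1, n)

# A purely predictive model y ~ a*x + b looks excellent here.
a, b = np.polyfit(x_train, y_train, 1)
mse_train = float(np.mean((a * x_train + b - y_train) ** 2))

# Perturbed distribution: an intervention decouples the confounder
# from x. The learned slope a (close to 2) is now simply wrong, since
# y never depended on x in the first place.
x_test = rng.normal(0.0, 1.0, n)
confounder_test = rng.normal(0.0, 1.0, n)           # no longer tied to x
y_test = 2.0 * confounder_test + rng.normal(0.0, 0.1, n)
mse_test = float(np.mean((a * x_test + b - y_test) ** 2))

print(f"in-distribution MSE:  {mse_train:.3f}")     # small (roughly 0.05)
print(f"under intervention:   {mse_test:.3f}")      # orders of magnitude larger
```

No amount of data from the training distribution alone fixes this; answering the counterfactual question requires a structural assumption about what the intervention changes, which is exactly the gap beyond supervised learning that the paragraph above describes.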

Increasing regulation. AI is touching the lives of individuals, the economy, and the political system in ever increasing ways. Many in society find specific instantiations of AI frightening; many special interests are threatened by new technologies. It is thus inevitable that politicians will increasingly see a need to respond, and that AI technologies will face increasing regulation. This is something we should welcome; any mature technology must be accountable to the society in which it operates. But the details will matter enormously. A major focus of AI research in 2027 will be helping to shape regulations before they become law and designing systems within the constraints implied by these regulations afterwards.

Superhuman intelligence. AI systems will increasingly become capable of reaching human-level performance in a variety of application domains. There is nothing special about this threshold, and so we should expect the advent of AI systems exhibiting superhuman intelligence in a growing set of domains. This is often cast as a frightening prospect, but I argue that we will quickly become comfortable with it. After all, superhuman intelligences are already commonplace: governments, corporations, and NGOs are all autonomous agents that exhibit behavior much more sophisticated and complex than that of any human. We are typically unconcerned that no one person can even fully understand decisions made by the French government, by General Motors, or by the Red Cross. Instead, we aim to manage and to gain high-level understanding about such actors via reporting requirements, specifications of the interests that they must act to advance, and laws that forbid bad behavior. Society encourages the creation of such superhuman intelligences today for the same reason it will welcome superhuman AI tomorrow: many important problems are beyond the reach of individual people. Some key examples are improved collective decision making; more efficient allocation and use of scarce resources; addressing under-served communities; and limiting and responding to climate change.

References

E. Burton, J. Goldsmith, S. Koenig, B. Kuipers, N. Mattei, and T. Walsh. Ethical considerations in Artificial Intelligence courses. Artificial Intelligence Magazine, 38(2):22–34, 2017.

Tim Berners-Lee, James Hendler, and Ora Lassila. The semantic web. Scientific American, 284(5):34–43, 2001.

Chet Chetterson. How games overtook the movie industry, 2017. http://www.geekadelphia.com/2017/06/29/how-games-overtook-the-movie-industry, accessed March 22, 2018.

M.W.M. Gamini Dissanayake, Paul Newman, Steve Clark, Hugh F. Durrant-Whyte, and Michael Csorba. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Transactions on Robotics and Automation, 17(3):229–241, 2001.

Fausto Giunchiglia and Mattia Fumagalli. Concepts as (recognition) abilities. In Formal Ontology in Information Systems: Proc. 9th Int. Conference (FOIS 2016), pages 153–166, 2016.

K. Grace, J. Salvatier, A. Dafoe, B. Zhang, and O. Evans. When will AI exceed human performance? Evidence from AI experts. ArXiv Preprint arXiv:1705.08807v2, 2017.

Lise Getoor and Ben Taskar. Introduction to Statistical Relational Learning (Adaptive Computation and Machine Learning). The MIT Press, 2007.

S. Koenig. Making good decisions quickly. The IEEE Intelligent Informatics Bulletin, 13(1):14–20, 2012.

Gal A. Kaminka, Rachel Spokoini-Stern, Yaniv Amir, Noa Agmon, and Ido Bachelet. Molecular robots obeying Asimov's three laws of robotics. Artificial Life, 23(3):343–350, 2017.

I. Kotseruba and J. Tsotsos. A review of 40 years of cognitive architecture research: Core cognitive abilities and practical applications. ArXiv Preprint arXiv:1610.08602, 2016.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436, 2015.

G. Marcus. Deep learning: A critical appraisal. ArXiv Preprint arXiv:1801.00631, 2016.

John McCarthy. Programs with common sense. RLE and MIT Computation Center, 1960.

John McCarthy. Generality in Artificial Intelligence. Communications of the ACM, 30(12):1030–1035, 1987.

Nils J. Nilsson. Shakey the robot. Technical report, SRI International, Menlo Park, CA, 1984.

Luc De Raedt, Kristian Kersting, and Sriraam Natarajan. Statistical Relational Artificial Intelligence: Logic, Probability, and Computation. Morgan & Claypool Publishers, 2016.

S. Russell and P. Norvig. Artificial Intelligence:A Modern Approach. Pearson, 2009.

Max Roser and Esteban Ortiz-Ospina. Financing education, 2018. https://ourworldindata.org/financing-education, accessed March 22, 2018.

Peter Stone, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller. Artificial intelligence and life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel, 2016.

D. Silver, A. Huang, C. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529:484–489, 2016.

D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, T. Lillicrap, K. Simonyan, and D. Hassabis. Mastering Chess and Shogi by self-play with a general reinforcement learning algorithm. ArXiv Preprint arXiv:1712.01815, 2017.

Arnold W.M. Smeulders, Marcel Worring, Simone Santini, Amarnath Gupta, and Ramesh Jain. Content-based image retrieval at the end of the early years. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12):1349–1380, 2000.

Inbal Wiesel-Kapah, Gal A. Kaminka, Guy Hachmon, Noa Agmon, and Ido Bachelet. Rule-based programming of molecular robot swarms for biomedical applications. In Proc. International Joint Conference on Artificial Intelligence (IJCAI), pages 3505–3512, 2016.

Maria Gini is a Professor in the Department of Computer Science and Engineering at the University of Minnesota. She studies decision making for autonomous agents, distributed task allocation for robots, robotic exploration, teamwork for search and rescue, and navigation in dense crowds. She is an AAAI and an IEEE Fellow, and the current president of IFAAMAS. She is Editor-in-Chief of Robotics and Autonomous Systems, and is on the editorial boards of numerous journals, including Artificial Intelligence, Autonomous Agents and Multi-Agent Systems, and Integrated Computer-Aided Engineering.

Noa Agmon is an Assistant Professor in the Department of Computer Science at Bar-Ilan University, Israel. Her research focuses on various aspects of single- and multi-robot systems, including multi-robot patrolling, formation, coverage, and navigation, with a specific interest in robotic behavior in adversarial environments. She is a board member of IFAAMAS, and on the editorial boards of JAIR and IEEE Transactions on Robotics.

Fausto Giunchiglia is a Professor in the Department of Computer Science and Information Engineering (DISI) at the University of Trento, Italy. His current research focuses on the integration of perception and knowledge representation as the basis for understanding how this generates the diversity of knowledge. His ultimate goal is to build machines which reason like humans. He is an ECCAI Fellow. He was a President and a Trustee of IJCAI, an Associate Editor of JAIR, and a member of the editorial boards of many journals, including the Journal of Data Semantics and the Journal of Autonomous Agents and Multi-Agent Systems.


Sven Koenig is a professor in computer science at the University of Southern California. Most of his research centers around techniques for decision making that enable single situated agents (such as robots or decision-support systems) and teams of agents to act intelligently in their environments and exhibit goal-directed behavior in real time, even if they have only incomplete knowledge of their environment, imperfect abilities to manipulate it, limited or noisy perception, or insufficient reasoning speed. He is a AAAI and AAAS Fellow and chair of ACM SIGAI. Additional information about Sven can be found on his web pages: idm-lab.org.

Kevin Leyton-Brown is a Professor of Computer Science at the University of British Columbia in Vancouver, Canada. He studies the intersection of computer science and microeconomics, addressing computational problems in economic contexts and incentive issues in multiagent systems. He also applies machine learning to various problems in artificial intelligence, notably the automated design and analysis of algorithms for solving hard computational problems. He is a AAAI Fellow, chair of ACM SIGecom, an IFAAMAS board member, and has served as an Associate Editor for AIJ, JAIR, and ACM TEAC.
