Ethical Considerations in the Design of Artificial Intelligence
John C. Havens * Mike Van der Loos * Alan Mackworth * John P. Sullins
#AIEthics
The Delight In The Data
Welcome and Introductions
Agenda
• Introductions
• John C. Havens
• Mike Van der Loos
• Alan Mackworth
• John Sullins
• Moderated Panel Discussion
• Audience Q&A
• End
IEEE Global Ethics Initiative
• Launched April 5, 2016
• Executive Committee of twelve global thought leaders in AI, autonomous tech, and ethics
• Eleven Committees featuring over eighty additional thought leaders from over twelve countries
• IEEE Staff/Society Involvement: Representatives from SA, TA, RAS, SSIT, Computer Society, IEEE P2040*
• AI Association Involvement: AAAI, EurAI, IJCAI
• Policy orgs represented: WEF, UN, FCC, Future of Privacy Forum*
• Companies represented include: IBM, EMC, Cisco, NXP, LucidAI, Google DeepMind*
• Academic Institutions represented include: University of Texas, TU Delft, University of British Columbia, Arizona State University, University of Washington, University of Cambridge, Duke University, Harvard University, MIT, Georgia Institute of Technology*
*Partial listing
Committees:
• Executive Committee
• AI Ecosystem Mapping Committee
• General Principles and Guidance
• Legal Issues
• Affective Computing
• Safety and Beneficence of AGI and ASI
• Individual/Personal Data Control
• Economics of Machine Automation/Humanitarian Issues
• Methodologies to Guide Ethical Research, Design and Manufacturing
• How to Imbue Ethics/Values into AI
• Reframing Lethal Autonomous Weapons Systems (LAWS)
• Global Initiative invited to have satellite meeting as part of Europe’s largest AI Conference
• Initiative Committees gather for first face-to-face meeting
• Initiative Committees bring Charter Language (Crowdsourced Code of Conduct) to event
• Committees Bring Standards Projects to Workshops (to submit to SA)
• Attendees at Workshops help iterate Language
• Attendees to Workshops provide feedback and vote on Projects
• Second face-to-face meeting at UT in March 2017, before the SXSW Conference
• Attendees evolve Charter 2.0 to Charter 3.0
• Charter available via Creative Commons License for good of technology community at large
• By March 2017, multiple Standards Projects will be recommended to SA as PARs
• At UT, Global Initiative announces its formation as an Alliance, global University partnerships
• Alliance iterates Charter annually via meetings around the world, creates Certifications/Workshops to implement Charter in multiple verticals, serves as an ongoing, global R&D Standards Pipeline for SA
Mike Van der Loos
WHAT SHOULD A ROBOT DO? – A quest to develop interactive robots with ethics in mind
H.F. MACHIEL VAN DER LOOS
ELIZABETH A. CROFT
AJUNG MOON
THE UNIVERSITY OF BRITISH COLUMBIA
COLLABORATIVE ADVANCED ROBOTICS AND INTELLIGENT SYSTEMS LAB
HFM VAN DER LOOS
CARIS LAB
MAY 13, 2016
Collaborative Advanced Robotics and Intelligent Systems Lab
ELIZABETH A. CROFT
Elizabeth A. Croft Mike Van der Loos
ROBOTS ARE COMING: HUMAN-ROBOT COLLABORATION
CARIS lab, UBC (2010)
www.plasticsnews.com
Baxter, Rethink Robotics (2012)
ROBOETHICS
ETHICS APPLIED TO ROBOTICS
Roboethics
- Human ethics: applied ethics adopted by designers / manufacturers / users

Robot Ethics
- Code of conduct implemented in the artificial intelligence of robots
- Artificial ethics for robots to exhibit ethically acceptable behaviour

Robot's Ethics
- Morality of a hypothetical robot that is equipped with a conscience and the freedom to choose its own actions
Fiorella Operto, Ethics in Advanced Robotics, 18 IEEE ROBOT. AUTOM. MAG. 72–78 (2011)
PROBLEM
What is right / wrong? Fair / unfair?
What should / ought a robot do?
Who knows the answers?
Design decision
Policy decisions
Technical implementations
Culture
Religion
Context
Philosophical stance
…
ANSWER
DEMOCRATIC APPROACH
OPEN ROBOETHICS INITIATIVE (ORI)
WHO WE ARE
INTRODUCING THE MEMBERS
Jason Millar
AUTONOMOUS CARS
STUDYING WHAT PEOPLE THINK
A total of 10 polls and 766 responses on autonomous cars since April 25, 2014
AUTONOMOUS CARS
Image by: Craig Berry
If you find yourself as the passenger in the tunnel problem, how should the car react? (N=113, analyzed June 22, 2014)
• Continue straight and kill the child: 64%
• Swerve and kill the passenger (you): 36%

How hard was it for you to answer the Tunnel Problem question? (N=116, analyzed June 22, 2014)
• Easy: 48%
• Moderately difficult: 28%
• Difficult: 24%

Who should determine how the car responds to the Tunnel Problem? (N=113, analyzed June 22, 2014)
• Passenger: 44%
• Lawmakers: 33%
• Manufacturer / designer: 12%
• Other: 12%
A DEMONSTRATION
IMPLEMENTING PEOPLE’S DECISIONS
CONCLUSION: TAKE-HOME MESSAGES
PROBLEM: What should a robot do?
• Public acceptance & design decisions
• Democratic approach to moral decisions
• Delegating decision-making to atomic interactions
• Human-Robot Interaction (HRI)
• Roboethics
ACKNOWLEDGMENTS CARIS Lab ICICS UBC Dept. of Mechanical Engineering CFI NSERC Vanier Canada Graduate Scholarships
CONTACT INFORMATION:
Mike Van der Loos, Ph.D., P.Eng.
Assoc. Prof., Dept. of Mechanical Engineering, UBC
6250 Applied Science Lane
Vancouver, BC V6T 1Z4 CANADA
phone: +1-604-827-4479
email: [email protected]
http://mech.ubc.ca/machiel-van-der-loos/
research: http://caris.mech.ubc.ca; http://rreach.mech.ubc.ca
ORI: http://www.openroboethics.org
Alan Mackworth
Trusted Artificial Autonomous Agents
Alan Mackworth
• New ontological category: Artificial Autonomous Agents (AAAs)
• Q: Can we trust them?
• A: No!
• Q: Why not?
• A: E.g. 'Deep Learning': opaque, with massive, inaccessible training sets
• Ethical agents have to be trustworthy
• Need new methods to build trusted, ethical agents
• Ensure AAAs' values are aligned with users' and society's values
Five Approaches to Building Trusted Agents
1. Formal methods for specification and verification
2. Hierarchical constraint-based modular architectures
3. Inferring human values: e.g. inverse reinforcement learning
4. Semi-autonomy, human in the loop
5. Participatory Action Design: user-centered with Wizard of Oz techniques
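Approach 3 above, inferring human values from observed behavior, can be illustrated with a minimal inverse-reinforcement-learning-style sketch: fit the reward weights of a Boltzmann-rational chooser by maximum likelihood. The feature vectors, demonstration data, and choice model here are invented toys, not any speaker's actual method.

```python
import numpy as np

# Each action has a feature vector; a demonstrator is assumed to pick
# actions softmax-proportionally to w.f(a). We recover w from choices.
features = np.array([[1.0, 0.0],   # action 0: "fast" feature only
                     [0.0, 1.0],   # action 1: "safe" feature only
                     [0.5, 0.5]])  # action 2: a compromise

def action_probs(w):
    logits = features @ w
    e = np.exp(logits - logits.max())  # stabilized softmax
    return e / e.sum()

# Hypothetical demonstrations: the human mostly picks the "safe" action.
demos = np.array([1, 1, 2, 1, 1, 0, 1, 2, 1, 1])

w = np.zeros(2)
for _ in range(2000):
    p = action_probs(w)
    # Gradient of the log-likelihood: observed minus expected features
    observed = features[demos].mean(axis=0)
    expected = p @ features
    w += 0.1 * (observed - expected)

p = action_probs(w)
# w now weights the "safe" feature more heavily than the "fast" one
```

The same observed-minus-expected gradient is the core of maximum-entropy IRL; real systems replace the three-action toy with trajectories through an environment.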
What We Need
Any ethical discussion presupposes we (and agents) can:
• Model agent structure and functionality
• Predict consequences of agent commands and actions
• Impose constraints on agent actions such as goal reachability, safety and liveness (absence of deadlock and livelock)
• Determine if an agent satisfies those constraints (almost always)
Formal Methods to Build Trustworthy AAAs
To show that implementation satisfies specification, we need a tripartite theory:
1. Language to express agent structure and dynamics
2. Language for constraint-based specifications
3. Method to determine if an agent will (be likely to) satisfy its specifications, connecting 1 to 2
A Constraint-Based Agent (CBA)
CBA Structure
Constraint Solver
Formal Methods for Agent Verification
The CBA framework consists of:
1. Constraint Net (CN) → system modelling
2. Timed ∀-automata → behavior specification
3. Model checking and Lyapunov methods → behavior verification
(Zhang & Mackworth, 1993, …)
Hierarchical Modular CBA in CN
← CBA Structure
↑ Control Synthesis with Prioritized Constraints
Constraint1 > Constraint2 > Constraint3
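The prioritized-constraint scheme (Constraint1 > Constraint2 > Constraint3) can be sketched as a simple arbiter: each constraint either proposes a command or defers, and the highest-priority active constraint wins. The 1-D robot, the particular constraints, and all numbers below are invented for illustration, not the CN controller itself.

```python
# Hypothetical 1-D robot: each constraint returns a velocity command,
# or None if it is currently satisfied and defers to lower priorities.
def safety_constraint(pos, goal, obstacle):
    # Constraint1 (highest priority): stay at least 1.0 from the obstacle
    if abs(pos - obstacle) < 1.0:
        return -0.5 if obstacle > pos else 0.5  # back away
    return None

def goal_constraint(pos, goal, obstacle):
    # Constraint2: make progress toward the goal
    if abs(pos - goal) > 0.05:
        return 0.2 if goal > pos else -0.2
    return None

def idle_constraint(pos, goal, obstacle):
    # Constraint3 (lowest priority): otherwise hold position
    return 0.0

PRIORITIZED = [safety_constraint, goal_constraint, idle_constraint]

def control(pos, goal, obstacle):
    for constraint in PRIORITIZED:
        cmd = constraint(pos, goal, obstacle)
        if cmd is not None:
            return cmd

# Simulate: the goal lies beyond an obstacle, so safety must dominate.
pos = 0.0
for _ in range(100):
    pos += control(pos, goal=5.0, obstacle=3.0)
# The robot approaches the goal but safety keeps it clear of the obstacle
```

A real controller would blend or smooth commands to avoid the chattering this bang-bang arbiter produces near the safety boundary; the point is only the strict priority ordering.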
Artificial Semi-autonomous Agents (ASAs)
• Keep human(s) in the loop• Shared autonomy at the higher control levels• Provide ‘sliders’ for users to adjust autonomy levels• Not one size fits all• Case study: smart wheelchairs for cognitively and physically impaired
older adults
Docking and Back-in Parking Assistance Driving Scenario at Long Term Care Facility
Shared Autonomy Wheelchair Control Modes
Level 1: Basic safety by limiting speed
Level 2: Level 1 + non-intrusive steering guidance
Level 3: Level 1 + intrusively turning away from obstacles
Level 4: Completely autonomous
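The four levels above can be sketched as a hypothetical command arbiter. The speed cap, the 25% guidance blend, and the function names are all assumptions made for illustration; they are not the actual wheelchair controller.

```python
MAX_SAFE_SPEED = 0.5  # m/s; assumed Level-1 cap, not a real spec

def wheelchair_command(level, user_cmd, auto_cmd, obstacle_near):
    """Arbitrate user joystick input against the autonomous command."""
    if level == 1:
        # Basic safety: pass user input through, but limit speed
        return max(-MAX_SAFE_SPEED, min(MAX_SAFE_SPEED, user_cmd))
    if level == 2:
        # Level 1 + non-intrusive guidance: gently bias toward autonomy
        limited = wheelchair_command(1, user_cmd, auto_cmd, obstacle_near)
        return 0.75 * limited + 0.25 * auto_cmd
    if level == 3:
        # Level 1 + intrusive avoidance: autonomy overrides near obstacles
        if obstacle_near:
            return auto_cmd
        return wheelchair_command(1, user_cmd, auto_cmd, obstacle_near)
    if level == 4:
        # Completely autonomous: ignore the user input
        return auto_cmd
    raise ValueError("level must be 1-4")
```

Treating the autonomy level as a runtime parameter is what makes the "sliders" idea above implementable: the same controller serves users with very different capabilities.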
The Wizard [Baum, 1900]
Systems developed using user-centered Participatory Action Design methodology and Wizard of Oz techniques
Closing Thoughts
• More R&D on building trusted AAAs and ASAs required
• Formal specification and verification of AAAs needed
• Governments lack technical expertise to develop standards
• Lack of effective global standards bodies with enforcement
• Regulatory capture: power of corporations to fend off regulation
• Poor education of AI scientists & roboticists in morals and ethics
• AI singularity & superintelligence hype overshadows real concerns
• See the One Hundred Year Study of AI: https://ai100.stanford.edu
Thanks to: Y. Zhang, P. Viswanathan, A. Mihailidis, B. Adhikari, I. Mitchell, J. Little, …. Contact: [email protected] @AlanMackworth URL: http://www.cs.ubc.ca/~mack
John P. Sullins
John P. Sullins
Professor of Philosophy, Sonoma State University
Embedded Ethics Design for AI and Robotics
• Building workable solutions requires many disciplines to work together
• When it is working well, philosophy is a big-picture discipline, and it has much to offer in our quest to build beneficial AI and robotics applications
• Especially in the area of ethics and the design of artificial moral agents
Bryant Walker Smith
• Lawyers and Engineers Should Speak the Same Robot Language, Bryant Walker Smith, 2015
• Each application has many uses:
  • Actual
  • Legal
  • Reasonable
  • Use intended by the designer
• “An open question is the extent to which product design should attempt to confine actual uses to those that are legal, reasonable, or intended.”
Ethical Design
I recommend we add ethical use to the list of potential uses as well:
A-Actual Use
B-Reasonable Use
C-Intended Use
D-Legal Use
E-Ethical Use
Ethics Applied to AI and Robotics
Image from: Are Deontological Moral Judgements Rationalizations?
Some Problems
• Classical ethics is only concerned with human agency
• What is the best ethical system to apply?
• No science is ever truly finished, so the science of ethics will not result in one unified theory either
A Helpful Alternative
• The following discussions can be distracting:
  • Egoism vs. Altruism
  • Self-interest vs. Benevolence
  • Free Will vs. Determinism
  • Responsibility
• Morality has roots in evolution
• Ethics is a tool or instrument that we use to design new forms of beneficial behavior
John Dewey, American pragmatist philosopher, 1859–1952
Three Active Areas of AI Ethics Research
Embedded Ethics Design
• The "...engineer carries on the great part of his work without consciously asking himself whether his work is going to benefit himself or someone else. He is interested in the work itself; such objective interest is a condition of mental and moral health.... Nevertheless, there are occasions when conscious reference to the welfare of others is imperative." Dewey, Ethics, 1935
• We need embedded ethics professionals at the level of the design team
  • To meet the needs of engineers who must focus on their work
  • And for the organization that employs them to pay appropriate concern to the ethical impacts of their work
• This can take the form of consultants, but it would be best to have some of the designers trained in value-sensitive design
• Their job is to find the areas of ethical concern in a design and suggest constructive means for mitigating problems at the design stage
• This prevents the approach we often see: release, disaster, beg forgiveness
• Since embedded ethicists might be susceptible to something like Stockholm syndrome, we must also have ethics review boards
AI and Robotics Ethics Boards
Short-term ethical concerns are met by creating a dialog that follows these steps:
1. Identify the ethical concerns raised by the new technology.
   a. Anticipate consequences. Create proactive ethics rather than merely reactive ones.
   b. Enhance the standard-model IRB and replace it with one that fosters embedded ethicists in the design groups, works closely with them, and helps foster a community of practice around ethical deliberation.
2. Vet the overall design strategy of the organization.
   a. Define the ethical goals: what does the organization want to craft as its legacy?
3. Help operationalize the ethical code of the organization as it is applied to AI and robotics projects, and update this code as new challenges are resolved.
4. Keep a repository of these deliberations to facilitate future discussions.
Artificial Ethical/Moral Agents (AEA, AMA)
• Artificial practical wisdom
• Virtues for robots
  • Security
  • Integrity
  • Accessibility
  • Ethical trust
• Functional moral sensibility
  • Accurate choice of ethical actions and goals
  • Context sensitive
  • Accurate ranking of exemplar cases and reasoning
For More Information
• Applied Professional Ethics for the Reluctant Roboticist. Open Robotics, 2015
• Ethics Boards for Research in Robotics and Artificial Intelligence: Is It Too Soon to Act? Chapter 5 in Social Robots: Boundaries, Potential, Challenges, edited by Marco Nørskov, Ashgate
Q&A – Ethics in AI
John C. [email protected]
@johnchavensjohnchavens.com
Alan Mackworth [email protected] @AlanMackworth
http://www.cs.ubc.ca/~mack
Mike Van der Loos [email protected]
http://mech.ubc.ca/machiel-van-der-loos/
John [email protected],
sonoma.academia.edu/JohnSullins
Thank you.