Igor Markov, Software Engineer, Google at MLconf SEA - 5/20/16
Can AI Become a Dystopian Threat to Humanity?
A Hardware Perspective
Igor Markov, Google and the University of Michigan
The views expressed are my own and do not represent my employers'.
How did humanity survive?
• By being smart
• By knowing the adversary
• By controlling physical resources
• By using the physical world to advantage
Now, back to the dystopian AI myth
• AI may become smarter than us
• Possibly malicious
• The physical embodiments are unclear
Computing machinery is designed using an abstraction hierarchy
• From transistors to CPUs to data centers
• Each level has a well-defined function
Introduce hard boundaries between different levels of intelligence and trust
• Can toasters and doorknobs be trusted?
• Who can use weapons?
• Each agent should have a key weakness
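The idea above can be illustrated with a toy sketch: trust levels form an explicit hierarchy, each action requires a minimum level, and an agent is only trusted while it retains a built-in "key weakness". Everything here (the `Agent` class, `authorize` function, and the level names) is a hypothetical illustration, not from the talk.

```python
from dataclasses import dataclass

# Trust levels form a strict hierarchy; an action is permitted only if
# the agent's level meets the action's minimum required level.
TRUST_LEVELS = {"appliance": 0, "assistant": 1, "operator": 2, "human_overseer": 3}

@dataclass
class Agent:
    name: str
    trust: str                # one of TRUST_LEVELS
    kill_switch: bool = True  # each agent keeps a key weakness by design

def authorize(agent: Agent, action: str, required: str) -> bool:
    """Allow `action` only for agents at or above the required trust level."""
    if not agent.kill_switch:
        return False  # an agent without a key weakness is never trusted
    return TRUST_LEVELS[agent.trust] >= TRUST_LEVELS[required]

toaster = Agent("toaster", "appliance")
operator = Agent("grid-operator", "operator")

print(authorize(toaster, "access_weapon", "human_overseer"))  # False
print(authorize(operator, "adjust_grid_load", "operator"))    # True
```

The point of the sketch is that the boundary is enforced by an explicit check outside the agent, not by the agent's own judgment.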
Limit AI's access to energy
• Firmly control the electric grid
• No long-lasting batteries, fuel cells, or reactors
Constraints on AI to intercept dystopian threats
1. Hard boundaries between levels of intelligence and trust
2. Limits on self-replication, self-repair, and self-improvement
3. Limits on access to energy
4. Physical and network security of critical infrastructure
Tame potential threats and use them for protection