Post on 21-Apr-2017
Filip Maertens, Founder & VP Business Development at Faction XYZ
Data Innovation Summit, March 30, 2017
#DIS2017
Can A.I. help us build a better world?
“Terminator vs. Idiocracy?”, or our indulgence in spotting threats when dealing with A.I.
When discussing the threats of A.I., we naively portray the advent of an AGI as the precursor to the doom of the human race. Economically, we envision mass unemployment and a new divide between haves and have-nots. We worry about how Google invades our homes, yet our complacency prevents us from acting on it. However valid these threats all are, A.I. researchers have a moral duty to address them.
1. New tech and approaches
HACKERS & CRIMINALS
Weaponizing A.I. as a hacking tool: vulnerability-detection tools can equally be used for uncontrolled mass surveillance, can lower the cost of hacking, and can continuously uncover zero-day exploits.
ROGUE GOVERNMENTS
Profiling millions of online users and targeting them with personalized content can influence those millions, spread hate, and overthrow governments. Influence systems are a weaponized version of A.I., endangering democracies.
2. We can create a better world!
A.I. researchers should be driven by curiosity, ethics and morality, not by law, gain or politics.
3. Adhere to a strong moral code of conduct.
In a field of research where we have the ability to impact billions of people, we have the duty to adhere to a strong moral code of conduct. We put morals above law. The Ethics Advisory Panel (EAP) could be a good beginning, but it needs broader adoption worldwide, much like the Socrates process for medical doctors. How will machines know what we value if we don’t know ourselves?
4. Reduce bias in training data.
While great focus is put on the results of new learning algorithms, computationally more efficient techniques and more, we grossly overlook the layman’s principle of machine learning in general: shit in, shit out. We need to be vigilant about bias in training data, in order to prevent racist, sexist or otherwise discriminatory models.
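One practical first step toward that vigilance (not from the slides; the group names and the 10% threshold are purely illustrative) is auditing how groups are represented in a training set before any model sees it:

```python
from collections import Counter

def group_balance(labels):
    """Return each group's share of the dataset."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

def flag_imbalance(labels, threshold=0.10):
    """Flag groups whose share of the data falls below the threshold."""
    shares = group_balance(labels)
    return sorted(group for group, share in shares.items() if share < threshold)

# A toy demographic column from a hypothetical training set:
# group A dominates, groups B and C are underrepresented.
sample = ["A"] * 90 + ["B"] * 8 + ["C"] * 2
print(flag_imbalance(sample))  # ['B', 'C']
```

A flagged group is a prompt to collect more data or reweight samples, not a guarantee of fairness; balanced counts alone do not remove bias baked into the labels themselves.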
5. Embrace privacy & data protection as an opportunity to do good.
While admittedly the GDPR will continue to cause a lot of concern and friction within the A.I. community, we cannot dismiss it as a political invention that aims to constrain our profession. Consider it a security layer around our expertise. While we have a duty to challenge the law, we also need to adhere to best practices, such as data minimization and the right to explanation.
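Data minimization can be as simple as an explicit allow-list between raw user records and the training pipeline. A minimal sketch, with field names that are illustrative rather than from the talk:

```python
# Data minimization: keep only the fields the model actually needs.
# The field names here are hypothetical examples.
ALLOWED_FIELDS = {"age_band", "postcode_area", "purchase_count"}

def minimize(record):
    """Strip a raw user record down to the allowed fields."""
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}

raw = {
    "name": "Alice Example",
    "email": "alice@example.com",
    "age_band": "30-39",
    "postcode_area": "SW1",
    "purchase_count": 7,
}
print(minimize(raw))  # name and email never reach the training set
```

Making the allow-list a single, reviewable constant also gives auditors one place to check what personal data the system ingests.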
6. Embed morality into algorithms.
Just like OpenAI has committed to program morality into its algorithms, morality systems should be an intrinsic point of discussion in any A.I. debate. When dealing with autonomous decision-making systems fueled by A.I., treating morality systems or security as an afterthought can be a trigger for another A.I. winter.
7. Finally: we need to cultivate ourselves.
If algorithms learn from humans, then we’re about to give birth to the first tax-avoiding, chain-smoking, wife-beating, cussing badass chatbot we’ve ever seen. Oh, wait… As humans, we live in an age where everything is recorded and in the open, and everything can be used as a training set. Yet we still behave like brutes in our online lives. So, did we really expect anything else from Tay?
But the opportunities to do good are everywhere!