Rapidly Emerging Technologies: What are the Ethical and Legal Challenges?

 by Marlene Prinz

The Vienna Center for Disarmament and Non-Proliferation (VCDNP) and the International Institute for Peace (IIP) organised a round table on 20 April on this topic, moderated by Angela Kane, former UN High Representative for Disarmament Affairs and Vice President of the IIP. The event explored the benefits and opportunities of artificial intelligence (AI) as well as its challenges. Two experts, Sean Legassick and Jane Zavalishina, were invited to share their insights.

Sean Legassick is Co-Lead of the Ethics and Society group at DeepMind, where he has spent the last three years researching the social, economic, ethical, legal and philosophical implications of AI. He is a software engineer and technology policy specialist with over 20 years of industry experience. Legassick was a lead developer at Demon Internet, the UK’s first Internet provider, and has made notable contributions to the Apache Software Foundation and Django Foundation open source software projects.


Jane Zavalishina is the President and co-founder of Mechanica AI, a provider of AI-based solutions for the industrial sector. For over 15 years she held various executive positions within Yandex, one of Europe’s largest internet companies, where she was in charge of using innovative technologies to create new businesses and transform traditional ones. She also serves on the World Economic Forum’s Global Future Councils. In 2016, she was named in Silicon Republic’s Top 40 Women in Tech as an Inspiring Leader.

Starting with a short presentation, Sean Legassick explains that DeepMind's social mission is “doing good in the world”. Its approach is to use artificial intelligence to make the world a better place and to apply it efficiently to problem solving.

Explaining his field of research, Legassick states that DeepMind began its mission with Atari games, using AI to discover new strategies and solutions. This raises the question: what impact can AI have in the world? Climate change is a good example: data centres currently account for about 3% of total energy consumption, and with the help of AI this share could, he says, be reduced to 1.6%.
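The article does not describe DeepMind's methods in detail, but its Atari work is known for deep reinforcement learning, in which an agent improves its strategy purely by trial and error. The toy sketch below is only an illustration of that principle, using tabular Q-learning on a made-up corridor environment rather than DeepMind's actual systems.

```python
# Illustrative toy example only: tabular Q-learning on a tiny corridor
# environment. DeepMind's Atari agents used deep reinforcement learning;
# this sketch just shows the trial-and-error learning principle.
import random

N_STATES = 5          # corridor cells 0..4; reward waits at the right end
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: estimated return for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move in the corridor; reaching the last cell gives reward 1 and ends the episode."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy should always move right towards the reward.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```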


Turning to the impact on ethics and society, Legassick notes that research helps us understand the wide range of opportunities AI offers, but also raises significant questions. He stresses how important it is to discuss the social and ethical questions now, because technological innovation evolves much more rapidly than the laws that restrict and frame it. DeepMind's main lines of work are exploring the possible “real world” impacts of AI, helping to put ethics into practice, and helping societies anticipate and control the effects of AI. He emphasises that one of the key ethical challenges is transparency; managing AI risks and considering the economic impact are also pressing issues, as is the relationship between AI and the world's complex challenges. Finally, Legassick points to governance and accountability, morality and values as central to AI's ethical questions, which makes cooperation with experts and collaboration among researchers crucial.

The second expert, Jane Zavalishina, points out that an advantage of AI is that it has no morality: it does not know what is right or wrong. This is beneficial because the intentions of AI are transparent. When AI is applied, its goals and interests are easy to identify, unlike with humans, whose real interests and intentions can be veiled. The difficulty lies in setting the metric right, so that decisions made by AI are not influenced by factors such as racism or sexism. This raises the question of what the right metric is: how are our ethical and moral values measurable? Zavalishina adds that this question is becoming essential but remains unanswered.
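The panel did not name a specific metric, but one commonly used illustration of the idea is demographic parity: checking whether a system's positive decisions are spread evenly across groups. The sketch below is only an example of such a measure, with hypothetical loan-approval data, not a recommendation from the speakers.

```python
# Illustrative sketch only: the speakers did not name a specific metric.
# Demographic parity difference measures how unevenly a model's positive
# decisions fall across groups defined by a sensitive attribute.
def demographic_parity_difference(decisions, groups):
    """decisions: list of 0/1 outcomes; groups: parallel list of group labels."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)  # positive-decision rate per group
    return max(rates.values()) - min(rates.values())

# Hypothetical example: loan approvals for two groups
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.5 -> large disparity
```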


Another big topic Zavalishina addresses is the relationship between AI and humans. She argues that people are mostly suspicious of AI and do not acknowledge its advantages enough. There is much that AI can improve, especially through new strategies for working more efficiently with fewer resources. It would be negligent not to include and work with AI.

Asked what would be necessary to overcome the common fear of AI, Legassick suggests that a better theoretical and academic understanding of how AI works would reduce negative emotions. The development of more modern hardware and greater computing power would also improve the current research situation.

On the concern that humanity may lose its intellectual capacity by relying too much on AI, Zavalishina asserts that this is another key question that should be investigated. She adds that even though AI offers a lot of help in finding rational solutions to problems, its weakness lies in predicting rare events. As an example she mentions the 2008 financial crisis: such events have a huge impact on society but occur only rarely, so AI does not have enough information to “learn” from them. They are difficult to handle with AI, which performs best in fields that can be approached deterministically. Humans benefit from the creativity that makes them unique; AI cannot offer that.
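One way to see why rare events are so hard to learn from, purely as an illustration and not something discussed at the event, is to look at how unreliable estimates become when an event almost never appears in the training data. The hypothetical numbers below are invented for the sketch.

```python
# Illustrative sketch only (not from the panel): with a rare event such as a
# financial crisis, a model trained on historical data has almost nothing to
# learn from -- its estimate of the event's likelihood is dominated by noise.
import random

TRUE_PROB = 0.001      # a hypothetical 1-in-1000 "crisis" event
SAMPLE_SIZE = 500      # size of the historical record available for training

estimates = []
for trial in range(20):
    history = [random.random() < TRUE_PROB for _ in range(SAMPLE_SIZE)]
    estimates.append(sum(history) / SAMPLE_SIZE)

# Many trials estimate 0.0 (the event never appeared in the data), while
# others overestimate it severalfold -- the data is simply too thin.
print(estimates)
```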

Legassick and Zavalishina both agree that AI should never be involved in war strategies or anything related to that area. Even so, there are many control mechanisms and multiple layers of security systems, and caution remains an essential element here as well. Further research would widen knowledge and awareness of AI and could also identify future challenges more precisely.