DeepMind Will Research How To Keep AI Under Control, For Society's Benefit

DeepMind announced the creation of a new “Ethics & Society” division inside the company, which will focus on ensuring that AI benefits society and doesn’t get out of control.

Avoiding Dangers Of AI

Prominent figures such as Elon Musk, Stephen Hawking, and Bill Gates have warned about the dangers of artificial intelligence if we let it run loose. We may still be decades away from AI gaining consciousness and then deciding to kill us all to save the planet, but it’s probably not a bad idea to start researching how an AI should think and act, especially in relation to humans.

Beyond the sci-fi dystopian future we can easily imagine, AI already poses some real dangers, even if the fault lies not with the AI itself but with the humans who develop it. One such danger is that AI can develop, or rather replicate, human biases and then amplify them to the extreme.

We’ve already seen something like this in action when Microsoft launched its Twitter-based AI bot, Tay. Within hours, what was otherwise a neutral technology and “intelligence” turned into a racist, neo-Nazi AI, thanks to deliberate prodding from humans in the real world. Fortunately for us, that AI was merely in charge of a Twitter account, not a nuclear power’s defense systems.

Obviously, though, we can’t simply assume that AI will do good when left to its own devices, because it may pick up ideas its developers never intended it to (unless we believe Microsoft set out to build a neo-Nazi AI from the start).

There’s also the age-old “paperclip maximizer” thought experiment: an AI could follow its mission (making as many paperclips as possible) so rigidly that the mission starts harming humans, even if the AI itself never had any harmful “thoughts.” It would simply use all of our planet’s resources to build those paperclips, leaving us with nothing…except a lot of paperclips.
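The point is easiest to see as a toy optimization loop. The sketch below is purely illustrative (none of these names come from DeepMind or any real system): an agent with a single objective consumes everything available, while one whose objective explicitly encodes what humans need stops short.

```python
# Toy illustration of the paperclip-maximizer idea (hypothetical, not real AI code):
# an agent optimizing a single objective consumes every resource available
# unless the things humans care about are part of its goal.

WORLD_RESOURCES = 1_000  # hypothetical units of raw material on the "planet"
HUMAN_NEEDS = 200        # units humans need to keep living comfortably

def naive_maximizer(resources):
    """Turns every available unit into paperclips; nothing else matters."""
    paperclips = 0
    while resources > 0:
        resources -= 1
        paperclips += 1
    return paperclips, resources

def constrained_maximizer(resources, reserved_for_humans):
    """Same goal, but the constraint humans care about is made explicit."""
    paperclips = 0
    while resources > reserved_for_humans:
        resources -= 1
        paperclips += 1
    return paperclips, resources

print(naive_maximizer(WORLD_RESOURCES))                     # (1000, 0)   -> nothing left for us
print(constrained_maximizer(WORLD_RESOURCES, HUMAN_NEEDS))  # (800, 200)  -> humans keep what they need
```

The catch, of course, is that someone has to anticipate and encode every constraint that matters, which is precisely the kind of problem an ethics research group would study.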

Controlling AI

DeepMind’s AI technology is perhaps the most advanced in the world right now, having already proven that it can beat the best players in the world at Go, a game people thought AI could never conquer. It has also shown more practical uses, such as cutting Google’s data center cooling costs by 40%, and the technology is being integrated into some UK hospitals’ systems to improve healthcare.

The DeepMind team believes that no matter how advanced AI becomes, it should remain under human control. However, it’s not clear how true that will be in the future, because we won’t be able to monitor every single action an AI takes. For instance, will a human always have to approve before an AI switches a city’s traffic light from red to green? Probably not, as that would defeat the purpose of using an AI in the first place.

That’s an easy example, but what about having a human approve every medicine an AI prescribes to patients? Perhaps that will be the default procedure at first, but can we guarantee it will always be the case? Twenty years from now, hospitals may decide the AI has gotten smart enough to distribute 95% of medicines to patients without human supervision.

The bottom line is that it’s not clear where to draw the line in the first place, and even if it were, the line would likely keep moving as AI gets smarter. Somewhere along the way things could go wrong, and by that point it may be too late to fix them, because we’ll have very little control over the AI.

In the hospital example above, the AI could, for instance, be hit by a software bug or a hack and start distributing the wrong medicine across the whole hospital, while the human supervisors would trust it to do its routine job safely, just as it had done thousands of times before.

They may not notice the wrong medicine in time, just as nobody would notice a traffic-light-managing AI switching some lights to green too soon, because nobody would be supervising these individual actions.
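One common way to draw this kind of line is a risk or confidence threshold: routine, low-risk decisions go through automatically, while anything unusual is escalated to a human. The sketch below is a hypothetical illustration of that pattern, not anything DeepMind has described.

```python
# Hypothetical human-in-the-loop gate: routine decisions are automated,
# risky or unusual ones are escalated to a human for approval.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # e.g. "dispense 5mg of drug X"
    risk_score: float  # 0.0 (routine) .. 1.0 (dangerous), produced by the model

RISK_THRESHOLD = 0.2   # policy choice: how much autonomy the AI gets

def execute(decision: Decision) -> str:
    if decision.risk_score <= RISK_THRESHOLD:
        return f"auto-approved: {decision.action}"
    return f"escalated to human reviewer: {decision.action}"

print(execute(Decision("dispense 5mg of drug X", risk_score=0.05)))
print(execute(Decision("dispense 500mg of drug X", risk_score=0.9)))
```

The weakness is exactly what the article describes: the threshold is a policy choice that will keep shifting as the AI improves, and a bug or a hack can make the risk score itself untrustworthy.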

DeepMind’s Ethics & Society Group

To understand the real-world impacts of AI, the DeepMind team has started researching ethics for AI, so that the AI they build can be shaped by society’s concerns and priorities.

Its new Ethics & Society unit will abide by five principles, which include:

  1. Social benefit. DeepMind’s ethics research will focus on how AI can improve people’s lives and help build fairer, more equal societies. Here, DeepMind also points to previous studies showing that the justice system already uses AI tools with built-in racial biases. The group wants to study this phenomenon further so that the same biases aren’t built into its own AI or other AI systems in the future.
  2. Evidence-based research. The DeepMind team is committed to having its papers peer-reviewed to ensure there are no errors in its research.
  3. Transparency. The group promises not to influence other researchers through the grants it may offer and to always be transparent about its research funding.
  4. Diversity. DeepMind wants to include the viewpoints of experts from other disciplines, too, outside of the technical domain.
  5. Inclusiveness. The DeepMind team said that it will also try to maintain a public dialog, because ultimately AI will have an impact on everyone.

The DeepMind Ethics & Society division will focus on key challenges involving privacy and fairness, economic impact, governance and accountability, unintended consequences of AI use, AI values and morals, and other complex challenges.

DeepMind hopes that with the creation of the Ethics & Society unit, it will be able to challenge assumptions about AI, including its own, and ultimately develop AI that’s responsible and beneficial to society.
