How Can You Teach Ethics To An Artificial Intelligence?

There is no question that artificial intelligence harbors enormous positive potential, but at the same time it places major demands on programmers and on its ethically correct use. We at Deutsche Telekom aren’t the only ones who have realized that AI needs ethical “guardrails”. So I flew to where the big players are located: Silicon Valley in the U.S. and Silicon Wadi in Israel.

The subject of ethical AI has attracted much attention lately from the digital industry, politics, and the media. After all, it’s ultimately about how we can use AI to give our customers an excellent service and product experience without losing their trust, while ensuring the security of their data. That’s why guidelines are so important. We were one of the first companies to establish binding internal guidelines and open them up for debate.

In drafting these guidelines, my team and I – together with my colleagues Stefan Kohn from the Telekom Design Gallery, Elmar Arunov from T-Labs, and Amit Keren from the partnering side – intentionally sought interaction with other companies and organizations at an early stage. We do not claim the ability to develop a “philosopher’s stone” on our own, let alone to have already done so.

We wanted to know what the major players in this field – Google, Facebook, Microsoft, and Amazon, as well as organizations like OpenAI and the Partnership on AI, and even universities and startups – think about ethics and AI. How much weight do they attach to this topic, and how are they rising to its challenges?

Before my trip to the U.S. and Israel, I admit that I wondered how open these companies would be to my questions and how honest our exchange would be. As it turned out, I was positively surprised.

All of our appointment requests were granted, and high-ranking representatives everywhere opened their doors – this subject obviously has “management attention”, as the saying goes. It isn’t just relevant for programmers, but for top corporate managers, too.

We had intensive talks and an open, constructive, and honest exchange, which we plan to continue. Here are a few examples:

There was broad consensus that the ethically correct handling of AI isn’t just “nice to have”; it’s decisive for gaining and retaining customers’ trust in a company and its products.

I found my talks in Israel with the developers of Microsoft’s voice assistant Cortana and its chatbot Tay particularly fascinating and insightful. You might remember how quickly Twitter users turned Tay into a racist before Microsoft took it offline. This example is a sobering reminder of how fast a company’s reputation can be damaged and how much we still have to learn when it comes to deploying and using AI.

The lesson Microsoft learned was that – just as with humans – behavior cannot be predicted with 100 percent reliability. That’s why its developers built a kind of emergency off-switch that deactivates the bot immediately if it gets confused and, for example, starts to use inappropriate language.
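Microsoft has not published how this mechanism works internally, so the following is only a minimal sketch of the general pattern: a wrapper checks every generated reply against a safety filter and permanently deactivates the bot the moment a violation is detected. The names (SafeBot, BANNED_TERMS) and the simple keyword filter are illustrative assumptions, not Microsoft’s actual code.

```python
# Hypothetical sketch of an "emergency off-switch" wrapped around a chatbot.
# The keyword filter stands in for whatever safety check a real system uses.

BANNED_TERMS = {"slur1", "slur2"}  # placeholder list of disallowed words

class SafeBot:
    def __init__(self, generate_reply):
        self._generate_reply = generate_reply  # the underlying chatbot
        self._active = True                    # flips to False permanently

    def reply(self, message: str) -> str:
        if not self._active:
            return "This bot has been deactivated."
        answer = self._generate_reply(message)
        # The off-switch: one inappropriate output deactivates the bot
        # immediately, before the answer ever reaches the user.
        if any(term in answer.lower() for term in BANNED_TERMS):
            self._active = False
            return "This bot has been deactivated."
        return answer

bot = SafeBot(lambda msg: msg)  # an echo bot stands in for a real model
print(bot.reply("hello"))       # -> "hello"
```

The key design choice in such a pattern is that the switch fails closed: once tripped, the bot stays offline until a human deliberately brings it back.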

In a clear-cut case like this, defining the purpose of an emergency off-switch is fairly simple. That isn’t always so, however, which is why there is also debate about where such control points should be placed. We, too, still have to define these points internally.

Microsoft has also established Aether (short for “AI and Ethics in Engineering and Research”), a committee tasked with developing a master framework for implementing ethical AI. It is chaired by Harry Shum and Brad Smith, the heads of AI research and of the Legal department, respectively.

Google, in turn, has trained several thousand employees on fairness in machine learning. Facebook is developing its “Fairness Flow” software to check AI systems for diversity and bias, and is also seeking interaction and feedback from outside the company on how best to handle digital ethics.
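Fairness Flow itself is internal to Facebook, but the kind of test such a tool runs can be illustrated with a simple demographic-parity check: compare a model’s positive-prediction rate across groups and flag large gaps for human review. Everything below – the function names, the toy data, and the 0.2 threshold – is an illustrative assumption, not the actual software.

```python
# Illustrative sketch of a basic fairness check: measure how much a model's
# positive-prediction rate differs between demographic groups.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

preds  = [1, 0, 1, 1, 0, 0, 1, 0]             # toy model outputs
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)               # {'a': 0.75, 'b': 0.25}
print(f"gap = {gap:.2f}")  # gap = 0.50
if gap > 0.2:              # threshold is an arbitrary example value
    print("Potential bias: route this model to a human review.")
```

Demographic parity is only one of several competing fairness metrics; a production tool would report a whole battery of them and leave the judgment call to people.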

“The last mile will always remain human.”

Nearly all of the companies are thinking about establishing “safety ethics teams”. The underlying principle: decisions made by an AI must not be trusted blindly; humans must always have the final say and bear the responsibility for them. This is something we also intend to implement at Deutsche Telekom. As Claudia Nemat put it, “The last mile will always remain human.”

Another pleasant result – and one I didn’t expect – is that the others also want to learn from us and see us as a counterpart on equal footing in Europe. Stanford University, for example, wants to intensify its efforts in the area of digital ethics and asked us for support in preparing dilemma cases for testing smart technologies.

And to keep public attention focused on AI together, we are striving to become a member of the “Partnership on AI” and, if the organization thinks we are up to it, to establish a European arm. I can hardly wait.


Manuela Mackert, Thought leader in Digital Ethics & Chief Compliance Officer

The author contributed to this article in her personal capacity. The views expressed are her own and do not necessarily represent the views of TEİD.