What may AI do in companies?

Zurich | Software based on AI is likely to see increasing use in companies in the coming years. However, there is still a long way to go before it achieves widespread acceptance, both in general and in the area of in-company training. According to the 2019 eLearning BENCHMARKING study, only 2.7% of the companies surveyed are already actively using AI applications in their training and continuing education. At the same time, four out of ten respondents to a World Economic Forum survey are concerned about the use of AI. If AI software is to become more widely adopted, ethical guidelines will play a decisive role. Anna Jobin and colleagues at ETH Zurich examined such guidelines developed by states and companies with regard to AI and identified 84 publicly available ethical guidelines worldwide.

In its analysis of the 84 identified guidelines, the ETH Zurich team found hardly any overlap but considerable divergence that could not be reduced to a common denominator. Some individual guidelines even contained contradictions, such as the goal of preventing discrimination by means of balanced data sets while, at the same time, requiring that users retain control over their data.

The study's authors identified the demand for transparency in the use of data as the central ethical concern regarding AI. Transparency in the use of AI is intended to prevent misuse and undesirable side effects, as well as to clarify questions of liability.

At the same time, the study's authors note that the ethical evaluation of AI reveals a global imbalance: the discussion is dominated by industrialized countries, while Africa, South and Central America, and Central Asia are underrepresented.
