The IAG addressed questions of ethical and legal responsibility that machine learning and artificial intelligence raise in the digital era.
Computer systems will increasingly assist us in both private and professional contexts, or will entirely take over activities that have until now been carried out by people. The use of “intelligent” systems can open up great opportunities, since the strengths of such systems lie where many humans have weak points, for example in recognising patterns and correlations in large datasets. Whether it be diagnosis in the field of health, optimising resources in the energy sector or improvements in the education system – the application of artificial intelligence has great potential. But what if some people are disadvantaged or physically harmed? Who is responsible for such actions, which are not directly carried out by humans? Those who selected and entered the data, those who programmed the algorithm, or those who failed to monitor the system adequately?
Must we therefore rethink “responsibility”, or does our existing understanding of it suffice for these new technical possibilities? What consequences must we draw from current developments? The goal of the Interdisciplinary Research Group (IAG) was to describe in detail the challenges for responsibility, in both an ethical and a legal sense, that arise from automation, machine learning and artificial intelligence in the digital era. These findings were published in the series #VerantwortungKI – Künstliche Intelligenz und gesellschaftliche Folgen and, above all, informed the IAG’s recommendations for the responsible and competent use of AI.