Artificial Intelligence in economic decision making: how to assure a trust?

Author(s): Sylwester Bejger, Stephan Elster
Subject(s): Business Economy / Management, Ethics / Practical Philosophy, ICT Information and Communications Technologies
Published by: Wydawnictwo Naukowe Uniwersytetu Mikołaja Kopernika
Keywords: AI regulations; AI law; explainability of AI; local interpretability

Summary/Abstract: Motivation: The decisions made by modern ‘black box’ artificial intelligence models are not understandable, and people therefore do not trust them. This limits the potential of artificial intelligence in practical use. Aim: This text surveys initiatives in different countries aimed at making AI, especially black-box AI, transparent and trustworthy, and the regulations that have been implemented or are under discussion. We also show how a commonly used machine-learning development process can be extended to meet requirements such as those of the Ethics Guidelines for Trustworthy AI of the European Union's High-Level Expert Group. We support our discussion with a proposal of empirical tools providing interpretability. Results: The full potential of AI, and of products using AI, can be realised only if the decisions of AI models are transparent and trustworthy. This requires regulations that are followed over the whole life cycle of AI models, algorithms and the products built on them, as well as understandability or explainability of the decisions these models and algorithms make. Initiatives have started at every stakeholder level: internationally (e.g. the European Union), nationally (e.g. the USA and China) and at the company level.
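
The abstract's reference to "empirical tools providing interpretability" and the keyword "local interpretability" point to per-prediction explanation methods; the following minimal Python sketch illustrates the idea with the SHAP library, which is an assumption here — the record does not name a specific tool, and the dataset and random-forest model are illustrative stand-ins only.

    # Illustrative local-interpretability sketch (assumed tooling: SHAP;
    # the article record does not prescribe a specific library or dataset).
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Stand-in tabular data and an opaque ensemble ('black box') model.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # SHAP values attribute the model's prediction for a single observation
    # to individual features, which is what 'local interpretability' means:
    # the decision for this one case becomes inspectable, even if the model
    # as a whole remains opaque.
    explainer = shap.TreeExplainer(model)
    local_attribution = explainer.shap_values(X.iloc[:1])
    print(local_attribution)

Such per-case attributions are one way a development process could document individual model decisions for audit or regulatory purposes, in line with the transparency requirements discussed in the article.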

  • Issue Year: 19/2020
  • Issue No: 3
  • Page Range: 411-434
  • Page Count: 24
  • Language: English