The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT

Author(s): Krzysztof Wach, Doanh Dương Công, Joanna Ejdys, Rūta Kazlauskaitė, Paweł Korzyński, Grzegorz Mazurek, Joanna Paliszkiewicz, Ewa Ziemba
Subject(s): Business Economy / Management, ICT Information and Communications Technologies, Business Ethics
Published by: Uniwersytet Ekonomiczny w Krakowie
Keywords: artificial intelligence (AI); generative artificial intelligence (GAI); ChatGPT; technology adoption; digital transformation; OpenAI; chatbots; technostress

Summary/Abstract: Objective: The objective of the article is to provide a comprehensive identification and understanding of the challenges and opportunities associated with the use of generative artificial intelligence (GAI) in business. The study sought to develop a conceptual framework that gathers the negative aspects of GAI development in management and economics, with a focus on ChatGPT.

Research Design & Methods: The study employed a narrative and critical literature review and developed a conceptual framework based on prior literature. We followed a line of deductive reasoning in formulating the theoretical framework so that the study's overall structure remains rational and productive. The article should therefore be viewed as a conceptual article that highlights the controversies and threats of GAI in management and economics, with ChatGPT as a case study.

Findings: Based on a deep and extensive query of the academic literature as well as the professional press and Internet portals, we identified various controversies, threats, defects, and disadvantages of GAI, in particular ChatGPT. We then grouped the identified threats into clusters to summarize the seven main threats we see. In our opinion, they are as follows: (i) no regulation of the AI market and an urgent need for regulation; (ii) poor quality, lack of quality control, disinformation, deepfake content, and algorithmic bias; (iii) automation-spurred job losses; (iv) personal data violation, social surveillance, and privacy violation; (v) social manipulation and the weakening of ethics and goodwill; (vi) widening socio-economic inequalities; and (vii) AI technostress.

Implications & Recommendations: It is important to regulate the AI/GAI market. Advocating for regulation of the AI market is crucial to ensure a level playing field, promote fair competition, protect intellectual property rights and privacy, and prevent potential geopolitical risks. The changing job market requires workers to continuously acquire new (digital) skills through education and retraining; as the training of AI systems becomes a prominent job category, it is important to adapt and take advantage of new opportunities. To mitigate the risks of personal data violation, social surveillance, and privacy violation, GAI developers must prioritize ethical considerations and work to develop systems that protect user privacy and security. To counter social manipulation and the weakening of ethics and goodwill, it is important to implement responsible AI practices and ethical guidelines: transparency in data usage, bias mitigation techniques, and monitoring of generated content for harmful or misleading information.

Contribution & Value Added: By drawing attention to the controversies and hazards associated with GAI and ChatGPT, this article helps to highlight the significance of resolving the ethical and legal considerations that arise from the use of these technologies.

  • Issue Year: 11/2023
  • Issue No: 2
  • Page Range: 7-30
  • Page Count: 24
  • Language: English