Regulating the Risks Associated with Malicious Use of Artificial Intelligence in the US, EU and China


Abstract

The article analyses the main mechanisms for regulating the risks posed by the malicious use of artificial intelligence (MUAI) in the USA, the EU and China. The relevance of the MUAI problem is confirmed by extensive evidence of artificial intelligence (AI) technologies being employed by antisocial actors. The authors' goal is to identify the specifics of MUAI risk regulation in the USA, the EU and China, three jurisdictions selected for their innovative regulatory experience. The study finds that in the USA, counteraction to MUAI has not yet taken the form of systemic decisions at the federal level; rather, the growing risks of MUAI are taken into account within the general regulation of AI and of the safety of its use. The EU has adopted the AI Act, the world's first comprehensive law on AI, which nevertheless pays little attention to MUAI, while the main proposals for countering and regulating these risks come from law enforcement agencies such as Europol. In China, MUAI risk regulation is the most centralized of the three and is becoming the subject of strategic documents and legislative acts. The authors apply a systemic approach in examining the various types of MUAI threats and in formulating the research conclusions.

About the authors

D. Bazarkina

Institute of Europe RAS; Saint Petersburg State University

Email: bazarkina-icspsc@yandex.ru
Doctor of Sciences (Politics); Leading Researcher, Department of European Integration Research, Institute of Europe RAS, Moscow, Russia; School of International Relations, Saint Petersburg State University, Saint Petersburg, Russia

E. Pashentsev

Diplomatic Academy of the Ministry of Foreign Affairs of Russia; Saint Petersburg State University

Email: icspsc@mail.ru
Doctor of Sciences (History), Professor; Chief Researcher, Institute of Contemporary International Studies, Diplomatic Academy of the Ministry of Foreign Affairs of Russia, Moscow, Russia; Professor, Faculty of International Relations, Saint Petersburg State University, Saint Petersburg, Russia

E. Mikhalevich

Gazprom Neft PJSC; Saint Petersburg State University

Email: ekaterina_mikhalevich@mail.ru
Chief Specialist, Organizational Development Department, Gazprom Neft PJSC; Research Engineer, School of International Relations, Saint Petersburg State University, Saint Petersburg, Russia

Bibliography

  1. Bazarkina D.Yu. (2023) The digital decade of the European Union: goals and interim results. Analytical Papers of the Institute of Europe RAS. Issue 4. P. 89-98. DOI: https://doi.org/10.15211/analytics43420238998
  2. Potemkina O.Yu. (2019) Better than humans? EU policy in the field of artificial intelligence. Scientific and Analytical Herald of the Institute of Europe RAS. No. 5. P. 16-21. DOI: https://doi.org/10.15211/vestnikieran520191621
  3. Maslova E.A., Sorokova E.D. (2022) Dialectics of ethics and law in the regulation of artificial intelligence technology: the EU experience. Sovremennaya Evropa [Contemporary Europe]. No. 5(112). P. 19-33. DOI: https://doi.org/10.31857/S0201708322050023
  4. Sorokova E.D. (2023) The European Union's response to global technological risks: practice and effectiveness. Cand. Sci. (Politics) dissertation. MGIMO, Moscow. 280 p.
  5. Antunes H.S. (2024) European AI Regulation Perspectives and Trends. Legal Aspects of Autonomous Systems. ICASL 2022. Data Science, Machine Intelligence, and Law. Ed. by D. Moura Vicente, R. Soares Pereira, A. Alves Leal. Vol. 4. Springer, Cham, Switzerland. P. 53-65. DOI: https://doi.org/10.1007/978-3-031-47946-5_4
  6. Bazarkina D. (2023) Current and Future Threats of the Malicious Use of Artificial Intelligence by Terrorists: Psychological Aspects. The Palgrave Handbook of Malicious Use of AI and Psychological Security. Ed. by E. Pashentsev. Palgrave Macmillan, Cham, Switzerland. P. 251-272. DOI: https://doi.org/10.1007/978-3-031-22552-9_10
  7. Cao L. (2023) AI and data science for smart emergency, crisis and disaster resilience. International Journal of Data Science and Analytics. No. 15. P. 231-246. DOI: https://doi.org/10.1007/s41060-023-00393-w
  8. Dixon R.B.L. (2023) A principled governance for emerging AI regimes: lessons from China, the European Union, and the United States. AI Ethics. No. 3. P. 793-810. DOI: https://doi.org/10.1007/s43681-022-00205-0
  9. Doss C., Mondschein J., Shu D., Wolfson T., Kopecky D., Fitton-Kane V.A., Bush L., Tucker C. (2023) Deepfakes and scientific knowledge dissemination. Scientific Reports. No. 13. 13429. DOI: https://doi.org/10.1038/s41598-023-39944-3
  10. Hemberg E., Zhang L., O’Reilly U.M. (2020) Exploring Adversarial Artificial Intelligence for Autonomous Adaptive Cyber Defense. Adaptive Autonomous Secure Cyber Systems. Ed. by S. Jajodia, G. Cybenko, V. Subrahmanian, V. Swarup, C. Wang, M. Wellman. Springer, Cham, Switzerland. P. 41-61. DOI: https://doi.org/10.1007/978-3-030-33432-1_3
  11. Laux J. (2023) Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act. AI & Society. DOI: https://doi.org/10.1007/s00146-023-01777-z
  12. Lukacovic M.N., Sellnow-Richmond D.D. (2023) The COVID-19 Pandemic and the Rise of Malicious Use of AI Threats to National and International Psychological Security. The Palgrave Handbook of Malicious Use of AI and Psychological Security. Ed. by E. Pashentsev. Palgrave Macmillan, Cham, Switzerland. P. 175-201. DOI: https://doi.org/10.1007/978-3-031-22552-9_7
  13. Malatji M., Tolah A. (2024) Artificial intelligence (AI) cybersecurity dimensions: a comprehensive framework for understanding adversarial and offensive AI. AI Ethics. DOI: https://doi.org/10.1007/s43681-024-00427-4
  14. Orwat C., Bareis J., Folberth A., Jahnel J., Wadephul C. (2024) Normative Challenges of Risk Regulation of Artificial Intelligence. Nanoethics. Vol. 18. Article 11. DOI: https://doi.org/10.1007/s11569-024-00454-9
  15. Pashentsev E. (2023) General Content and Possible Threat Classifications of the Malicious Use of Artificial Intelligence to Psychological Security. The Palgrave Handbook of Malicious Use of AI and Psychological Security. Ed. by E. Pashentsev. Palgrave Macmillan, Cham, Switzerland. P. 23-46. DOI: https://doi.org/10.1007/978-3-031-22552-9_2
  16. Pauwels E. (2023) How to Protect Biotechnology and Biosecurity from Adversarial AI Attacks? A Global Governance Perspective. Cyberbiosecurity. Ed. by D. Greenbaum. Springer, Cham, Switzerland. P. 173-184. DOI: https://doi.org/10.1007/978-3-031-26034-6_11
  17. Thomann P.E. (2023) Geopolitical Competition and the Challenges for the European Union of Countering the Malicious Use of Artificial Intelligence. The Palgrave Handbook of Malicious Use of AI and Psychological Security. Ed. by E. Pashentsev. Palgrave Macmillan, Cham, Switzerland. P. 453-486. DOI: https://doi.org/10.1007/978-3-031-22552-9_17
  18. Vassilev A., Oprea A., Fordyce A., Anderson H. (2024) Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. NIST Artificial Intelligence (AI) Report, NIST AI 100-2e2023. National Institute of Standards and Technology, Gaithersburg, USA. 99 p. DOI: https://doi.org/10.6028/NIST.AI.100-2e2023
  19. Wells D. (2024) The Next Paradigm-Shattering Threat? Right-Sizing the Potential Impacts of Generative AI on Terrorism. Middle East Institute, Washington, USA. 16 p.


Copyright © Russian Academy of Sciences, 2024