Well-designed policies alone cannot prevent social harm from new technologies. Watchdogs must also have tools to pave the road for digital accountability.
In May 2023, the leaders of the G7 nations called for “guardrails” to limit potential damage caused by artificial intelligence. Days later, the CEO of OpenAI, the company that developed ChatGPT, urged the U.S. Congress to pass safety regulations for AI models. In 2022 alone, nine AI-related U.S. federal laws and 21 state-level laws were passed. Since 2015, AI has been discussed with growing frequency in congressional committees: The term was mentioned 73 times in committee reports produced in 2021–2022 by the House and Senate. Meanwhile, the European Union is developing the Artificial Intelligence Act to minimize the threats that applications of machine learning might pose to privacy, security, and democratic values.
In June 2023, Volker Wissing, Germany’s Minister of Digital Affairs and Transport, and Judith Gerlach, State Minister of Digital Affairs of Bavaria, try out augmented reality glasses at the medical technology manufacturer Brainlab, after a conference of government officials responsible for digitization. Among the topics discussed at the conference were the regulation of artificial intelligence in Europe and Germany’s implementation of the European Union’s Digital Services Act.