Why is it in the news?
- The United Kingdom (U.K.) recently hosted the first Artificial Intelligence (AI) Safety Summit. It took place at Bletchley Park, a renowned site that housed codebreakers during the Second World War.
- The summit’s main resolution was to establish a shared understanding of the opportunities and risks of frontier AI. “Frontier AI” refers to highly capable generative AI models that could pose risks to public safety.
- 28 countries including the US, China, Japan, UK, France, and India, plus the EU, agreed on a declaration addressing AI risks.
- South Korea will host a virtual AI summit in six months, and France will host the next in-person summit in a year.
The Bletchley Declaration
- A “world-first” accord to address AI safety concerns.
- Focuses especially on advanced AI models from major entities like OpenAI, Google, and Microsoft.
- Recognizes risks from misuse or control issues, especially in cybersecurity, biotech, and disinformation.
- Emphasizes international cooperation to address risks.
What is Artificial Intelligence (AI)?
· A branch of computer science that creates machines capable of mimicking human intelligence.
· It enables machines to model, and in some tasks surpass, the capabilities of the human mind.
· Examples: self-driving cars and generative AI tools such as ChatGPT and Google’s Bard.
Need for Regulation
· AI tools and models can be opaque, even to experts.
· AI decisions can have inaccuracies, embedded biases, and produce discriminatory outcomes.
· Rise in AI-generated media, such as fabricated images, voice clones, and deepfakes, affecting political and social arenas.
· AI systems collect personal data, raising privacy concerns.
· Concerns about AI becoming uncontrollable or acting maliciously.
US Actions on AI
- US President Joe Biden issued an executive order on AI safety, focusing on standards and oversight for AI systems.
- It requires AI companies to share safety test results for new products with the federal government.
- It introduces rules for watermarking AI-generated content and standards for biotech firms using AI.
- Federal agencies are directed to implement these measures, setting best practices for the industry.
Global AI Regulatory Landscape
- The European Union (EU) has proposed an AI Act that regulates systems according to their risk and invasiveness.
- The UK has adopted a light-touch approach to foster AI innovation.
- The US takes a moderate stance, with Biden’s executive order laying the groundwork for AI regulation.
- China has its own AI regulations.
- Meanwhile, notable tech leaders, including Elon Musk and Steve Wozniak, have called for a six-month pause in AI development, citing the risks of rapid, unchecked AI advances.
India’s Stance on AI
- Union Minister of State for IT Rajeev Chandrasekhar emphasized AI safety, trust, and the need to prevent its weaponization via social media.
- Prime Minister Narendra Modi called for ethical AI tools and a global framework for AI governance.
- India needs to move towards a risk-based, user-harm-centric regulatory approach.
- Proposals have been made to establish a domestic regulatory authority for AI and to differentiate between types of online intermediaries.
- Concerns about AI include privacy, system bias, and intellectual property rights violations.
- Recently, OpenAI CEO Sam Altman called for an international AI regulatory body similar to the oversight regime for nuclear non-proliferation.