- Governments and businesses need to work together to address the risks of AI.
- The UK is setting up the world’s first AI safety institute.
- Other countries are also taking steps to oversee AI tools.
AI’s capacity to process vast datasets, discern patterns, make informed choices, and adapt to new data is revolutionizing numerous industries, spanning healthcare, finance, transportation, and entertainment.
Prominent figures in the tech realm, including Tesla’s CEO Elon Musk, AI luminary Geoffrey Hinton, and Sam Altman, CEO of Microsoft-backed OpenAI, have previously raised concerns about AI’s potential to pose a threat to humanity.
They’ve urged governments to implement regulations governing the technology’s usage.
Nevertheless, rapid advances in AI, exemplified by OpenAI’s ChatGPT, are complicating governments’ efforts to agree on legislation governing the technology, according to Reuters.
Prime Minister Rishi Sunak emphasized the imperative for governments and businesses to confront the risks associated with AI directly.
He made this statement on October 26, in anticipation of the inaugural Global AI Safety Summit, scheduled to take place at Bletchley Park on November 1-2.
Sunak added Britain would set up the world’s first AI safety institute to “understand what each new model is capable of, exploring all the risks from social harms like bias and misinformation through to the most extreme risks”.
On Tuesday, October 10, the UK’s data regulatory authority announced that it had served Snap Inc’s Snapchat with an initial enforcement notice.
This notice pertained to concerns about the company’s apparent failure to adequately evaluate the privacy risks associated with its generative AI chatbot, especially in relation to its use by children.
Here are the most recent actions being taken by international governing bodies to oversee AI tools:
In September, Australia’s internet regulator announced its intention to require search engines to develop new codes aimed at preventing the dissemination of AI-generated child sexual abuse content and the production of deepfake versions of such material.
On October 12, China unveiled its proposed security guidelines for companies providing services powered by generative AI. These guidelines include the establishment of a blacklist of sources that are prohibited for training AI models.
In August, China had already introduced interim measures that mandated service providers to undergo security assessments and obtain approval before launching mass-market AI products.
On October 24, European legislators reached consensus on a pivotal aspect of forthcoming AI regulations, determining the criteria for designating AI systems as “high risk.”
This development marked significant progress toward finalizing the comprehensive AI Act, as confirmed by five individuals with knowledge of the situation.
The two co-rapporteurs said they expect a formal agreement to be reached in December.
Notably, European Commission President Ursula von der Leyen, on September 13, advocated for the establishment of a global panel tasked with evaluating the advantages and potential risks associated with AI.
In April, the privacy regulator of France announced that it had initiated an investigation in response to complaints concerning ChatGPT.
In May, G7 leaders collectively urged for the establishment and widespread acceptance of technical standards to ensure the trustworthiness of AI.
Italy plans to review AI platforms and enlist experts in the field; the country imposed a temporary ban on ChatGPT in March before reinstating the service in April.
Japan expects to introduce regulations by the end of 2023, aligning more closely with the US approach than with the stringent EU rules. The country’s privacy watchdog has cautioned OpenAI against collecting sensitive data without proper consent.
Poland’s Personal Data Protection Office initiated an investigation into OpenAI on September 21, based on a complaint alleging that ChatGPT violates EU data protection laws.
Spain launched a preliminary investigation in April into potential data breaches by ChatGPT.
The United Nations Security Council held its first formal discussion on AI in July, focusing on both military and non-military applications that could affect global peace and security. Secretary-General Antonio Guterres endorsed the idea of an AI watchdog and plans to establish a high-level AI advisory body.
The United States anticipates the unveiling of a long-awaited AI executive order on October 30, which would require federal assessments for advanced AI models. The US Congress held hearings on AI in September, with participation from tech leaders including Meta CEO Mark Zuckerberg and Elon Musk, where the need for government regulation of AI was widely acknowledged.
President Joe Biden secured voluntary commitments from companies like Adobe, IBM, and Nvidia to govern AI, including steps like watermarking AI-generated content. Additionally, a Washington DC district judge ruled in August that AI-generated art without human input cannot be copyrighted under US law, and the US Federal Trade Commission initiated an investigation into OpenAI in July for potential violations of consumer protection laws.