South Korea Enacts World's First Comprehensive AI Safety Law

South Korea has formally enacted the world's first comprehensive law governing the safe use of artificial intelligence. The AI Basic Act requires developers and companies to take responsibility for addressing deepfakes and misinformation generated by AI models. It introduces the concept of "high-risk AI" for areas like employment and finance, mandating user notifications and safety measures. The law also requires AI-generated content to carry watermarks and imposes obligations, including potential fines and the appointment of local representatives, on major global AI service providers.

Key Points

  • First comprehensive national AI law enacted
  • Targets deepfakes and misinformation
  • Defines "high-risk AI" for key sectors
  • Mandates AI content watermarks and user alerts
  • Imposes fines and requires local reps for global firms

Seoul, Jan 22

South Korea on Thursday formally enacted a comprehensive law governing the safe use of artificial intelligence models, becoming the first country in the world to do so and establishing a regulatory framework against misinformation and other harmful effects of the emerging technology.

The Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness, or the AI Basic Act, officially took effect Thursday, according to the science ministry, reports Yonhap news agency.

It marked the first time any government has adopted comprehensive rules governing the use of AI.

The act centres on requiring companies and AI developers to take greater responsibility for addressing deepfake content and misinformation that can be generated by AI models, granting the government the authority to impose fines or launch probes into violations.

In detail, the act introduces the concept of "high-risk AI," referring to AI systems whose output can significantly affect users' daily lives or safety, including applications in the employment process, loan reviews and medical advice.

Entities harnessing such high-risk AI models are required to inform users that their services are based on AI and are responsible for ensuring safety. Content generated by AI models is required to carry watermarks indicating its AI-generated nature.

"Applying watermarks to AI-generated content is the minimum safeguard to prevent side effects from the abuse of AI technology, such as deepfake content," a ministry official said.

Global companies offering AI services in South Korea are required to designate a local representative if they meet any of the following criteria: global annual revenue of 1 trillion won ($681 million) or more, domestic sales of 10 billion won or higher, or at least 1 million daily users in the country.

OpenAI and Google currently meet these criteria.
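
The "any of the following" test is a simple disjunction of three thresholds. As a purely illustrative sketch (the function and parameter names are hypothetical; the figures are those reported above):

```python
# Thresholds as reported: 1 trillion won global annual revenue,
# 10 billion won domestic sales, or 1 million daily users in South Korea.
def must_designate_local_rep(global_revenue_krw: int,
                             domestic_sales_krw: int,
                             daily_users_in_korea: int) -> bool:
    # Meeting any single threshold is enough to trigger the obligation.
    return (global_revenue_krw >= 1_000_000_000_000
            or domestic_sales_krw >= 10_000_000_000
            or daily_users_in_korea >= 1_000_000)
```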

Violations of the act may be subject to fines of up to 30 million won, and the government plans to apply a one-year grace period before imposing penalties to help the private sector adjust to the new rules.

The act also includes measures for the government to promote the AI industry, with the science minister required to present a policy blueprint every three years.

- IANS

Reader Comments

Arjun K: Interesting. First-mover advantage in regulation. But I hope our policymakers don't just copy-paste. Our digital ecosystem is different. We need rules that protect users but also don't stifle our own startups. The 'high-risk AI' definition is smart.

Rohit P: A $681 million revenue threshold for foreign companies to appoint a local rep? That's quite high. It means only the biggest players like OpenAI and Google get regulated directly. What about the smaller foreign AI tools? The law seems to have loopholes.

Sarah B: The one-year grace period is a sensible approach. It gives companies time to adapt. Hope India's upcoming Digital India Act learns from this. Regulation is needed, but it shouldn't come as a sudden shock to the industry.

Vikram M: Focusing on employment, loans, and medical advice as 'high-risk' is spot on. AI bias in these areas can ruin lives. In India, we need even stricter rules for financial and job-related AI to prevent discrimination based on caste, region, or background.

Meera T: A good blueprint, but enforcement is key. We have good laws on paper too sometimes, but implementation is weak. Who will check millions of AI-generated posts for watermarks? The fine of 30 million won (~18 lakh INR) might be peanuts for big tech.
