January 20, 2025
by Soham Jethani, Pankhuri Malhotra, Hena Ayisha and Tanvi Nimje
in Articles
Introduction
South Korea’s National Assembly passed the Basic Law on the Development of Artificial Intelligence and Creation of Trust Base (“AI Act”) in December 2024 as a single, unified framework governing the regulation of Artificial Intelligence (“AI”). The Ministry of Science and ICT (“MSIT”) has been working to make South Korea a global leader in AI, and the AI Act lays the foundation for a regulatory framework that can sustain South Korea’s momentum in the AI industry.[1] The stated purpose of the new law is to promote the sound development of AI by creating a trustworthy foundation for its use, protecting the rights and interests of the people while enhancing national competitiveness in AI.[2]
Salient features of the AI Act
The following provisions of the AI Act are noteworthy:
The AI Act lays down certain key definitions, namely those of AI, high-impact AI, generative AI, AI ethics and AI business operators.[3]
High-impact AI: AI business operators must assess whether their AI qualifies as a high-impact AI before offering it to customers. Operators offering high-impact AI are required to notify customers and take additional measures to ensure its trustworthiness, including risk management mechanisms, user protection protocols, and adequate human oversight.[4]
Generative AI: Businesses offering generative AI must explicitly notify users of its use and clearly label AI-generated outputs, especially when the output mimics real-world sounds, images, or videos. For works involving artistic or creative expression, the obligation may be fulfilled in a manner that does not interfere with the appreciation of the work.[5]
Computational thresholds: Operators of AI systems that exceed prescribed computational thresholds are required to implement risk management mechanisms throughout the system’s lifecycle and to ensure continuous monitoring of, and responsiveness to, AI safety incidents.[6]
As framework legislation, the AI Act focuses heavily on establishing an organisational system and supporting measures to foster AI development. The AI Act also requires the MSIT to undertake certain activities to encourage AI development, including the establishment of AI data centres and other initiatives aimed at supporting the adoption of AI technologies and research and development.[7]
Businesses acting in violation of the AI Act face fines of up to 30 million Korean Won (approximately USD 20,500).
Comparison with the EU AI Act
After the European Union introduced the EU Artificial Intelligence Act (“EU AI Act”) in March 2024, South Korea has become the second jurisdiction to adopt a comprehensive framework approach to the regulation of AI. Several elements of the Korean AI Act mirror those of the EU AI Act. The EU AI Act takes a risk-based approach to AI regulation: where the risk associated with certain applications of AI is judged so high that it cannot be outweighed by any corresponding benefit, those use cases are explicitly banned.
The Korean AI Act also takes a risk-based approach to identifying high-impact systems, defining them by reference to their impact and end-use cases to determine how they are treated. Where the EU AI Act classifies an AI system as “high risk” if it poses a significant risk to human safety, health, or fundamental rights,[8] the Korean AI Act follows a similar definition for “high-impact AI.”[9] In both cases, additional obligations and safeguards are imposed through the law to ensure that this risk is adequately mitigated. However, in its current form, the Korean AI Act does not outright ban any AI system, no matter how high its assessed risk level.
The Korean AI Act’s transparency and disclosure obligations for high-impact and generative AI products similarly mirror the EU AI Act’s focus on deepfakes and other AI-generated content that may be used in a manipulative or deceptive manner.[10] The Korean AI Act also imposes risk management and reporting obligations on AI systems exceeding certain set computational thresholds, which parallels the EU AI Act’s treatment of general-purpose AI models with systemic risk. The subordinate legislation will provide more context on the rigour with which this is implemented, considering that the EU AI Act’s systemic-risk threshold greatly exceeds the ordinary processing capabilities of any AI on the market.[11]
Conclusion
The AI Act is set to take effect in January 2026. While many specifics of the framework remain to be clarified through executive decrees and notifications from the MSIT, the framework obligates the MSIT and other authorities to put substantive rules in place soon. By aligning with the EU and setting a common trajectory for AI regulation, South Korea is paving the way for other leading jurisdictions to introduce frameworks addressing the complexities of AI governance.
***
TLP Advisors is a dynamic and forward-thinking consulting, strategy and law firm specialising in providing cutting-edge solutions to our diverse clientele. With our roots deeply embedded in financial services, gaming, web3, and emerging tech, we offer unparalleled knowledge and support tailored to these rapidly evolving sectors’ unique challenges and opportunities.
TLP Advisors has consistently been the firm of choice for L1 chains, DeFi protocols, gaming companies, fintech and payment companies, foundations, funds, and investors. We have built a reputation for excellence through frequent collaborations with regulators, funds, and technology incubators. Our deep understanding of the intricate regulatory landscapes and industry dynamics allows us to provide strategic guidance and innovative solutions that empower our clients to navigate complex challenges and seize emerging opportunities.
***
[1] Blueprint for Korea’s Leap to Become One of the Top Three Global AI Powerhouse (AI G3), https://www.msit.go.kr/eng/bbs/view.do?sCode=eng&mId=4&bbsSeqNo=42&nttSeqNo=1037
[2] Basic Law on the Development of Artificial Intelligence and Creation of Trust Base, https://likms.assembly.go.kr/bill/billDetail.do?billId=PRC_R2V4H1W1T2K5M1O6E4Q9T0V7Q9S0U0
[3] Article 2, Basic Law on the Development of Artificial Intelligence and Creation of Trust Base.
[4] Article 33, Basic Law on the Development of Artificial Intelligence and Creation of Trust Base.
[5] Article 31, Basic Law on the Development of Artificial Intelligence and Creation of Trust Base.
[6] Article 32, Basic Law on the Development of Artificial Intelligence and Creation of Trust Base.
[7] Article 25, Basic Law on the Development of Artificial Intelligence and Creation of Trust Base.
[8] Article 6, Regulation (EU) 2024/1689 of the European Parliament and of the Council.
[9] Article 2(4), Basic Law on the Development of Artificial Intelligence and Creation of Trust Base.
[10] Article 50, Regulation (EU) 2024/1689 of the European Parliament and of the Council.
[11] The TOP500 list, which ranks the 500 most powerful commercially available computer systems, names Hewlett Packard Enterprise’s El Capitan as the world’s fastest supercomputer. Its speed is measured in exaFLOPs, i.e. on the order of 10^18 floating point operations (“FLOPs”) per second. Available here: https://top500.org/. Article 51 of the EU AI Act sets the threshold for systemic risk at a cumulative training compute of 10^25 FLOPs, which remains far beyond the number of operations any available supercomputer can perform in a second.
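To put these orders of magnitude side by side, the following back-of-the-envelope calculation (our own illustration, assuming a sustained rate of exactly 10^18 FLOPs per second; it does not appear in the Act or the TOP500 source) shows how long such a machine would need to run continuously to accumulate the 10^25 FLOPs contemplated by Article 51:

\[
\frac{10^{25}\ \text{FLOPs (Article 51 threshold)}}{10^{18}\ \text{FLOPs per second (approximate speed of El Capitan)}} = 10^{7}\ \text{seconds} \approx 116\ \text{days}
\]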
© 2024 TLP Advisors