South Korea’s ‘world-first’ AI laws face pushback amid bid to become leading tech power
The laws have been criticised by tech startups, which say they go too far, and civil society groups, which say they don’t go far enough
South Korea has embarked on the regulation of AI, launching what has been billed as the most comprehensive set of laws anywhere in the world and one that could prove a model for other countries. But the new legislation has already encountered pushback.
The laws, which will force companies to label AI-generated content, have been criticised by local tech startups, which say they go too far, and civil society groups, which say they don’t go far enough.
Companies must add invisible digital watermarks to clearly artificial outputs such as cartoons or artwork. For realistic deepfakes, visible labels are required.
“High-impact AI”, including systems used for medical diagnosis, hiring and loan approvals, will require operators to conduct risk assessments and document how decisions are made. If a human makes the final decision the system may fall outside the category.
Extremely powerful AI models will require safety reports, but the threshold is set so high that government officials acknowledge no models worldwide currently meet it.