China to create and implement national standard for large language models in move to regulate AI

Publish date: 2024-02-22

LLMs are deep-learning AI algorithms that can recognise, summarise, translate, predict and generate content using very large data sets.

China’s latest standardisation initiative reflects how local authorities have extolled AI’s potential to help drive economic growth and become a useful daily tool, while maintaining caution about its risks and asserting regulation of the technology.

The initiative also listed areas where China could benefit from AI, including daily office work, biopharmaceuticals, remote sensing and meteorology.

Since Microsoft-backed start-up OpenAI released ChatGPT in November, Chinese Big Tech firms have been rushing to develop challengers, with Beijing pinning its hopes on AI development to bolster industrial productivity and fuel post-pandemic growth in the world’s second-largest economy.


At the WAIC event, which concludes this Saturday, the China Academy of Information and Communications Technology (CAICT), another institute under the Ministry of Industry and Information Technology (MIIT), said it was also working to boost domestic LLM development and manage potential risks from the technology.

“[We will] promote systematic breakthroughs and original innovation in large [AI] models,” CAICT president Yu Xiaohui said in a presentation at the conference on Friday.

He said that it was important to accelerate development in the “verification and application of core areas such as algorithms and high-performance chips”, while strengthening research and coordination in “risk governance” to make large AI models “reliable tools for society’s development”.

Still, internet regulator the Cyberspace Administration of China (CAC) has yet to issue a licence for any generative AI product in the country, even as Big Tech firms like Baidu, Alibaba and iFlytek have rolled out ChatGPT-like services on a trial basis.

Generative AI describes algorithms that can be used to create new content, including audio, code, images, text, simulations and videos. Recent breakthroughs in the field have the potential to drastically change the way people approach content creation.

All generative AI algorithms and products must go through security testing and review by the CAC before they can be made available to the public.
