Growing Pains: About ChatGPT and Other Chatbots
Regulation and supervision based on laws and moral principles will push generative artificial intelligence (GAI) in the right direction.
The Cyberspace Administration of China issued a draft policy on the management of generative artificial intelligence (GAI) services on April 11, soliciting suggestions and advice from the public. The policy aims to promote the sound development of GAI technology. A revised version of the document is expected to be formally rolled out as early as the end of this year.
GAI is a broad label for any type of AI that can create new text, images, video, audio, computer code or synthetic data.
ChatGPT, all the rage nowadays, falls into this category. In March, Baidu, operator of China's largest online search engine, launched Ernie Bot, also a GAI tool.
As the draft policy points out, China supports the application of, as well as international cooperation on, innovative technologies such as AI. In the six months or so since ChatGPT first went live in November 2022, the Italian Government, along with organizations and businesses in the United States, Germany and Japan, have imposed bans or limitations on its use. But rather than banning it, China wishes for GAI to grow and evolve. The draft policy puts forward suggestions on the multidimensional supervision of the technology, lest it be abused. The policy's ultimate goal is for this technology to develop in a healthy way and to benefit human society.
Throughout history, new scientific and technological advances have arrived at irregular intervals, bringing society inventions like electricity, the steam engine and the Internet. Generally speaking, most of the innovations we almost take for granted today were the topic of heated debate when the world first met them. Even the Internet, an integral part of modern life, used to be called into question, and sometimes still is.
Like other newly invented technologies in modern sci-tech progress, GAI is a mixed blessing. Recent months have seen reports of a system bug exposing users' payment data on ChatGPT. Then there was the platform's glitch that allowed some users to see the titles of other users' conversations. And then there were those who criticized the chatbot for giving inaccurate answers and showing race- or gender-based bias.
But banning or limiting something out of fear isn’t the right way to go. A full-fledged clampdown on AI novelties will hamper the technology’s overall development and perhaps even affect the future of science and technology.
Regulation and supervision based on laws and moral principles will push GAI in the right direction. When it is properly regulated, human beings can use its merits to their advantage while avoiding its drawbacks. This is the one way forward that will benefit all parties involved.
In the short time since ChatGPT's launch, China's efforts to regulate GAI convey its support for the new technology. People are now flocking to GAI, but many lack the professional knowledge required to determine whether what ChatGPT or its fellow chatbots tell them is correct. And if they are misled, they might do things that harm society at large. It's against this backdrop that China decided to produce the draft policy; it's an effort to safeguard online security.
The popularity of GAI reveals the current potential and future value of this technology. We must seriously consider its impact and find effective ways to strike a balance between sci-tech progress and human security and development. Growing pains are an inevitable part of the process.