Photo Credit: VCG

ChatGPT Gold Rush: How AI Business (and Crime) Is Taking Shape in China

ChatGPT and Chinese alternatives like Baidu’s Ernie have entrepreneurs harnessing AI for profit, while lawmakers are reining in nascent AI crime

A disconcerting photo swept across Chinese social media in March this year, sparking fear and outrage among netizens. The image appeared to show a bizarre scene on the Guangzhou subway: a young woman riding naked in the carriage.

But the truth soon came out: The image was fake, created by a one-click “undressing software” that used artificial intelligence (AI) to remove the woman’s clothes. The original photo had been posted by a blogger named Zhizhi on social media app Xiaohongshu in July of the previous year, and showed Zhizhi posing in the carriage wearing shorts and a spaghetti-strap top. “Is AI crime coming?” asked one worried netizen in a comment under Zhizhi’s post on Xiaohongshu, in reference to the altered image.

As AI content creation tools grow more widespread in China, deepfake nudity, fake news, and gender-biased content are among the worrisome outcomes that have already become a reality.

Just as other countries have seen a boom in interest in AI since the release of the chatbot ChatGPT by US company OpenAI, there has been significant hype around such technologies in China too. ChatGPT doesn’t accept Chinese phone numbers for registration, but Chinese users have rushed to purchase foreign SIM cards in order to use the technology.

Even after Chinese authorities blocked access to the site hosting the AI chat model, a black market soon emerged on platforms like the second-hand shopping portal Xianyu, where users sell accounts or even individual queries for between 1 and 10 yuan and use VPNs to get around the restrictions.

Chinese tech giants have jumped on the AI chatbot bandwagon. On March 16, Baidu announced its own AI language model, Ernie Bot. The launch event, however, left observers unimpressed with Ernie’s pre-recorded, rather than live, answers to questions, and sent Baidu’s stock price plummeting. Alibaba revealed its own AI model, Tongyi Qianwen, on April 11.

Creating with AI

Chinese netizens have begun to harness the power of AI in innovative ways. A Chinese e-commerce blogger with 580,000 followers on Weibo revealed that he used AI to advertise clothes: “This year we don’t need to spend 40,000 to 50,000 yuan a day to hire photographers and models to shoot clothing...[AI-generated images] are totally enough,” he wrote on Weibo in March.

AI-generated models (mostly female) have also been proliferating on Xiaohongshu and e-commerce platforms. The AI models nearly always conform to mainstream Chinese beauty standards: slim waists, long legs, large breasts, pale white skin, and big eyes. On Xiaohongshu (a platform where the majority of users are young women), some have voiced their concern that AI models promote unhealthy body images and are inherently biased towards generating images for the male gaze, given they are trained on massive data sets that include all the biases of the internet.

More practically, some online shoppers worry that virtual models generated by AI will make it more difficult for people to judge the fit of outfits they want to buy. “Using real models allows you to see the way clothes look on a person, so what’s the point of using an AI model?” wrote one user on Xiaohongshu under a post that used an AI model.

Despite being difficult to access in China, ChatGPT has also caused controversy for helping to generate fake news reports. On February 16, a news article with the headline “Hangzhou to cancel traffic restrictions on March 1” quickly spread online, especially among residents of the Zhejiang capital who have long suffered from some of the most congested roads in the country. The article suggested Hangzhou would no longer restrict when vehicles can be on the road according to license plate number (at present, approximately half of all vehicles are allowed on the city’s roads each day).

As it turned out, the article was created by ChatGPT after prompting by a WeChat user who wanted to demonstrate the platform’s capabilities to members of a group chat. But ChatGPT’s fake article, which included details such as a date for when the new traffic policy would be implemented, was so convincing that some people in the group forwarded it to others, thinking it was a real press release.

The story spread, with a related hashtag on Weibo quickly gaining over 10 million reads. Authorities eventually responded to refute the news. Local police also made the WeChat user apologize for posting the piece, though no criminal charges were filed. The incident marked the first widely publicized case of ChatGPT-generated fake news in China, highlighting the tool’s capabilities before many Chinese users had even tried the technology.

AI vs. Authorities

The first case of AI misuse to lead to criminal prosecution came in late April. Police in Gansu province became aware of a fake news report about a deadly train accident, allegedly generated with ChatGPT by an individual surnamed Hong and then published on multiple accounts on Baidu’s self-publishing platform Baijiahao, racking up 15,000 views in a short time. Hong is being charged under China’s new laws and regulations governing AI technology and could face a prison sentence.

To mitigate such risks, the Cyberspace Administration of China moved quickly, releasing new draft measures to regulate AI products like ChatGPT on April 11. The draft Administrative Measures for Generative Artificial Intelligence Services prohibit AI models from producing “content containing obscene and pornographic information, false information, and content that may disturb the economic order and social order.” The measures also require providers of AI-generated content services to protect users’ personal information, to clearly label AI-generated content, and to require users to register with real-name identity information.

However, some Chinese legal scholars and experts have pointed to the difficulties service providers may face in enforcing such rules, since AI tools like ChatGPT are essentially “black boxes” that largely operate autonomously, without even their operators fully understanding what happens “inside.”

Other countries are facing similar struggles to regulate AI technology. Italy, for example, banned ChatGPT outright in March this year over privacy concerns related to user information. The EU has already proposed the European AI Act, which would restrict the use of AI in critical infrastructure, policing, education, and the judicial system.

But despite the new rules, the wide availability of AI software, not just ChatGPT, means incidents like the fake nude image on the Guangzhou subway will be difficult to prevent. The perpetrator who posted the doctored image of Zhizhi has yet to face any consequences, despite Zhizhi saying she would “defend her rights” in a post on March 9. Similar one-click stripping AI software, though technically illegal, is still available on the internet.

Ultimately, AI software still only does the bidding of human prompters: toxic or misleading AI-generated content is the result of an order from a person in front of a screen.


Wang Jiawei

Wang Jiawei is a contributing writer at The World of Chinese. He is deeply passionate about multimedia storytelling and sees the fate of ordinary people in grand narratives.
