Privacy safeguards urged for AI technology

An employee introduces 5G-enabled artificial intelligence at the Guizhou Big Data Exhibition Center in Guiyang, Guizhou province, May 26, 2022. (PHOTO / XINHUA)

Because artificial intelligence technology requires vast quantities of personal information, its developers should take stronger measures to fulfill their privacy protection responsibilities, experts and officials said on Tuesday.

"Development of AI technologies needs more personal information for a more accurate user portrait, thus offering us better services," said Shan Shiguang, an expert from the Institute of Computing Technology affiliated to the Chinese Academy of Sciences.

"However, it could also mean greater exposure of privacy," said Shan during a forum about AI and privacy protection held in Hefei, capital of Anhui province, as a part of the 2022 Cybersecurity Week.

"So AI companies should take more responsibility for privacy protection," said Hu Xiao, director of the cyberspace data administration bureau at the Cyberspace Administration of China.

Taking the increasingly adopted face identification systems as an example, Shan said such a system may have learned from billions of photos of people, many of them collected from social media, enabling it to recognize a person ever more quickly and precisely.

READ MORE: When AI feels at home 

Meanwhile, machine learning can also counter such identification through adversarial algorithms, he said.

"You may only need to add some slight but key interference information, known as noise, to a photo and make it unbreakable to AI systems, though it still seems normal to the naked eye," said Shan, adding that such a practice is anything but easy.

Shan also suggested that AI developers encrypt key data, keeping it usable but invisible even to the developers themselves.
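The article does not say which encryption approach Shan has in mind; "usable but invisible" is often achieved with privacy-preserving computation. The sketch below illustrates one such technique, additive secret sharing, in which values are split into random shares that can be computed on without ever being revealed. All names and numbers here are hypothetical.

```python
# Illustrative sketch of "usable but invisible" data via additive secret
# sharing (one common privacy-preserving technique; not necessarily the
# method Shan refers to).
import random

PRIME = 2**61 - 1  # arithmetic is done modulo a large prime

def split(value, n_shares=3):
    """Split `value` into shares that individually reveal nothing about it."""
    shares = [random.randrange(PRIME) for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def add_shared(shares_a, shares_b):
    """Each holder adds its own shares; no one ever sees the raw values."""
    return [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]

def reveal(shares):
    """Only when all shares are combined does the result become visible."""
    return sum(shares) % PRIME

if __name__ == "__main__":
    age_a, age_b = 34, 29                      # two sensitive values
    shared_sum = add_shared(split(age_a), split(age_b))
    print(reveal(shared_sum))                  # 63, computed without exposing 34 or 29
```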

Actions taken by authorities have begun to curb the illegal collection of personal information, according to Hu. He said central authorities will enhance privacy protection by adopting further legislation and increasing penalties for legal violations.

ALSO READ: AI companionship offers a new option in country 

Wu Xiaoru, CEO of iFLYTEK, an AI company based in Hefei, said the company has established a dedicated committee to supervise how its development teams and partners handle personal information.

"Once any one of them is found to be conducting actions that infringe on our principles, we have the right and ability to shut down their systems via remote control," said Wu, whose company co-hosted the Tuesday forum.