Saturday, July 27, 2019

[EJ Insight] Why the China social credit system must be resisted

On July 5, Guangdong province released a three-year action plan (2018-2020) for the Guangdong-Hong Kong-Macao Greater Bay Area project, in which it said it will incorporate the Social Credit System -- a national reputation system being developed by the Chinese government.

The announcement immediately prompted privacy concerns among citizens. Later, a top Hong Kong government official gave an assurance that China's social credit system will not be implemented in Hong Kong.

Since Hong Kong is part of the Greater Bay Area, it is actually uncertain how such a promise can be kept, and for how long, especially as the big-data system for monitoring and shaping the behavior of businesses and citizens is due to be fully implemented on the mainland from next year.

It is understandable that Hong Kong citizens have fears of their personal information, behavior and preferences being collected and secretly sent back to the Chinese government.

Now, why do Hongkongers oppose such a social credit system? Unlike the traditional credit system for personal finance, China's new system will serve as a means to regulate citizens' behavior through mass scoring and ranking.

According to the outline of the plan for the development of China's Social Credit System, a trial system has been in place since as early as 2006, under which each Chinese citizen is given a social credit score measuring their sincerity, honesty, and integrity.

Chinese tech companies were obligated to help collect citizens' banking transaction records as well as various online interactions and behaviors.

By the end of last year, a unified platform called Credit China was set up, with Baidu providing the technical support, linking 44 central government agencies, 32 provincial systems, and private databases from 12 enterprises including Alibaba Group-affiliated Ant Financial's Sesame Credit and Tencent’s WeChat.

More than 13 million names have been collected and blacklisted, the authorities said.

Misbehavior or misconduct that results in social credit score deductions includes violating court orders, not following traffic rules, having outstanding debts, defaulting on tax payments, committing fraud, and spreading rumors online.

The score would be a major determinant in many aspects of life, for instance, in deciding whether an individual would be permitted to travel overseas, receive a promotion at work, buy or rent a flat, or gain access to public schools for their children.

The system has been a useful tool for the Chinese government but has triggered concerns over human rights and free speech. In Suining county of Jiangsu province, for example, an individual who besieges or pays unwelcome visits to government departments, enterprises, or construction sites will see his social credit score deducted.

People will also have to be careful about their online interactions and what text messages they send over their mobile phones.

What's worse, the system lacks transparency in its scoring and ranking, and there is no channel for complaint.

An example would be Xu Xiaodong, a famous mixed martial arts fighter in China. This May, it was rumored that the sportsman had been ranked Grade D in the social credit system, and that the poor score saw him barred from using some public transport and from gaining access to hotels and golf courses, and also restricted from buying or renting property.

According to the Basic Law, Hong Kong residents have freedom of speech, of demonstration, and the right and freedom of religious belief. Now, if the China-type social credit system is put in place, it would not only infringe upon people's rights, it could also threaten the life and property of every citizen.

This article appeared in the Hong Kong Economic Journal on July 16 2019

Translation by John Chui

Thursday, July 18, 2019

[EJ Insight] The threat from facial recognition technology

We have become familiar with face recognition features in our mobile handsets. Meanwhile, there are more and more instances of governments and corporations applying this technology to CCTVs and portable video-recording devices. For example, in places like the border departures hall at the Hong Kong International Airport, casinos in Macau, schools in Mainland China, concert venues, public housing estates, and even at road crossings, our faces are being scanned quietly. The application is becoming ubiquitous.

While face recognition is fast becoming the latest weapon of authorities in various countries, the problems it causes have stirred much controversy.

Sure, law enforcement departments have found automatic face recognition technology helpful: it makes it easier to identify suspects by matching faces against their databases, enhancing the efficiency of law enforcement.

But studies in the US and Europe have indicated that these technologies are unreliable. In other words, the algorithms could lead to discrimination.

Last year, the London Metropolitan police used face recognition several times in a trial to scan the faces of pedestrians, comparing those records with the archives on wanted criminals, in order to pick out suspects for body search operations.

The project emphasized that people could refuse to be recorded. However, what was the outcome? In June this year, a middle-aged man deliberately hid his face by pulling up his collar as he walked past a surveillance camera, but the police perceived this act as suspicious. In the end, he was asked to cooperate in a second video recording. Moreover, he was fined £90 for alleged 'improper behavior'. In other words, his unwillingness to be scanned was treated as an offense.

In the US, face recognition has led to misleading outcomes, triggered by racial and gender stereotypes, and has come under attack from human rights groups.

In June this year, San Francisco became the first city in the US to legislate against the government's use of face recognition as a tool for prosecution. And other US cities are set to follow suit. For example, Oakland has plans to introduce similar legislation, while in New York, proposals have been put forward to prevent landlords from applying such technology to monitor tenants.

In May this year, California passed a bill to prevent the use of face recognition technology in portable video-recording devices.

At a Legislative Council meeting in Hong Kong on June 5, I enquired about the current use of face recognition by government departments.

The government revealed that various government departments have installed around 39,000 CCTV cameras, and that the police use around 2,200 portable video-recording devices.

Authorities claimed that so far none of the government departments have purchased, introduced or used automatic face recognition features in their CCTV systems.

But does this mean that Hong Kong residents will not be subject to such surveillance, considering the introduction of new-generation smart ID cards and the promotion of electronic ID (eID) usage for identity verification?

Following the July 1 clashes at the Legislative Council involving extradition bill protesters, some pro-establishment lawmakers have been calling for anti-mask laws. In the future, if smart lamp-posts and CCTVs are capable of tracking the identity of people by using face recognition features, who among the Hong Kong people will be bold enough to express their views in public?

I believe we cannot wait until we are subject to the sort of surveillance seen in China's Xinjiang region. We must seriously review the risks associated with the use of face recognition technology, and the government should find ways to evaluate the potential impact of such applications before it is too late.

This article appeared in the Hong Kong Economic Journal on July 8 2019

Translation by Jennifer Wong