[D] Chinese government uses machine learning not only for surveillance, but also for predictive policing and for deciding who to arrest in Xinjiang
Link to story
This is not an ML research post. I am posting it because I think it is important for the community to see how research is applied by authoritarian governments to achieve their goals. It relates to a few previous highly upvoted posts on this subreddit, which prompted me to share this story.
Previous related stories:
- Is machine learning’s killer app totalitarian surveillance and oppression?
- Using CV for surveillance and regression for threat scoring citizens in Xinjiang
- Hikvision marketed ML surveillance camera that automatically identifies Uyghurs
The story reports the details of a new leak of highly classified Chinese government documents that reveals the operations manual for running the mass detention camps in Xinjiang and exposes the mechanics of the region’s system of mass surveillance.
The lead journalist’s summary of findings:
The China Cables represent the first leak of a classified Chinese government document revealing the inner workings of the detention camps, as well as the first leak of classified government documents unveiling the predictive policing system in Xinjiang.
The leak features classified intelligence briefings that reveal, in the government’s own words, how Xinjiang police essentially take orders from a massive “cybernetic brain” known as IJOP, which flags entire categories of people for investigation & detention.
These secret intelligence briefings reveal the scope and ambition of the government’s AI-powered policing platform, which purports to predict crimes based on computer-generated findings alone. The result? Arrest by algorithm.
The article describes the methods used for algorithmic policing:
Experts say the platform, which is used in both policing and military contexts, demonstrates the power of technology to help drive industrial-scale human rights abuses.
“The Chinese have bought into a model of policing where they believe that through the collection of large-scale data run through artificial intelligence and machine learning that they can, in fact, predict ahead of time where possible incidents might take place, as well as identify possible populations that have the propensity to engage in anti-state anti-regime action,” said Mulvenon, the SOS International document expert and director of intelligence integration. “And then they are preemptively going after those people using that data.”
In addition to the predictive policing aspect, there are side articles about the entire ML stack, including how mobile apps are used to target Uyghurs and how inmates are re-educated once inside the concentration camps. The documents reveal how every aspect of a detainee’s life is monitored and controlled.
Note: My motivation for posting this story is to raise ethical concerns and awareness in the research community. I do not want to heighten racism towards the Chinese research community (not that it should matter, but I am Chinese). See this thread for some context on what I don’t want these discussions to become.
I am aware that the Chinese government’s policy is to integrate the state and the people as one, so accusing the party is perceived domestically as insulting the Chinese people. But I also believe that we as a research community are intelligent enough to separate the government, and those in power, from individual researchers. We should keep in mind that there are many Chinese researchers (in the mainland and abroad) who do not support the actions of the CCP but may be unable to voice their concerns due to personal risk.
submitted by /u/sensetime