The Role Of AI In China’s Uighur Crisis
Whether one considers Stalin, Castro, or Xi Jinping today, a fundamental rule of authoritarian regimes is maintaining strict control over their people. Yet the sheer scale and complexity of that task make it very difficult. Artificial intelligence has not only aided surveillance but made it far more invasive, and with its rapid development, AI is becoming a versatile tool for government oppression.
Unsurprisingly, these AI-backed surveillance systems are already being deployed against the Muslim Uighur minority in China. Almost 10% of Uighurs are held in “re-education and retraining” camps for religious practices such as reading the Koran, or even for being caught with religious content on their phones. At these camps, Uighurs are subjected to hours of communist propaganda and are at times forced to renounce their religion. To the Chinese authorities, the divided loyalties that Uighur Muslims hold to God and to their government pose a threat to be dealt with harshly.
In the past month, leaked internal documents have revealed the Chinese government’s plans in the region of Xinjiang, where around 1.8 million Muslims have been detained. The leaks also exposed the centerpiece of China’s surveillance system: the “Integrated Joint Operations Platform” (IJOP), a huge database holding information on the background and behavior of millions of residents. At the time of the leaks, the IJOP was described as a form of “predictive” surveillance powered by “big data” and artificial intelligence; however, the actual role of AI in the Uighur crisis is debatable.
While AI does help feed data into these systems, through tools such as facial-recognition cameras, there is no evidence so far that the government uses it to make decisions about individuals. We may even be assuming too much by associating the orchestration of the Uighur crisis with sophisticated, AI-driven policing systems. The stated intention of this security clampdown is to prevent terrorism; however, crimes such as political dissent are so vaguely defined that predicting them precisely may be near impossible.
If the IJOP did build an AI model to spot terrorists, it would automatically involve bias: humans would have to decide what counts as a terrorist, feed those examples into the model, and then ask the platform to identify people with similar characteristics. What counts as a “terrorist” or “future terrorist” would ultimately be decided by humans, not machines, so even in theory the system rests on human judgment about who falls into those categories. Broadly defined, the traits the model is fed and the IJOP follows are Uighur Muslim customs, and infringements include interacting with any form of the religion, language, or culture. The fact that 1.8 million people sit in mass detention camps signals that the Chinese authorities are not interested in accurate predictive policing: an accurate predictive model must attach a cost to its mistakes, keeping false positives rare, and detention on this scale shows no such cost being applied. The fundamentals of AI modeling do not align with the intentions of the Chinese government, which illustrates how limited the scope is for automating the entire surveillance process.
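To make the point about bias concrete, the deliberately simplified Python sketch below shows how a classifier trained on labels that were themselves generated from human-chosen traits can only reproduce that human definition; the “prediction” is just the bias fed in at labeling time. This is not a description of the IJOP’s internals, which are not public, and every feature name here is hypothetical.

```python
# Toy illustration only: a model trained on human-defined labels
# simply learns back the human rule that produced those labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical traits an official might choose to treat as "suspicious"
owns_religious_app = rng.integers(0, 2, n)
speaks_minority_language = rng.integers(0, 2, n)
unrelated_noise = rng.normal(size=n)  # a feature with no connection to the labels

# The "ground truth" label is not observed wrongdoing at all;
# it is just the human-defined rule encoded as data.
label = owns_religious_app | speaks_minority_language

X = np.column_stack([owns_religious_app, speaks_minority_language, unrelated_noise])
model = LogisticRegression().fit(X, label)

# The fitted weights recover the human rule: large positive weights on the
# chosen traits, near-zero weight on the unrelated feature. Nothing new is
# "discovered"; the bias in the labels becomes the model's output.
for name, weight in zip(
    ["owns_religious_app", "speaks_minority_language", "unrelated_noise"],
    model.coef_[0],
):
    print(f"{name}: {weight:.2f}")
```

In this toy setup the model looks “predictive,” but it can only ever flag the traits humans decided to label in the first place, which is precisely the bias problem described above.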
Although AI plays only a minor role in assigning residents to detention camps, its use signals a greater threat: governments increasingly turning to technology for racial profiling and other inhumane purposes. China’s government seems only now to be realizing the full potential of building bias into its AI systems, and how such systems could not only classify people by ethnicity but also help repress and torture them, a prospect that poses a full-blown existential threat to democracy.