For almost a year, four journalists at Bloomberg News were investigating the political connections of China’s richest man, Wang Jianlin. In October, according to The New York Times, Bloomberg editor-in-chief Matthew Winkler told them the story wouldn’t run because if it did, Bloomberg News would lose reporting access in China.
A few days later, Winkler reportedly spiked a second story, on the children of Chinese officials employed by foreign banks, presumably for the same reasons. Winkler reportedly defended his decision by “comparing it to the self-censorship by foreign news bureaus trying to preserve their ability to report inside Nazi-era Germany.” Chinese officials responded last week by conducting unannounced investigations of Bloomberg’s Chinese bureaus.
What’s most surprising about these revelations is not that Winkler allegedly spiked the stories to preserve access, but that he didn’t opt to publish the articles under what Bloomberg calls Code 204, which keeps stories from appearing on its financial data terminals in China. Since Code 204 was created in 2011, Bloomberg has used it to block access to its 2011 report on Chinese censorship of “Jasmine Revolution” protests and its 2012 article on the family wealth of Chinese leader Xi Jinping, the Times reports. Bloomberg maintains that it is merely holding the Wang Jianlin investigation because it’s not yet ready for publication. But its hasty call to withhold both October stories suggests that Bloomberg, like many Chinese media outlets and Internet companies, has only a tenuous grasp on how Chinese censorship operates, and has resorted to self-censorship to stay in business.
As Chinese social media analyst Jason Q. Ng pointed out in a talk at Google last month, China has come to depend upon the subtle “self-censorship by private companies” to maintain the Great Firewall, the government’s online censorship and surveillance shield that prevents Chinese citizens from accessing sites like Twitter and the Times. Ng has argued that part of the reason the Great Firewall is so effective is that it encourages self-censorship of two varieties: “the self-censorship by content providers, who must make judgment calls on what needs to be censored in order to stay in the government’s good graces, and self-censorship by users, who face the threat of being detained and punished for anti-government posts.” The Bloomberg incident obviously falls into the former category, and it is not dissimilar from the practices of the more than 1,400 Chinese social media sites that are required to filter their own content to continue doing business in the country.
These sites rely on human censors to manually review the content uploaded to the Chinese Internet, ensuring that it complies with Chinese Internet regulations and the Public Pledge of Self-Regulation and Professional Ethics; the exact methods they use are kept under wraps. Over the past few months, however, two new academic studies have exposed the capricious apparatus of Chinese censorship, whose many inconsistencies make incidents like the Bloomberg debacle more likely to occur.
The first installment of an ongoing University of Toronto study of Chinese Internet controls, published last month, exposed the censorship of private communications on several instant messaging services in the country. The report found that censorship of Chinese instant messages varies from region to region, and that on at least one service “millions of chat records were being collected and stored on a publicly accessible, unsecured server based in China.” The researchers (of which Ng is one) also compiled a list of known keywords that, if used in an instant message or social media post, would automatically cause the communication to be blocked. Writing in The Atlantic last week, Ng claimed that what’s especially worrisome about the study is that these companies “engaged in pre-emptive self-censorship” in order to continue operating in China. Social media users internalize this self-censorship, Ng writes, citing China scholar Perry Link’s “metaphor of an ‘anaconda in the chandelier’ to describe how the Chinese state cajoles individuals to censor their own thoughts and words, an example that applies neatly to companies like LINE [an instant messaging service] and Bloomberg.”
But according to an October report from Harvard political scientist Gary King, the Chinese government is more concerned with censoring groups of people than individuals. In the first large-scale randomized study of Chinese censorship, King and his team attempted to document China’s censorship system. They authored 1,200 posts on 200 social networks throughout China to see which ones would be blocked, and drew up the following map to explain how the system works:
Slide from Reverse Engineering Chinese Censorship, by Gary King, Jennifer Pan, and Molly Roberts. Institute for Quantitative Social Science, Harvard University.
Chinese companies have to employ their own censors to make sure they stay in line with the Public Pledge of Self-Regulation. So after analyzing all the posts, King and his team went one step further and built their own social network to test how companies interact with censors. “We’d call them up and we’d say, ‘Hey, how do we stay out of trouble with the Chinese government?’ And they’d say, ‘Well, let me tell you,’” King explained.
King’s research found that calls for collective action or any kind of large gathering are far more likely to be blocked than political dissent, but also uncovered the “highly inexact” nature of the algorithms that censors use to detect banned keywords. “Automated methods of text analysis that work based upon keyword algorithms, they work really badly,” King said. In practice, this means that a lot of the time even pro-government posts are blocked and, more broadly, that the apparatus of Chinese censorship is highly variable and subject to human error. Based on their conversations with censors, King’s team concludes that there is “a great deal of uncertainty over the exact censorship requirements and the precise rules for which the government would interfere with the operation of social media sites, especially for smaller sites with limited government connections.”
If there is uncertainty about censorship requirements even among China’s censors, then one might assume there is uncertainty in the Chinese bureaus of traditional media outlets. “The complexity of the censorship system makes ‘censor’ itself hard to define,” says Isaac Mao, a social media researcher who was one of China’s first bloggers. After years of perplexing censorship, media outlets operating in China have learned “how to define the red lines spontaneously.” For Bloomberg News, that spontaneity has come at a cost to its reputation — both in the West and with the Chinese government.