For a writer, being able to see both sides of an issue may serve as a curse. No one wants to hear the nuances of both sides; they want to hear that whatever side they are on, or at least that of the book they paid for, is completely right and everyone else is just a deluded idiot.
When it comes to internet censorship, however, one must look at both sides, because this, like any significant issue, is nuanced.
In the early days of the internet, censorship was basically unheard of and was handled informally in cases of gross violations. Site admins took care of it, sending emails back and forth, because they were dealing with a few incidents a month.
Enter spam. This came out of left field for the geeks but everyone else knew it would be coming soon; after all, our snail mail slots were overflowing with bulk postage material from every commercial entity in the universe. Commerce is like oil in that it gets into every crack.
Spam first appeared on newsgroups, then spread into email, and soon people were actively gearing up to “fight” spam by tracking its sources. That failed in the early 2000s, when so many sites came online that whitelists and blacklists became obsolete.
In this new world, Bayesian filtering became the only viable weapon, looking for messages that fit certain criteria and went out to many people. The spammers fought back, adding random words and characters, and the counter-spam software got better.
As a practical matter, email spam began its death curve when Gmail hit the scene, since Gmail had built-in Bayesian detection; if it saw 8,000 copies of messages with similar content, it threw them all in the spam folder. Spam then moved on to forums, and eventually to social media.
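To make the idea concrete, here is a minimal sketch of the Bayesian scoring described above, using a toy hand-made corpus rather than the millions of labeled messages a real filter trains on. Every word, count, and threshold here is an illustrative assumption, not Gmail's actual implementation:

```python
import math
from collections import Counter

# Toy training data; a real filter learns from huge labeled corpora.
spam_messages = ["cheap pills buy now", "buy cheap watches now"]
ham_messages = ["lunch at noon tomorrow", "notes from the meeting"]

def count_words(messages):
    counts = Counter()
    for msg in messages:
        counts.update(msg.split())
    return counts

spam_counts = count_words(spam_messages)
ham_counts = count_words(ham_messages)
vocab = set(spam_counts) | set(ham_counts)

def log_odds(message):
    """Sum per-word log odds of spam vs. ham, with add-one smoothing
    so unseen words do not zero out the score."""
    score = 0.0
    for word in message.split():
        p_spam = (spam_counts[word] + 1) / (sum(spam_counts.values()) + len(vocab))
        p_ham = (ham_counts[word] + 1) / (sum(ham_counts.values()) + len(vocab))
        score += math.log(p_spam / p_ham)
    return score  # positive means the message looks like spam

print(log_odds("buy cheap pills"))   # positive: spam-like words dominate
print(log_odds("meeting at noon"))   # negative: ham-like words dominate
```

The spammers' counter-move of adding random words works against exactly this kind of scoring: padding a message with neutral vocabulary dilutes the per-word odds, which is why the filters in turn had to get better.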
Into that uncertain situation came the explosion in mobile computing in 2007. Bots ruled the day, seeking out people on social media via keywords and then inserting messages, usually very subtle ones in contrast to earlier spam. Other bots then voted these up, and the propaganda or advertising took over.
This prompted social media companies to invest heavily in filtering technology since they needed to catch commercial messages before they were seen by users and had their effect. Social media was built on censorship of spam from the initial stages.
What made this even more complicated was that, much as “Eternal September” back in 1993 had unleashed an early commercial audience on the internet, the arrival of mobile computing let loose the hounds of the general public on the internet. All it took was a phone and contract, and everyone had those.
If you run a website, you are under constant assault by people who are basically mentally unhinged. It seems likely that many of them are on disability for mental health problems, confined to a basement, or otherwise thrown out of the semi-functional society around us.
People routinely threaten each other with death and bodily harm, post personal information, upload child porn, and engage in other behaviors that would have most of us scratching our heads. But, much as daytime television kept the drunks and medicated schizoids busy, now the internet is the great pacifier.
This forces sites into an ugly situation: they must censor the bad stuff, but they are going to have dumb and low-paid staff doing this, so they need to draw bright lines. Being MBAs, they decide to go for a firm-sounding resolution: cut it all out.
Unfortunately for them, this means that effectively they are acting as political correctness does, silencing mention of the controversial so that inertia can take over. They want us to keep moving in the direction of decay, into our own human worlds, away from a relationship with reality.
Another method of filtering out unwanted content exists, which is qualitative censorship. Instead of banning things by topic, this method removes low-quality expressions of even the uncontroversial.
Basically, sort out the repeated comments, off-hand stuff, idiocy, incoherence, pointless obscenity, off-topic including spam, vandalism, and cruelty. Leave the discussion of any topic so long as it is well-expressed, relevant, informative, insightful, or otherwise healthy to discussion.
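The criteria above can be sketched as a topic-neutral filter: comments are judged purely on form (repetition, off-hand noise, shouting), never on viewpoint. The specific heuristics and thresholds below are illustrative assumptions, not a description of any real moderation system:

```python
def is_low_quality(comment, seen):
    """Flag a comment on form alone; the topic is never inspected."""
    text = comment.strip()
    if not text or text.lower() in seen:   # empty, or a repeated comment
        return True
    if len(text) < 4:                      # off-hand noise ("ok", "lol")
        return True
    letters = [c for c in text if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > 0.8:
        return True                        # all-caps shouting / vandalism
    return False

def moderate(comments):
    seen, kept = set(), []
    for comment in comments:
        if not is_low_quality(comment, seen):
            kept.append(comment)
        seen.add(comment.strip().lower())
    return kept

print(moderate([
    "FIRST!!!",
    "I disagree, and here is why...",
    "I disagree, and here is why...",
    "ok",
]))
# keeps only the first "I disagree, and here is why..."
```

Note that the surviving comment could argue any position at all; the filter only asks whether the expression is well-formed, which is the whole point of qualitative censorship.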
This beats viewpoint discrimination, which is what both Big Tech social media and political correctness use:
Viewpoint discrimination is a form of content discrimination particularly disfavored by the courts. When the government engages in content discrimination, it is restricting speech on a given subject matter. When it engages in viewpoint discrimination, it is singling out a particular opinion or perspective on that subject matter for treatment unlike that given to other viewpoints.
For example, if an ordinance banned all speech on the Iraq War, it would be a content-based regulation. But if the ordinance banned only speech that criticized the war, it would be a viewpoint-based regulation.
Extending this to a real world example, one might permit logical and orderly criticism of diversity, but limit ethnic slurs and threats, no matter whether they were pro-diversity or anti-diversity. That is, slurs against white people would be banned like slurs against Blacks, but criticism of either group that avoided profane or slur language would not be banned.
Social media faces a persistent problem of a dwindling audience. Perhaps funded by China, Big Tech always wanted to avoid controversy, but especially to suppress the rising ethnic Western European Nationalist movement. At the same time, it found that it was only attracting the daytime television audience.
Like those who watch the big cable news channels, the daytime television audience is smaller than the conservative audience, but far more dedicated. They check in every day, all day. They pay attention to the latest talking points. At night, they drink box wine and spread “the good news” via the net.
Big Tech chose to keep this audience and throw out the rest as inconsistent. This makes the fundamental error of confusing ardent participation with paying attention; most people are readers, not commenters, and come in to check out what others are doing without throwing in their own two cents.
In my view, the real audience to capture is the silent one. These people do not want any risk from public exposure of views that might ex post facto be found to be “wrong” in the political correctness jargon, so they just read, maybe throw in an uncontroversial comment now and again.
They do not show up regularly; after all, unlike the daytime television watcher audience, these people have lives. They are busy with families, careers, and hobbies. They are self-directed; they do not need the affirmation of the crowd because they have health and not damaged self-esteem.
By moving to viewpoint discrimination, Big Tech has shown us that it is in decline. The normals are escaping, so it is focusing on the fanatics, and these tend to be Leftist, sentimental, righteously angry, and looking for “new” things to distract or occupy them.
While this group seems like a good audience, its consistency online reflects inconsistency in the world outside of the symbolic realm of the internet, and so it is a bad group toward which to pitch products. The silent people are better, but on the dying internet, they are ignored.
Tags: big tech, internet, social media, viewpoint discrimination