Quote:
Originally Posted by sebastian_dangerfield
The default to "you're missing the point" is old, Ty. The point you're referencing cannot be missed. My point, which is in response to it, is that no platform should be engaged in such policing, period. Whether compelled by posters on it, or by the govt. If I was not explicit, or expansive, enough, my position includes (necessarily, but apparently this may not be obvious to you) the argument that under no circumstance should a platform or the govt be engaged in culling content to weed out "misinformation."
|
This is as helpful as saying that no one should go to prison for a crime they didn't commit. Yes, we "should" have governments that don't try to convict innocent people. But in the real world, the one that we live in, prosecutors are sometimes more interested in getting a conviction than in getting at the truth. If you are an individual subject to that kind of government regulation, you change your behavior because of what the government might do. For Twitter and other online businesses, the same is true. It doesn't do any good to say the government should leave them alone -- it's not going to happen.
Setting the government aside, the idea that platforms should not cull "misinformation" is just incredibly wrong. For example, eBay is a platform. People list things on it. If they are lying about what they're selling, eBay wants to weed out that "misinformation" because, duh, fraud. If you're defrauded on eBay, you don't go back, and governments start to care, so eBay has a super legitimate interest in doing that sort of culling.
(Now pretend you're a government. Fraud and libel are not OK in meatspace. You're going to pretend they're OK when they happen online? Uh, no.)
This is basically the point that thread is making. Online platforms back into content moderation for reasons like the one I just described, not because they are interested in taking sides in political disputes. They very much don't want to take sides in political disputes.