If I were king for a day, though, I'd try to find a way to prohibit tech platforms like YouTube or Twitter from censoring this kind of thing.
I come down on the side of free speech unless the speech directly incites, encourages, requests, or commands something illegal. (I.e., if a video could be construed as actually encouraging violence, or this whole doxing thing, it would be immediately taken down/banned.) But I don't like it when platforms ban things that arguably fall into the category of "misinformation". Why? Mostly for the obvious reasons that have led others to reach the same conclusion.
- It makes some random individual at YouTube or Twitter the arbiter of what is true and what is false. That just seems like a crazy place to end up for a country cruising along the "Information Superhighway". (Blast from the '90s there!)
- It's a slippery slope. Maybe most people can agree that claiming Sandy Hook is a hoax is nonsense, but what about things like Russian collusion... or Epstein's manner of death... or the JFK assassination... or oil & gas vs. electric cars... or hydroxychloroquine... or the origin of the virus... or what Joe Biden will do once elected? The list obviously goes on and on, indefinitely. Hypothetically, first you might have Twitter banning the KKK, next QAnon, next any mention of alternative coronavirus treatments, and finally even things that are downright politically subjective, like 75% of the "fact checked" items. Oh wait, all of that already happened!
This Alex fellow does sound like total nonsense, but it's never a good idea to make a "rule" (such as "disinformation may be banned on social media") based on one extreme case that happens to garner widespread agreement. It's always wiser to consider the entirety of what might fall under that rule and judge its merits accordingly. A little like what courts do when they strike down a previously accepted legal rule for being overbroad or capable of reaching absurd results.