
Introduction
“Never before have I heard so many people speak of so many things they know so little about.”
A colleague said this to me over coffee a while back and it’s been stuck in my head ever since. Not because it’s clever (it is), but because I keep seeing evidence for it everywhere I look.
I open LinkedIn and someone with “AI Enthusiast” in their bio is explaining why Bayesian methods are outdated. I scroll further and a marketing director who discovered ChatGPT six months ago is telling physicists how neural networks work. I check Twitter (sorry, “X”) and an influencer with a million followers is confidently explaining that p-values prove causation. Each of these posts gets thousands of likes, hundreds of shares, and a comment section full of people nodding along.
This is what I mean by cacophony. Not silence, not even misinformation exactly. Just an overwhelming volume of confident noise from people who haven’t done the work to earn that confidence.
The problem isn’t ignorance
Let me be clear about something: I’m not calling people stupid. That’s a lazy take and it’s not what I mean. Most people sharing opinions online genuinely believe what they’re saying. They’ve read an article, watched a YouTube video, maybe skimmed a paper’s abstract. They’ve formed a view. The problem is that forming a view and understanding a topic are wildly different activities, and our current information landscape makes it almost impossible to tell them apart.
I’ve been guilty of this myself. When I first got into drug discovery, I had opinions about molecular properties that I’d formed from reading a few papers. It took months of actually working with chemists, running simulations, and getting things wrong before I realized how shallow my initial understanding was. I wrote about LogP and LogD partly to force myself through that process. Writing it down was brutal because it revealed every gap I’d been casually ignoring.
And that’s the thing. The gap between “I’ve heard of this” and “I understand this” is enormous. But on social media it’s invisible. A confident 280-character take looks the same whether it comes from someone who has spent a decade on the problem or someone who read a blog post yesterday.
The internet made it worse
People have always been prone to holding opinions they can’t defend. This is not new. What is new is the scale.
When I was doing my PhD in Lund, if you wanted to spread a bad take about physics, you had to stand up in a seminar and say it out loud in front of people who could immediately challenge you. That’s a pretty effective filter. The social cost of being wrong in a room full of experts is high enough that most people either do the work or keep quiet.
Social media removed that filter entirely. You can broadcast opinions to millions of people without ever facing someone who actually knows the field. The feedback you get isn’t “that’s wrong because…”; it’s likes, shares, and algorithmic amplification. The platforms are optimized for engagement, not accuracy. Confident, simple, slightly provocative takes get rewarded. Nuanced, careful, qualified statements get ignored. So the incentive structure actively selects for the kind of noise I’m complaining about.
And then there’s the confirmation bias machine. Whatever you already believe, the internet will find you a community of people who believe it too, and a steady stream of content that reinforces it. Want to believe that transformers are conscious? There’s a subreddit for that. Want to believe that all of statistics is a fraud? You’ll find your people. The information is technically available to correct these views, but it has to compete with an algorithm that knows exactly what will keep you scrolling.
AI is pouring gasoline on this
Here’s where it gets really fun. We now have AI systems that can generate fluent, confident, well-structured text on any topic. A person who knows nothing about Bayesian inference can ask an LLM to write a LinkedIn post about it and get something that sounds authoritative. They post it. People who also don’t know about Bayesian inference read it and think “this person knows their stuff.” The cycle accelerates.
I’m not anti-AI (I build AI systems for a living; that would be a weird position to take). But I do think we need to be honest about what this means for the signal-to-noise ratio. When the cost of producing confident-sounding content drops to zero, the volume of that content explodes. And if most of it is produced by people who don’t understand the topic, or by machines that can’t understand anything at all, the noise gets louder while the signal stays the same.
Writing as an antidote
OK, enough complaining. What do we actually do about this?
I think the single most powerful thing you can do is write. And I don’t mean tweets or LinkedIn posts (though those have their place). I mean long-form writing where you have to actually develop an argument, anticipate counterpoints, and deal with the uncomfortable realization that you don’t know as much as you thought you did.
I’ve been writing this blog since 2016 and I can tell you from direct experience: nothing reveals the gaps in your understanding faster than trying to explain something to someone else in writing. When I wrote about the equivalence of Bayesian priors and Ridge regression, I thought I had a clear picture going in. Three hours later I was back in the derivations because writing it down had exposed an assumption I’d been making for years without examining it. That’s the value. Not the finished post (though that’s nice too), but the process of getting there.
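If you haven’t seen that equivalence, here’s the standard sketch, in my own notation rather than anything lifted from that post. Assume a linear model $y = Xw + \varepsilon$ with Gaussian noise $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$ and a zero-mean Gaussian prior $w \sim \mathcal{N}(0, \tau^2 I)$. Up to additive constants, the negative log posterior is

$$-\log p(w \mid X, y) = \frac{1}{2\sigma^2}\,\lVert y - Xw \rVert^2 + \frac{1}{2\tau^2}\,\lVert w \rVert^2,$$

so maximizing the posterior is the same as minimizing $\lVert y - Xw \rVert^2 + \lambda \lVert w \rVert^2$ with $\lambda = \sigma^2 / \tau^2$. That is exactly the Ridge objective, with the width of the prior setting the regularization strength.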
Writing forces you to be specific. You can’t hide behind vague gestures when you have to put actual words on a page. “AI will change everything” is easy to say. Try writing 2000 words about exactly what will change, for whom, through what mechanisms, and with what evidence. Suddenly the confident take doesn’t feel so confident anymore.
Cultivating honest skepticism
The other thing I’d advocate for is what I’ll call honest skepticism. Not cynicism (that’s just laziness wearing a trench coat), but a genuine willingness to ask “how do you know that?” of everything, including your own beliefs.
When someone tells you something, especially something that confirms what you already think, ask yourself: what would have to be true for this to be wrong? What evidence would change my mind? If you can’t answer those questions, you don’t have a belief. You have a feeling. Feelings are fine for picking a restaurant. They’re not fine for making claims about how the world works.
This applies to experts too, by the way. I have a PhD in theoretical physics and I’m wrong about things regularly. The difference (I hope) is that I try to notice when I’m wrong and update accordingly. That’s what being a scientist actually means. Not having the right answers, but having a reliable process for getting closer to them.
Conclusion
I don’t have a tidy solution for the cacophony. I don’t think there is one. The platforms are the way they are, the incentives are the way they are, and AI is only going to make the content flood worse.
But I do think we each have a choice about how we participate. We can add to the noise or we can resist it. We can share things we haven’t read properly or we can take the time to understand before we speak. We can optimize for likes or we can optimize for truth.
Write things down. Subject your ideas to the test of putting them into words. Read things written by people who actually did the work, not by people who are good at sounding like they did. And when you catch yourself about to share a confident opinion on something you learned about twenty minutes ago, maybe pause for a second. The world has enough noise.