When it comes to learning, there are roughly three kinds of people:
Those who rarely read (unfortunately, probably most people).
Those who read, but mostly consume average content.
Those who deliberately seek out high signal-to-noise content.
Being in the first group is genuinely sad.
Why do I say that?
These days, we have access to the Library of Alexandria at our fingertips, yet many decide to ignore it. Naval Ravikant wrote a great article about why falling in love with reading matters; you can find it here.
Belonging to the second group means you're halfway there. Having a reading habit (and by "reading," I mean deliberately absorbing information—whether through articles, videos, podcasts, or social media threads) already puts you ahead. From there, it's about refining your information diet to include valuable content. Content that doesn't merely entertain but genuinely helps you get better at whatever you do.
I like this quote by Sherlock Holmes on the topic of curating the information you consume:
"I consider that a man's brain originally is like a little empty attic, and you have to stock it with such furniture as you choose. A fool takes in all the lumber of every sort that he comes across, so that the knowledge which might be useful to him gets crowded out, or at best is jumbled up with a lot of other things so that he has difficulty in laying his hands upon it."
The third group, however, is the most interesting.
Every impressive person I know is excellent at curating their inputs. They intuitively seek high-signal content, and it clearly shows in the caliber of people they become.
Signal vs. Noise

This essay’s goal is simple:
I want to encourage you to actively pursue content that not only interests you but genuinely elevates your thinking and gives you an edge.
Content that people you respect would label as "high signal-to-noise."
But first, what exactly is “signal-to-noise”?
Signal is valuable, insightful, and truthful content.
Noise is superficial, misleading, or simply filler.
It’s easy to underestimate the difference, but over time, it compounds.
As mentioned earlier, I've noticed a clear pattern among the greatest people I know: they consistently gravitate towards exceptional sources of information. When speaking with them, their mental models and understanding of their field are notably clear.
On the other hand, those who primarily read average content often end up confused rather than informed, or even worse, convinced that the world functions in a specific way that is detached from reality.
Their worldview becomes distorted.
The 1% rule of content creation
Let me unpack that by sharing a few observations.
This one is counterintuitive, so bear with me. There is an extreme asymmetry in content creation. Very few people actually create content:
About 1–3% of Reddit users contribute actively; the rest merely consume.
Wikipedia's top 1,000 contributors, just 0.003% of its user base, account for two-thirds of all edits.
On LinkedIn, only about 1% of 260 million monthly users share posts, but they generate 9 billion impressions.
This “1% rule” follows a power-law distribution, implying that most internet content originates from a small minority of users.
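To make that skew concrete, here is a tiny, purely illustrative simulation in Python (the Pareto shape parameter is an assumption chosen to make the effect visible, not a fit to any platform's data):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_users = 100_000

# Heavy-tailed (Pareto) "posts per user" -- a toy model, not real platform data.
posts = rng.pareto(a=1.2, size=n_users) + 1

# How much of all content comes from the top 1% of contributors?
posts_sorted = np.sort(posts)[::-1]
share = posts_sorted[: n_users // 100].sum() / posts.sum()
print(f"Top 1% of users produce {share:.0%} of all content in this toy model")
```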
This might create the false impression that if someone publishes content online and it sounds plausible, it must be accurate and useful.
However, fewer creators does not imply higher quality, because quality varies widely even among them. Only a tiny fraction of this already-small group produces genuinely high-quality (high-signal) content. Most of it will be average or below.
This becomes especially clear in healthcare. There's endless advice on nutrition and exercise, yet very little is grounded in rigorous, replicable studies. Every week, I find myself debunking popular health articles for my parents, who are drawn to claims that sound plausible but aren’t supported by strong evidence.
There's inherently limited variety in deeply informed perspectives. Thus, the bulk of accessible content will inevitably be average at best.
AI reflects the internet
This brings me to another observation. The "1% rule" helps explain a common critique of AI: it frequently produces average results.
Large Language Models (LLMs) are trained on enormous amounts of internet content, and inevitably reflect the quality of that data. If most online content is average, LLM outputs naturally gravitate toward mediocrity. Unless intentionally fine-tuned or prompted with high-quality inputs, these models will consistently mirror the internet's baseline level of quality.
High-signal inputs produce high-signal outputs.
Clarity of thought is rare
I think that happens because clarity of thought is scarce. True clarity, defined as the ability to explain complex ideas simply and precisely, requires both depth of understanding and effective communication skills.
A writer can be deeply knowledgeable but unable to communicate clearly; that’s not very helpful.
Conversely, another might be a great communicator but lack depth; that’s superficial.
True "signal" needs both.
Another reason people consume average content is that average is abundant and easy to find. Popular media, superficial blog posts, and basic tutorials dominate because they’re easily accessible. Many topics are inherently complex and therefore require "translators"—people who simplify or attempt to remove complexity. But unless you choose these translators carefully, it quickly turns into a game of "broken telephone," like we played as children. With each retelling, reality becomes more distorted.
Without deliberate effort, most people default to consuming mediocre content. Add social media algorithms designed to feed you more of what you already engage with, and it's easy to get trapped in the wrong rabbit hole.
If your input is average, your output inevitably follows suit.
To achieve great insights, you must actively seek out and consume great insights. Like most valuable things in life, it's initially difficult to recognize what "good" actually looks like. But with practice, you develop an intuition that reliably guides you toward the right sources.
I recall an interview with Brian Chesky where he explained his approach to learning. Instead of the traditional method of reading widely, he spends his time identifying the single best source on any topic and learns directly from it.
"If you pick the right source," Chesky said, "you can fast-forward."
Let's make this concrete with an example.
At any given moment, there's a trending topic: AI today, crypto previously, or earlier tech waves like mobile, SaaS, the cloud, or even the internet itself. Many people naturally want to understand whatever the "current thing" is, leading to a flood of related content. Right now, that current thing happens to be large language models (LLMs).
Typically, someone curious about a new topic simply performs a keyword search and consumes whatever the algorithm serves them. But relying solely on algorithms often leads you toward average, rather than exceptional, content.
But if someone asked me how to understand AI best, I’d immediately point them to Karpathy’s YouTube channel. Karpathy was directly involved with OpenAI well before it became widely known. He also spent a significant amount of time at Tesla working on AI and self-driving capabilities. But most importantly, he communicates with unparalleled clarity. Everyone I've pointed to Karpathy’s content has thanked me later.
Curiosity leads to high signal
Let me offer another example from my own field—product management.
When interviewing people for my teams, one of my favorite topics is curiosity. If a candidate has potential, I probe how they curate their information diet: Where do they find inspiration? What content do they return to regularly? Who do they respect and follow closely? What products stand out to them as exceptionally well-crafted yet not obvious?
It surprises me how few candidates have intentional answers. Most respond vaguely ("I read Medium articles" or "I'm in product communities"), but deeper questioning reveals they rarely know how to identify genuinely excellent content. This is a red flag. If you can't recognize great work, how can you produce it? If your content diet is mediocre, your mental models will be poorly suited for solving complex problems quickly. Yet, that’s the PM’s job!
To illustrate how I think about content recommendations: if you need a good foundation, I would likely recommend Lenny’s newsletter.
Once you've mastered the basics, I might point you towards Reforge or Shreyas Doshi’s course on developing product sense.
Suppose you're already at an advanced level, familiar with the classic writings by PG, Lenny, Brian Balfour, and Elena Verna. In that case, I'd look for more niche, insightful pieces like Stewart Butterfield’s "We Don’t Sell Saddles Here," or Tony Fadell’s book “Build,” or this great essay on agency that I have been re-reading lately.
You see where this is going.
Someone who deeply understands a field can give you personalized content recommendations far better than any generic search can. In fact, I'd argue your ability to consistently find exceptional content directly shapes your capacity to produce exceptional work.
Throughout my career, this principle has proven itself true again and again.
Addressing common critiques
Before concluding, let me quickly address two common critiques I’ve encountered when discussing the "high-signal" approach:
First, could focusing exclusively on top-tier sources narrow your worldview?
I don’t think so.
Truly high-signal content synthesizes diverse viewpoints and existing knowledge. Great content isn’t necessarily contrarian; it’s clear, thoughtful, and insightful. Being selective doesn't limit your thinking—it sharpens it. Life is short; attention is even shorter.
Occasionally, outstanding content emerges from unknown creators. But these creators must demonstrate exceptional clarity to overcome initial skepticism. If successful, their ideas inevitably prove valuable.
Second, is signal-to-noise subjective?
To some extent, yes.
What's high-signal for a niche expert might feel irrelevant to others. Yet genuinely exceptional content, like Karpathy’s insights on LLMs, is universally recognized because it comes from demonstrable expertise and never insults the reader’s intelligence.
The real question is this:
Why haven't you become more intentional about curating your inputs?
Aren't you curious to discover how high your ceiling really is?
To wrap up, here are some consistently high-signal sources I recommend:
A few essays I’ve particularly enjoyed:
The simplest, most reliable path to exceptional output is exceptional input. Pick your sources wisely.
Food for thought.
In AI there is a framework called Generative Adversarial Networks (GANs). Two networks compete with one another: one produces content (the Generator), while the other (the Discriminator) judges whether that content is fake by comparing it against real-world examples. The Generator starts from noise (random data) and transforms it into something that resembles quality content. Once the Discriminator starts misclassifying the generated content as real, the Discriminator has lost the battle.
At the end of the day, you are left with something that can produce quality content out of noise.
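For the technically curious, here is a minimal sketch of that idea, written in Python with PyTorch (my choice of library, not something prescribed above). A toy Generator learns to turn random noise into samples from a simple “real” distribution, while a Discriminator tries to tell the two apart:

```python
import torch
import torch.nn as nn

# Stand-in for "quality content": samples from a simple Gaussian, N(3, 1).
def real_batch(n):
    return torch.randn(n, 1) + 3.0

# Generator: noise in, candidate sample out. Discriminator: sample in, probability of "real" out.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the Discriminator: real samples should score 1, generated ones 0.
    real = real_batch(64)
    noise = torch.randn(64, 8)        # the Generator's starting point is pure noise
    fake = generator(noise).detach()  # don't update the Generator on this pass
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the Generator: try to make the Discriminator call its output "real".
    noise = torch.randn(64, 8)
    g_loss = bce(discriminator(generator(noise)), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, the Generator maps raw noise to samples that resemble the real data.
print(generator(torch.randn(5, 8)).detach().squeeze())
```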