Is AI shooting down contrarian ideas before they take off?
There is a standard story being told in venture circles right now, as investors flock to Claude Code at VC events: AI is turbocharging deal sourcing, compressing diligence timelines, and democratizing deep domain expertise. All of this is broadly true, but the story stops too soon. It misses the more unsettling half: the same forces making VCs faster and smarter may also be quietly eroding the very mechanism by which venture capital creates value in the first place.
The information asymmetry at the heart of capital allocation
Capital allocation is fundamentally an exercise in reconciling two views of the world: a top-down perspective shaped by the macro landscape of sectors, technologies, and market cycles, and a bottom-up view grounded in the granular fundamentals of a specific company, market, or founding team. The investor’s edge lives in the space between these two lenses, a gap that LPs and operators cannot efficiently bridge on their own.
But this triangulation is imperfect. Even the best-resourced funds see only a narrow slice of total deal flow, and what they do see is heavily path-dependent, shaped by networks, geography, pedigree biases, and warm introductions that systematically exclude large parts of the opportunity space. On the bottom-up side, the limitation is different but just as real. A generalist investor evaluating a biomarker diagnostics startup or a next-generation battery chemistry company remains, in practice, a sophisticated outsider. The asymmetry that is meant to favour the VC over the LP often breaks down at the company level, where founders hold a deeper and more nuanced understanding of their domain than the investors assessing them.
How AI is transforming both ends of the equation
Artificial intelligence is restructuring both of these constraints simultaneously, and in genuinely impressive ways.
On sourcing, AI tools are enabling funds to scan the startup landscape at scale. Breadth took a step up a decade ago with the consolidation of LinkedIn and the maturing of scraping techniques. Depth was a tougher nut to crack until LLMs gained web-browsing capabilities; now it is table stakes, at least on paper. Humans are intervening ever later in the funnel.
On domain knowledge, the democratization is equally striking. We have always advocated a generalist approach, since vertical definitions and taxonomies are symptoms of flattening innovation curves. That was long before the current era, in which investors can run sophisticated literature reviews, interrogate clinical trial data, and stress-test technical claims through AI-assisted research in a fraction of the time. The knowledge asymmetry between a generalist VC and a subject-matter expert has meaningfully compressed. Not eliminated, but compressed.
The optimistic case for the innovation-financing role of venture capital writes itself from here. Reduce the asymmetries, increase velocity, and capital flows swiftly to its highest-value uses. Venture becomes faster, more meritocratic, more global.
There is real substance to this view. But it is fundamentally flawed.
The risks: Output collapse, reinforced consensus and centralized knowledge
A few months ago on the Dwarkesh Podcast, Andrej Karpathy, a founding member of OpenAI and former director of AI at Tesla, offered a precise description of a structural limitation in large language models. Their outputs, he noted, are silently collapsed, occupying a very narrow manifold of the possible space. Ask a model to tell you a joke and it gives you roughly the same three jokes every time. The entropy has been wrung out. What remains is a consensus residue of human thought: useful, often brilliant, but systematically biased toward the already-known.
The edge in venture is rarely found in the consensus, and a tool that gravitates toward the already-known is one that needs to be handled with that blind spot firmly in mind.
VC has always had a herd behaviour problem. The phenomenon is well-documented. Investors cluster in fashionable sectors, pile into companies with social proof from marquee co-investors, and anchor on valuation comps that were themselves shaped by prior consensus.
When AI systems trained on the same internet, the same pitch decks, the same analyst reports, and the same conference transcripts begin informing investment decisions at scale, herding behaviour does not merely persist, it becomes architecturally embedded. If twenty funds are running similar AI-assisted due diligence that surfaces similar signals and applies similar frameworks, the result is not better price discovery. It is faster convergence on the same set of opportunities, driving up valuations on the consensus plays while systematically under-exploring the edges.
There is a deeper structural issue here that echoes arguments Friedrich Hayek made about planned economies in the 1940s. Hayek's critique of central planning was not merely that planners were less intelligent than markets; it was that markets aggregated a fundamentally different kind of knowledge. The price system encoded local, often tacit knowledge, what he called the "particular circumstances of time and place", which no central authority could ever fully collect or process. The factory manager in Lyon who knows his suppliers are struggling, the grower in Almería who senses a bad harvest coming: this dispersed, granular, often inarticulable knowledge is what markets process continuously through price signals. If conviction-building across the industry converges on a handful of centrally trained models, it is precisely this kind of knowledge that risks being left out.
Fostering creative destruction in the AI era
The venture ecosystem has always been the financial arm of creative destruction, Schumpeter's process by which new entrants disrupt incumbents and reallocate capital to more productive uses. This process is inherently inefficient and wasteful. Most startups fail. Value accrues to a small number of outliers willing to execute on unpopular convictions long enough for the market to recognise their legitimacy.
AI systems, however sophisticated, are backward-looking by construction. They compress history into parameters. The investor who outsources thesis-formation entirely to AI-assisted research is essentially asking the past to describe the future.
Could AI someday enable optimal capital allocation without the messy mechanism of creative destruction? That question is well above my pay grade. I would argue that humans will continue to collectively outperform AI in capital allocation for a long time. Is there even a way to test that?
AI is poised to significantly improve many aspects of capital allocation; we are already seeing the results. But it also risks embedding a centripetal force in our conviction-building processes, one that requires an intentional centrifugal counterbalance. There is a meaningful difference between using AI as an analytical accelerator and allowing it to reshape the epistemic culture of the investment decision. The former is leverage. The latter is abdication.
It is up to VCs to provide that counterbalance, whether for the sake of creative destruction or for the sake of their own alpha.
If you’re building the next generation of European tech companies, we’d love to hear from you here.