TR01: From Institutions to Networks
The convergence of artificial intelligence and decentralized systems will change how science is conducted, shifting research from traditional academic institutions to distributed networks of researchers. AI's capabilities in data analysis and hypothesis generation, combined with crypto-enabled coordination mechanisms, directly address the core problems plaguing modern science: risk aversion, centralization, misaligned incentives, and data interpretability limitations. This analysis examines how this technological convergence could reshape scientific discovery while acknowledging that careful design of decentralized research organizations will be crucial for realizing benefits while mitigating risks.
Introduction
Operating at the intersection of disparate disciplines provides unique insights into convergences that few people see. My time in basic science, translational research, and crypto has led to a core conviction: the convergence of AI and crypto will catalyze the deinstitutionalization of science. This thesis forms the foundation of this essay.
Of all possible futures ahead for humanity, a few are becoming increasingly probable. Those at the poles tend to dominate conversation. While it's easy to imagine technological progress going wrong, I'm particularly interested in understanding and optimizing how it can go right. In his essay "Machines of Loving Grace," Dario Amodei outlines the bull case for a techno-optimist utopia enabled by powerful AI disrupting science. While this represents just one low-probability scenario, I adopt it as my default: it is the world we should work to make more probable.
The world is a chaotic system. When analyzing such systems, the goal is identifying critical variables that disproportionately shape outcomes while acknowledging inherent uncertainty. Three convergent forces act as critical variables in shaping humanity's near-term future: science, artificial intelligence, and decentralized systems. While their independent effects might be understood by many, their interactions are nonobvious and will have a disproportionate effect on our world. Specifically, I'm interested in the novel, emergent properties of the science-AI-crypto convergence that will reshape the institution of science itself.
Framework
This analysis is structured in two parts. First, I describe my vision for how crypto and AI will transform science in "The Future of Scientific Discovery." Then, I examine how we arrived at the current moment and what has informed this worldview in "The Problem Space." While both sections are valuable for understanding the full context, the first section alone provides a clear view of how AI and crypto will lead to the deinstitutionalization of science.
The Future of Scientific Discovery
Three key forces will reshape scientific discovery:
1. The acceleration of scientific discovery through AI-powered research and decentralized collaboration
Scientific progress is currently rate-limited by bureaucracy rather than physics. Most researchers are dissatisfied with the amount of time they spend on administration and attracting funding relative to performing science, and they are leaving academia in droves. While researchers have clear pathways for funding through institutions, these same institutions have become bottlenecks. These bottlenecks dissolve in a world with powerful AI - defined by Amodei as systems that:
- Exceed Nobel Laureate-level intelligence across multiple fields
- Interface with all virtual tools and are multimodal
- Execute long-duration tasks (days/weeks)
- Process information 10-100x faster than humans
- Spin up arbitrarily many copies/agents, constrained only by compute resources
The implications are hard to fathom. The current model of grant applications, research execution, and publication becomes obsolete when faced with this level of intelligence. The geo-restrictive, centralized model of universities - logical before the internet and globalization - functionally breaks down when this intelligence is accessible from any laptop. Instead, we'll see independent scientists and loose affiliations forming via the internet. In this new world, anyone with compute resources and natural language skills can compete in science. Virtual spaces become primary research environments, with internet collaboration replacing most in-person work. We'll progress beyond requiring human-legible understanding of fields like biology - instead simulating biology, chemistry, and physics at sophisticated levels through automated, integrated laboratories.
2. The deinstitutionalization of science through decentralized systems
Large research institutions will struggle to maintain relevance. While resource concentration provides advantages to those who can adapt quickly, universities' bureaucratic nature makes them unlikely to compete with lean, low-overhead organizations. The smartest large organizations may survive by leveraging endowments to acquire compute and attract top scientists. But most researchers will find greater autonomy outside academia. Universities have historically depended on cheap labor (yes, postdocs). In this new world, the academic underclass must become self-reliant as surviving universities discover an even cheaper source - intelligent computers.
Blockchains enable new organizational models. Decentralized systems operate via networks of actors contributing resources for rewards or governance. In proof-of-work systems, that contribution is compute. In proof-of-stake systems, it's currency or tokens. In DAOs, it's often capital or time exchanged for governance tokens. These models enable resource access through simple transactions with minimal overhead.
3. The emergence of new organizational and funding models
Decentralized AI training could democratize large-scale compute, allowing anyone with good ideas to access substantial computing power regardless of institutional ties. This transformation will be driven by networks where individuals and organizations contribute computing resources for rewards like tokens, funding, ownership stakes, and compute access. The world's most powerful clusters may soon be decentralized and publicly accessible.
In this model of decentralized science (a minimal code sketch follows this list):
- Individuals pool resources (compute, capital, talent, data, software, IP rights) via on-chain organizations for ownership and governance
- These organizations vote on resource allocation
- Scientists apply directly for funding
- Organizations receive returns (data, status, ownership, financial upside, results)
- Scientists access resources - compute, virtual lab space, agents, data
- Scientists oversee multiple agents conducting research
- Agents handle everything from hypothesis generation to experimental execution with human oversight
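To make these mechanics concrete, below is a minimal, hypothetical Python sketch of the flow described above. Every name (ResearchDAO, Proposal) and the naive one-governance-token-per-unit contribution rule are invented for illustration; a real implementation would live in smart contracts with far more careful tokenomics.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """A scientist's direct funding request to the DAO."""
    scientist: str
    title: str
    requested_compute: int     # e.g. GPU-hours
    requested_capital: float   # e.g. stablecoin budget
    votes_for: float = 0.0
    votes_against: float = 0.0

class ResearchDAO:
    """Toy model: pooled resources, token-weighted votes, direct funding."""

    def __init__(self):
        self.compute_pool = 0        # contributed GPU-hours
        self.capital_pool = 0.0      # contributed capital
        self.governance = {}         # member -> governance tokens

    def contribute(self, member, compute=0, capital=0.0):
        # Contributions mint governance tokens (here, naively 1:1 with value).
        self.compute_pool += compute
        self.capital_pool += capital
        self.governance[member] = self.governance.get(member, 0) + compute + capital

    def vote(self, member, proposal, support):
        # Votes are weighted by governance tokens held.
        weight = self.governance.get(member, 0)
        if support:
            proposal.votes_for += weight
        else:
            proposal.votes_against += weight

    def execute(self, proposal):
        # Fund the scientist directly if the vote passes and the pools cover it.
        passed = proposal.votes_for > proposal.votes_against
        affordable = (proposal.requested_compute <= self.compute_pool
                      and proposal.requested_capital <= self.capital_pool)
        if passed and affordable:
            self.compute_pool -= proposal.requested_compute
            self.capital_pool -= proposal.requested_capital
        return passed and affordable
```

The returns flowing back to the organization (data, status, ownership, results) and the agent-oversight layer are omitted here; the point is how little overhead sits between pooled resources and a funded scientist.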
This trend toward radical deinstitutionalization parallels what's happening with financial institutions through cryptocurrencies. Middlemen are being disintermediated as control shifts hands. We're witnessing a major wealth transfer as people lose faith in centralized powers and tire of their overhead.
The Problem Space
So how did we get here, and what exactly is the system described above disrupting? Science and the institution of science are distinct entities that often have less in common than assumed. "Science" refers to the scientific method and the process of reducing uncertainty. The "institution of science" encompasses the organizations, practices, and structures meant to enable the scientific method. Currently, scientists within these institutions face enormous barriers to actually doing science.
Despite record funding levels and advanced technology, structural challenges make conducting research cumbersome. The institutionalization of science has created massive downsides, familiar to any academic who spends more time competing for grants and churning out papers than conducting actual research. This institutionalization has also led society to place scientists on a pedestal, creating the misconception that science is only for scientists. As Imran Khan, Chief Executive of the British Science Association, argues, science is "too important to just be left to scientists alone."
Current Institutional Limitations
The scientific establishment faces three critical constraints:
1. Centralization and Access
- Research confined to established institutions with high entry barriers
- Funding concentrated in traditional grants and venture capital
- Critical data and tools siloed within institutions
- Talent restricted by geography and institutional boundaries
- Knowledge dissemination limited by expensive journal paywalls
- Interdisciplinary collaboration hindered by departmental silos
- Quality mentorship and training opportunities concentrated in few locations
2. Incentive Misalignment
- "Publish or perish" culture prioritizing quantity over quality
- University pressure to patent before disclosure
- Tenure systems rewarding individual achievements over collaboration
- Grant funding favoring incremental, "safe" research over bold exploration
- Limited rewards for reproducing or validating existing work
- Publication bias toward positive results
- Career advancement depending more on metrics than research impact
3. Efficiency Bottlenecks
- Manual, time-intensive experimental processes
- Limited capacity to process and synthesize vast research data
- Slow peer review and publication cycles
- Poor coordination of large-scale collaborative efforts
- Ineffective knowledge and protocol sharing across labs
- Redundancy from unpublished negative results
- Data generation outpacing our ability to extract meaningful insights
The Modern Academic's Dilemma
Consider Dr. Smith, a senior PI overseeing a small laboratory with three postdocs. Dr. Smith depends on university and external funding (like NIH) to operate. His lab relies on postdocs as cheap labor for experiments. To attract funding and postdocs, the laboratory must be productive and impactful - measured by publication in high-impact journals like Nature and Cell.
This creates a cycle:
- Publish in high-impact journals
- Use that prestige to attract postdocs and win grants
- Use the grants to pay postdocs, who produce the next publications
- Repeat
The incentives are perverse. Dr. Smith focuses on attracting money and talent rather than conducting groundbreaking science. His postdocs, trained for 10+ years with low pay and no job security, face even worse pressures. Their chances of securing a tenure-track position (8% success rate) depend on publishing in prestigious journals. This drives practices like p-hacking, selective publication of confirming results, and occasionally, data fabrication. Those who don't play this game might not have jobs next year.
All of this distracts from doing good science. It has also produced a replication crisis, leaving a significant amount of our scientific knowledge built on a shaky foundation. This is a story unfolding in almost every academic laboratory in the world.
A Brief History of Centralization in Science
How did it get so bad? It starts with the centralization and institutionalization of research.
The centralization of science in research institutes wasn’t an accident - it was a rational response to the material and social constraints of the industrial era. Understanding this is key to understanding how the system must change in response to the current era.
In early modernity, science emerged from three distinct traditions - the individual natural philosopher (think Galileo), the Royal Society model of gentleman scientists, and the university system. The massive shift towards institutionalization in the 19th and 20th centuries was driven by several factors:
Economics of Physical Infrastructure
- Modern science required increasingly expensive equipment
- Shared facilities enabled economies of scale
- Concentrated funding could support long-term research programs
- Specialized environments (clean rooms, particle accelerators) needed dedicated spaces
Knowledge Density Benefits
- Physical proximity enabled rapid information exchange
- Informal conversations sparked new collaborations
- Mentorship happened organically
- Tacit knowledge transferred through apprenticeship
Quality Control Mechanisms
- Institutional reputation created accountability
- Peer review could happen in real-time
- Standards could be maintained and enforced
- Experimental protocols could be directly verified
The period from 1940 to 1970 represented the ideal form of centralized science, producing the Manhattan Project, Bell Labs, and an unprecedented run of technological advances. But this came with high costs. It resulted in cultural narrowing and a standardization of career paths. It fostered institutional conformity that discouraged radical thinking. It built prestigious institutions that became self-reinforcing gatekeepers. Downstream, these led to resource misallocation and innovation bottlenecks.
Today’s scientific enterprise restricts the flow of ideas, talent, and capital for the benefit of the institution. What started as an attempt to concentrate resources and enable collaboration in a capex-intensive world has evolved into a gatekeeping apparatus that often optimizes for self-preservation at the cost of innovation. Centralization is not just about physical resources or funding, but about who gets to participate in the scientific conversation. We do not live in a 1940s-era world. The reality today is profoundly different, and we need to update our priors.
Resources are not distributed based on merit or potential. The majority are concentrated in a few geographic niches that reward those in proximity. While this model fit the pre-globalized world well, it fails us today.
Show Me the Incentive and I’ll Show You... the Publication
Perverse incentives abound: the "publish or perish" paradigm of academic science has created a system where career advancement depends more on publication metrics than on research quality or impact. Researchers are leaving academia en masse, tired of spending an inordinate amount of time competing for grants and churning out papers - for little job security and pay - rather than conducting actual research.
This dynamic is exacerbated by a grant funding system that creates its own fatalistic cycle. Researchers spend countless hours writing and rewriting grants with an 80% rejection rate, while conservative review panels favor safe, incremental work over potentially transformative projects. The system creates a cruel catch-22: researchers need funding to generate preliminary data but need preliminary data to secure funding. Meanwhile, short grant cycles discourage the kind of long-term, systematic research programs that often yield breakthrough discoveries. They also fail to reward risk. The median age of first-time recipients of R01 grants, the most common and sought-after form of NIH funding, is 42, while the median age of all recipients is 52. More people over 65 are funded with research grants than those under 35. Young people are at a disadvantage, yet historically many of the most important scientific contributions were made by people in their 20s and 30s.
The institutional architecture of science has basically calcified. What began as a rational response to the material constraints of the industrial era has evolved into an entrenched system that concentrates resources in elite institutions, creates high barriers to entry, and optimizes for self-preservation rather than innovation. This has profound consequences for humanity: in drug development, over 50% of preclinical research cannot be reproduced, negative results rarely see the light of day, and critical research outputs remain locked behind paywalls and proprietary databases. The path from discovery to real-world impact has become a treacherous "valley of death," where promising research often dies before reaching practical application.
The Data Crisis
The exponential growth in scientific data generation has created an unexpected bottleneck: while we can produce more data than ever, our capacity to extract meaningful insights lags behind. Modern tools like single-cell sequencing and high-throughput screening generate vast datasets, but as noted in "A Future History of Biomedical Progress," we face "a dearth of clear, unambiguous data that isolates a biological effect of interest from the other 10,000 confounding things that are going on."
This challenge manifests in the increasingly complex task of integration across different scales and modalities. Researchers must synthesize evidence across genomics, proteomics, and imaging while considering molecular, cellular, and organismal scales. As Shelby Newsad notes, this creates an acute problem in biology, where "the most valuable data is the most scarce, most expensive, and can't significantly contribute to developing better drugs by a startup (given the 3-7 years to get to clinic)."
The fundamental constraint lies in our insistence on human-legible, mechanistic understandings of biology. While reducing complex systems into conceptual primitives has yielded important insights, this approach may be fundamentally misaligned with biology's inherent complexity. Scientists can only process a tiny fraction of available literature, while increasingly complex systems defy intuitive understanding.
Risks
While the first part of this essay presented an optimistic view of how AI and decentralized systems could transform science, we must examine potential risks and failure modes.
Accelerated Knowledge Asymmetries
The democratization narrative surrounding AI and crypto contains a contradiction. These technologies create new hierarchies based on computational capability and algorithmic sophistication. Early access to advanced AI systems capable of processing vast scientific data could create knowledge acceleration loops that rapidly outpace traditional research methodologies.
This acceleration creates qualitatively different gaps than we've seen before. When breakthroughs occur at machine speed rather than human speed, the distance between those at the cutting edge and the rest of the scientific community becomes like the difference between walking and breaking the sound barrier.
Incentive Cascade Failures
Crypto-economic systems promise precise, programmable incentive structures, but this precision brings its own risks. Today's crypto landscape is plagued with zero-sum tokenomics. Complex systems often exhibit unexpected behaviors at extremes, and incentive systems are no exception. We risk creating "scientific dark forests"—environments where researchers optimize for obscuring rather than sharing discoveries.
Consider a decentralized research organization (DAO) discovering a promising quantum computing breakthrough. Their incentive structure, designed to reward discoveries, might discourage sharing crucial intermediate findings. Competition between DAOs could evolve into scientific arms races, where maintaining competitive advantages overwhelms collaborative scientific progress.
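A toy expected-value calculation shows why this failure mode is plausible. All payoff numbers below are invented for illustration; the only structural assumption is that the final-discovery reward dwarfs the credit for shared intermediate findings.

```python
# Hypothetical payoffs for a discovery-rewarding DAO. Sharing an intermediate
# finding earns some credit but lets rivals catch up; hoarding preserves a
# head start. All numbers are invented for illustration.

def expected_payoff(share,
                    prize=100.0,            # reward for the final discovery
                    credit=10.0,            # reward for a shared intermediate
                    p_win_if_shared=0.3,    # rivals close the gap
                    p_win_if_hoarded=0.6):  # head start preserved
    if share:
        return credit + p_win_if_shared * prize
    return p_win_if_hoarded * prize

print(expected_payoff(share=True))   # 40.0
print(expected_payoff(share=False))  # 60.0 -> hoarding dominates
```

Under these made-up parameters, hoarding strictly dominates. The dark forest emerges whenever intermediate credit is too small relative to the final prize - which is exactly the parameter that incentive designers must control.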
Quality Control Crisis
Displacing traditional peer review creates a vacuum that technology alone cannot fill. While blockchain provides transparency and immutability, it cannot inherently guarantee scientific validity. This challenge becomes acute with AI-generated research—imagine an AI system generating thousands of plausible hypotheses per second, each supported by sophisticated simulations. The volume and complexity would overwhelm traditional verification approaches.
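A back-of-envelope calculation, using assumed round numbers, makes the scale of the mismatch explicit:

```python
# Assumed round numbers: an AI system emitting hypotheses continuously vs.
# traditional human peer-review capacity.

hypotheses_per_second = 1_000        # assumed AI generation rate
review_hours_each = 2                # assumed human review time per hypothesis
reviewer_hours_per_year = 2_000      # roughly one full-time reviewer

generated_per_year = hypotheses_per_second * 60 * 60 * 24 * 365
reviewers_needed = generated_per_year * review_hours_each / reviewer_hours_per_year

print(f"{generated_per_year:.2e} hypotheses/year")   # ~3.15e10
print(f"{reviewers_needed:.2e} reviewers needed")    # ~3.15e7
```

Roughly thirty million full-time reviewers - several times the world's entire research workforce - would be needed to keep pace with a single such system, so validation itself must become automated and distributed.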
Existential Considerations
The convergence of AI and decentralized systems alters our relationship with knowledge creation. A world where AI systems autonomously explore physics, biology, and chemistry at superhuman speeds makes accidental discovery of dangerous knowledge probable. Technologies always carry this dualism—fire enabled cooking and weapons.
Technical development could outpace our ability to develop ethical frameworks and safety protocols. We might acquire knowledge that we're not mature enough as a species to handle. Science democratization could lead to uncontrolled proliferation of dual-use technologies with devastating potential.
Technical Vulnerabilities
The infrastructure supporting this new scientific paradigm introduces its own fragilities:
- Dependence on massive computational resources creates vulnerable chokepoints
- Semiconductor supply chain disruptions could paralyze AI-driven research
- Quantum computing breakthroughs could challenge current cryptographic systems
Mitigation Strategies
Rather than viewing these risks as insurmountable, we should treat them as design constraints for building robust systems. Solutions might include (the second is sketched after this list):
- Creating tiered access systems for sensitive discoveries
- Establishing distributed validation networks that scale with discovery speed
- Building cross-DAO coordination mechanisms to prevent destructive competition
- Developing adaptive frameworks that evolve alongside technological capabilities
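As one example of the second item, here is a minimal sketch, with all parameters assumed, of a distributed validation network: each claim is routed to several independent validators who stake reputation on their verdict, and acceptance requires a stake-weighted supermajority.

```python
import random

def validate_claim(validators, k=5, threshold=0.67):
    """Stake-weighted validation sketch (all parameters assumed).

    validators: dict mapping name -> (reputation_stake, verdict_fn), where
    verdict_fn independently re-runs the analysis and returns True/False.
    """
    k = min(k, len(validators))
    sample = random.sample(list(validators.items()), k)  # independent reviewers
    stake_for, stake_total = 0.0, 0.0
    for _name, (stake, verdict_fn) in sample:
        stake_total += stake
        if verdict_fn():
            stake_for += stake
    return stake_total > 0 and stake_for / stake_total >= threshold
```

Because validation is sampled rather than exhaustive, capacity scales by adding validators instead of lengthening queues; staked reputation (slashed on bad verdicts in a fuller design) stands in for institutional accountability.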
The future environment for conducting science needs to balance transformative technological potential with centuries of accumulated scientific wisdom. Our challenge is harnessing acceleration without crashing and burning.
Summary and Call to Action
The institution of science faces three fundamental problems—centralization, broken incentives, and data interpretability. The convergence of cryptographic systems and AI directly addresses each of these challenges.
Decentralized systems offer an alternative to institutional centralization, enabling global collaboration through internet-native networks that may ultimately replace universities. These systems can distribute validation work across a worldwide network of researchers, incentivized through tokens and supported by community-driven quality control. This, combined with open access to protocols and data, improves how we coordinate large-scale scientific efforts. Crypto provides a sandbox for designing new incentive structures from scratch, rewarding behaviors that optimize for the best possible outcomes. But crypto is an agnostic technology; realizing these benefits requires careful design.
AI directly addresses the data bottleneck and interpretability challenge. If the scaling hypothesis holds and we achieve AGI in the next decade, these systems will likely integrate our total body of scientific knowledge and generate novel discoveries. We will no longer depend on human-legible, mechanistic understanding to make scientific progress. While this may seem counterintuitive or concerning, it's necessary to compress decades of progress into a shorter timeframe.
While this vision is compelling, its realization requires concrete action across multiple fronts. In the near term (2024-2026), we must focus on three critical areas:
First, infrastructure development. We need to establish decentralized compute networks with scientific tooling, create open-source frameworks for scientific DAOs, and build standardized protocols for data sharing and verification. This technological foundation will enable the transition from traditional to decentralized research environments.
Second, community building. The formation of pilot research DAOs in specific scientific domains will demonstrate the viability of these new models. Bridge programs between traditional institutions and decentralized networks will facilitate knowledge transfer and gradual transition. Working groups focused on scientific quality standards will ensure rigor in this new paradigm.
Third, incentive design. We must launch experimental token models for scientific contribution, create robust reputation systems for peer review, and develop frameworks for managing IP rights in decentralized contexts. These systems will align individual motivations with collective scientific progress.
Success requires coordinated effort across multiple stakeholders—researchers, institutions, technologists, and regulators. Early pilots should focus on areas where decentralized approaches offer clear advantages, gradually expanding as systems prove their effectiveness.
The future of science lies in empowering a global community of natural philosophers, driven by wonder and enabled by technology, to tackle humanity's greatest challenges. The tools are emerging. The understanding is growing.
The deinstitutionalization of science is inevitable. The question is whether we'll shape this transformation thoughtfully or let it happen haphazardly. I’m betting on the former.
Author Bio:
Tyler has spent the better part of the past decade trying to make the process of scientific discovery and translation more efficient. His work as co-founder of Molecule, BIO, and VitaDAO (see Decentralized Science, DeSci) tackled the problem of inefficiency and adverse outcomes in translational science through the lens of incentives. His current work with Triplicate expands that lens to include identifying technologies that will shape the future of scientific discovery. Prior to all of this, his experiences at Columbia and the NIH dramatically influenced his views about the efficiency of scientific institutions.