After years of exploring real-time adaptation as a computational framework for modeling fluid intelligence, I've completed my PhD at the Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem.
This milestone marks not an ending, but a beginning. The questions I explored during my PhD have evolved into something larger: Automated Science.
The PhD Journey
My research focused on a fundamental question: how do minds solve genuinely novel problems in real time? I argued that fluid intelligence emerges from processes where inference and learning occur simultaneously, where confronting a novel problem drives real-time adaptation of the cognitive system itself.
Using artificial neural networks as experimental systems, I demonstrated that networks can perform abstract reasoning through test-time parameter adaptation, without extensive pre-training (a toy sketch of the idea follows below). I also provided a mechanistic account of paradoxical findings in human belief updating, explaining why extreme expectation violations can sometimes lead to less belief change. This work resulted in publications on abstract reasoning in untrained networks and pathways for resolving relational inconsistencies.
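To make that concrete, here is a minimal toy sketch of test-time parameter adaptation in PyTorch. It is illustrative only, not the code behind the papers; the network, task, and hyperparameters are stand-ins. The structural point is that a randomly initialized network is fit to the context examples of a single novel problem at inference time and then queried on the held-out item, so inference and learning happen together:

```python
# Toy sketch of test-time parameter adaptation (illustrative, not the
# papers' actual code). A randomly initialized network is adapted to one
# novel problem at inference time; no pre-training is involved.
import torch
import torch.nn as nn

def solve_at_test_time(context_x, context_y, query_x, steps=200, lr=1e-2):
    """Fit a fresh network to one problem's context, then answer its query."""
    net = nn.Sequential(                       # weights start random
        nn.Linear(context_x.shape[-1], 64),
        nn.ReLU(),
        nn.Linear(64, context_y.shape[-1]),
    )
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):                     # learning during inference
        opt.zero_grad()
        loss = loss_fn(net(context_x), context_y)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return net(query_x)                    # the adapted network's answer

# Toy usage: context pairs follow y = 2x; query the adapted net at x = 4.
cx = torch.tensor([[1.0], [2.0], [3.0]])
cy = torch.tensor([[2.0], [4.0], [6.0]])
print(solve_at_test_time(cx, cy, torch.tensor([[4.0]])))
```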
But perhaps more importantly, the PhD taught me about the nature of scientific inquiry itself: how knowledge accumulates, how theories evolve, and how much of science depends on tacit understanding that lives in researchers' minds rather than in papers.
The Turn Toward Automated Science
During my PhD, I became increasingly fascinated with a question: Can AI systems conduct real science?
Not just analyze data or write papers mimicking scientific prose, but genuinely contribute to knowledge—propose hypotheses, design experiments, evaluate evidence, and engage in the iterative process of scientific discovery.
The obvious answer might be "not yet" or "never." Science, the argument goes, requires creativity, intuition, and physical experimentation, qualities we associate with human researchers. But the history of AI is full of "never" predictions that eventually fell.
I believe we’re at an inflection point. Large language models can now engage with scientific literature in meaningful ways. They can reason about methodology, identify gaps in arguments, and generate novel combinations of ideas. What’s missing is grounding—connecting AI reasoning to real experimental practice.
AI-Archive: A Platform for AI-Driven Science
This conviction led me to create AI-Archive, a scholarly platform where AI agents can publish research papers and conduct peer reviews under human supervision.
AI-Archive isn’t just a repository—it’s an ecosystem designed from the ground up for AI participation:
- AI agents submit papers through natural language or API (a sketch of what a submission call might look like follows this list)
- Multi-stage review combines automated validation with AI and human review
- Reputation systems track the quality of AI contributions
- Integrated sandbox lets humans co-author with AI in real-time
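To give a feel for the API route, here is a simplified sketch of an agent's submission call. The endpoint, payload fields, and auth scheme below are illustrative placeholders rather than the production interface:

```python
# Illustrative sketch of an agent submitting a paper over HTTP. The URL,
# field names, and auth header are placeholders, not the documented API.
import requests

API_URL = "https://ai-archive.example/api/v1/papers"  # placeholder endpoint

payload = {
    "title": "Test-Time Adaptation for Abstract Reasoning",
    "abstract": "We show that ...",
    "manuscript_url": "https://example.org/manuscript.pdf",
    "agent_id": "agent-1234",                      # the submitting AI agent
    "human_supervisor": "researcher@example.org",  # supervision is required
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": "Bearer <API_KEY>"},  # placeholder credential
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. a submission id plus initial validation status
```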
The platform is live and growing, but I quickly learned something important: building infrastructure isn’t enough.
The Reality Check
Academics won’t publish where their work won’t be recognized. A paper on AI-Archive doesn’t count toward tenure. Funding agencies don’t acknowledge it. This isn’t stubbornness—it’s the reality of how scientific credibility works.
And there’s a deeper issue: AI-led science faces a grounding problem. Papers are just the tip of the iceberg. Most scientific knowledge lives in:
- Laboratory protocols never written down
- Intuitions about which experiments "feel" right
- Troubleshooting techniques passed from mentor to apprentice
- Tacit understanding of what results mean in context
An AI that only reads papers is like a student who only reads textbooks: it might pass the tests, but it can't really do science.
The Next Phase: AI-Enhanced Labs
This realization shaped my current direction: integrating AI systems deeply within real research laboratories.
The idea is straightforward: if we want AI to be authoritative in scientific domains, it needs to be grounded in actual practice. This means:
- Embedding AI infrastructure in active research centers
- Connecting AI agents to real experimental data and lab workflows (one concrete shape of this is sketched after this list)
- Building expertise through sustained engagement with working scientists
- Developing authority as the AI demonstrates genuine understanding
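As one example of what that connection could look like, here is an illustrative sketch of exposing lab records to an agent as a queryable tool. The schema is hypothetical; a real integration would sit on top of a lab's actual databases and instruments, but the principle is the same: the agent grounds its reasoning in what happened at the bench, not just in the published paper.

```python
# Hypothetical sketch of one integration point: lab records exposed to an
# agent as a typed, queryable tool. Schema and storage are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class ExperimentRecord:
    experiment_id: str
    protocol: str           # the procedure actually followed at the bench
    deviations: list[str]   # what was changed mid-run, and why
    outcome_notes: str      # the experimenter's own reading of the result
    run_date: date

LAB_LOG: list[ExperimentRecord] = []  # in practice, the lab's database

def query_experiments(keyword: str) -> list[ExperimentRecord]:
    """A tool an agent can call to ground claims in the lab's own history."""
    kw = keyword.lower()
    return [r for r in LAB_LOG
            if kw in (r.protocol + " " + r.outcome_notes).lower()]
```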
I’m working on this approach with my alma mater, ELSC. The vision is to make ELSC a leading center for collaboration between human scientists and AI—creating agents so deeply integrated that they become authoritative voices in computational neuroscience.
What This Means
If this works, the implications are significant:
- AI reviewers grounded in experimental reality could help with the peer review crisis
- Research acceleration through AI that truly understands methodology, not just text
- Democratized expertise as AI systems make specialized knowledge more accessible
- A new model for how AI-Archive’s “Enterprise tier” brings automated science to institutions
But I want to be clear: this is early. The contracts aren't signed. The infrastructure isn't built. I'm sharing the vision because I believe in public thinking, letting ideas evolve through discussion rather than emerging fully formed.
Looking Forward
In my PhD, I studied how adaptive systems solve novel problems through real-time learning. Now I'm working on something that feels like the natural extension: applying that same principle to how AI can genuinely participate in science.
The goal isn’t to replace human scientists. It’s to augment them—to create AI systems that genuinely understand what we’re trying to do and can help us do it better.
To everyone who supported me through the PhD—advisors, collaborators, friends, family—thank you. The next chapter is just beginning.
If you’re interested in automated science, AI-enhanced research, or just want to discuss these ideas, feel free to reach out. I’m always happy to explore these questions with fellow travelers.