AI Solution Architect & Fractional CTO; Life Sciences
Global Head of AI/GenAI Practice · 20/15 Visioneers
Founder · HitchhikersAI
I'm an AI solution architect and entrepreneur enabling scientific R&D organisations to move from AI‑curious to AI‑led — starting with life sciences, where I've spent the last 20 years building depth in the industry's priorities, risks, and resistance to change. I work through practical software, open‑source in‑silico workflows, and the organisational change needed to make it stick.
Most agent frameworks — LangChain/LangGraph, LlamaIndex, AutoGen, CrewAI — are general-purpose toolkits. They give you primitives and stay neutral about the operating model, expecting you to compose them into whatever shape your application needs. OpenClaw is the opposite: narrow, opinionated, and purpose-built for one operating model — long-running, scheduled, parallel-thread research agents that gather information continuously, hold session state across runs, and produce reasoned output a scientist reviews. Every design choice — the scheduler, the workspace isolation, the instruction-file-driven behavior, the ReAct loop tuned for tool-heavy retrieval — falls out of that single operating model. For scientific R&D workflows, specialization beats generality. A general framework adapted to this shape carries permanent engineering tax: defaults that don't fit, abstractions that leak, hooks for use cases you don't have. OpenClaw doesn't need bending because the shape is already correct.
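To make the operating model concrete, here is a minimal sketch of its shape — a scheduled run that reads behaviour from an instruction file, works inside an isolated workspace, persists session state across runs, and executes a small ReAct-style loop (reason, act via a tool, observe). Every name here (`ResearchAgent`, `INSTRUCTIONS.md`, `session_state.json`, the `search` tool) is illustrative, not OpenClaw's actual API; the LLM reasoning step is stubbed.

```python
import json
from pathlib import Path


class ResearchAgent:
    """Illustrative sketch of one scheduled, stateful research-agent run.

    Assumptions (not OpenClaw's real interface): behaviour is driven by an
    instruction file in the workspace, session state survives between runs
    as JSON, and each run is a short ReAct-style loop.
    """

    def __init__(self, workspace: Path, tools: dict):
        self.workspace = workspace                      # isolated per-agent directory
        self.tools = tools                              # tool name -> callable
        self.state_file = workspace / "session_state.json"

    def load_state(self) -> dict:
        # Session state carries over from previous scheduled runs.
        if self.state_file.exists():
            return json.loads(self.state_file.read_text())
        return {"runs": 0, "findings": []}

    def run(self, max_steps: int = 3) -> dict:
        instructions = (self.workspace / "INSTRUCTIONS.md").read_text()
        state = self.load_state()
        state["runs"] += 1
        for step in range(max_steps):
            # Reason: a real agent would ask an LLM which tool to call
            # next, given the instructions and accumulated state.
            thought = f"run {state['runs']} step {step}: {instructions.strip()}"
            observation = self.tools["search"](thought)  # Act
            state["findings"].append(observation)        # Observe
        self.state_file.write_text(json.dumps(state))    # Persist across runs
        return state
```

A scheduler would construct one `ResearchAgent` per workspace and invoke `run()` on each tick; because state is persisted in the workspace, parallel agent threads stay isolated from one another by construction.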
My commitment to OpenClaw is deliberate, grounded in two decades of direct experience inside scientific R&D — spanning experimental design, data quality, regulatory considerations, and the recurring causes of pilot failure — combined with the work of architecting a full secure stack with OpenClaw at its core. That combination has shaped where to build directly on OpenClaw’s operating model, where its defaults require adjustment, and how the security, orchestration, and observability layers around it should be designed. ScienceClaw is the first production deployment of this stack. ALS hypothesis generation, compound deep-dives for drug discovery, life science market intelligence, and scientific research assistants are now under active delivery on the same stack, with each engagement informing further refinement across the runtime and the surrounding architecture.
OpenClaw is now stewarded by a non-profit foundation and updated continuously, so the runtime hardens in the open while engagement time goes to research logic, not runtime plumbing. The stack runs on open-source LLMs and Python packages with secure orchestration interfaces — deployable on macOS, DGX OS, and Ubuntu; on-premise, air-gapped, or cloud; configurable to commercial LLMs where policy allows; and observable end-to-end.
I'm an AI Solution Architect with a deep-rooted foundation in systems engineering — trained to understand how complex pieces fit together, where the friction is, what the boundaries between components should be, and what it takes to make something work reliably at scale. That instinct shapes everything I build.
What excites me about this moment is the raw momentum that AI brings. The speed at which ideas can become working software has fundamentally changed. But momentum without structure produces fragile systems, hallucinated outputs, and untestable code.
AI has made systems engineering more critical, not less. When AI writes the code, the architect is still accountable. The value a human brings is no longer in the implementation — it is in requirements elicitation, defining module boundaries, and deciding what the system should actually do. Those are systems engineering decisions, and no AI agent makes them reliably without that discipline enforcing the structure.
I particularly enjoy building software appliances for R&D scientists — purpose-built tools that run on OpenClaw to deliver autonomous, reliable research capabilities. ScienceClaw is the embodiment of this: an autonomous research platform that scientists can actually trust, built on rigorous engineering principles.
That's the thread running through all my work — from teaching scientists to vibe‑code responsibly, to building test‑first workflows that keep AI agents honest, to deploying autonomous research platforms. Two decades inside scientific R&D — understanding how experiments are designed, where data breaks down, what regulators care about, and why adoption stalls — means I build for the constraints that actually exist, not the ones that look good in a pitch deck.
A non‑profit grass‑roots community accelerating the adoption of AI/ML and data in scientific R&D — starting with drug discovery & development. Members include bench scientists, data scientists, mathematicians, business owners, executives, and academics — all focused on fixing the disconnect between AI/ML/GenAI and its practical application in the lab.
Regular column in Drug Target Review exploring the real‑world application of AI, ML, and generative AI in drug discovery — cutting through the hype to examine what actually works, what doesn't, and what the industry needs to do differently.
A curated tracker covering AI scientists, autonomous discovery systems, and infrastructure across pharma and biotech — from funding rounds and platform launches to partnerships and regulatory developments. Searchable and filterable by category. Updated weekly.
I’ve spent 20 years moving between technical, commercial, and leadership roles across life sciences, semiconductors, and data infrastructure. That range matters — because the AI adoption challenge in scientific R&D isn’t purely technical. It sits at the intersection of engineering discipline, domain expertise, and the ability to navigate enterprise-scale organisations.
Partnered with John Conway (Founder & Chief Visioneer) to address AI adoption as the interconnected challenge it actually is — combining AI engineering, LLM agents, and in-silico software with organisational change, culture transformation, and FAIR data strategy.
Founded and led a company building computational platforms for drug discovery. The core product — ALaSCA — applies Pearlian causal inference to multi-omics data. Four bioRxiv preprints: DDR resistance in cancer (2024 ↗), pathway simulation in Type 1 Diabetes (2023 ↗), causal inference in Alzheimer’s (2022 ↗), and ML target prioritisation in aging (2022 ↗).
Led market development and product strategy across epigenetics, microbiology, multi-omics, and real-world evidence. Consistently the company's top seller, regularly closing multi-year six-figure solution deals with blue-chip life sciences and CPG customers worldwide.
Led the Watson genomics programme in partnership with the New York Genome Center, reporting directly to an IBM Senior Vice President. Closed multi-million dollar agreements in healthcare & life sciences, including complex IP licensing and partnership contracts.
Program Director for Operations Research at IBM’s 300mm Fishkill fab. Led a cross-functional team across multiple IBM organisations. Developed and deployed a predictive analytics platform with IBM Research, saving $10M+. Awarded 12 patents during this period.
For the full picture — including patents, earlier publications, education, and additional roles — see my LinkedIn profile ↗