Dr Raminderpal Singh

AI Solution Architect & Fractional CTO · Life Sciences

Global Head of AI/GenAI Practice · 20/15 Visioneers
Founder · HitchhikersAI

I'm an AI solution architect and entrepreneur enabling scientific R&D organisations to move from AI‑curious to AI‑led — starting with life sciences, where I've spent the last 20 years building depth in the industry's priorities, risks, and resistance to change. I work through practical software, open‑source in‑silico workflows, and the organisational change needed to make it stick.

From OpenClaw to project fred

Most agent frameworks — LangChain/LangGraph, LlamaIndex, AutoGen, CrewAI — are general-purpose toolkits. They stay neutral about the operating model, expecting the developer to compose primitives into whatever shape an application needs. OpenClaw is the opposite: narrow, opinionated, and purpose-built for one operating model — long-running, scheduled, parallel-thread research agents that gather information continuously, hold session state across runs, and produce reasoned output a scientist reviews. Every design choice falls out of that single operating model. For scientific R&D workflows, specialization beats generality: a general framework adapted to this shape carries permanent engineering tax — defaults that don't fit, abstractions that leak, hooks for use cases that don't apply.

The shape of scientific research workflows:
01 Long-running: days and weeks, not turn-by-turn input
02 Scheduled: timezone-aware jobs with error tracking
03 Parallel threads: one agent per research question
04 Session state: isolated workspaces, resume across runs
05 Continuous gathering: papers, web, databases, internal data
06 Instruction-driven: scientists edit .md, not Python
This is the operating model proven on OpenClaw; project fred inherits this shape.
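The six properties above can be sketched in a few lines of code. This is a hypothetical illustration of the operating model, not OpenClaw's actual API — every name here (`ResearchThread`, `resume`, `checkpoint`) is an assumption for clarity:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
import json

@dataclass
class ResearchThread:
    """One long-running agent per research question (properties 01 and 03)."""
    question: str
    workspace: Path      # isolated per-thread state on disk (property 04)
    schedule: str        # e.g. a cron expression; timezone-aware in a real runtime (property 02)
    instructions: Path   # scientists edit a .md file, not Python (property 06)

    def _state_file(self) -> Path:
        return self.workspace / "session_state.json"

    def resume(self) -> dict:
        """Reload session state so a scheduled run continues where the last one stopped."""
        if self._state_file().exists():
            return json.loads(self._state_file().read_text())
        return {"started": datetime.now(timezone.utc).isoformat(), "findings": []}

    def checkpoint(self, state: dict) -> None:
        """Persist gathered findings between runs (properties 04 and 05)."""
        self.workspace.mkdir(parents=True, exist_ok=True)
        self._state_file().write_text(json.dumps(state, indent=2))
```

A scheduler would call `resume()` at each trigger, let the agent gather and reason, then `checkpoint()` the accumulated findings — which is what distinguishes this shape from turn-by-turn chat loops.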

My judgment that this operating model is correct is grounded in two decades of direct experience inside scientific R&D — spanning experimental design, data quality, regulatory considerations, and the recurring causes of pilot failure — combined with the work of architecting a full secure stack with OpenClaw at its core. ScienceClaw is the first production deployment of that stack, with ALS hypothesis generation, compound deep-dives for drug discovery, life science market intelligence, and scientific research assistants now under active delivery on the same foundation. Each engagement has shaped a precise understanding of where the operating model is correct, where the runtime defaults require adjustment, and how the security, orchestration, and observability layers around it should be designed.

Evolving the stack: project fred

I am now evolving this work into project fred — a secure-by-design, clean-room codebase that will, over time, replace OpenClaw at the core of my stack. project fred carries forward the operating model proven above and adds two capabilities the current runtime does not provide. First, the security architecture is being designed from the first commit, with requirements being shaped in active collaboration with academic and enterprise partners — rather than retrofitted onto an existing codebase. Second, and more fundamentally, the core engine is tuned to the behavior of the LLM driving it. Reasoning models behave differently from instruction-tuned models; cloud-scale models behave differently from local mid-size and local smaller models. An orchestration core that treats the LLM as an opaque substrate — as most frameworks, including OpenClaw, do — leaves significant control on the table. project fred treats LLM behavior as a first-class architectural concern.

The architecture is a core engine with three model wrappers (cloud LLMs, local mid-size LLMs, and local smaller LLMs, all thinking models), each with its own tuned interaction strategy. The core selects and coordinates across them according to the work being done and the deployment constraints in force.

project fred: core engine, tuned per LLM tier
CORE ENGINE: secure-by-design, clean-room orchestration tuned to LLM behavior
WRAPPER 01: Cloud LLMs (thinking)
WRAPPER 02: Local mid-size LLMs (thinking)
WRAPPER 03: Local smaller LLMs (thinking)
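The core-plus-wrappers shape can be sketched as follows. This is a minimal illustration under stated assumptions — the tier names, the `interaction_strategy` field, and the selection rule are all hypothetical, not project fred's actual design:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelWrapper:
    tier: str                      # "cloud", "local-mid", or "local-small"
    interaction_strategy: str      # each tier gets its own tuned prompting/looping strategy
    complete: Callable[[str], str] # the underlying LLM call

class CoreEngine:
    """Selects and coordinates across wrappers per task and deployment constraints."""

    def __init__(self, wrappers: list[ModelWrapper]):
        self.wrappers = {w.tier: w for w in wrappers}

    def select(self, task_complexity: str, data_must_stay_local: bool) -> ModelWrapper:
        # Illustrative routing rule: deployment constraints take priority,
        # then task demands decide which local tier to use.
        if data_must_stay_local:
            tier = "local-mid" if task_complexity == "high" else "local-small"
            return self.wrappers[tier]
        return self.wrappers["cloud"]
```

The point of the wrapper boundary is that each tier's interaction strategy — how prompts are framed, how reasoning traces are handled, how retries loop — lives with the wrapper, while the core stays model-agnostic about everything except routing.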

project fred is being built from a nanobot starting point that already encodes the strengths of the OpenClaw operating model proven above — allowing the build to begin from a working baseline rather than from scratch, while every subsequent line of code lands in a clean-room repository under my control. This combination preserves what the four years of OpenClaw work have established about the right shape, and lets the new effort focus where it matters: the security architecture, the LLM-tuned core, and the deployment characteristics that academic and enterprise partners require for the next generation of scientific R&D agents.

Engineering discipline meets AI momentum

I'm an AI Solution Architect with a deep-rooted foundation in systems engineering — trained to understand how complex pieces fit together, where the friction is, what the boundaries between components should be, and what it takes to make something work reliably at scale. That instinct shapes everything I build.

What excites me about this moment is the raw momentum that AI brings. The speed at which ideas can become working software has fundamentally changed. But momentum without structure produces fragile systems, hallucinated outputs, and untestable code.

AI has made systems engineering more critical, not less. When AI writes the code, the architect is still accountable. The value a human brings is no longer in the implementation; it lies in requirements elicitation, defining module boundaries, and deciding what the system should actually do. Those are systems engineering decisions, and no AI agent makes them reliably without that discipline enforcing the structure.

I particularly enjoy building software appliances for R&D scientists — purpose-built tools that run on OpenClaw to deliver autonomous, reliable research capabilities. ScienceClaw is the embodiment of this: an autonomous research platform that scientists can actually trust, built on rigorous engineering principles.

That's the thread running through all my work — from teaching scientists to vibe‑code responsibly, to building test‑first workflows that keep AI agents honest, to deploying autonomous research platforms. Two decades inside scientific R&D — understanding how experiments are designed, where data breaks down, what regulators care about, and why adoption stalls — means I build for the constraints that actually exist, not the ones that look good in a pitch deck.

HitchhikersAI

300+
community members

A non‑profit grass‑roots community accelerating the adoption of AI/ML and data in scientific R&D — starting with drug discovery & development. Members include bench scientists, data scientists, mathematicians, business owners, executives, and academics — all focused on fixing the disconnect between AI/ML/GenAI and its practical application in the lab.

AI in Drug Discovery

Regular column in Drug Target Review exploring the real‑world application of AI, ML, and generative AI in drug discovery — cutting through the hype to examine what actually works, what doesn't, and what the industry needs to do differently.

LLM in Life Sciences News Tracker

A curated tracker covering AI scientists, autonomous discovery systems, and infrastructure across pharma and biotech — from funding rounds and platform launches to partnerships and regulatory developments. Searchable and filterable by category. Updated weekly.

I’ve spent 20 years moving between technical, commercial, and leadership roles across life sciences, semiconductors, and data infrastructure. That range matters — because the AI adoption challenge in scientific R&D isn’t purely technical. It sits at the intersection of engineering discipline, domain expertise, and the ability to navigate enterprise-scale organisations.

20/15 Visioneers

20/15 Visioneers

Global Head of AI/GenAI Practice · Current (2 years)

Partnered with John Conway (Founder & Chief Visioneer) to address AI adoption as the interconnected challenge it actually is — combining AI engineering, LLM agents, and in-silico software with organisational change, culture transformation, and FAIR data strategy.

incubate.bio

Founder & CEO · 3 years

Founded and led a company building computational platforms for drug discovery. The core product — ALaSCA — applies Pearlian causal inference to multi-omics data. Four bioRxiv preprints: DDR resistance in cancer (2024 ↗), pathway simulation in Type 1 Diabetes (2023 ↗), causal inference in Alzheimer’s (2022 ↗), and ML target prioritisation in aging (2022 ↗).

Eagle Genomics

VP / Head of Microbiome Division · VP Business Development · 3 years

Led market development and product strategy across epigenetics, microbiology, multi-omics, and real-world evidence. Served as the company's top seller, regularly closing multi-year six-figure solution deals with blue-chip life sciences and CPG customers worldwide.

IBM Research

Business Development Executive · 7 years

Led the Watson genomics programme in partnership with the New York Genome Center, reporting directly to an IBM Senior Vice President. Closed multi-million dollar agreements in healthcare & life sciences, including complex IP licensing and partnership contracts.

IBM Semiconductor Group

Senior Manager & Senior Member of Technical Staff · 6 years

Program Director for Operations Research at IBM’s 300mm Fishkill fab. Led a cross-functional team across multiple IBM organisations. Developed and deployed a predictive analytics platform with IBM Research, saving $10M+. Awarded 12 patents during this period.

Top 13 Influencers in the Semiconductor Industry

EETimes, 2003

Signal Integrity Effects in Custom IC and ASIC Designs

Book · Author

Silicon Germanium: Technology, Modeling, and Design

Book · Author

For the full picture — including patents, earlier publications, education, and additional roles — see my LinkedIn profile ↗