Most AI is sold as a platform. The work that matters is closer to a workshop.
Tinker is the studio built around that gap. We sit in the room before the strategy is committed, build the tool when no platform will fit, and ship the products that keep showing up in the work.
We tinker, because the alternative is theatre.
Four convictions we keep returning to.
The product is the decision, not the model. A model that can't change a meeting is a research artifact.
Specificity beats scale. A tool sharpened to one operator's hand outperforms a platform built for a category.
Demos lie. Tools tell the truth. If it doesn't survive a Tuesday at 4pm, it doesn't ship.
Institutions need leverage, not transformation. The good work compounds quietly inside what already exists.
Counsel.
In the room before the commitment is made. Senior advisory at the moments when the AI bet is being written, the portfolio is being culled, or the platform you've already bought needs a second opinion.
Where the leverage actually sits, in what order, with what tradeoffs.
Structured kill review before a use case becomes a budget line.
Audit of models, pipelines, and agents your team did not build.
Vendor discipline and milestone cadence across an AI program.
Workshop.
Bespoke builds for the work that doesn't fit a platform. Single-use decision instruments, agent pipelines, ontologies, and the institutional infrastructure that lets a portfolio of use cases ship in parallel.
One use case, one team, one tool sharpened to the operator's hand.
Plumbing that turns a manual judgment into an automated one.
Shared eval harness, deployment lanes, registry, cost telemetry.
Operating instruments built to be lived in, not toured.
Products.
Six tools we kept rebuilding by hand for institutions — packaged so the next one doesn't pay to invent them again.
Each product started as a one-off engagement and was packaged only after it earned its place.
Built to fit the institution's taxonomy, policy, and operating workflow.
Policy-checked, evaluable, auditable — observability on from day one.
Cloud, on-prem, or hybrid. API, UI, or both — wherever the work happens.
The six.
Data classification.
Document and data tagging calibrated to your taxonomy.
Share-ready redaction.
PII and PHI out, share-ready files in. Built for regulated workflows.
Talk to your warehouse.
Plain language in, defensible SQL out, with lineage attached.
Geospatial intelligence.
An operating tool, not a map demo. Planning, monitoring, response on one substrate.
Domain RAG, in a sprint.
Configurable, evaluable, governed retrieval, calibrated to your domain's corpus and taxonomy.
Control plane for AI work.
Between your team and every model, every agent — policy-checked, sandboxed, audited.
A short ledger of what we've built.
Retrieval, governed.
Domain-specific retrieval for a regulator — connected, policy-checked, continuously evaluated.
Talk to the warehouse.
Plain-language access to a national-strategy data warehouse — defensible SQL with lineage.
The factory floor.
Foundational infrastructure for shipping AI use cases — eval lanes, deployment, registry, telemetry.
Operating instrument.
An AI-native daily driver — backlog, ledger, project tracker, ambient agent pipeline.
Employee assistance, AI-augmented.
An Employee Assistance Programme rebuilt around AI — triage, on-demand support, human escalation.
The bet, on paper.
Repeat AI strategy engagements — portfolio triage, capability maps, sequencing across institutions.
Why we're building this now.
Labs built the largest models. The studios composing them will decide where AI wins: focused teams close to operators in particular sectors, working on mission-critical problems, with judgement about where the leverage actually sits.
Three ways in.
Embedded.
Standing relationship. We sit with your leadership and own the AI-native edge of the stack.
One scoped engagement.
Eight to sixteen weeks. One artifact, one decision, one shipped tool.
At the moment of choosing.
Senior advisory for boards and ministers, drawing on Tinker's full Counsel practice.