AI moves fast.
We break it down.
Weekly dispatches on the developments that matter — explained for everyone.
Protocols
The USB-C of AI Just Won
Anthropic's Model Context Protocol has become the industry standard for connecting AI to everything. 7 million downloads a month. Here's why it matters.
Feb 13 · 5 min read
Industry
The Day AI Went to Work
On February 5th, OpenAI launched Frontier and Anthropic dropped Opus 4.6 — on the same day. Both are betting AI's future is as your coworker, not your chatbot.
Feb 6 · 6 min read
Science
Your Next Medicine Was Designed by AI
Google DeepMind's first AI-designed cancer drug just entered clinical trials. Sanofi found 10 new drug targets in a single year. The lab is going digital.
Jan 30 · 5 min read
Software
41% of All Code Is Now Written by Machines
Cursor hit $500M in revenue in 12 months. Microsoft says AI writes 30% of its code. "Vibe coding" made the dictionary. Software will never be the same.
Jan 23 · 5 min read
Models
The Race to Shrink AI
The era of "bigger is better" is over. DeepSeek proved a small team can match the giants. Now the real competition is efficiency — and it changes everything.
Jan 16 · 5 min read
Infrastructure
The Buildings That Think
$55 billion a month in AI infrastructure investment. Data centres the size of suburbs. The physical cost of every conversation you have with AI.
Jan 9 · 6 min read

The USB-C of AI Just Won

There's a protocol you've never heard of that's quietly becoming one of the most important pieces of technology in the world. It's called MCP — the Model Context Protocol — and it's the reason AI is about to get dramatically more useful.

Right now, AI lives in a chat window. It can talk to you, but it can't really do anything. It can't check your calendar, search your files, update your CRM, or pull data from your company's systems. Every connection has to be custom-built, one integration at a time. MCP fixes that — a universal standard, a single plug that connects any AI to any tool.

What actually happened

Anthropic released MCP as an open-source standard in late 2024. The pitch was simple: instead of building a different connector for every app, build one standard that works everywhere. They called it the "USB-C of AI."

It worked. By early 2025, OpenAI adopted it. Then Google. Then Microsoft. In December 2025, Anthropic donated MCP to the Linux Foundation's new Agentic AI Foundation, co-founded with OpenAI and Block. The protocol went from one company's experiment to the industry's standard in barely a year. MCP servers now see over 7 million downloads a month.

Why this is a big deal for you

Without a universal standard, every AI tool is an island. Your assistant can't talk to your project management tool. Your coding AI can't access your documentation. With MCP, an AI agent can discover available tools, understand how to use them, and connect — automatically. One protocol, infinite connections.
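Under the hood, that discovery step is a JSON-RPC conversation: the agent asks a server what tools it offers (`tools/list`), reads each tool's self-describing schema, then invokes one by name (`tools/call`). Here's a sketch of those message shapes in Python — the `get_weather` tool is a made-up example, not part of the protocol:

```python
import json

# What a client sends to discover a server's tools (MCP uses JSON-RPC 2.0).
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A typical server reply: each tool describes itself with a JSON Schema,
# so any agent can learn how to call it without custom integration code.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",  # hypothetical example tool
                "description": "Return current weather for a city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

# The agent then invokes the tool by name with schema-conformant arguments.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "London"}},
}

print(json.dumps(call_request, indent=2))
```

Because the tool descriptions travel with the server, an agent that has never seen a tool before can still read its schema and use it — that's the "one protocol, infinite connections" part.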

That means AI assistants that can actually navigate your entire digital life: checking flights on one service, cross-referencing your calendar on another, and booking through a third — all in one seamless flow.

The bigger picture

If every AI system speaks the same protocol, you're no longer locked into one provider. You can mix and match — use Claude for research, GPT for coding, Gemini for data analysis — all connected through the same universal plug.

New protocols are emerging alongside MCP too. Google's A2A handles communication between AI agents themselves. Together, these standards are forming what some call the "agentic web" — a network where AI systems collaborate as fluidly as websites link to each other.

The last time a universal protocol changed everything, it was HTTP — and it gave us the web. MCP might not be as dramatic. But for AI, it's the moment the walls start coming down.

The most important AI innovation of 2025 wasn't a model. It was a protocol.

The Day AI Went to Work

February 5th, 2026 might go down as the day the AI industry decided chatbots were over.

On the same day — not coordinated, but telling — OpenAI launched Frontier, a platform for managing fleets of AI agents inside businesses, and Anthropic released Claude Opus 4.6, its most capable model yet, optimised for sustained autonomous work. Both companies arrived at the same conclusion: the future of AI isn't chat. It's work.

OpenAI's big bet: AI employees

Frontier is OpenAI's most aggressive move into the enterprise. It's a platform where companies build, deploy, and manage AI agents the same way they manage human employees — with onboarding, permissions, performance reviews, and access to company systems.

Connect your CRM, data warehouse, ticketing tools, and internal apps, and AI agents work across all of them. They get identity and permissions like a human employee. They build memory of past tasks and improve over time. Early adopters include Uber, State Farm, Intuit, and Thermo Fisher. A global financial services firm using Frontier freed up over 90% of its client-facing team's time.

Anthropic's counter: the model that doesn't need hand-holding

Anthropic took a different angle. Instead of a management platform, they built a model so capable it barely needs one. Claude Opus 4.6 leads the industry on real-world coding, financial analysis, and multi-step workflows. It features a 1 million token context window — enough to process entire codebases in a single session.

The standout feature is adaptive thinking. The model decides how deeply to reason based on complexity. Simple questions get fast answers. Hard problems get careful analysis. Anthropic's head of enterprise described the shift plainly: "We are now transitioning almost into vibe working."

Two strategies, one conclusion

OpenAI is building the operating system — the platform that manages AI workers. Anthropic is building the worker — the model so good it handles complex tasks independently. Both are racing toward the same future: AI as colleague, not tool.

This is fundamentally different from ChatGPT in 2023. That was a search replacement. This is workforce augmentation. The language has shifted from "ask me anything" to "let me handle this."

If you work at a company of any size, AI agents are coming to your workplace. Not as a novelty. As teammates. February 5th wasn't the finish line. It was the starting gun.

The question is no longer "can AI do my job?" It's "which parts of my job will AI do first?"

Your Next Medicine Was Designed by AI

Somewhere in a lab right now, an AI-designed cancer drug is being tested in human patients for the first time. It wasn't discovered by accident or decades of trial and error. It was engineered — computationally, deliberately — by an artificial intelligence that understands molecular biology in ways human researchers can't match.

The numbers

Google DeepMind's drug discovery arm, Isomorphic Labs, has 17 active programmes and its first AI-designed cancer treatment entering clinical trials in early 2026. DeepMind CEO Demis Hassabis has predicted we're entering a "golden age of discovery."

Sanofi used AI to discover 10 completely new drug targets in a single year. Their development committee now begins every meeting with an AI agent's assessment of whether a candidate should advance. Insilico Medicine's AI-designed lung fibrosis drug posted positive results in early human trials. Recursion Pharmaceuticals runs millions of experiments per week using automated labs guided by machine learning.

How AI changes drug discovery

Traditional development takes 10-15 years and costs billions. The failure rate exceeds 90%. AI changes every step — it predicts which molecules will bind to disease targets before a single lab test runs. It simulates drug interactions with human biology entirely in software. It explores chemical spaces orders of magnitude larger than any human team could test.

It's the difference between searching a haystack for a needle by hand and searching it with a magnet. The haystack is still enormous. But the search is no longer blind.

What this means for patients

Faster discovery means treatments for diseases that currently have none. Rare diseases become viable targets when AI collapses the timeline and expense. Personalised medicine — drugs tailored to your genetic profile — becomes practical when AI can design bespoke molecules efficiently.

2026 is the year AI drug discovery moves from computational milestones to clinical ones. The molecules are leaving the computer and entering the human body. If even a handful succeed, it validates an entirely new paradigm for how we develop medicine.

The lab of the future looks more like a server room. And the drugs it produces might save your life.

41% of All Code Is Now Written by Machines

As of early 2026, 41% of all code written globally is generated by AI. Not assisted. Generated.

Cursor — barely two years old — hit $500 million in annual revenue, growing from $1M to half a billion in 12 months. Microsoft says AI writes 30% of its production code. Google reports a similar figure. "Vibe coding" — coined by AI researcher Andrej Karpathy in February 2025 — made the dictionary within a month.

What vibe coding actually looks like

You describe what you want in plain English. "Build me a dashboard showing sales by region with date filters and CSV export." The AI writes the code. You review it, ask for changes, iterate — all in conversation. No syntax to learn. No Stack Overflow searches.

Tools like Cursor, GitHub Copilot, Replit, and Claude Code have made this reliable enough for production. Professional developers move faster. Non-developers build things that weren't possible before.
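For a taste of what that conversation produces, here is a plausible slice of what an AI might hand back for the dashboard request above — the function names and data layout are invented for illustration, not the output of any particular tool:

```python
import csv
import io
from datetime import date

def filter_sales(rows, region=None, start=None, end=None):
    """Keep sales rows matching an optional region and date range."""
    kept = []
    for row in rows:
        if region and row["region"] != region:
            continue
        if start and row["date"] < start:
            continue
        if end and row["date"] > end:
            continue
        kept.append(row)
    return kept

def to_csv(rows):
    """Serialise filtered rows to CSV for the dashboard's export button."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["date", "region", "amount"])
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
    return buf.getvalue()

sales = [
    {"date": date(2026, 1, 5), "region": "EMEA", "amount": 1200},
    {"date": date(2026, 1, 9), "region": "APAC", "amount": 800},
    {"date": date(2026, 2, 1), "region": "EMEA", "amount": 950},
]

january_emea = filter_sales(sales, region="EMEA", end=date(2026, 1, 31))
print(to_csv(january_emea))
```

The point of vibe coding isn't that this code is remarkable — it's that the person who asked for it never had to write it, only to describe it and check that it does what they meant.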

The democratisation is real

A small business owner building a custom inventory system. A teacher creating a learning app. A researcher building a data visualisation tool. None studied computer science. All can now create functional software by clearly describing what they need.

The barrier has collapsed from years of education to clear thinking about what to build. The next wave of software won't come from Silicon Valley alone — it'll come from domain experts everywhere who understand problems deeply.

The employment question

If AI handles routine coding, what happens to junior developers? The entry-level tasks new engineers learned on — bug fixes, boilerplate, simple features — are exactly what AI does best. Senior engineers are more productive than ever. But the pathway to becoming senior is being disrupted.

The most valuable skill in software is no longer syntax. It's product thinking — understanding problems deeply, envisioning solutions, and communicating clearly enough for both humans and machines to execute.

The machines write the code. The humans decide what's worth building.

The Race to Shrink AI

For years, the AI industry had one playbook: make it bigger. More data. More parameters. More GPUs. The assumption was that scale was everything.

In 2026, that era is ending. And what's replacing it is far more interesting.

The wall

The industry hit hard limits. High-quality training data is running out. Training costs are staggering — billions per frontier model run. Returns from simply scaling up are flattening. As IBM's Kaoutar El Maghraoui put it: "We can't keep scaling compute, so the industry must scale efficiency instead."

Innovation has shifted to post-training — techniques applied after a model is built that make it dramatically better at specific tasks. The base model is a generalist graduate. Post-training is the specialist apprenticeship.

DeepSeek proved the point

A relatively small Chinese lab, working with a fraction of the resources available to OpenAI or Google, released open-source models that matched or beat the leaders. Their reasoning model R1 shocked the world. The message: clever engineering beats brute force.

Open-source models are closing the gap with proprietary ones. Silicon Valley apps are quietly running on Chinese open models. The lag between frontier and open-source has shrunk from months to weeks.

Why efficiency matters to you

Smaller models are cheaper — AI features get smarter without massive cloud costs. They're faster — real-time translation, instant suggestions, live analysis. And critically, they run on your device — no cloud, no latency, complete privacy.
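One of the main tricks behind on-device models is quantization: storing weights in 8 bits instead of 32, cutting memory roughly 4x at a small cost in precision. A toy sketch of the idea — symmetric int8 quantization with a single shared scale, illustrating the principle rather than any production scheme:

```python
def quantize(weights):
    """Map float weights to int8 range [-127, 127] via a shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate floats; each value now costs 1 byte, not 4."""
    return [v * scale for v in quantized]

weights = [0.42, -1.27, 0.03, 0.9]
quantized, scale = quantize(weights)
approx = dequantize(quantized, scale)

# The round trip is slightly lossy, but the stored model is ~4x smaller.
print(quantized)
print([round(a, 3) for a in approx])
```

Real systems layer many refinements on top (per-channel scales, 4-bit formats, quantization-aware training), but the core trade — a little precision for a lot of memory and speed — is the same.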

2026 has two tracks. Frontier models push the ceiling of capability. Efficient models push the floor of cost. The real winners build the smartest model for the least power, cost, and latency.

The AI revolution won't be won by scale. It'll be won by efficiency.

The Buildings That Think

Every time you ask an AI a question, a warehouse full of specialised computers spins up to answer it. Not a small warehouse. A building the size of several football fields, cooled by systems consuming millions of litres of water. You never see it. But it's reshaping the physical world.

The scale is staggering

Venture capital is pouring into AI infrastructure at over $55 billion per month globally. Amazon, Google, Meta, Microsoft, and Nvidia are building hyperscale data centres across the world. These aren't traditional data centres — they're purpose-built AI factories bundling hundreds of thousands of specialised chips into synchronised clusters. They represent the largest infrastructure investment since the electrical grid.

The energy equation

A single large AI training run can consume as much electricity as a small city uses in a year. Communities hosting data centres see rising electricity costs. In some regions, the strain has delayed residential construction. The White House recently told Big Tech to fund its own AI power expansion as communities push back.

Tech companies say they're committed to clean energy. Many invest heavily in solar, wind, and nuclear. But the gap between ambition and reality remains significant.

Water, noise, and neighbours

Data centres need vast amounts of water for cooling — creating conflict in water-stressed regions. Industrial cooling generates constant noise. And economic benefits to local communities are often limited — these facilities employ surprisingly few people relative to their size.

The geopolitical dimension

Where AI infrastructure sits determines who controls AI capability. Countries compete to host facilities for strategic advantage. Sovereign AI — running AI without depending on foreign infrastructure — has become a national security concern. Cisco is building dedicated sovereignty centres across Europe for organisations with strict data requirements.

The decisions being made now — where it's built, how it's powered, who bears the costs — will shape communities and climate for decades.

Every conversation with AI starts with a question. It ends in a building the size of a suburb.