The Year the Editor Came Back

A year ago, the editor was a place I visited

A year ago (and for a long time before that), my code editor sat on the second or maybe third screen, mostly closed.

I opened it to check a complex AMPscript block in a Salesforce Marketing Cloud email, to read a Liquid snippet I had written for a client, or to debug something related to the GTM implementation of one of the platform SDKs I work with.

It was a reference room more than a workshop.

The room I actually lived in was somewhere else: architecture diagrams, vendor scorecards, integration maps, board decks, RFP responses. More than twenty-five years of moving from the keyboard toward the whiteboard, and the trajectory had started to feel one-way.

That year is over.

Or rather, the last twelve months had two phases, and they were not the same year. I was not a late adopter. ChatGPT, Gemini, and Perplexity had all been in my workflow well before the period this article covers. What changed in 2025 was not adoption. It was method.

The first phase (roughly April through December 2025) was when that use became deliberate and configured: AI as a structured thinking surface rather than an occasional lookup tool. Gems in Gemini tied to specific client contexts. ChatGPT Plus Projects scoped per engagement. Perplexity Spaces organised around persistent research threads I actually maintained.

I increasingly think this is where chat-based AI is most useful: not as a substitute thinker, but as a thinking surface, provided you keep direction and judgment in human hands.

They reshaped how I researched, how I scoped briefs, how I drafted points of view, how I read a Forrester Wave or a vendor release note. They were genuinely useful, and they were already changing the texture of my work.

What they did not change was what I built, because the building part still ran on me, and on the time I did not have.

The second phase (since January 2026, when I brought Claude Cowork and, later, Codex into the workflow and started using them as building partners rather than chat assistants) is the phase this article is really about.

That is where the editor came back to the front of the desk.

Five months. Not twelve.

The chat era set the stage; the agentic era reshaped the work. They are not the same shift, and I would do the last months a disservice if I described them as one.

Two phases: Chat Era vs Agentic Era

I am writing this on May 10, 2026. In those five months, I shipped a personal site. I built working sales demos on Braze, Marketo and Bloomreach for three different enterprise prospects. I deployed a private weekly reading agent that scans MarTech release notes and gives me back structured signals. I prototyped a research-and-document agent that classifies vendor material, industry reports and event coverage, then drafts client-ready documents from our company templates. I built a small personal-finance web app that ingests Italian bank statements and gives me a unified search, dashboard and forecasting view across accounts. And I put six Field Notes on CEPs into the public record.

None of that, by itself, is unusual for someone in my role.

What is unusual, and what I want to write down while I still remember the shape of it, is how short the period that produced it actually was, and how completely the distribution of where my hours went changed.

I am not writing this because the artifacts matter on their own. I am writing it because the same compression I felt in my own work is now entering the MarTech platforms my clients rely on.

I expected AI to take the bottom of the stack: the boilerplate, the typing, the parts I had already delegated mentally. The chat era already did most of that, in fairness: drafting briefs, summarising long documents, pattern-matching across vendors.

What I did not expect is that the agentic shift, when it came, would change my relationship with the top of the stack: the parts I thought were uniquely mine.

What actually shifted

The shifts came in two phases, and they were qualitatively different.

In the chat era, what shifted was the front end of the work.

Researching a vendor took an hour instead of a day, because Perplexity Spaces gave me a workspace that remembered what I was looking for. Writing a point of view started from a half-formed draft instead of a blank page, because a ChatGPT Project held the context of the client and the strategy. Reading a long industry report no longer required reading every page, because AI could summarise it.

That is real productivity, and I would not give it back.

But it left the making part of the work mostly intact. I still wrote the deliverable. I still built the diagram. The artifact was still mine, end to end.

In the agentic era, what shifted is the making itself.

Not “I write more code now.” That would be too narrow.

The chat era reduced the cost of describing the thing. It did not reduce the cost of making it.

Now, I prototype.

I open Cowork, describe what I want, and get a working draft in the time it would have taken me to write a one-pager describing it. Then I poke at it, change my mind twice, throw most of it away, and ship a recommendation that has been pressure-tested against something real instead of something imagined.

That is a different job.

It looks the same from the outside. It is not the same.

The two articles I wrote earlier this spring, What a Weekend with Astro and Claude Cowork Taught Me About Building Again and the Braze sales-demo write-up, were partly about the artifacts.

They were really about this.

The artifacts were the alibi.

The last months, in artifacts

If I lay the year out as an inventory rather than a story, four kinds of work stand out.

Sales POCs for prospects, on three different platforms (via Claude Cowork).
I have written about the Braze project at length already. What I have not written is that two more demos sat alongside it: one on Marketo, one on Bloomreach, for two different prospects. The pattern across all three was similar: a working environment with realistic data, a kit of journeys or programs designed around a specific competitive argument, a technical playbook, and a business case calibrated for a non-technical executive audience. Three platforms, three prospects, three different incumbent vendors to displace, three slightly different stories. AI-assisted work compressed the implementation cost dramatically and concentrated my hours on the parts that needed judgment: the demo narrative, the data-model decisions specific to each platform, and the competitive framing. Without that compression, three of these in a few months would not have happened. With it, the bottleneck was no longer building: it was deciding what to build.

An internal research-and-document agent (via Claude Cowork).
It ingests vendor material and release notes, classifies them, and drafts client-ready documents from our templates. The interesting part was not the classification. It was being forced to write down, explicitly, the implicit conventions of how we communicate with clients: register, structure, what we never say, what we always say.
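
A minimal sketch of what "writing the conventions down" can mean in practice: the house rules become explicit data that the drafting step consumes, instead of taste held in someone's head. Every rule, field name and function here is an invented example, not our actual style guide or the agent's real implementation.

```python
# Hypothetical house-style spec: the implicit conventions, made explicit.
# All values are invented examples for illustration.
HOUSE_STYLE = {
    "register": "formal, vendor-neutral, no hype",
    "always": ["state assumptions", "separate fact from vendor claim"],
    "never": ["guarantee outcomes", "quote pricing without a source"],
}

def build_drafting_prompt(doc_type: str, summary: str) -> str:
    """Fold the house-style spec into the instruction sent to the model."""
    rules = "; ".join(
        [f"always: {r}" for r in HOUSE_STYLE["always"]]
        + [f"never: {r}" for r in HOUSE_STYLE["never"]]
    )
    return (
        f"Draft a {doc_type} in a {HOUSE_STYLE['register']} register. "
        f"{rules}. Source material: {summary}"
    )
```

The design point is that the spec, not the prompt wording, becomes the reviewable artifact: changing what "we always say" is a one-line data edit.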

A personal finance app (via OpenAI Codex).
A small private Python web app that ingests bank statements, normalises the categories, gives me a unified dashboard across accounts, lets me search transactions in natural language, and produces a basic cashflow forecast. It taught me something useful: the difference between knowing what you want and being able to specify it precisely is the entire user-experience curve.
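
For illustration only: the app's internals are not in this article, but the category-normalisation and forecasting ideas might look like the following minimal sketch. The label prefixes and the trailing-three-month-average forecast are assumptions, not the app's actual logic.

```python
# Hypothetical mapping from raw Italian bank-statement labels to a
# unified category set. Labels vary per bank, hence the normalisation.
CATEGORY_MAP = {
    "PAGAMENTO POS": "card",
    "BONIFICO IN USCITA": "transfer_out",
    "BONIFICO IN ENTRATA": "transfer_in",
    "ADDEBITO SDD": "direct_debit",
}

def normalise(label: str) -> str:
    """Collapse a bank-specific transaction label into a shared category."""
    for prefix, category in CATEGORY_MAP.items():
        if label.upper().startswith(prefix):
            return category
    return "other"

def forecast_next_month(monthly_net: list[float]) -> float:
    """Naive forecast: trailing three-month average of net cashflow."""
    window = monthly_net[-3:]
    return sum(window) / len(window)
```

For example, `normalise("Pagamento POS Esselunga")` yields `"card"`, and an unrecognised label falls through to `"other"`.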

A reading agent and a writing practice (via Claude Cowork).
The weekly MarTech signals agent and the Field Notes series are connected. The agent reads release notes. The series writes about what those releases mean. Without the agent, the series would not be sustainable at the current cadence. Without the series, the agent would be a private hobby. Each makes the other useful. That is the most honest thing I can say about why I built them.
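
The "structured signals" idea can be sketched. The real agent runs on Claude, so this rule-based version is only an illustration of the output shape; the signal names, patterns and `Signal` type are all invented for the example.

```python
import re
from dataclasses import dataclass

# Hypothetical tagging rules: each release note gets zero or more signal
# types so a weekly digest can group them. The real agent uses an LLM;
# keyword rules stand in for it here.
SIGNAL_RULES = {
    "agentic": re.compile(r"\bagent|mcp|autonomous\b", re.I),
    "privacy": re.compile(r"\bconsent|gdpr|privacy\b", re.I),
    "deprecation": re.compile(r"\bdeprecat|sunset|end[- ]of[- ]life\b", re.I),
}

@dataclass
class Signal:
    vendor: str
    kind: str
    excerpt: str

def extract_signals(vendor: str, note: str) -> list[Signal]:
    """Return one Signal per rule that matches the release note."""
    return [
        Signal(vendor, kind, note[:80])
        for kind, pattern in SIGNAL_RULES.items()
        if pattern.search(note)
    ]
```

The point of the structure is downstream use: a list of typed signals can be sorted, filtered and counted across vendors, which a pile of raw release notes cannot.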

When the slogan became arithmetic

There is a phrase that gets repeated in every AI-and-development article, and I hesitated to use it because it has been worn flat.

But it is still the most accurate description of what happened.

For years, “AI gets you 80% of the way there” was a line you heard in conference talks. In the chat era, it was approximately true for the describing parts of my work: research summaries, draft frameworks, first-pass copy.

In the agentic era, it became true for the building parts too.

The Braze demo project I wrote about, and the Marketo and Bloomreach builds that followed, should each have been a two-week effort for one person, perhaps three weeks if the prospect kept changing their mind.

Each was two to three days of focused work.

Not because Claude wrote the architecture.

Because Claude made the scaffolding, the boilerplate, the first-draft Braze Canvas or program blueprints, the localised user CSVs, the technical playbook outlines and the business-case skeletons functionally free, across different platform paradigms and different competitive frames.

What stayed expensive was the part of the work I would not want to delegate anyway: each demo’s narrative ordering, the data-model decisions, the competitive framing, the exact register of the local-language copy.

That is the part I keep coming back to.

AI compresses the distance between knowing what to build and having built it. It does not, and should not be expected to, close the gap that comes before that: knowing what to build in the first place.

The Compression Model: what AI compressed vs what stayed human

The research is starting to catch up with what many of us are seeing in practice: the gains are real, but they do not distribute evenly.

I read those reports the way a CDP architect reads a Forrester Wave: with affection, with skepticism, and with the suspicion that the most interesting things are happening in the cells the chart cannot show.

My own n=1 experience is that the lift is real, uneven, and strongest for senior people doing work where they already know what good looks like.

Recent work on AI coding assistants points in a similar direction: individual task speed can improve materially, but team-level gains are often absorbed by review queues, QA, integration work and coordination overhead.

The same compression, one floor up

Here is the part that matters professionally, not personally. The compression I have been describing is not only a developer story. I recognise it because I lived it from the inside. And now I see the same dynamic entering the platforms I evaluate, recommend, and implement for clients, one layer up, at enterprise scale. Adobe, Braze and others are no longer only exposing workflow builders.

They are exposing agentic surfaces: systems that can generate, simulate, optimize, execute, or assist with the work marketers used to assemble manually.

Adobe Journey Agent, BrazeAI Agents, Marketo Engage’s MCP server, the spread of MCP across marketing automation categories: they all point in the same direction.

The platforms are becoming systems you direct rather than systems you operate.

Journey Optimizer stops being only a workflow-building application and starts becoming an orchestration target. The CDP becomes the substrate. The CEP becomes the stage on which agents act.

I have been writing about this trajectory for months: across the marketing-automation trilogy, the Field Notes series, and in several editions of Weekly MarTech Signals.

Reading my own articles back in sequence, the through-line is difficult to miss.

Analysts are already describing “agentic marketing platforms” in these terms: agents embedded into campaign, creative, and journey workflows, with marketers spending more time steering systems and less time clicking through UIs.

If that is true, and I believe it is, then the shape of MarTech architecture work changes too.

Not as dramatically as the LinkedIn discourse suggests.

Not as little as platform vendors with something to protect would prefer.

The architecture, as I keep saying when people get tired of me saying it, still wins. It just has new things to architect.

The role question

A reasonable challenge: am I describing my work changing, or my work staying the same while the floor moves?

Honestly, both.

The role I do for clients still has the same description on paper: select platforms, design data models, sequence implementations, align organizations.

The difference is in what each of those words now requires.

Selecting platforms used to mean reading documentation, sitting through pitches, and running a feature matrix.

Now it also means asking, for every shortlist candidate: what does the agent surface look like, what does the MCP layer expose, what is the governance posture, where do humans stay in the loop?

Designing data models used to be about identity resolution, profile architecture and event taxonomy.

It is still all of those things.

It is also about what an agent can read, what an agent can write, what an agent must escalate, and what an agent will fabricate when asked a question that cannot be answered with the data available.

The data model is the agent’s reality.

Getting it wrong has new failure modes that the old failure modes do not prepare you for.

Sequencing implementations used to be a question of dependencies and risk.

It still is.

It is also a question of which capabilities the team needs to develop in their own muscles before letting the agent take them, because a team that has never run a journey by hand cannot govern a journey that an agent ran for them.

That is not a romantic objection to automation.

It is an operational one.

You cannot review what you do not understand.

Aligning organizations is the part I think about most.

The CMO’s job, in the agentic era, is not to write fewer briefs. It is to write briefs that an agent can decompose into goals, and to take responsibility for the outcomes of decompositions the CMO did not personally inspect.

That is a different relationship to delegation.

It will produce different organizational charts.

The architect’s job is to make those charts implementable.

What got harder

I do not want to write only the optimistic version of this story.

Some things genuinely got harder.

The first is judgment under speed.

When the artifact appears in a minute, the temptation to ship it in two is strong. The cost of a bad decision is the same as it ever was, but the time to make it has been compressed past what feels safe.

Leadership pieces on AI keep circling the same warning: the moment the model becomes the “thought leader,” you quietly stop practicing judgment, even as your output accelerates.

I have learned, slowly, to add deliberate latency back into the process: to sit with a draft before approving it, to read the second-best variant out loud, to have someone else look at the thing before I send it.

The friction the tool removed had a function.

Some of that function needs to be restored manually.

The second is the temptation to over-fit to the tool.

Every project I touched in the last year had moments when I caught myself shaping the work to play to Claude’s strengths instead of the client’s needs.

That is a small drift, but it compounds.

The fix is mundane and old-fashioned: keep going back to the original problem statement.

The tool is not the brief.

The third is energy.

This work is more cognitively demanding, not less.

Reviewing a draft you did not write, checking it for errors of fact and errors of taste, deciding what to keep and what to throw away, all while moving faster than before: that is more tiring than typing the thing yourself.

The hours got shorter. The intensity got higher. I have not figured out the right shape for that yet. What I know is that the answer is not working less, it is working with more deliberate pauses built in. The tool removes friction. Some of that friction had a function. This one did too.

What I am watching now

A few things look like the next inflection points from where I am sitting.

Governance as a feature.
The vendors that win the next eighteen months will not be the ones with the most agents. They will be the ones with the cleanest audit trail, the most legible decision surface, and the most credible answer to a very simple enterprise question: how do I prove this agent did not make a mistake?

The MCP layer as architecture.
The interesting architectural question is no longer “do you have an MCP server?” but “what is the governance contract you have published for it?”
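
To make "governance contract" concrete, here is one hypothetical shape such a published contract could take, expressed as plain data. The server name, tool names and every field are invented for illustration; no vendor's actual schema is being described.

```python
# Hypothetical governance contract for an MCP server: which tools an
# agent may call, what is gated behind a human, and how calls are
# audited. All names and fields are illustrative assumptions.
GOVERNANCE_CONTRACT = {
    "server": "example-marketing-mcp",
    "tools": {
        "read_program": {"scope": "read", "audit": "log"},
        "clone_program": {"scope": "write", "audit": "log+diff"},
        "activate_program": {
            "scope": "write",
            "audit": "log+diff",
            "requires_human_approval": True,
        },
    },
    "escalation": {"on_missing_data": "refuse", "on_pii": "human_review"},
}

def requires_approval(contract: dict, tool: str) -> bool:
    """True if the contract gates this tool behind a human approval step."""
    return contract["tools"].get(tool, {}).get("requires_human_approval", False)
```

The useful property is that the contract is data an architect can review, diff and version, rather than behaviour buried in the agent.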

That is the question I am putting to the platforms I evaluate now.

The role of consultants.
If a small senior team with AI assistance can do work that previously required a larger implementation team, the consulting model around MarTech has to change.

Not because there is less work.

There is more.

But because the unit of capability is shifting from a large junior team with hours to a smaller senior team with judgment, tooling and accountability.

The org charts at consultancies will adjust to that.

We are already starting.

What stays human.
The list is not the romantic list: empathy, creativity, intuition.

It is the operational list: deciding what to build, knowing the audience well enough to choose between two adequate options, taking responsibility for outcomes, being accountable to a regulator.

The list is narrower than we like to admit; it is also more valuable than it used to be.

That is the kind of trade I will take.

Closing

I am ending the year more useful to my clients than I started it, in ways that do not show up on a certification page.

I am writing more, and I think the writing is sharper because the things I am writing about are things I have actually built.

I am, on balance, glad the editor came back to the front of the desk.

More than twenty-five years ago, I was a developer who became a strategist because the strategy was where the leverage was.

The leverage moved.

Not far: it is still mostly in the strategy. But it moved enough, in five months, that the editor is a useful place to sit again for the first time in a long time.

The chat era told me that AI was going to be useful.

The agentic era told me what kind of useful, and the answer turned out to be different from the one I had been preparing for.

I do not know what next year looks like exactly.

I suspect it looks like more of this, with the proportions shifting again, and many of the certainties I am writing today will read as quaint in twelve months.

That is fine.

The architecture, including the architecture of how I work, is supposed to evolve.

The line I have been repeating for years still holds.

The architecture always wins.

I am simply less certain, this year, where the architecture ends and the architect begins, and I am finding that uncertainty more productive than I expected.
