Why I Built a Reading Agent Instead of a Reading List
I have been in this industry for 25+ years. In that time I have accumulated a lot of habits for staying current: vendor newsletters I actually read, bookmark folders I have good intentions about, RSS feeds from a previous decade, LinkedIn alerts that fire at the wrong time, and conference sessions where vendors describe their roadmap and call it news.
None of it worked particularly well as a system. It worked the way most information habits work: inconsistently, driven by what happened to surface on a given morning rather than by any coherent signal-gathering logic.
What I actually needed was not another source. It was a repeatable process.
Something that ran the same scan every week, checked the same official release notes pages, compared what was new against what had already been covered, filtered out the noise, and handed me a structured picture of what actually changed. Not a weekly news roundup: a weekly signal scan.
So I built one. And I want to explain what it does and how it works before I start publishing its output here as a weekly series.
What This Series Is
Every week I track a fixed set of platforms and vendors. These are not chosen randomly. They are the tools I work with professionally, evaluate for clients, or hold production certifications in. The full list:
- B2C CEP Platforms: Adobe Journey Optimizer, Adobe Real-Time CDP, Braze, Bloomreach Engagement, Insider, Iterable, Salesforce Marketing Cloud.
- B2B CEP Platforms: Adobe Marketo Engage, Adobe Journey Optimizer B2B Edition, Salesforce Account Engagement, HubSpot.
- Vendors (CDP, data, activation layer): Adobe, Bloomreach, Braze, Hightouch, Insider, Iterable, Salesforce, Segment, Tealium, Treasure Data.
Each week, the agent checks each platform’s official release notes, changelogs, and product update pages. It compares what it finds against a memory of what was already reported in prior runs, keeps only what is genuinely new, and produces two outputs:
- The digest: one section per platform and vendor, with the new items, a short explanation of what changed, and the direct link to the source. No invented details, no third-party summaries unless the official source is unavailable. Impact scoring and capability tagging included so you can scan quickly.
- A draft article: what you are reading now, essentially. A commentary layer on top of the raw digest: what the week’s releases mean in aggregate, which patterns are worth tracking, and which signals I think are being underread by the market.
The article is the one I care most about. Release notes are facts. The article is interpretation. And interpretation, grounded in years of building these stacks for real clients, is the thing that is actually scarce.
The Honest Reason I Did Not Do This Before
The discipline required to run a structured scan across eleven platforms every week, without it degenerating into “I checked the ones I happened to visit,” is real. I have tried versions of this before: curated reading lists, saved searches, Slack channels with link dumps. All of them eventually converged on the same failure mode: inconsistency, gaps, and the cognitive overhead of managing the process rather than doing the reading.
What changed is that I now have a tool capable of running the process reliably. I use Claude in Cowork mode as the engine. It handles the structured fetching, comparison against memory, source verification, and output generation. I review, edit, and add the interpretive layer. That division of labor is what makes the system sustainable.
I want to be precise about what this means, because “AI-generated content” carries a lot of justified skepticism in professional contexts.
The agent does not fabricate release details. Everything in the digest is sourced from official documentation with a direct link. If a page is unavailable or the content cannot be extracted reliably, the digest notes that explicitly rather than filling the gap with inference. The agent checks primary sources: Adobe Experience League, Braze docs, Iterable’s support center, HubSpot’s product update pages, Bloomreach’s changelog. Not summaries, not analysts, not press releases. Where it does use secondary sources (typically for vendor-level news or conference coverage), it says so.
What I add on top of that is the architectural reading. Which patterns across releases suggest a coordinated market direction. What a specific feature means for the teams actually implementing these platforms. Why a deprecation notice buried in a Marketo release note matters more than a feature launch with a press release.
That part does not come from the agent. It comes from having built and evaluated these stacks across retail, fashion, insurance and financial services for two and a half decades.
How I Built the Agent
I want to walk through the architecture of this because I think it is genuinely useful, both as a transparency disclosure and as a practical illustration of what current AI tooling can actually do in a professional context.
The setup runs inside Claude’s Cowork mode on my desktop. I defined the agent’s behavior through a structured role prompt that specifies:
The sources to check. Every platform has a primary official URL. Some have mandatory secondaries (for Braze, I always check the official releases home; for Adobe Journey Optimizer, both the current notes and the 2026 archive). The prompt specifies that official vendor-owned documentation takes priority over any third-party analysis and that gaps are reported rather than filled.
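As a concrete illustration: the real configuration lives in the role prompt as prose, not code, but if you expressed it as data it could look something like the sketch below. Every field name and placeholder URL is illustrative, not the actual prompt.

```python
# Illustrative only: placeholder strings stand in for the official pages.
SOURCES = {
    "Braze": {
        "primary": "<official Braze release notes URL>",
        "secondary": ["<Braze releases home URL>"],  # always checked
    },
    "Adobe Journey Optimizer": {
        "primary": "<current AJO release notes URL>",
        "secondary": ["<AJO 2026 archive URL>"],
    },
    # ... one entry per platform and vendor
}

# Rule encoded in the prompt: official vendor-owned documentation
# outranks third-party analysis, and unreachable pages are reported
# as gaps, never paraphrased from secondary coverage.
```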
The workflow. Check each platform. Find the latest official updates. Compare against memory. Discard anything already reported. Summarize what remains. Include the source link. Repeat for vendors. Then produce the digest and the article draft.
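If you wrote that loop down as code rather than prompt instructions, it might look roughly like this; the helper and field names are hypothetical, since the actual agent performs these steps in natural language inside Cowork.

```python
from typing import Dict, List

def fetch_official_updates(urls: Dict) -> List[dict]:
    """Placeholder: in the real workflow, Claude reads the pages itself."""
    return []

def weekly_run(sources: Dict[str, Dict], seen: set) -> List[dict]:
    """One scan: fetch, dedupe against memory, keep only what is new."""
    new_items = []
    for platform, urls in sources.items():
        for item in fetch_official_updates(urls):
            key = (platform, item["url"], item["normalized_description"])
            if key not in seen:  # already reported in a prior run?
                new_items.append({**item, "platform": platform})
    return new_items  # feeds the digest, the draft, and next week's memory
```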
The memory system. This is the piece that makes the deduplication work across weeks. After each run, the agent writes a structured log of every item it reported: platform, URL, release date, normalized description. The next run reads that log before it starts, so it knows what has already been covered. Without this, a weekly process would keep re-reporting the same stable features that simply happen to appear on the release notes page.
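A minimal sketch of what that log could look like, assuming a JSON Lines file (the real storage format is whatever the agent’s workspace makes convenient; the filename is invented):

```python
import json
from pathlib import Path

MEMORY_FILE = Path("memory.jsonl")  # hypothetical filename

def load_seen_keys() -> set:
    """Read prior runs' log and build the dedupe set."""
    seen = set()
    if MEMORY_FILE.exists():
        for line in MEMORY_FILE.read_text().splitlines():
            item = json.loads(line)
            # The normalized description catches reworded re-announcements
            # that a URL match alone would let back in.
            seen.add((item["platform"], item["url"], item["normalized_description"]))
    return seen

def record(items: list) -> None:
    """Append this run's reported items for the next run to read."""
    with MEMORY_FILE.open("a") as f:
        for item in items:
            f.write(json.dumps(item) + "\n")
```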
The output format. The digest includes impact scoring (CDP / CRM / Journey relevance), capability tags (Segmentation, Activation, AI, Channels, etc.), a per-platform section, a per-vendor section, a conclusion with market interpretation, and a full source appendix. The article draft is a separate file. Both are MD files, lighter for Claude to process and already formatted for my Astro personal site.
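To give a feel for the shape of one digest section, here is a plausible Markdown rendering; the field names and the numeric impact scale are invented for the example.

```python
def render_section(platform: str, items: list) -> str:
    """Render one platform's digest section as Markdown."""
    lines = [f"## {platform}"]
    for item in items:
        tags = ", ".join(item["tags"])          # e.g. "Segmentation, AI"
        impact = " / ".join(f"{area}: {score}"  # e.g. "CDP: 2 / Journey: 3"
                            for area, score in item["impact"].items())
        lines.append(f"- **{item['title']}** [{tags}] ({impact})")
        lines.append(f"  {item['summary']} ([source]({item['url']}))")
    return "\n".join(lines)
```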
The first run, which produced the first edition of this series, was the slowest: a full pass across every source with no prior memory to compare against. Subsequent runs will be faster because most of the structure is already established.
What I do at the end of each run: I read the digest, correct anything that looks wrong or imprecise, and write the interpretive layer of the article (or edit the draft).
The agent handles the research and the structure. I handle the judgment.
What This Is Not
It is not a vendor-neutral overview of the entire MarTech landscape. I track the platforms I actually know. There are tools I do not cover, not because they are unimportant, but because I cannot offer useful interpretation of them. I would rather cover eleven platforms well than twenty platforms superficially.
It is not a replacement for reading the release notes yourself. If you work daily with one of these platforms, you should be reading its changelog directly. What this series offers is the cross-platform pattern recognition: what is happening across the entire category in a given week, which is genuinely hard to see when you are also trying to run implementations.
Why It Is Worth Reading
The MarTech industry is very good at generating noise. Vendors have marketing teams. Analyst firms have research agendas. Conference sessions have sponsors. Press releases are optimized for coverage, not accuracy.
Release notes are different. They are written for implementers, not buyers.
They describe what actually changed, not what someone wants you to believe changed. They include the deprecation notices that nobody puts in the press release. They show you which features went from “limited availability” to “general availability”, which tells you something about real production readiness. They are the closest thing this industry has to primary sources.
I find them genuinely interesting. Not every week; some weeks the entire scan produces very little. But the weeks when a coordinated pattern emerges across multiple platforms simultaneously, when you can see the market moving in a direction before the analyst coverage catches up, those are worth the investment.
That is what I am trying to surface here, every week, without embellishment.
The first full edition follows this post. It covers the week ending April 30, 2026, and it is one of the more interesting scans I have done. As it happens, the week I chose to launch this series turned out to be the week that agentic AI stopped being a roadmap item and became a product reality across most of the platforms I track.
I did not plan that. The timing was the timing.
*The digest behind each weekly article is produced by a structured agent run; the interpretation is mine.*