24 SaaS Changelogs, 5 Months: What I Found About How B2B Companies Actually Ship


Product managers have opinions about what competitors are building. Almost none of those opinions come from systematic data.

I tracked 24 B2B SaaS changelogs from October 2025 through early March 2026 and categorized every entry into five buckets: New Feature, Improvement, Bug Fix, Platform/API, and Deprecation. The companies span developer tools, infrastructure, CRM, marketing, design, project management, and customer support. 1,089 entries total.

The extended timeframe caught something I didn’t plan for: I watched the same teams ship through a regular quarter, slow to a crawl during the holidays, and ramp back up in the new year. That seasonal data turned out to be more interesting than anything else in the dataset.

[Chart: Weekly changelog entries across 24 SaaS companies]


What I tracked and how

I monitored changelogs at their source URLs — /changelog, /releases, /whats-new pages, RSS feeds, and Atom feeds — using automated scraping with manual verification for ambiguous entries.
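
For the companies that expose feeds, the collection step is a small polling script. Here's a minimal sketch using the feedparser library; the feed URLs below are placeholders, since real changelogs put feeds at varying paths, and the ones without feeds need HTML scraping instead:

```python
# Minimal feed-polling sketch. The URLs are placeholders, not real
# feed paths; sites without feeds need HTML scraping instead.
import feedparser

FEEDS = {
    "example-devtool": "https://example.com/changelog/rss.xml",
    "example-crm": "https://example.com/releases/atom.xml",
}

def fetch_entries(company, url):
    feed = feedparser.parse(url)
    for entry in feed.entries:
        yield {
            "company": company,
            "title": entry.get("title", ""),
            "published": entry.get("published", ""),
            "link": entry.get("link", ""),
        }

for company, url in FEEDS.items():
    for e in fetch_entries(company, url):
        print(e["company"], e["published"], e["title"])
```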

The 24 companies: Cloudflare, Vercel, Stripe, Tailscale, PlanetScale, Clerk, Figma, Customer.io, Kit, Neon, Railway, Linear, HubSpot, Slack, Close, Resend, Supabase, Attio, Gusto, Intercom, Zendesk, Canva, Notion, and Sentry. Developer tools and infrastructure are the largest group (10 of 24), but I also tracked CRM tools (HubSpot, Attio, Close), marketing platforms (Customer.io, Kit), customer support (Intercom, Zendesk), and others.

Each entry got one primary label: New Feature (a capability that didn’t exist before), Improvement (enhancement to something existing), Bug Fix (correction of incorrect behavior), Platform/API (developer-facing work like endpoints, webhooks, SDKs), or Deprecation (removal or sunset of capabilities).

Where an entry covered multiple categories, I assigned the primary intent.
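
A naive keyword pass over entry titles shows how the five buckets and the primary-intent rule fit together. This is an illustration, not the pipeline I actually used; in practice, ambiguous entries went to manual review:

```python
# Illustrative first-pass labeler for the five buckets. Keyword lists
# are checked in priority order so the primary intent wins.
from enum import Enum

class Category(Enum):
    NEW_FEATURE = "New Feature"
    IMPROVEMENT = "Improvement"
    BUG_FIX = "Bug Fix"
    PLATFORM_API = "Platform/API"
    DEPRECATION = "Deprecation"

KEYWORDS = [
    (Category.DEPRECATION, ("deprecat", "sunset", "end of life", "removed")),
    (Category.BUG_FIX, ("fix", "resolved", "corrected")),
    (Category.PLATFORM_API, ("api", "endpoint", "webhook", "sdk")),
    (Category.NEW_FEATURE, ("introducing", "launch", "now available")),
]

def label(title: str) -> Category:
    t = title.lower()
    for category, words in KEYWORDS:
        if any(w in t for w in words):
            return category
    return Category.IMPROVEMENT  # default: enhancement to something existing

print(label("Webhook support for refund events"))     # Category.PLATFORM_API
print(label("Faster cold starts for large projects"))  # Category.IMPROVEMENT
```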


Improvements dominate everything else

I expected bug fixes to dominate. They didn’t even come close.

Across 1,089 entries from 24 companies:

[Chart: What SaaS teams actually ship]

  • Improvements: 42.6%
  • New Features: 40.7%
  • Platform/API: 10.7%
  • Bug Fix: 4.6%
  • Deprecation: 1.4%

Over two-fifths of all changelog entries were improvements to existing features. Not new capabilities, not bug fixes — just making the thing they already had slightly better. New features came in a close second at 41%.

Bug fixes? Under 5%. Either these companies are genuinely shipping stable software, or (more likely) they don’t bother logging bug fixes in their public changelogs. Nobody’s writing a blog-style entry about fixing a race condition in their webhook delivery. Bug fixes get rolled into deploys and never see a changelog.

The one outlier was Tailscale. A substantial share of their 70 entries were bug fixes — the highest of any company in the dataset by a wide margin. Their changelog reads like proper release notes from an engineering team that actually tells you what broke. Refreshing, but unusual.

Kit went the other direction: the majority of their entries were new features. That’s what a marketing platform in growth mode looks like — shipping capabilities fast, worrying about polish later.

When a competitor’s blog announces six features in a quarter, the changelog behind it probably pairs every “built something new” entry with a “made X slightly better” one, plus a layer of platform work that never gets a launch post. The mix isn’t what you’d guess from reading announcements.


The holiday dead zone

[Chart: The holiday shipping cliff]

This is the section that surprised me most.

In a normal week during my tracking period, the 24 companies in my dataset collectively shipped about 50-55 changelog entries. That rate held remarkably steady through October, November, and the first three weeks of December.

Then Christmas hit.

The week of December 22-26: 8 entries total. Across all 24 companies. An 85% drop from baseline. Only four companies shipped anything at all: Vercel (3 entries), Customer.io (3), Cloudflare (1), and Tailscale (1). The other 20 companies went completely silent.

The following week was worse. December 29 through January 2: 7 entries. Cloudflare posted 2. Neon, Railway, Close, Kit, and Gusto each posted 1. Everyone else was still out.

To put a finer point on it: in a normal 12-day stretch, these companies collectively ship about 100 changelog entries. During the 12 days from December 22 to January 2, they shipped 15. That’s an 85% drop, and it’s not gradual. It’s a cliff.

The ramp-up wasn’t instant either. The first full week of January (Jan 5-9) saw 31 entries — about 60% of normal. It took until the week of January 12 to get back to 54, roughly full speed. February then turned out to be the strongest month in the entire dataset: 250 entries, averaging over 60 per week. New year energy is real. Teams come back from break with backlogs and Q1 plans, and February is when it all hits the changelog.

Thanksgiving showed a milder version of the same pattern. The week of November 24-28 dropped to 31 entries from 53 the previous week — a 42% dip, noticeable but nowhere near the Christmas cliff. Teams apparently take a couple days off for Thanksgiving but don’t fully disengage.

What does this mean for competitive monitoring? If you’re watching changelogs in late December and nothing’s happening, don’t read into it. Everyone is on vacation. But if a company is still dark in mid-January while competitors are back to shipping, that’s a signal worth paying attention to. And February changelogs deserve extra scrutiny — teams dump a lot of pent-up work in the first full month of the new year.


Which companies never stop shipping

[Chart: Monthly entries by company]

The holiday data also reveals something about company culture. Most companies go quiet during the holidays. A few don’t.

Vercel and Customer.io both shipped 3 entries during Christmas week. Cloudflare had entries in both Christmas and New Year’s weeks. These aren’t automated deploys or batch jobs — they’re real changelog entries describing specific changes.

On the other end: Attio and Sentry had zero entries in all of December. Both came roaring back in February — Attio went from 0 entries in December to 14 in February, and Sentry from 0 to 8. That’s not a company struggling. That’s a company that takes a real break and comes back swinging.

The biggest ramp-ups from December to February: Cloudflare (36 to 61, +25), Attio (0 to 14), Vercel (38 to 52, +14), Figma (9 to 18), Intercom (5 to 11). February consistently outpaced every other month for almost every company in the dataset.


Shipping velocity tracks with funding stage (until it doesn’t)

[Chart: Monthly shipping volume]

I went in expecting Series B-C companies to ship the fastest. Smaller teams, less overhead, hungry to capture market. The data said otherwise.

The two highest-velocity shippers were Cloudflare (50 entries/month) and Vercel (44 entries/month). Both are late-stage or public. Cloudflare has thousands of employees. Vercel has raised over $300M.

Then a big drop: Stripe at 15/month, Tailscale at 13.5/month, PlanetScale at 12/month. Mid-tier companies like Figma (10/month), Clerk (9/month), and Customer.io (7/month) shipped at roughly comparable rates despite very different team sizes. Smaller companies like Linear (3/month), Close (2.5/month), and Gusto (2/month) posted far fewer entries.

What’s going on? Cloudflare and Vercel have massive product surface areas. Cloudflare ships changes across Workers, Pages, R2, D1, Durable Objects, AI Gateway, and dozens of other products. Each sub-team has its own changelog cadence. Total volume is high because the denominator is huge.

Raw entry counts are misleading. A company with 15 products shipping 3 updates each looks like a machine that ships 45 things a month. A focused startup shipping 5 meaningful changes to one product is doing harder work per entry.

The real signal isn’t total velocity. It’s velocity per product area, and whether the updates are substantive or incremental.
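
A quick way to see the difference: divide raw monthly entries by a rough count of product areas. The area counts here are my illustrative guesses, not measured values:

```python
# Entries per product area per month. Product-area counts are
# illustrative assumptions, not measured from the dataset.
raw_monthly = {"cloudflare": 50, "vercel": 44, "linear": 3}
product_areas = {"cloudflare": 20, "vercel": 8, "linear": 1}

per_area = {c: round(raw_monthly[c] / product_areas[c], 1) for c in raw_monthly}
print(per_area)  # {'cloudflare': 2.5, 'vercel': 5.5, 'linear': 3.0}
```

Under those made-up area counts, the focused startup outships the platform giant per product area, which is the point.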


Platform/API work spikes before major announcements

Six companies had windows where 3 or more Platform/API entries landed within two weeks: Stripe (5 entries in a mid-December cluster), Vercel (3 in late October), PlanetScale (4 in mid-November), Cloudflare (3 in early February), Figma (3 in late February), and Slack (3 in mid-October).

The individual entries were boring. New API endpoint. Updated SDK. Webhook support for an additional event type. But stacked together, they formed a pattern.

PlanetScale’s November cluster aligned with their Vitess feature rollouts. Slack’s October spike came just before their Block Kit expansion announcements. Figma’s February cluster points toward platform work that hasn’t been publicly announced yet.

Platform work is infrastructure that gets built before it can be announced. If you’re watching a competitor and see a sudden burst of API/webhook/SDK entries, something larger is probably coming in 4-8 weeks. In my dataset, every cluster old enough to verify was followed by a larger announcement; Figma’s February cluster is the one still pending.
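
Spotting these clusters programmatically is a sliding-window check: flag any 14-day span holding three or more Platform/API entries. A minimal sketch, assuming entries arrive as (date, category) pairs:

```python
# Sketch: flag windows where >= 3 Platform/API entries land within 14 days.
from datetime import date, timedelta

def api_clusters(entries, min_entries=3, window=timedelta(days=14)):
    api_dates = sorted(d for d, cat in entries if cat == "Platform/API")
    clusters = []
    for i, start in enumerate(api_dates):
        in_window = [d for d in api_dates[i:] if d - start <= window]
        if len(in_window) >= min_entries:
            clusters.append((start, in_window[-1], len(in_window)))
    return clusters

entries = [
    (date(2025, 12, 8), "Platform/API"),
    (date(2025, 12, 10), "Improvement"),
    (date(2025, 12, 12), "Platform/API"),
    (date(2025, 12, 17), "Platform/API"),
]
print(api_clusters(entries))  # one cluster: Dec 8 to Dec 17, 3 entries
```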


The best days to check changelogs

Across 1,089 entries, releases spread more evenly across weekdays than I expected:

[Chart: Which days do SaaS teams ship?]

  • Thursday: 22.2%
  • Wednesday: 22.0%
  • Tuesday: 19.3%
  • Friday: 18.5%
  • Monday: 16.7%
  • Weekend: 1.3%

No single day dominates. Thursday and Wednesday are essentially tied at the top, but the gap between the busiest and quietest weekday is under 6 percentage points. Monday is the slowest weekday — ramp-up day after the weekend. Friday is more active than expected; teams apparently want to land changes before the weekend rather than sit on them.

The weekend number is effectively zero. Only 14 entries out of 1,089 landed on Saturday or Sunday. Whatever automated deploys happen over weekends, they don’t make it to public changelogs.

If you’re checking changelogs weekly, do it Friday afternoon. You’ll catch Wednesday and Thursday releases (the peak days) and anything that landed earlier in the week.
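
Reproducing the weekday breakdown for your own watchlist takes only a few lines once entries carry dates:

```python
# Sketch: weekday distribution from a list of entry dates.
from collections import Counter
from datetime import date

def weekday_share(dates):
    names = ["Monday", "Tuesday", "Wednesday", "Thursday",
             "Friday", "Saturday", "Sunday"]
    counts = Counter(names[d.weekday()] for d in dates)
    return {day: round(100 * counts[day] / len(dates), 1) for day in names}

# Three illustrative entry dates: a Thursday, a Friday, and a Monday.
print(weekday_share([date(2026, 2, 5), date(2026, 2, 6), date(2026, 2, 9)]))
```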


Deprecation notices are buried (and that’s a problem)

Only 15 entries out of 1,089 were categorized as deprecations. That’s 1.4%.

Some of them were significant: Supabase breaking pg_graphql’s auto-enable behavior, HubSpot sunsetting legacy sandboxes, Clerk disabling organization slugs by default, Slack pausing their classic apps deprecation, Gusto deprecating API versions. These aren’t minor cleanup — they’re changes that force customers to do work.

Deprecations tended to land inside larger, multi-item releases, where they’re easy to miss on a quick scan. HubSpot and Gusto each had multiple deprecation notices across the five months, buried in batches of other updates. Companies used soft language — “sunset,” “transitioning,” “what’s changing” — instead of clear “DEPRECATED” labels. And Slack’s entry about pausing a deprecation was itself newsworthy, which tells you something about how disruptive these changes are.

The 1.4% number is almost certainly an undercount. Companies communicate deprecations through email, in-app banners, and support docs that never touch the public changelog. If you’re tracking a competitor’s platform surface area, the changelog is not enough. You need to cross-reference help center updates and developer emails.


Developer tools write the most useful changelogs

Developer tools and infrastructure companies produced 814 of the 1,089 entries (75%). That’s partly dataset composition — I had 10 dev tools/infra companies out of 24. But even adjusting for that, these companies write more entries per month and include more useful detail per entry.

Stripe’s changelog includes specific API version references and migration guides. Vercel logs every meaningful change to their build pipeline and edge network separately. Cloudflare’s changelog covers dozens of product areas with consistent formatting. You can read these and understand exactly what changed at a technical level.[1][2][3]

The non-dev-tools companies split into two camps. Some wrote useful changelogs: Customer.io logged 37 entries with clear dates and category tags across their release notes page. Intercom posted 35 structured entries. Kit’s changelog had structured entries with Feature/Improvement labels. Close posted 13 detailed entries.

Others wrote almost nothing. Zendesk produced 6 entries in five months. Canva had 6. Notion had 10 but most were sparse. These companies ship plenty of product changes — they just don’t put them in a structured changelog. Canva announces features through blog posts and social media. Notion uses a release notes page that’s more curated narrative than raw log.

If you compete with a developer tools company, their changelog is a goldmine. If you compete with Canva or Zendesk, the changelog tells you almost nothing. You’ll need blog posts, help center diffs, and social media to piece together what they actually shipped.


Consistent shippers beat fast shippers

I measured consistency using the coefficient of variation (standard deviation divided by mean) of weekly entry counts. Lower means more consistent.
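
In code, given each company’s weekly entry counts (population standard deviation here; with 22-plus weeks of data the sample/population choice barely moves the number):

```python
# Coefficient of variation over weekly entry counts: std dev / mean.
from statistics import mean, pstdev

def cov(weekly_counts):
    m = mean(weekly_counts)
    return pstdev(weekly_counts) / m if m else float("inf")

steady = [1, 1, 2, 1, 1, 1, 2, 1]   # a Neon-style cadence (illustrative)
bursty = [0, 0, 0, 0, 0, 0, 6, 8]   # holiday shutdown, then a sprint
print(round(cov(steady), 2))  # 0.35 -> consistent shipper
print(round(cov(bursty), 2))  # 1.76 -> batch shipper
```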

The most consistent shipper was Neon (CoV 0.22). They averaged about one entry per week with almost no deviation — no holiday dumps, no quiet weeks, just steady output for 23 weeks straight. Railway (CoV 0.32), Cloudflare (CoV 0.43), and Vercel (CoV 0.46) rounded out the top four.

The least consistent: Sentry (CoV 2.48 — zero entries in December, then 8 in February), Attio (CoV 2.28 — same pattern, zero in December and 14 in February), and Stripe (CoV 2.09). High-variance shippers tend to batch their updates — either a big release dump followed by quieter weeks, or in Attio and Sentry’s case, a complete holiday shutdown followed by a February sprint.

The consistency numbers shifted substantially when I extended the window from four months to five. Companies that looked merely quiet over October-January revealed their true pattern once February’s ramp-up sat alongside the December dead zone. The holidays are a consistency stress test: companies like Neon and Railway barely flinched, while others showed a month-long gap in their output.

If I’m sizing up a competitor, consistency tells me more than volume. Neon shipping once a week for 23 straight weeks compounds into something formidable. A company that ships 14 things in February and then goes dark until April is harder to predict and often dealing with internal churn.


How to use this framework

Start by establishing a baseline. For each competitor, measure average monthly changelog entries over the past quarter. That’s your reference point. Cloudflare’s 50/month is normal for Cloudflare. If it drops to 25, that’s a signal.

Then watch for deviations. A company that averages 12 entries/month suddenly dropping to 4 for two months is worth noticing — technical crisis, reorg, or strategic pause. A company jumping from 8 to 20 has added capacity or shifted focus. But first, check the calendar. A December slowdown is human, not strategic.
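
This check automates cleanly. A sketch with illustrative thresholds (flag a month under half the trailing baseline, or over 1.5x it); the cutoffs are mine, not calibrated from the dataset:

```python
# Sketch: flag months that deviate sharply from a trailing baseline.
# Thresholds are illustrative, not calibrated from the dataset.
def flag_shifts(monthly_counts, window=3, drop=0.5, jump=1.5):
    alerts = []
    for i in range(window, len(monthly_counts)):
        baseline = sum(monthly_counts[i - window:i]) / window
        current = monthly_counts[i]
        if baseline and current < drop * baseline:
            alerts.append((i, "slowdown", current, baseline))
        elif baseline and current > jump * baseline:
            alerts.append((i, "acceleration", current, baseline))
    return alerts

# A company averaging ~12/month that drops to 4 gets flagged.
# Remember the calendar caveat: discount December before reading into it.
print(flag_shifts([12, 11, 13, 4]))  # [(3, 'slowdown', 4, 12.0)]
```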

Categorize, don’t just count. A spike in Platform/API entries means something different than a spike in Improvement entries. Six of the 24 companies showed Platform/API clusters that preceded larger announcements.

Pay attention to the seasonal rhythm. February is when companies ship their Q1 backlogs. October is strong and steady. December is dead. If you’re benchmarking a competitor’s output, compare like months. A company’s February output vs. their December output tells you nothing about their trajectory. Compare February to October instead.

Ignore the bug fix count. Unless a company is unusually transparent (like Tailscale), bug fixes don’t appear in public changelogs. The absence of bug fix entries doesn’t mean the software is stable — it means the team doesn’t log them publicly.

And review quarterly, not daily. The patterns that matter — platform moves, velocity shifts, deprecation waves, seasonal rhythms — only show up in the aggregate. Week-to-week numbers are noise.


The competitive intelligence advantage isn’t in reading changelogs. Most PMs know they should. It’s in reading them systematically, consistently, and with a framework for telling signal from noise.

1,089 entries. 24 companies. Five months. The patterns are there if you look.


  1. Stripe Changelog. Available at: https://docs.stripe.com/changelog

  2. Vercel Changelog. Available at: https://vercel.com/changelog

  3. Cloudflare Changelog. Available at: https://developers.cloudflare.com/changelog/