Back to Colophon

Almost Astro 6 + Workers Cutover

Astro 6 beta + Workers migration, production cutover, tweet embeds, and perf cleanup.

#astro #cloudflare #workers #migration #performance

What changed#

This branch moved the site from Astro 5.16.15 on Cloudflare Pages to Astro 6.0.0-beta.20 on Cloudflare Workers. The obvious code moves were the ones Astro documented: src/content/config.ts became src/content.config.ts, z moved to astro/zod, and the deprecated <ViewTransitions /> component became <ClientRouter />.
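For reference, the renamed content config ends up shaped roughly like this. This is a hedged sketch: the collection name, loader pattern, and schema fields are illustrative, not this site's actual config.

```ts
// src/content.config.ts (was src/content/config.ts in Astro 5)
// Hedged sketch - collection name and schema fields are illustrative.
import { defineCollection } from 'astro:content';
import { glob } from 'astro/loaders';
import { z } from 'astro/zod'; // z now comes from here, not 'astro:content'

const blog = defineCollection({
  loader: glob({ pattern: '**/*.mdx', base: './src/content/blog' }),
  schema: z.object({
    title: z.string(),
    description: z.string(),
    pubDate: z.coerce.date(),
    tags: z.array(z.string()).default([]),
  }),
});

export const collections = { blog };
```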

The bigger shift was runtime truth. functions/_middleware.ts is gone, and the branch now treats Workers as the thing to validate against instead of carrying the old Pages-oriented assumptions forward.
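The new middleware shape is the standard Astro one rather than a Pages Functions file. A hedged sketch; the header set here is a stand-in, not the site's actual header set:

```ts
// src/middleware.ts - replaces functions/_middleware.ts from the Pages era.
// Hedged sketch: the header below is illustrative.
import { defineMiddleware } from 'astro:middleware';

export const onRequest = defineMiddleware(async (context, next) => {
  // Runs inside the Worker itself; Cloudflare bindings are exposed by the
  // adapter on context.locals instead of the old Pages Functions context.
  const response = await next();
  response.headers.set('X-Frame-Options', 'DENY');
  return response;
});
```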

The most useful before-and-after package snapshot looked like this:

| Package | Before | After |
| --- | --- | --- |
| astro | ^5.16.15 | ^6.0.0-beta.20 |
| @astrojs/cloudflare | ^12.6.5 | ^13.0.0-beta.14 |
| @astrojs/mdx | ^4.3.13 | ^5.0.0-beta.12 |
| @astrojs/react | ^4.4.2 | ^5.0.0-beta.4 |
| @astrojs/rss | ^4.0.15 | ^4.0.15-beta.4 |
| @astrojs/sitemap | ^3.7.0 | ^3.6.1-beta.3 |

| Area | Before | After |
| --- | --- | --- |
| Framework/runtime | Astro 5.16.15 on Cloudflare Pages | Astro 6.0.0-beta.20 on Cloudflare Workers |
| Content config | src/content/config.ts | src/content.config.ts |
| Client routing | <ViewTransitions /> plus a router patch | <ClientRouter /> on supported APIs |
| Middleware shape | functions/_middleware.ts | src/middleware.ts on Workers |
| CSP strategy | Parse generated _worker.js output | Astro 6 stable global CSP |

Diagram: before (Astro 5.16.15 + Pages), a request flowed through Pages Functions middleware, the Astro router patch, and a custom CSP sync from generated worker output to a page response. After (Astro 6.0.0-beta.20 + Workers), it flows through Astro middleware on Workers, ClientRouter, and Astro's stable global CSP to a Worker response.

What got simpler#

The nicest cleanup was CSP. The old setup parsed Astro’s generated _worker.js output to keep headers and router behavior aligned. Astro 6’s stable CSP support made that unnecessary, so the branch now uses the simpler supported global CSP path instead.
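In config terms the swap is small. A hedged sketch, assuming the stabilized option keeps the shape of the experimental CSP flag Astro shipped in 5.9; the exact key may differ in the Astro 6 beta:

```ts
// astro.config.ts - hedged sketch of the supported CSP path.
import { defineConfig } from 'astro/config';
import cloudflare from '@astrojs/cloudflare';

export default defineConfig({
  adapter: cloudflare(),
  // Astro hashes inline scripts/styles at build time and emits the policy
  // itself, so nothing needs to parse the generated _worker.js anymore.
  experimental: {
    csp: true,
  },
});
```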

The other good deletion was the old Astro router patch. patches/astro+5.16.15.patch had been reaching into router internals directly. That patch is gone rather than being dragged forward into Astro 6.

Why the beta move made sense here#

This was still beta software, but it was a good candidate for beta work. The site is content-driven, not life-critical, and the stack already had the right pressure valves: builds that actually catch things, validation scripts that check public surfaces, preview environments that make it possible to rehearse the move, a rollback path back to Pages, and feature flags that keep preview-only content from leaking into production.

The migration also had tight feedback loops built into it: build, test, compare, deploy preview, run parity checks, fix the weird thing, rerun, cut over, verify again. That rhythm matters more than any abstract rule about whether betas are good or bad. What made this workable was not bravado; it was having enough automation, enough checkpoints, and enough human validation in the loop to keep the risky parts legible.

That is also why the production cutover itself ended up feeling less dramatic than the migration work. By the time the domains actually moved, the branch had already been forced through performance checks, public parity checks, tweet parity checks, vault checks, iOS checks, and rollback thinking. The beta risk was real. It just was not unmanaged.

What broke#

Some breakage was expected. Some of it was weirder.

ThumbHash generation was pulling sharp into a Cloudflare server bundle path where it did not belong, which caused partially rendered HTML in early Worker runs. Mermaid rendering also needed a safer fallback path. Both are now handled without aborting the page render.
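The gating idea is worth sketching. This is a hedged sketch with names of my own invention, not the site's actual code; the point is that sharp is only ever loaded through a conditional dynamic import on the Node build path, so it cannot land in the Worker bundle:

```ts
// Hedged sketch - function names are illustrative; the real pipeline is
// more involved. The runtime gate is the part that matters.

// True only where the Node-only image pipeline (sharp) is safe to load.
function canUseNodeImagePipeline(env: any = globalThis): boolean {
  return Boolean(env.process && env.process.versions && env.process.versions.node);
}

// Returns a base64 placeholder hash, or null so the page still renders.
async function thumbhashFor(imagePath: string): Promise<string | null> {
  if (!canUseNodeImagePipeline()) return null; // Workers path: skip, don't crash
  // @ts-ignore - external, Node-only dependency resolved at runtime
  const { default: sharp } = await import('sharp'); // never a static import
  const { data, info } = await sharp(imagePath)
    .resize(100, 100, { fit: 'inside' })
    .ensureAlpha()
    .raw()
    .toBuffer({ resolveWithObject: true });
  // @ts-ignore - external dependency resolved at runtime
  const { rgbaToThumbHash } = await import('thumbhash');
  return Buffer.from(rgbaToThumbHash(info.width, info.height, data)).toString('base64');
}
```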

Keystatic was the most obvious compatibility miss. @keystatic/astro@5.0.6 still advertises Astro peer support through v5, and its Cloudflare path was still using the removed Astro.locals.runtime.env API. A small local patch gets it booting again on Astro 6 + Workers, and I would like to delete that patch as soon as upstream catches up.

Tweet embeds turned into their own little subplot. The syndication endpoint behaved differently inside the Worker-oriented prerender path than it did in plain Node, so build-time fetches started baking “Tweet not found” fallbacks into public pages. The fix was to stop trusting prerender-time network luck: fetch the tweet JSON once during prebuild with plain Node, write a local cache file, and render the embed HTML from that cached payload.
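A sketch of that prebuild shape; the cache path and the syndication URL here are assumptions, not the site's exact script:

```ts
// scripts/cache-tweets.ts - hedged sketch of the prebuild tweet cache.
import { mkdir, writeFile, readFile } from 'node:fs/promises';
import { join } from 'node:path';

const CACHE_DIR = '.cache/tweets'; // assumed location

// The public syndication endpoint for one tweet id (URL shape is an assumption).
function syndicationUrl(id: string): string {
  return `https://cdn.syndication.twimg.com/tweet-result?id=${id}&lang=en`;
}

// Runs once in plain Node during prebuild, so prerender never hits the network.
async function cacheTweet(id: string, fetchImpl = fetch) {
  const res = await fetchImpl(syndicationUrl(id));
  if (!res.ok) throw new Error(`tweet ${id}: HTTP ${res.status}`);
  const payload = await res.json();
  await mkdir(CACHE_DIR, { recursive: true });
  await writeFile(join(CACHE_DIR, `${id}.json`), JSON.stringify(payload));
  return payload;
}

// At render time the embed reads only the local cache; a miss falls back to
// the "Tweet not found" block instead of failing the page.
async function readCachedTweet(id: string) {
  try {
    return JSON.parse(await readFile(join(CACHE_DIR, `${id}.json`), 'utf8'));
  } catch {
    return null;
  }
}
```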

What was actually verified#

This is the verification story, not just a build log.

| Check | Status | What that means |
| --- | --- | --- |
| npm test | Pass | unit-level safety net still green |
| CF_PAGES_BRANCH=astro6-beta-workers npm run build | Pass | branch builds end-to-end on the Workers path |
| CF_PAGES_BRANCH=main npm run build | Pass | production-shaped Worker build succeeds too |
| npm run test:csp | Pass | global CSP stayed aligned with the migrated runtime |
| npm run test:seo | Pass | generated SEO surfaces still validate |
| npm run test:search | Pass | search/index outputs still build correctly |
| Pre-cutover public parity pass vs live Pages | Proven | the deployed Worker matched public behavior where it mattered |
| Production custom-domain cutover | Proven | tonyseets.com and www.tonyseets.com now serve Worker responses |
| Tweet parity on production | Proven | live tweet pages render real embed HTML again instead of fallback blocks |
| iOS navigation | Proven | iPhone 14 emulation matched live, and a later check on a physical iOS device also looked good |

The main public diffs I saw during the cutover work were feature-flag behavior around /projects, not migration breakage. I also had to fix a config-time mismatch so sitemap, rss, and llms now agree about what is public.

Performance, honestly#

The branch ended with four useful perf snapshots for the same route matrix: the pre-migration Pages baseline, the first post-migration Workers rerun, the first remediation pass, and one final cleanup pass after I chased the remaining waste instead of calling it “close enough.”

The first rerun was not the result I wanted. Desktop stayed close to flat, but mobile fell back harder than I was comfortable with. The worst regressions were the homepage and /projects.

Progression: old Pages baseline (mobile score 97.7) -> first Workers rerun (mobile score 91.9) -> perf pass 1 (lazy dashboard hydration, responsive project images, lighter resource hints) -> perf pass 2 (remove remaining mono usage, remove remaining font-shift source, skip the hover-only mobile tooltip bundle) -> final local matrix (mobile score 96.4, CLS 0) -> live Pages vs deployed Worker subset (Worker ahead on every measured route).

The local matrix#

| Checkpoint | Mobile score | Mobile FCP | Mobile LCP | Desktop score | Desktop FCP | Desktop LCP |
| --- | --- | --- | --- | --- | --- | --- |
| Old Pages baseline | 97.7 | 1621 ms | 2168 ms | 100 | 450 ms | 561 ms |
| First migrated Workers rerun | 91.9 | 2225 ms | 2991 ms | 99.7 | 546 ms | 698 ms |
| First perf remediation | 94.3 | 2150 ms | 2640 ms | 100 | 527 ms | 626 ms |
| Final cleanup pass | 96.4 | 1842 ms | 2440 ms | 100 | 499 ms | 617 ms |

That recovery happened because the fixes got more specific instead of more vague:

| Fix pass | What moved most |
| --- | --- |
| Perf pass 1 | lazy hydration for the below-the-fold homepage chart, fewer non-critical preconnects, responsive Astro images on /projects, stable featured-slider initial state |
| Perf pass 2 | removed remaining JetBrains Mono usage on home and /projects, removed the remaining above-the-fold italic font-shift source, stopped loading the hover-only link-preview bundle on non-hover devices |
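The hover-only gate from pass 2 is easy to sketch. Assuming a matchMedia-based check (the function name and bundle path are illustrative, not the site's actual code):

```ts
// Hedged sketch - gate the hover-only link-preview bundle on pointer
// capability, mirroring the CSS hover/pointer media features.
function shouldLoadHoverPreviews(
  matchMediaImpl: any = (globalThis as any).matchMedia
): boolean {
  if (typeof matchMediaImpl !== 'function') return false; // SSR / no DOM: skip
  return matchMediaImpl('(hover: hover) and (pointer: fine)').matches === true;
}

// Illustrative entry point: the bundle is only imported when it can be used.
// if (shouldLoadHoverPreviews()) await import('./link-preview.js');
```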

Compared with the first Workers rerun, the final local pass ended up about 4.6 mobile Lighthouse points better, about 383 ms faster on mobile FCP, about 551 ms faster on mobile LCP, and about 79.5 KB lighter on average transfer. Desktop improved too: about 48 ms faster on FCP, about 81 ms faster on LCP, and about 67 KB lighter on transfer.

The clearest route-level recoveries were:

| Route | Mobile score | LCP | CLS |
| --- | --- | --- | --- |
| / | 87 -> 96 | 3634 ms -> 2481 ms | 0 -> 0 |
| /projects | 88 -> 98 | 3617 ms -> 2107 ms | 0.0087 -> 0 |

The live comparison that mattered#

The most trustworthy apples-to-apples comparison I ended with was the current public Pages site versus the deployed Worker candidate that was about to replace it.

| Average delta, Worker vs live Pages | Mobile | Desktop |
| --- | --- | --- |
| Lighthouse score | +29 | +21.2 |
| FCP | -77.7 ms | -3.3 ms |
| LCP | -580.3 ms | -194.9 ms |
| Transfer size | -140.9 KB | -126.8 KB |
| JavaScript transfer | -80.1 KB | -67.2 KB |

| Representative live route | Pages mobile score | Worker mobile score | LCP delta |
| --- | --- | --- | --- |
| / | 71 | 100 | -748 ms |
| /blog | 69 | 100 | -745 ms |
| /field-notes | 70 | 100 | -743 ms |

That means two things can be true at once:

  • the broader local matrix still showed a remaining mobile gap versus the original old baseline
  • the deployed Worker candidate beat the current live Pages site on every measured route in the live remote subset

The post-launch check#

After the real production cutover, I ran one smaller live Lighthouse pass against the public Worker on / and /blog/agent-shaped-web/.

| Route | Form factor | Score | FCP | LCP | CLS | TBT |
| --- | --- | --- | --- | --- | --- | --- |
| / | Desktop | 74 | 310 ms | 350 ms | 0 | 730 ms |
| / | Mobile | 71 | 1158 ms | 1383 ms | 0 | 2809 ms |
| /blog/agent-shaped-web/ | Desktop | 76 | 318 ms | 565 ms | 0 | 603 ms |
| /blog/agent-shaped-web/ | Mobile | 69 | 1158 ms | 2283 ms | 0.0006 | 2834 ms |

The important interpretation there is not “the site got slow again.” The main offender in the live runs was Cloudflare’s own challenge script, not a fresh app-side regression. On the homepage mobile run, the top bootup-time item was cdn-cgi/challenge-platform/scripts/jsd/main.js at about 5869 ms, which swamps Lighthouse TBT and drags the score harder than the page’s own render path does.

Compared with the earlier post-cutover homepage check on the same live domain, the homepage Worker path stayed basically flat on FCP, improved LCP, and still held CLS at 0:

| Homepage live check | Desktop score | Desktop LCP | Mobile score | Mobile LCP | Mobile TBT |
| --- | --- | --- | --- | --- | --- |
| Earlier post-cutover run | 76 | 537 ms | 71 | 1740 ms | 2634 ms |
| After tweet/cache fix | 74 | 350 ms | 71 | 1383 ms | 2809 ms |

That is why I read the live production perf story as “render path still healthy, challenge overhead still noisy,” not “migration regression reopened.”

How close this looks to Astro 6 stable#

It does not feel like a second migration is hiding behind this one. Most of the migration-bearing work is already done, and the stable follow-up should mostly be a cleanup pass.

| Area | Readiness read | Why |
| --- | --- | --- |
| Astro core | Mostly there | the branch already absorbed the breaking changes that mattered here |
| Cloudflare adapter | Mostly there | the branch now follows the Workers-first path Astro is steering toward |
| Router + CSP | In good shape | this branch moved off the patchy/internal path and onto supported APIs |
| Keystatic | Not fully there yet | works with a local patch, which I still want to delete later |
| Operational proof | Mostly there | the public domains are already on the Worker, tweets were reproven live, and the remaining uncertainty is ecosystem lag rather than the core cutover path |

My best estimate is that the Astro core + official Cloudflare adapter migration work is roughly 85-90% done, and the practical production move is already past the riskiest part. What remains looks like operational follow-through rather than a second migration.

That is why I think the next pass, when Astro 6 stable actually lands, should be intentionally small: swap beta versions for stable, rerun the matrix, rerun the live Pages-vs-Worker subset, and check whether the Keystatic patch can finally disappear.

What still needs another pass#

After the production cutover, what I still want is smaller:

  • an upstream Keystatic release that makes the local patch unnecessary
  • one smaller cleanup pass when Astro 6 stable lands so the branch can drop the beta versions and rerun the same matrix
  • maybe one future pass on live challenge overhead if I decide it is worth tuning the Cloudflare side rather than just reading around it in Lighthouse

The migration is meaningfully simpler now than it was on Astro 5, but it is not “done because the build passed.” The branch is in a much better place. The last bit is making sure the performance story is as clean as the runtime story.
