Almost Astro 6 + Workers Cutover
Astro 6 beta + Workers migration, production cutover, tweet embeds, and perf cleanup.
What changed
This branch moved the site from Astro 5.16.15 on Cloudflare Pages to Astro 6.0.0-beta.20 on Cloudflare Workers. The obvious code moves were the ones Astro documented: src/content/config.ts became src/content.config.ts, z moved to astro/zod, and the deprecated <ViewTransitions /> component became <ClientRouter />.
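In the renamed config, the shape looks roughly like this. The collection name and schema below are illustrative placeholders, not this site's real content model; the import paths are the documented Astro ones:

```ts
// src/content.config.ts (was src/content/config.ts)
import { defineCollection } from "astro:content";
import { glob } from "astro/loaders";
import { z } from "astro/zod"; // z is no longer imported from astro:content

// "blog" and its schema are placeholders for illustration only
const blog = defineCollection({
  loader: glob({ pattern: "**/*.mdx", base: "./src/content/blog" }),
  schema: z.object({
    title: z.string(),
    pubDate: z.coerce.date(),
  }),
});

export const collections = { blog };
```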
The bigger shift was runtime truth: functions/_middleware.ts is gone, and the branch now treats Workers as the runtime to validate against instead of carrying the old Pages-oriented assumptions forward.
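On Workers there is no Pages functions/ directory to hang middleware off, so the replacement lives in Astro's own middleware hook. A minimal sketch of the shape (the header logic here is invented for illustration, not the site's real middleware):

```ts
// src/middleware.ts, replacing the old functions/_middleware.ts
import { defineMiddleware } from "astro:middleware";

export const onRequest = defineMiddleware(async (_context, next) => {
  const response = await next();
  // Illustrative only: the kind of per-response work the Pages middleware did
  response.headers.set("x-runtime", "workers");
  return response;
});
```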
The most useful before-and-after package snapshot looked like this:
| Package | Before | After |
|---|---|---|
| astro | ^5.16.15 | ^6.0.0-beta.20 |
| @astrojs/cloudflare | ^12.6.5 | ^13.0.0-beta.14 |
| @astrojs/mdx | ^4.3.13 | ^5.0.0-beta.12 |
| @astrojs/react | ^4.4.2 | ^5.0.0-beta.4 |
| @astrojs/rss | ^4.0.15 | ^4.0.15-beta.4 |
| @astrojs/sitemap | ^3.7.0 | ^3.6.1-beta.3 |
| Area | Before | After |
|---|---|---|
| Framework/runtime | Astro 5.16.15 on Cloudflare Pages | Astro 6.0.0-beta.20 on Cloudflare Workers |
| Content config | src/content/config.ts | src/content.config.ts |
| Client routing | <ViewTransitions /> plus a router patch | <ClientRouter /> on supported APIs |
| Middleware shape | functions/_middleware.ts | src/middleware.ts on Workers |
| CSP strategy | Parse generated _worker.js output | Astro 6 stable global CSP |
What got simpler
The nicest cleanup was CSP. The old setup parsed Astro’s generated _worker.js output to keep headers and router behavior aligned. Astro 6’s stable CSP support made that unnecessary, so the branch now uses the simpler supported global CSP path instead.
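For reference, the opt-in in the 5.x experimental era looked like the fragment below. I have not rechecked the exact key Astro 6 stabilized it under, so treat this as an assumption and confirm against the current config reference:

```ts
// astro.config.mjs
// Assumption: the Astro 5.9-era experimental opt-in, shown here, graduates
// under a similar (possibly renamed) key in Astro 6. Verify against the docs.
import { defineConfig } from "astro/config";

export default defineConfig({
  experimental: {
    csp: true, // Astro emits the CSP itself; no parsing of _worker.js output
  },
});
```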
The other good deletion was the old Astro router patch. patches/astro+5.16.15.patch had been reaching into router internals directly. That patch is gone rather than being dragged forward into Astro 6.
Why the beta move made sense here
This was still beta software, but it was a good candidate for beta work. The site is content-driven, not life-critical, and the stack already had the right pressure valves: builds that actually catch things, validation scripts that check public surfaces, preview environments that make it possible to rehearse the move, a rollback path back to Pages, and feature flags that keep preview-only content from leaking into production.
The migration also had a lot of tight feedback loops built into it. Build, test, compare, deploy preview, run parity checks, fix the weird thing, rerun, cut over, verify again. That rhythm matters more than some abstract rule about whether betas are always good or always bad. What made this workable was not bravado. It was having enough automation, enough checkpoints, and enough human validation in the loop to keep the risky parts legible.
That is also why the production cutover itself ended up feeling less dramatic than the migration work. By the time the domains actually moved, the branch had already been forced through performance checks, public parity checks, tweet parity checks, vault checks, iOS checks, and rollback thinking. The beta risk was real. It just was not unmanaged.
What broke
Some breakage was expected. Some of it was weirder.
ThumbHash generation was pulling sharp into a Cloudflare server bundle path where it did not belong, which caused partially rendered HTML in early Worker runs. Mermaid rendering also needed a safer fallback path. Both are now handled without aborting the page render.
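Both fixes boil down to the same shape: wrap the fragile step so a failure degrades to a fallback instead of aborting the render. A minimal sketch of that pattern, with names and types of my own invention rather than the site's actual code:

```typescript
// Illustrative pattern: degrade to a fallback marker instead of throwing,
// so one failed image hash or diagram cannot abort the whole page render.
type ThumbResult = { hash: string } | { fallback: true };

async function safeThumbHash(
  generate: () => Promise<string>, // e.g. a sharp-backed generator in Node
): Promise<ThumbResult> {
  try {
    return { hash: await generate() };
  } catch {
    // sharp missing from the Workers bundle lands here; rendering continues
    return { fallback: true };
  }
}
```

The caller then renders a plain placeholder for the `{ fallback: true }` case rather than partially rendered HTML.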
Keystatic was the most obvious compatibility miss. @keystatic/astro@5.0.6 still advertises Astro peer support through v5, and its Cloudflare path was still using the removed Astro.locals.runtime.env API. A small local patch gets it booting again on Astro 6 + Workers, and I would like to delete that patch as soon as upstream catches up.
Tweet embeds turned into their own little subplot. The syndication endpoint behaved differently inside the Worker-oriented prerender path than it did in plain Node, so build-time fetches started baking “Tweet not found” fallbacks into public pages. The fix was to stop trusting prerender-time network luck: fetch the tweet JSON once during prebuild with plain Node, write a local cache file, and render the embed HTML from that cached payload.
What was actually verified
This is the verification story, not just a build log.
| Check | Status | What that means |
|---|---|---|
| npm test | Pass | unit-level safety net still green |
| CF_PAGES_BRANCH=astro6-beta-workers npm run build | Pass | branch builds end-to-end on the Workers path |
| CF_PAGES_BRANCH=main npm run build | Pass | production-shaped Worker build succeeds too |
| npm run test:csp | Pass | global CSP stayed aligned with the migrated runtime |
| npm run test:seo | Pass | generated SEO surfaces still validate |
| npm run test:search | Pass | search/index outputs still build correctly |
| Pre-cutover public parity pass vs live Pages | Proven | the deployed Worker matched public behavior where it mattered |
| Production custom-domain cutover | Proven | tonyseets.com and www.tonyseets.com now serve Worker responses |
| Tweet parity on production | Proven | live tweet pages render real embed HTML again instead of fallback blocks |
| iOS navigation | Proven | iPhone 14 emulation matched live, and a later human-side physical iOS check also looked good |
The main public diffs I saw during the cutover work were feature-flag behavior around /projects, not migration breakage. I also had to fix a config-time mismatch so sitemap, rss, and llms now agree about what is public.
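The cleanest guard against that class of mismatch is a single shared predicate that every surface consumes. This is my own illustration of the idea, with invented flag names, not the site's actual flag code:

```typescript
// One "is this public?" decision, imported by sitemap, RSS, and llms
// generation alike, so the three surfaces cannot drift apart again.
type Entry = { draft?: boolean; previewOnly?: boolean };

function isPublic(entry: Entry, branch: string): boolean {
  if (entry.draft) return false;
  // Preview-only content stays visible on preview branches but never on main
  if (entry.previewOnly && branch === "main") return false;
  return true;
}
```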
Performance, honestly
The branch ended with four useful perf snapshots for the same route matrix: the pre-migration Pages baseline, the first post-migration Workers rerun, the first remediation pass, and one final cleanup pass after I chased the remaining waste instead of calling it “close enough.”
The first rerun was not the result I wanted. Desktop stayed close to flat, but mobile fell back harder than I was comfortable with. The worst regressions were the homepage and /projects.
The local matrix
| Checkpoint | Mobile score | Mobile FCP | Mobile LCP | Desktop score | Desktop FCP | Desktop LCP |
|---|---|---|---|---|---|---|
| Old Pages baseline | 97.7 | 1621 ms | 2168 ms | 100 | 450 ms | 561 ms |
| First migrated Workers rerun | 91.9 | 2225 ms | 2991 ms | 99.7 | 546 ms | 698 ms |
| First perf remediation | 94.3 | 2150 ms | 2640 ms | 100 | 527 ms | 626 ms |
| Final cleanup pass | 96.4 | 1842 ms | 2440 ms | 100 | 499 ms | 617 ms |
That recovery happened because the fixes got more specific instead of more vague:
| Fix pass | What moved most |
|---|---|
| Perf pass 1 | lazy hydration for the below-the-fold homepage chart, fewer non-critical preconnects, responsive Astro images on /projects, stable featured-slider initial state |
| Perf pass 2 | removed remaining JetBrains Mono usage on home and /projects, removed the remaining above-the-fold italic font-shift source, stopped loading the hover-only link-preview bundle on non-hover devices |
Compared with the first Workers rerun, the final local pass ended up about 4.6 mobile Lighthouse points better, about 383 ms faster on mobile FCP, about 551 ms faster on mobile LCP, and about 79.5 KB lighter on average transfer. Desktop improved too: about 48 ms faster on FCP, about 81 ms faster on LCP, and about 67 KB lighter on transfer.
The clearest route-level recoveries were:
| Route | Mobile score | LCP | CLS |
|---|---|---|---|
| / | 87 -> 96 | 3634 ms -> 2481 ms | 0 -> 0 |
| /projects | 88 -> 98 | 3617 ms -> 2107 ms | 0.0087 -> 0 |
The live comparison that mattered
The most trustworthy apples-to-apples comparison I ended with was the current public Pages site versus the deployed Worker candidate that was about to replace it.
| Average delta, Worker vs live Pages | Mobile | Desktop |
|---|---|---|
| Lighthouse score | +29 | +21.2 |
| FCP | -77.7 ms | -3.3 ms |
| LCP | -580.3 ms | -194.9 ms |
| Transfer size | -140.9 KB | -126.8 KB |
| JavaScript transfer | -80.1 KB | -67.2 KB |
| Representative live route | Pages mobile score | Worker mobile score | LCP delta |
|---|---|---|---|
| / | 71 | 100 | -748 ms |
| /blog | 69 | 100 | -745 ms |
| /field-notes | 70 | 100 | -743 ms |
That means two things can be true at once:
- the broader local matrix still showed a remaining mobile gap versus the original old baseline
- the deployed Worker candidate beat the current live Pages site on every measured route in the live remote subset
The post-launch check
After the real production cutover, I ran one smaller live Lighthouse pass against the public Worker on / and /blog/agent-shaped-web/.
| Route | Form factor | Score | FCP | LCP | CLS | TBT |
|---|---|---|---|---|---|---|
| / | Desktop | 74 | 310 ms | 350 ms | 0 | 730 ms |
| / | Mobile | 71 | 1158 ms | 1383 ms | 0 | 2809 ms |
| /blog/agent-shaped-web/ | Desktop | 76 | 318 ms | 565 ms | 0 | 603 ms |
| /blog/agent-shaped-web/ | Mobile | 69 | 1158 ms | 2283 ms | 0.0006 | 2834 ms |
The important interpretation there is not “the site got slow again.” The main offender in the live runs was Cloudflare’s own challenge script, not a fresh app-side regression. On the homepage mobile run, the top bootup-time item was cdn-cgi/challenge-platform/scripts/jsd/main.js at about 5869 ms, which swamps Lighthouse TBT and drags the score harder than the page’s own render path does.
Compared with the earlier post-cutover homepage check on the same live domain, the homepage Worker path stayed basically flat on FCP, improved LCP, and still held CLS at 0:
| Homepage live check | Desktop score | Desktop LCP | Mobile score | Mobile LCP | Mobile TBT |
|---|---|---|---|---|---|
| Earlier post-cutover run | 76 | 537 ms | 71 | 1740 ms | 2634 ms |
| After tweet/cache fix | 74 | 350 ms | 71 | 1383 ms | 2809 ms |
That is why I read the live production perf story as “render path still healthy, challenge overhead still noisy,” not “migration regression reopened.”
How close this looks to Astro 6 stable
It does not feel like a second migration is hiding behind this one. Most of the migration-bearing work is already done, and the stable follow-up should mostly be a cleanup pass.
| Area | Readiness read | Why |
|---|---|---|
| Astro core | Mostly there | the branch already absorbed the breaking changes that mattered here |
| Cloudflare adapter | Mostly there | the branch now follows the Workers-first path Astro is steering toward |
| Router + CSP | In good shape | this branch moved off the patchy/internal path and onto supported APIs |
| Keystatic | Not fully there yet | works with a local patch, which I still want to delete later |
| Operational proof | Mostly there | the public domains are already on the Worker, tweets were reproven live, and the remaining uncertainty is ecosystem lag rather than the core cutover path |
My best estimate is that the Astro core + official Cloudflare adapter migration work is roughly 85-90% done already, and the practical production move is now mostly past the risky part. What remains looks like routine operational follow-through rather than a second migration.
That is why I think the next pass, when Astro 6 stable actually lands, should be intentionally small: swap beta versions for stable, rerun the matrix, rerun the live Pages-vs-Worker subset, and check whether the Keystatic patch can finally disappear.
What still needs another pass
After the production cutover, what I still want is smaller:
- an upstream Keystatic release that makes the local patch unnecessary
- one smaller cleanup pass when Astro 6 stable lands so the branch can drop the beta versions and rerun the same matrix
- maybe one future pass on live challenge overhead if I decide it is worth tuning the Cloudflare side rather than just reading around it in Lighthouse
The migration is meaningfully simpler now than it was on Astro 5, but it is not “done because the build passed.” The branch is in a much better place. The last bit is making sure the performance story is as clean as the runtime story.