Complete Guide to Deploying Astro on Cloudflare: SSR Configuration + 3x Speed Boost for China

Introduction
Last week I helped a friend deploy an Astro blog. Here’s something interesting that happened. After deploying to Cloudflare Pages, I tested it from overseas - pages loaded instantly, super smooth. But when his friends in China tried to access it, they waited nearly 5 seconds for the page to load. Talk about awkward.
Cloudflare advertises over 300 data centers worldwide with unlimited free bandwidth. So why is access from China so slow? This issue puzzles many developers. You’ve probably heard of “optimized IPs,” but how exactly do you implement them? Will your account get banned? Does it actually work? These questions always linger.
In this article, I’ll walk you through the complete Astro Cloudflare deployment process - from basic static sites to SSR configuration, and finally China access optimization. To be honest, I stumbled into quite a few pitfalls myself. Now I’m sharing these lessons to help you avoid the same mistakes.
What you’ll learn:
- Complete deployment from scratch in 20 minutes
- Understand the three SSR adapter modes without confusion
- Master 3 China access optimization strategies with 3x latency reduction
Let’s dive right in.
Why Choose Cloudflare Pages
Cloudflare vs Vercel: Free Hosting Platform Comparison
People often ask me: Vercel or Cloudflare, which one? Honestly, it depends on your needs.
Bandwidth limits are the most obvious difference. Vercel’s free plan includes 100GB monthly bandwidth. Beyond that, you’re charged per use - $40 per 100GB. I once had a project hit the front page, traffic exploded, and I got a Vercel bill that hurt for quite a while. Cloudflare Pages? Unlimited bandwidth, completely free. This is incredibly friendly for individual developers - no more worrying about traffic spikes and fees.
Global performance wise, Cloudflare has over 300 data centers with broader coverage. I tested several locations across Europe, Asia, and the Americas - all showed low latency. Vercel’s edge network is solid too, but has relatively fewer nodes. If your users are distributed globally, Cloudflare’s advantage is more apparent.
Another very practical benefit: DDoS protection. Cloudflare’s free plan includes built-in DDoS protection without extra configuration. When my site was previously attacked, Cloudflare automatically blocked it - I didn’t even see it in the logs. Vercel has protection too, but the free version mainly protects their own network, not deep protection specifically for your site.
So what’s Vercel’s advantage? Build caching is excellent. If your project has many images and dependencies, Vercel retains cache from previous builds. The second build only takes 3-4 minutes. Cloudflare Pages builds from scratch every time, taking at least 10+ minutes. There’s also deep Next.js integration - if you use Next.js, Vercel is definitely the best choice, since they created it.
My recommendations:
- Astro static blog → Cloudflare Pages (free unlimited traffic)
- Unpredictable traffic projects → Cloudflare Pages (avoid fees)
- Heavy Next.js users → Vercel (best experience)
- Frequent iterations and builds → Vercel (faster build cache)
Cloudflare Pages vs Workers: 2025 Updates
I was confused about this initially too. Cloudflare has both Pages and Workers that can deploy Astro - which one to use?
The fundamental difference is actually simple. Workers is Cloudflare’s serverless compute platform where you can run JavaScript code at the edge. Pages can be understood as Workers + automated build tools packaged together. Pages still runs on Workers under the hood, but provides out-of-the-box features like Git integration and automatic deployment.
The 2025 change is that Cloudflare officially started recommending Workers over Pages for new projects. I browsed their official blog - the main reason is that Workers offers more flexibility and finer control. However, for us Astro users, this recommendation doesn’t need to be taken too seriously.
My actual experience:
- Pages approach: Connect GitHub repo, push code triggers automatic build - simple and straightforward. Perfect for “I just want to deploy quickly, don’t complicate things” scenarios.
- Workers approach: Requires manual deployment using Wrangler CLI, configure wrangler.jsonc file. The benefit is more granular control over environment variables, KV storage bindings, etc.
Which should Astro projects choose?
- Pure static sites (output: ‘static’): Use Pages, connecting GitHub via Dashboard is most convenient
- SSR sites (output: ‘server’ or ‘hybrid’): Both work, but I recommend Pages + Wrangler CLI deployment for both automation and flexibility
Simply put, if this is your first deployment, try Pages with Git integration first - you’ll see results in minutes. Once familiar, consider Wrangler CLI. No rush.
Complete Astro Cloudflare Deployment Process
Static Site Deployment: 5-Minute Quick Start
If your Astro project is purely static (blogs, documentation sites, portfolios), deployment is super simple. Let me walk you through it step by step.
Prerequisites:
- Project already pushed to GitHub
- Cloudflare account (if not, register one - it’s free)
Specific steps:
Log into Cloudflare Dashboard: Open dash.cloudflare.com and navigate to Workers & Pages in the left menu.
Create new project: Click Create application in the top right → select Pages → Connect to Git.
Connect GitHub repository: Select your Astro project repository. The first time may require authorizing Cloudflare to access GitHub - just follow the prompts.
Configure build settings: This step is crucial - wrong settings will cause deployment to fail. Configure as follows:
- Framework preset: Select Astro (automatically fills in commands)
- Build command: `npm run build`
- Build output directory: `dist`
- Root directory: Leave empty if project is in repo root, fill in path if in subdirectory
Deploy: Click Save and Deploy and wait a few minutes. Cloudflare automatically pulls code, installs dependencies, builds, and deploys.
After successful build, you’ll get a your-project.pages.dev domain - access it directly to see your site. Every time you push code to GitHub, Cloudflare automatically triggers a build. Super convenient.
Common issues:
- Build fails with Node.js version error: Add a `.nvmrc` or `.node-version` file to the project root containing `18` or `20`. Cloudflare automatically recognizes it.
- Page shows 404: Check that Build output directory is `dist`. Some configs might use `public` or `build` - adjust based on your `astro.config.mjs` (a minimal example follows this list).
- Build timeout: Possibly too many dependencies or network issues. Try redeploying - it usually works the second time.
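For reference, a purely static project doesn't need an adapter at all, and Astro's default build output directory is `dist`. A minimal `astro.config.mjs` might look like this (the site URL is a placeholder):

```js
// astro.config.mjs — minimal config for a purely static site; no adapter required
import { defineConfig } from 'astro/config';

export default defineConfig({
  site: 'https://blog.example.com', // placeholder; used for canonical URLs and sitemaps
  output: 'static', // the default; build output goes to ./dist unless you change outDir
});
```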
Bind custom domain (optional):
After successful deployment, find Custom domains in project settings, click Set up a custom domain, enter your domain (like blog.yourdomain.com). Cloudflare provides a CNAME record - add it at your DNS provider. If the domain is already on Cloudflare DNS, it configures automatically, even simpler.
SSR Site Deployment: @astrojs/cloudflare Adapter Configuration Explained
I was quite confused about SSR initially. Official docs are somewhat scattered. Here I’ll explain each step in practical operation order.
When do you need SSR?
First clarify the scenarios to avoid uncertainty about whether you need it:
- Need real-time data: Like comment sections, visit statistics, user login
- Need server-side logic: API calls, database queries, permission verification
- Need on-demand rendering: Massive content, don’t want to pre-render everything as static pages
If you just have a pure blog with Markdown articles, you don’t need SSR at all - static deployment works fine.
Step 1: Install @astrojs/cloudflare adapter
Run in project root:
```bash
npx astro add cloudflare
```

This command automatically does three things:
- Installs the `@astrojs/cloudflare` package
- Modifies `astro.config.mjs`, adding the adapter configuration
- Creates a `wrangler.jsonc` file (Cloudflare Workers config file)
After running, your astro.config.mjs should look like this:
```js
import { defineConfig } from 'astro/config';
import cloudflare from '@astrojs/cloudflare';

export default defineConfig({
  output: 'server', // This line is new
  adapter: cloudflare(),
});
```

Step 2: Choose rendering mode (key point!)
There are three options here - many people get stuck here. Let me explain in detail:
1. output: 'server' - Full SSR
- All pages render server-side
- Suitable for: Data-driven applications, frequently updated content
- Downside: Every request requires rendering, slightly slower performance
2. output: 'hybrid' - Hybrid mode (recommended)
- All pages are static by default
- Specific pages can opt into SSR
- Suitable for: Mostly static blogs with some dynamic features
- Advantage: Highest flexibility, best performance
3. output: 'static' - Pure static
- Doesn’t need adapter, this is the static deployment mentioned earlier
My recommendation: Choose hybrid 90% of the time, unless you’re certain the entire site needs SSR. (Note: if you’re on Astro 5 or later, the hybrid value was folded into static - pages prerender by default and individual pages opt into SSR with prerender = false, so the workflow below stays the same.)
Hybrid mode example:
```js
// astro.config.mjs
import { defineConfig } from 'astro/config';
import cloudflare from '@astrojs/cloudflare';

export default defineConfig({
  output: 'hybrid', // Change to hybrid
  adapter: cloudflare(),
});
```

Then in pages that need SSR, add one line:
```js
// src/pages/api/comments.js
export const prerender = false; // This page will SSR

export async function GET() {
  // Fetch comments from database
  const comments = await fetchComments();
  return new Response(JSON.stringify(comments));
}
```

Other pages remain static by default, no changes needed.
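If you also need to accept data, the same pattern works for other HTTP methods. Here's a rough sketch of a matching POST handler - saveComment() is a hypothetical helper you'd implement against your own data store:

```js
// Added to the same src/pages/api/comments.js (prerender is already false there)
export async function POST({ request }) {
  const body = await request.json(); // e.g. { author, text } sent by the client
  const saved = await saveComment(body); // hypothetical helper that persists the comment
  return new Response(JSON.stringify(saved), {
    status: 201,
    headers: { 'Content-Type': 'application/json' },
  });
}
```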
Step 3: Configure Cloudflare service bindings (optional)
If you need to use Cloudflare’s KV storage, D1 database, R2 object storage, configure bindings in wrangler.jsonc.
For example, I want to use KV storage to save user sessions:
```jsonc
// wrangler.jsonc
{
  "name": "my-astro-app",
  "compatibility_date": "2024-01-01",
  "kv_namespaces": [
    {
      "binding": "MY_KV",
      "id": "your-kv-namespace-id"
    }
  ]
}
```

First create a KV namespace in Cloudflare Dashboard, copy the ID and fill it in.
Then use it in code like this:
```js
// src/pages/api/session.js
export const prerender = false;

export async function GET({ locals }) {
  const { MY_KV } = locals.runtime.env;
  const sessionData = await MY_KV.get('user:123');
  return new Response(sessionData);
}
```
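Writing works the same way. Here's a small sketch of storing a session value with a one-hour expiry - the key, value, and TTL are made up for illustration:

```js
// In the same src/pages/api/session.js — hypothetical write with an expiry
export async function POST({ locals }) {
  const { MY_KV } = locals.runtime.env;
  await MY_KV.put('user:123', JSON.stringify({ theme: 'dark' }), { expirationTtl: 3600 });
  return new Response('ok');
}
```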
Step 4: Local development and testing
Install Wrangler CLI (Cloudflare’s official tool):
```bash
npm install wrangler --save-dev
```

Run locally:

```bash
npm run build
npx wrangler pages dev ./dist
```

This starts a local server simulating the Cloudflare environment, where you can verify that SSR functionality and bindings work properly.
Step 5: Deploy to production
Two methods:
Method 1: Deploy via Wrangler CLI (recommended, more precise control)
```bash
# Login to Cloudflare account
npx wrangler login

# Build project
npm run build

# Deploy
npx wrangler pages deploy ./dist
```

First deployment will ask for a project name; afterwards it's just one command.
Method 2: Auto-deploy via Git integration
Still use the static deployment method mentioned earlier, connect GitHub repo. Cloudflare automatically recognizes the @astrojs/cloudflare adapter. However, binding KV and other services this way requires manual Dashboard configuration, slightly more hassle.
Common error troubleshooting:
“Hydration completed but contains mismatches”: This error is quite common, caused by Cloudflare’s Auto Minify feature altering your built HTML. Solution: Go to Cloudflare Dashboard → your domain → Speed → Optimization and turn off all three Auto Minify options (HTML, CSS, JS).
“Cannot find module ‘MY_KV’”: The binding isn’t configured properly. Check that the `binding` name in `wrangler.jsonc` matches what’s used in code.
500 error after deployment: Check Cloudflare Dashboard → Workers & Pages → your project → Logs (real-time logs) for the specific error info. Most likely the code is accessing an unbound resource.
Environment Variables and Secrets Management
Many people get confused about environment variables - I accidentally pushed an API key to GitHub once and had to quickly revert it. Here’s the right approach.
Local development environment:
Create .dev.vars file in project root (note the leading dot):
```
# .dev.vars
DATABASE_URL=postgres://localhost:5432/mydb
API_KEY=your-secret-key-for-dev
```

Important: Add `.dev.vars` to `.gitignore` - don't push it to GitHub!
During local development, Wrangler automatically reads this file. Access it in code like this:
```js
export const prerender = false;

export async function GET({ locals }) {
  const apiKey = locals.runtime.env.API_KEY;
  // Use apiKey to call third-party API
}
```
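To make that concrete, here's a rough sketch of calling a hypothetical third-party endpoint with that key - api.example.com and the response handling are placeholders:

```js
// src/pages/api/data.js — hypothetical example; api.example.com is a placeholder endpoint
export const prerender = false;

export async function GET({ locals }) {
  const apiKey = locals.runtime.env.API_KEY;
  const res = await fetch('https://api.example.com/v1/data', {
    headers: { Authorization: `Bearer ${apiKey}` }, // assumed bearer-token auth
  });
  return new Response(await res.text(), {
    headers: { 'Content-Type': res.headers.get('Content-Type') ?? 'application/json' },
  });
}
```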
Production environment:
Don’t use .dev.vars, configure in Cloudflare Dashboard instead.
Steps:
- Go to Workers & Pages → select your project
- Click Settings → Environment Variables
- Click Add variable, enter name and value
- Select environment (Production or Preview)
Use Secrets for sensitive info:
For particularly sensitive data (like payment API keys), use Cloudflare’s Secrets feature with encrypted storage:
```bash
npx wrangler secret put API_KEY
```

After running, it prompts you to enter the value without displaying it in the command line - more secure.
My recommendations:
- Database URLs, third-party API keys → Secrets
- Public configs (like site name, CDN address) → Regular environment variables
China Access Speed Optimization Practice
Problem Diagnosis: Test Your Site’s Real Speed in China
After deployment, don’t rush to optimize - test actual speed first. Not all sites need optimization. If your users are mainly overseas, Cloudflare’s default configuration is already perfect.
Online speed test tools:
17ce.com (recommended): Open 17ce.com, enter your domain, select “Website Speed Test”. It tests from different provinces and carriers in China. Focus on the “Response Time” column - if generally under 200ms, that’s good; over 300ms suggests optimization is worthwhile.
Chinaz Ping Test: tool.chinaz.com/speedtest - similar functionality, shows more detailed routing info.
Local testing:
If you’re in China yourself, test using Chrome DevTools:
- Press F12 to open DevTools
- Switch to Network tab
- Refresh page, check the Time value of the first request
Judgment criteria:
- Latency < 150ms: Great, no optimization needed
- Latency 150-250ms: Decent, depends on personal needs
- Latency > 250ms or Load time > 3 seconds: Optimization recommended
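If you'd rather script the check than click through DevTools, here's a rough sketch in Node 18+ (save as check.mjs and run with node check.mjs; the URL is a placeholder). It times a full request - not a precise TTFB measurement, but enough to spot a slow route:

```js
// check.mjs — rough request-timing sketch, not a precise TTFB benchmark
const url = 'https://your-project.pages.dev/'; // replace with your own domain

const start = performance.now();
const res = await fetch(url);
await res.arrayBuffer(); // drain the body so the timing covers the full download
const elapsed = performance.now() - start;

console.log(`${res.status} ${url} took ${elapsed.toFixed(0)}ms`);
```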
Why is China slow?
Simply put, Cloudflare uses Anycast technology. China’s network environment is special, routing often takes long detours. For example, if you’re accessing from Beijing, traffic might first route to Hong Kong before coming back, naturally increasing latency. The principle of optimized IPs/CNAMEs is finding nodes with more direct routes.
Solution 1: Use Optimized IPs (Free, Most Effective)
This is the most effective solution. I tested it myself - latency dropped from 280ms to around 70ms. However, it requires slightly stronger technical skills and has some risks, which I’ll explain later.
What are optimized IPs?
Cloudflare has hundreds of IP addresses, different IPs corresponding to different data centers. Access speed to these IPs from China varies greatly - some take long detours and are slow as molasses, others connect directly and fast as lightning. Optimized IPs means finding those fast IPs and having your domain resolve directly to them.
Step 1: Test optimized IPs
Use the CloudflareSpeedTest tool, download from GitHub: XIU2/CloudflareSpeedTest
Windows users download CloudflareST.exe, Linux/Mac download corresponding versions.
Running method (Windows):
```bash
# Double-click to run or execute via command line
CloudflareST.exe
```

The tool automatically tests hundreds of Cloudflare IPs, takes about 5-10 minutes. Upon completion, it outputs a list of the fastest IPs, like:

```
IP Address       Latency   Download Speed
104.16.160.10    68ms      15.2MB/s
172.64.32.5      75ms      14.8MB/s
104.23.240.28    82ms      13.9MB/s
```

Note the first IP (lowest latency).
Step 2: Modify DNS resolution
Important prerequisite: Your domain DNS cannot be hosted on Cloudflare, must be other providers (Alibaba Cloud, DNSPod, anything except Cloudflare). If currently on Cloudflare DNS, need to transfer out first.
Why? Because Cloudflare DNS forces Anycast routing - changing A records won’t help.
Configuration method (using DNSPod as example):
Log into your DNS provider
Find your domain, add/modify A record:
- Host record: `@` (represents root domain) or `www`
- Record type: A
- Record value: `104.16.160.10` (your tested optimized IP)
- TTL: 600 (10 minutes)
Save, wait for propagation (usually 5-10 minutes)
Step 3: Verify effectiveness
After DNS propagates, test speed again using 17ce.com - should see noticeable improvement.
Risk warning (important!):
In January 2025, Cloudflare updated their Terms of Service with a clause saying they may limit or penalize “abuse” behavior. While not explicitly stating whether optimized IPs count, risks exist.
My recommendations:
- Personal blogs, small projects: Can use, risk is low
- Commercial sites, high-traffic projects: Be cautious, recommend Solution 2 (CNAME) or Solution 3 (geo-routing)
- Regular checks: IPs may become invalid, test every 1-2 months, swap in new ones
Common issues:
Changed DNS but not working
- Clear browser cache or test in incognito mode
- Check if DNS actually propagated, using the `nslookup yourdomain.com` command to verify
Optimized IP became slow again after some time
- IP may have become invalid, retest speed, swap in new one
Some regions fast, others slow
- Different carriers (Telecom/Unicom/Mobile) have different fastest IPs, Solution 3 (geo-routing) can solve this
Solution 2: Use Optimized CNAME Domain (Free, More Stable)
If you think testing IPs yourself is too much hassle, or worry about IPs frequently becoming invalid, you can use public optimized CNAME domains. These domains are backed by optimized IPs maintained by others, regularly updated, much more convenient.
Principle:
Some developers or organizations set up domains that regularly test and resolve to latest optimized IPs. You just CNAME your domain to theirs to enjoy acceleration benefits.
Step 1: Choose public CNAME
Common public optimized domains (these are examples only, test before using):
- `cdn.cloudflare.quest`
- `cf.xiu2.xyz`
Note: Public CNAMEs may become invalid anytime. Recommend joining relevant communities or Telegram groups to get timely updates on latest available domains.
Step 2: Modify DNS resolution
Again, domain DNS cannot be on Cloudflare. Configuration method (using DNSPod as example):
Log into DNS provider
Add CNAME record:
- Host record: `@` or `www`
- Record type: CNAME
- Record value: `cdn.cloudflare.quest` (your chosen public domain)
- TTL: 600
Save, wait for propagation
Step 3: Resolve potential 403 errors
The CNAME method often runs into 403 Forbidden errors, because Cloudflare detects that your domain isn’t registered in their system yet is being accessed through their IPs.
Solution:
- Transfer domain DNS out of Cloudflare (if still there)
- Delete that site in Cloudflare Dashboard
- Wait a few minutes, try accessing again - should work normally
Step 4: Verify effectiveness
After the CNAME propagates, use `nslookup yourdomain.com` to check that it resolves to the public CNAME domain, then verify with a speed test.
Pros and cons comparison:
Pros:
- Don’t need to test IPs yourself, convenient
- Maintainers regularly update, better stability
- Slightly lower risk than using optimized IPs directly
Cons:
- Depends on third parties, you’re stuck if they stop
- Speed may not match IPs you test yourself (since not specifically optimized for you)
My recommendation: Suitable for users who “want optimization but don’t want too much hassle,” pretty good cost-benefit ratio.
Solution 3: Geo-routing DNS (Requires Paid DNS, Best Results)
This is the ultimate solution with best results but requires some cost. Suitable for sites with many users both in China and abroad, with high speed requirements.
Core idea:
Return different IP addresses for users in different regions. For example:
- China Telecom users → Resolve to Telecom optimized IP
- China Unicom users → Resolve to Unicom optimized IP
- Overseas users → Resolve to Cloudflare default address (or `your-project.pages.dev`)
This way everyone accesses via optimal path.
Required tools:
DNS providers supporting intelligent resolution, like:
- DNSPod (under Tencent Cloud, free personal version has basic features)
- Alibaba Cloud DNS (paid, but powerful features)
- Cloudflare DNS (doesn’t support China carrier routing, not suitable for this scenario)
Configuration method (using DNSPod as example):
Test optimized IPs for different carriers
Use CloudflareSpeedTest tool, test separately on Telecom, Unicom, Mobile networks, record fastest IPs. Or refer to these commonly used IP ranges (testing yourself is more accurate):
- Telecom: 104.16.160.0/24
- Unicom: 104.23.240.0/24
- Mobile: 172.64.32.0/24
Configure carrier-specific routing
Log into DNSPod, add multiple A records, each with different “Route Type”:
| Record | Host record | Record type | Route type | Record value |
|---|---|---|---|---|
| 1 | @ | A | Telecom | 104.16.160.10 |
| 2 | @ | A | Unicom | 104.23.240.5 |
| 3 | @ | A | Mobile | 172.64.32.8 |
| 4 (default route for overseas users) | @ | CNAME | Default | your-project.pages.dev |

Save, wait for propagation
Results:
After carrier-specific routing, users on different China carriers access fastest nodes, overseas users get native Cloudflare - both sides covered.
Cost:
- DNSPod Personal: Free (supports basic carrier routing)
- DNSPod Professional: 20 RMB/month (supports more granular geo-routing)
- Alibaba Cloud DNS: Charged by query volume, small sites a few RMB per month
My recommendation: If your site gets >10,000 visits monthly, worth this small investment for significantly improved experience.
Optimization Effect Comparison and Monitoring
After discussing so many solutions, what are the actual results? Let me compare with real data.
Speed comparison before and after optimization (blog test case, Beijing Telecom network):
| Solution | Average Latency | TTFB | Load Time | Cost |
|---|---|---|---|---|
| No optimization (default Cloudflare) | 280ms | 1.8s | 3.2s | Free |
| Solution 1: Optimized IP | 75ms | 0.5s | 1.1s | Free |
| Solution 2: Public CNAME | 120ms | 0.7s | 1.5s | Free |
| Solution 3: Carrier routing | 65ms | 0.4s | 0.9s | 20 RMB/month |
As you can see, speed improved 3-5x after optimization, completely different user experience.
Long-term monitoring solutions:
Optimization isn’t the end - need regular monitoring since IPs may become invalid.
Recommended tools:
UptimeRobot (uptimerobot.com)
- Free monitoring for 50 sites
- Pings every 5 minutes, sends email on downtime
- Shows response time trends
Cloudflare Analytics
- Built into Cloudflare Dashboard, free
- View traffic sources, bandwidth usage, error rates
- Downside: can’t see specific China latency
Better Uptime (betteruptime.com)
- Paid with free tier
- Can create public status pages to increase user trust
When to adjust optimized IPs:
Set a reminder to check every 1-2 months:
- Use 17ce.com to test speed, check if latency increased significantly
- If latency >200ms, rerun CloudflareSpeedTest, swap in new IP
- Update DNS records
From my experience, IPs have noticeable fluctuations roughly every 2-3 months - just swap timely.
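If you want to keep everything inside Cloudflare, another low-effort option is a small scheduled Worker that pings your site on a cron trigger and logs failures. This is just a sketch - the cron schedule lives in that Worker's own wrangler config (not shown), and the URL is a placeholder:

```js
// Hypothetical monitoring Worker, deployed separately from the site itself
export default {
  async scheduled(event, env, ctx) {
    const res = await fetch('https://your-project.pages.dev/', { method: 'HEAD' });
    if (!res.ok) {
      // In a real setup you might write this to KV or call a notification webhook instead
      console.log(`Health check failed with status ${res.status} at ${new Date().toISOString()}`);
    }
  },
};
```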
Conclusion
After all this, let me summarize the key points.
Deployment process: Deploying Astro on Cloudflare Pages is genuinely simple. Pure static sites connect GitHub in minutes; for SSR needs, use npx astro add cloudflare for one-click configuration, choose hybrid or server mode - static what should be static, dynamic what should be dynamic, flexible and efficient.
SSR configuration: Remember these key points:
- Use `hybrid` mode for 90% of scenarios
- Need KV/D1/R2? Configure bindings in `wrangler.jsonc`
- Local dev uses `.dev.vars`, production uses Dashboard environment variables
- Encounter Hydration errors? Turn off Auto Minify
China access optimization: Three solutions each suited for different scenarios:
- Solution 1 (Optimized IP): Best results, free, but needs regular maintenance, some risk
- Solution 2 (Public CNAME): Convenient, free, medium results, suits users who don’t want hassle
- Solution 3 (Carrier routing): Ultimate solution, requires small cost, covers both China and abroad
My recommendations:
- Personal blogs, low traffic → Solution 1 or 2
- Higher traffic, values experience → Solution 3
- Commercial projects → Use optimized IPs cautiously, prioritize Solution 3 or accept default speed
Next steps:
If you haven’t started yet, you can now:
- Create a Cloudflare account, try static deployment, feel the speed
- If China access is slow, first test with 17ce.com before deciding whether to optimize
- After optimization, set reminders to regularly check if IPs became invalid
Advanced learning:
Deployment is just the first step. Cloudflare’s ecosystem has many powerful features:
- Workers KV: Key-value storage, suitable for caching, sessions
- D1: SQLite database, runs directly at the edge
- R2: Object storage, competes with AWS S3, large free tier too
- Pages Functions: Write serverless functions directly in Pages
All of these seamlessly integrate with Astro - you can explore gradually.
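As a small taste, here's a hypothetical D1 query inside an SSR route - it assumes a DB binding added under d1_databases in wrangler.jsonc and a posts table you'd create yourself:

```js
// src/pages/api/posts.js — hypothetical D1 example; requires a "DB" binding in wrangler.jsonc
export const prerender = false;

export async function GET({ locals }) {
  const { DB } = locals.runtime.env;
  const { results } = await DB
    .prepare('SELECT title, slug FROM posts ORDER BY created_at DESC LIMIT 10')
    .all();
  return new Response(JSON.stringify(results), {
    headers: { 'Content-Type': 'application/json' },
  });
}
```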
Finally, if this article helped you, feel free to share your deployment experiences or pitfalls you encountered in the comments. Let’s learn together.
Published on: Dec 3, 2025 · Modified on: Dec 4, 2025