Building a Cloudflare R2 Image Host: A Complete Guide for Next.js Blogs
Background
I recently migrated my blog from Hexo to Next.js. During the migration, I encountered an inevitable challenge: image storage.
In the old blog, images were stored directly in the Git repository. As articles accumulated, the repo size approached 500MB. Every push took forever, and Vercel deployments were painfully slow when pulling the codebase. To make matters worse, images lacked CDN acceleration, resulting in terrible loading times for visitors from China.
I considered several options: Alibaba Cloud OSS, Qiniu Cloud, AWS S3, and even GitHub as an image host (you know what I mean). Eventually, I chose Cloudflare R2 for a simple reason: generous free tier with zero egress fees.
Why R2
Cloudflare R2 is an S3-compatible object storage service, but with a completely different pricing model. AWS S3's most expensive component is egress bandwidth ($0.09 per GB), while R2 charges nothing for egress. For blogs with read-heavy workloads, this is perfect.
Pricing breakdown:
- Storage: $0.015/GB/month
- Writes: Free
- Reads: Free (yes, you read that right)
With my current image library (~2GB), monthly cost is under $0.03. In comparison, Alibaba Cloud OSS would cost several dollars just for bandwidth.
Another hidden advantage: R2 supports custom domains backed by Cloudflare's CDN. Performance in China beats AWS CloudFront hands down.
Setting Up R2 Bucket
Log into the Cloudflare Dashboard and navigate to R2. First-time users need to enable R2 in billing settings (card required even for free tier).
Create a bucket with any name you like—I used blog-assets. For region, select Auto and let Cloudflare choose the optimal location.
After creation, the critical step is configuring CORS policy. Without it, browsers will block image loading due to cross-origin restrictions.
Go to bucket Settings, find CORS Policy, and add this rule:
```json
[
  {
    "AllowedOrigins": ["*"],
    "AllowedMethods": ["GET", "HEAD", "PUT", "POST"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3000
  }
]
```

I set AllowedOrigins to * because my blog has multiple domains (dev, preview, production). If security is a concern, restrict it to specific domains.
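If you do want to lock it down, the same policy shape takes explicit origins; the domains below are placeholders:

```json
[
  {
    "AllowedOrigins": [
      "https://yourdomain.com",
      "https://preview.yourdomain.com"
    ],
    "AllowedMethods": ["GET", "HEAD"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3000
  }
]
```

I also dropped PUT and POST here, since only the browser's image reads are subject to CORS; uploads from a server-side script are not.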
Generating API Credentials
To upload files programmatically, you need an Access Key. In the R2 dashboard, click "Manage R2 API Tokens" and create a new token.
Select "Object Read & Write" permissions and scope it to your bucket.
You'll receive three critical pieces of information:
- Access Key ID
- Secret Access Key
- Endpoint URL (looks like https://xxxxx.r2.cloudflarestorage.com)
Important: The Secret Access Key is shown only once. I store these in 1Password.
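Rather than hardcoding credentials in scripts, I'd suggest reading them from environment variables; the variable names below are my own convention, not anything R2 prescribes:

```python
import os

def load_r2_config():
    """Read R2 credentials from the environment instead of hardcoding them."""
    return {
        "access_key": os.environ["R2_ACCESS_KEY_ID"],
        "secret_key": os.environ["R2_SECRET_ACCESS_KEY"],
        "endpoint": os.environ["R2_ENDPOINT"],
    }
```

The upload script later in this post could then pull its ACCESS_KEY, SECRET_KEY, and ENDPOINT values from load_r2_config() instead of string literals.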
Custom Domain (Optional but Recommended)
R2's default access URL (xxxxx.r2.cloudflarestorage.com) isn't elegant. More importantly, requests to this URL bypass Cloudflare's CDN.
To bind a custom domain:
Navigate to bucket settings, find "Public Access", and click "Connect Domain". Enter your domain, like assets.yourdomain.com.
Cloudflare automatically creates a CNAME record pointing to R2's public endpoint. After DNS propagation (usually minutes), images become accessible via your custom domain.
Benefits:
- CDN acceleration kicks in with global edge caching
- Cleaner URLs
- Future-proof—if you switch storage providers, just update the CNAME without changing image URLs
I use assets.996828.xyz, hosted on Cloudflare DNS, configured in minutes.
Python Upload Script
I could use AWS CLI or various image hosting tools, but I prefer writing my own script. Reasons:
- Full control—modify logic anytime
- Integrate into blog publishing workflow
- Learn AWS Signature V4 (though painful)
Core dependencies are requests and requests_aws4auth:
```
pip install requests requests_aws4auth
```

Key parts of the upload script:

```python
from requests_aws4auth import AWS4Auth
import requests
import hashlib
from datetime import datetime

ACCESS_KEY = "your Access Key ID"
SECRET_KEY = "your Secret Access Key"
ENDPOINT = "https://xxxxx.r2.cloudflarestorage.com"
BUCKET = "blog-assets"

def upload_image(file_path):
    # Generate unique filename with MD5 to avoid duplicates
    with open(file_path, 'rb') as f:
        file_data = f.read()
    md5_hash = hashlib.md5(file_data).hexdigest()[:12]

    # Organize by year/month for easier management
    now = datetime.now()
    file_key = f"blog/{now.year}/{now.month:02d}/{md5_hash}.jpg"

    # AWS4 signature authentication
    auth = AWS4Auth(ACCESS_KEY, SECRET_KEY, "auto", "s3")

    # Upload request
    url = f"{ENDPOINT}/{BUCKET}/{file_key}"
    response = requests.put(
        url,
        data=file_data,
        auth=auth,
        headers={"x-amz-acl": "public-read"}
    )

    if response.status_code == 200:
        return f"https://assets.996828.xyz/{file_key}"
    else:
        raise Exception(f"Upload failed: {response.status_code}")
```

A few important details:
Filename handling: Use MD5 hash instead of original filename to avoid issues with Chinese characters, spaces, etc. First 12 characters provide enough uniqueness.
Path organization: Categorize by blog/year/month/ instead of dumping everything in root. You'll thank yourself in six months.
ACL settings: x-amz-acl: public-read is mandatory, otherwise uploads succeed but images return 403. This cost me 30 minutes of debugging.
Signature region: R2 region must be "auto", not "us-east-1" or similar, or signature validation fails.
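As a stdlib-only sketch of two of those details, here's a key builder that keeps each file's real extension (the script above hardcodes .jpg) and a Content-Type guesser for the upload headers; both function names are my own:

```python
import hashlib
import mimetypes
from datetime import datetime
from pathlib import Path

def build_file_key(file_path, prefix="blog"):
    """Content-addressed key: blog/YYYY/MM/<md5 prefix><original extension>."""
    data = Path(file_path).read_bytes()
    digest = hashlib.md5(data).hexdigest()[:12]
    now = datetime.now()
    ext = Path(file_path).suffix.lower() or ".jpg"
    return f"{prefix}/{now.year}/{now.month:02d}/{digest}{ext}"

def guess_content_type(file_path):
    """Pick a Content-Type from the extension instead of hardcoding image/jpeg."""
    ctype, _ = mimetypes.guess_type(file_path)
    return ctype or "application/octet-stream"
```

Swapping these into upload_image means PNGs stop being served with a .jpg key and a wrong Content-Type.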
Automated Publishing Workflow
Manual image upload is tedious. My ideal workflow: write article, run one command, automatically upload images and replace links.
Implementation approach:
- Draft articles go in the tmp/draft/ directory
- Reference images with relative paths, e.g. `![screenshot](images/screenshot.png)`
- Script handles everything:
  - Scan Markdown files for image references
  - Upload images to R2
  - Replace image links with full R2 URLs
  - Move processed articles to content/posts/
Core code snippet:
```python
import re

def process_article(md_file):
    with open(md_file, 'r', encoding='utf-8') as f:
        content = f.read()

    # Regex match Markdown image syntax
    pattern = r'!\[(.*?)\]\((.*?)\)'
    images = re.findall(pattern, content)

    for alt_text, img_path in images:
        # Skip images that are already URLs
        if img_path.startswith('http'):
            continue

        # Upload image
        full_path = f"tmp/draft/images/{img_path}"
        r2_url = upload_image(full_path)

        # Replace link
        old_str = f"![{alt_text}]({img_path})"
        new_str = f"![{alt_text}]({r2_url})"
        content = content.replace(old_str, new_str)

    return content
```

Now the article publishing process becomes:
```
# 1. Prepare content
tmp/draft/
├── my-article.md
└── images/
    └── screenshot.png

# 2. Run script
python scripts/publish-blog.py

# 3. Upload, replace, publish—all automatic
```

The entire process takes under 10 seconds.
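The driver script could be a sketch like the following; publish_all and its defaults are my own naming, and the process argument stands in for process_article above (passing None just moves files):

```python
import shutil
from pathlib import Path

def publish_all(draft_dir="tmp/draft", out_dir="content/posts", process=None):
    """Rewrite image links in each draft, then move it into the content directory."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    published = []
    for md_file in sorted(Path(draft_dir).glob("*.md")):
        if process is not None:
            # e.g. process=process_article to upload images and rewrite links
            md_file.write_text(process(str(md_file)), encoding="utf-8")
        dest = out / md_file.name
        shutil.move(str(md_file), dest)
        published.append(dest)
    return published
```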
Pitfalls Encountered
PicGo Plugin Issues
Initially, I tried using VSCode's PicGo plugin, configuring it in AWS S3 mode to connect to R2. Theoretically, it should work. In practice, constant 403 errors.
I dug through countless issues, tried pathStyleAccess, forcePathStyle, and various other parameters—still unstable. Eventually gave up and wrote a Python script, which proved more reliable.
Signature Time Drift
If your local clock is off (common in VM environments), AWS Signature V4 rejects requests due to time skew. Error message: 403 RequestTimeTooSkewed.
Solution: Sync system time. On macOS: sudo sntp -sS time.apple.com.
Missing CORS Configuration
Configuring CORS on the bucket alone isn't enough. If upload requests lack proper headers, browsers still report cross-origin errors.
Ensure your upload request includes:
```python
headers = {
    "Content-Type": "image/jpeg",  # Based on actual file type
    "x-amz-acl": "public-read"
}
```

Performance Testing
Compared several CDN providers' real-world performance (500KB test image):
| Service | China Latency | Overseas Latency | Cost (10GB traffic) |
|---|---|---|---|
| R2 + Cloudflare CDN | 120ms | 80ms | $0.15 |
| Alibaba Cloud OSS + CDN | 60ms | 200ms | ~$2.00 |
| AWS S3 + CloudFront | 180ms | 50ms | $2.50 |
R2's China performance isn't the fastest (Cloudflare lacks ICP filing in China), but it's perfectly usable. Overseas access shows clear advantages.
Cost-wise, R2 wins decisively. Even if traffic increases tenfold, you pay nothing extra.
Conclusion
Cloudflare R2 is ideal for personal blog image hosting:
Pros:
- Free tier sufficient (10GB storage + unlimited bandwidth)
- S3-compatible with mature ecosystem
- Built-in CDN with global distribution
- Transparent pricing without hidden fees
Cons:
- China performance lags domestic CDNs
- Card required (even for free tier)
- Steep learning curve (AWS signature mechanism is complex)
If your blog is hosted on Vercel or Netlify, R2 makes an excellent image host: your pages and your images are each served from a global edge network, so visitors see low latency on both.
Finally, all code is open-sourced in my GitHub repo, including complete upload scripts and automated publishing workflow. Feel free to use it.
Environment: macOS 14.2 / Python 3.11 / Next.js 14.2