I recently migrated my blog from Hexo to Next.js. During the migration, I encountered an inevitable challenge: image storage.
In the old blog, images were stored directly in the Git repository. As articles accumulated, the repo size approached 500MB. Every push took forever, and Vercel deployments were painfully slow when pulling the codebase. To make matters worse, images lacked CDN acceleration, resulting in terrible loading times for visitors from China.
I considered several options: Alibaba Cloud OSS, Qiniu Cloud, AWS S3, and even GitHub as an image host (you know what I mean). Eventually, I chose Cloudflare R2 for a simple reason: generous free tier with zero egress fees.
Cloudflare R2 is an S3-compatible object storage service, but with a completely different pricing model. AWS S3's most expensive component is egress bandwidth ($0.09 per GB), while R2 charges nothing for egress. For blogs with read-heavy workloads, this is perfect.
Pricing breakdown (per Cloudflare's published rates):

- Storage: $0.015 per GB-month (first 10GB free)
- Class A operations (writes): $4.50 per million (first 1 million/month free)
- Class B operations (reads): $0.36 per million (first 10 million/month free)
- Egress: $0, always
With my current image library (~2GB), monthly cost is under $0.03. In comparison, Alibaba Cloud OSS would cost several dollars just for bandwidth.
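As a quick sanity check on that number, here's the arithmetic using R2's per-unit prices (rates as of writing; the free tier often brings a small blog to $0 outright):

```python
# Back-of-the-envelope R2 cost, ignoring the free tier
# (10GB storage and generous request allowances are free each month,
# so a small blog often pays nothing at all).
def r2_monthly_cost(storage_gb, writes_millions=0.0, reads_millions=0.0):
    return (storage_gb * 0.015          # storage, USD per GB-month
            + writes_millions * 4.50    # Class A ops (PUT/POST), per million
            + reads_millions * 0.36)    # Class B ops (GET/HEAD), per million

print(f"${r2_monthly_cost(2):.2f}")  # $0.03 for a 2GB library
```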
Another hidden advantage: R2 supports custom domains backed by Cloudflare's CDN. Performance in China beats AWS CloudFront hands down.
Log into the Cloudflare Dashboard and navigate to R2. First-time users need to enable R2 in billing settings (card required even for free tier).
Create a bucket with any name you like—I used blog-assets. For region, select Auto and let Cloudflare choose the optimal location.
After creation, the critical step is configuring CORS policy. Without it, browsers will block image loading due to cross-origin restrictions.
Go to bucket Settings, find CORS Policy, and add this rule:
```json
[
  {
    "AllowedOrigins": ["*"],
    "AllowedMethods": ["GET", "HEAD", "PUT", "POST"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3000
  }
]
```

I set AllowedOrigins to `*` because my blog has multiple domains (dev, preview, production). If security is a concern, restrict it to specific domains.
To upload files programmatically, you need an Access Key. In the R2 dashboard, click "Manage R2 API Tokens" and create a new token.
Select "Object Read & Write" permissions and scope it to your bucket.
You'll receive three critical pieces of information:

- Access Key ID
- Secret Access Key
- Endpoint URL (https://xxxxx.r2.cloudflarestorage.com)

Important: the Secret Access Key is shown only once. I store these in 1Password.
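One way to keep credentials out of the script itself is environment variables. A minimal sketch (the variable names here are my own convention, nothing R2 mandates):

```python
import os

def load_r2_config():
    """Read R2 credentials from the environment instead of hardcoding them."""
    cfg = {
        "access_key": os.environ.get("R2_ACCESS_KEY_ID"),
        "secret_key": os.environ.get("R2_SECRET_ACCESS_KEY"),
        "endpoint": os.environ.get("R2_ENDPOINT"),
    }
    missing = [name for name, value in cfg.items() if not value]
    if missing:
        raise RuntimeError(f"Missing R2 config: {', '.join(missing)}")
    return cfg
```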
R2's default access URL (xxxxx.r2.cloudflarestorage.com) isn't elegant. More importantly, requests to this URL bypass Cloudflare's CDN.
To bind a custom domain:
Navigate to bucket settings, find "Public Access", and click "Connect Domain". Enter your domain, like assets.yourdomain.com.
Cloudflare automatically creates a CNAME record pointing to R2's public endpoint. After DNS propagation (usually minutes), images become accessible via your custom domain.
Benefits:

- Requests are served through Cloudflare's CDN and cached at the edge
- URLs are clean and memorable instead of a random default subdomain

I use assets.996828.xyz, hosted on Cloudflare DNS, configured in minutes.
I could use the AWS CLI or various image hosting tools, but I prefer writing my own script. Reasons: full control over file naming and paths, easy integration into my publishing workflow, and no dependence on flaky third-party plugins (more on that below).
Core dependencies are requests and requests_aws4auth:
```bash
pip install requests requests_aws4auth
```

Key parts of the upload script:
```python
import hashlib
from datetime import datetime

import requests
from requests_aws4auth import AWS4Auth

ACCESS_KEY = "your Access Key ID"
SECRET_KEY = "your Secret Access Key"
ENDPOINT = "https://xxxxx.r2.cloudflarestorage.com"
BUCKET = "blog-assets"

def upload_image(file_path):
    # Generate a unique filename with MD5 to avoid duplicates
    with open(file_path, 'rb') as f:
        file_data = f.read()
    md5_hash = hashlib.md5(file_data).hexdigest()[:12]

    # Organize by year/month for easier management
    now = datetime.now()
    file_key = f"blog/{now.year}/{now.month:02d}/{md5_hash}.jpg"

    # AWS Signature V4 authentication (R2's region must be "auto")
    auth = AWS4Auth(ACCESS_KEY, SECRET_KEY, "auto", "s3")

    # Upload request
    url = f"{ENDPOINT}/{BUCKET}/{file_key}"
    response = requests.put(
        url,
        data=file_data,
        auth=auth,
        headers={"x-amz-acl": "public-read"}
    )

    if response.status_code == 200:
        return f"https://assets.996828.xyz/{file_key}"
    else:
        raise Exception(f"Upload failed: {response.status_code}")
```

A few important details:
Filename handling: Use MD5 hash instead of original filename to avoid issues with Chinese characters, spaces, etc. First 12 characters provide enough uniqueness.
Path organization: Categorize by blog/year/month/ instead of dumping everything in root. You'll thank yourself in six months.
ACL settings: x-amz-acl: public-read is mandatory, otherwise uploads succeed but images return 403. This cost me 30 minutes of debugging.
Signature region: R2 region must be "auto", not "us-east-1" or similar, or signature validation fails.
Manual image upload is tedious. My ideal workflow: write article, run one command, automatically upload images and replace links.
Implementation approach:
1. Place the draft article and its images in the tmp/draft/ directory
2. The script scans the Markdown, uploads local images to R2, and rewrites the links
3. The processed article is moved to content/posts/

Core code snippet:
```python
import re

def process_article(md_file):
    with open(md_file, 'r', encoding='utf-8') as f:
        content = f.read()

    # Match Markdown image syntax: ![alt](path)
    pattern = r'!\[(.*?)\]\((.*?)\)'
    images = re.findall(pattern, content)

    for alt_text, img_path in images:
        # Skip images that are already URLs
        if img_path.startswith('http'):
            continue
        # Upload image
        full_path = f"tmp/draft/images/{img_path}"
        r2_url = upload_image(full_path)
        # Replace link
        old_str = f"![{alt_text}]({img_path})"
        new_str = f"![{alt_text}]({r2_url})"
        content = content.replace(old_str, new_str)

    return content
```

Now the article publishing process becomes:
```text
# 1. Prepare content
tmp/draft/
├── my-article.md
└── images/
    └── screenshot.png

# 2. Run script
python scripts/publish-blog.py

# 3. Upload, replace, publish, all automatic
```

The entire process takes under 10 seconds.
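The link-rewriting step is easy to sanity-check on a toy document. This demo (mine, not from the publish script) stubs out the actual upload with a lambda:

```python
import re

def rewrite_links(content, uploader):
    """Replace local Markdown image paths with URLs returned by `uploader`."""
    for alt, path in re.findall(r'!\[(.*?)\]\((.*?)\)', content):
        if path.startswith('http'):
            continue  # already remote, leave it alone
        content = content.replace(f"![{alt}]({path})",
                                  f"![{alt}]({uploader(path)})")
    return content

doc = "See ![diagram](arch.png) and ![logo](https://example.com/logo.png)"
out = rewrite_links(doc, lambda p: f"https://assets.996828.xyz/blog/{p}")
print(out)
# See ![diagram](https://assets.996828.xyz/blog/arch.png) and ![logo](https://example.com/logo.png)
```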
PicGo Plugin Issues
Initially, I tried using VSCode's PicGo plugin, configuring it in AWS S3 mode to connect to R2. Theoretically, it should work. In practice, constant 403 errors.
I dug through countless issues, tried pathStyleAccess, forcePathStyle, and various other parameters—still unstable. Eventually gave up and wrote a Python script, which proved more reliable.
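For anyone hitting the same wall: S3 clients can address a bucket in two different URL styles, and a mismatch between what the client sends and what the endpoint expects is a classic source of opaque 403s. The two shapes look like this (the host and bucket below are placeholders):

```python
def s3_request_urls(endpoint_host, bucket, key):
    """The two S3 addressing styles; client flags like forcePathStyle
    switch between them."""
    path_style = f"https://{endpoint_host}/{bucket}/{key}"
    virtual_hosted = f"https://{bucket}.{endpoint_host}/{key}"
    return path_style, virtual_hosted

p, v = s3_request_urls("xxxxx.r2.cloudflarestorage.com", "blog-assets", "a.jpg")
print(p)  # https://xxxxx.r2.cloudflarestorage.com/blog-assets/a.jpg
print(v)  # https://blog-assets.xxxxx.r2.cloudflarestorage.com/a.jpg
```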
Signature Time Drift
If your local clock is off (common in VM environments), AWS Signature V4 rejects requests due to time skew. Error message: 403 RequestTimeTooSkewed.
Solution: sync your system time. On macOS: `sudo sntp -sS time.apple.com`.
Missing CORS Configuration
Configuring CORS on the bucket alone isn't enough. If upload requests lack proper headers, browsers still report cross-origin errors.
Ensure your upload request includes:
```python
headers = {
    "Content-Type": "image/jpeg",  # Based on actual file type
    "x-amz-acl": "public-read"
}
```

I compared the real-world performance of several CDN setups (500KB test image):
| Service | China Latency | Overseas Latency | Cost (10GB traffic) |
|---|---|---|---|
| R2 + Cloudflare CDN | 120ms | 80ms | $0.15 |
| Alibaba Cloud OSS + CDN | 60ms | 200ms | ~$2.00 |
| AWS S3 + CloudFront | 180ms | 50ms | $2.50 |
R2's China performance isn't the fastest (Cloudflare lacks ICP filing in China), but it's perfectly usable. Overseas access shows clear advantages.
Cost-wise, R2 wins decisively. Even if traffic increases tenfold, you pay nothing extra.
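The arithmetic behind that claim, using the S3 egress rate quoted earlier (traffic figures are illustrative):

```python
S3_EGRESS_PER_GB = 0.09  # the AWS rate quoted above
# R2 charges $0 for egress regardless of volume.

for traffic_gb in (10, 100, 1000):
    s3 = traffic_gb * S3_EGRESS_PER_GB
    print(f"{traffic_gb:>4} GB/month: S3 egress ${s3:.2f}, R2 egress $0.00")
```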
Cloudflare R2 is ideal for personal blog image hosting.

Pros:

- Zero egress fees; storage cost is negligible for a blog-sized library
- S3-compatible API, so existing tools and scripts work
- Custom domains ride Cloudflare's CDN at no extra cost

Cons:

- No ICP filing in China, so mainland latency is merely acceptable
- A payment card is required even for the free tier
- The default r2.cloudflarestorage.com URL bypasses the CDN, so a custom domain is effectively mandatory
If your blog is hosted on Vercel or Netlify, R2 makes an excellent image host: image traffic is offloaded from your hosting plan's bandwidth quota onto Cloudflare's edge.
Finally, all code is open-sourced in my GitHub repo, including complete upload scripts and automated publishing workflow. Feel free to use it.
Environment: macOS 14.2 / Python 3.11 / Next.js 14.2