How ButterCMS handles scale
Enterprise-grade infrastructure
ButterCMS handles high-volume content delivery and complex integrations for enterprise websites and applications. Its global CDN keeps content delivery fast worldwide, which benefits both user experience and SEO.
Automatic scaling architecture
Your application’s scaling responsibilities
While ButterCMS scales automatically, your application needs proper architecture to handle high traffic. See:
Caching Strategies — SSG/ISR, application-level caching with node-cache and Redis, SWR, and TTL recommendations
CDN & Global Delivery — multi-layer caching architecture and Cache-Control/stale-while-revalidate patterns
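To make the SWR (stale-while-revalidate) strategy mentioned above concrete, here is a minimal in-memory sketch in plain Node with no external dependencies. The `fetcher` callback, key names, and TTL are illustrative assumptions, not part of the ButterCMS SDK:

```javascript
// Minimal stale-while-revalidate cache: serve cached data immediately,
// and refresh it in the background once it is older than `maxAgeMs`.
function createSwrCache(fetcher, maxAgeMs) {
  const store = new Map(); // key -> { value, fetchedAt }

  return async function get(key) {
    const entry = store.get(key);
    const now = Date.now();

    if (entry) {
      // Stale: kick off a background refresh, but return the old value now.
      if (now - entry.fetchedAt > maxAgeMs) {
        fetcher(key)
          .then(value => store.set(key, { value, fetchedAt: Date.now() }))
          .catch(() => { /* keep serving stale data if the refresh fails */ });
      }
      return entry.value;
    }

    // Cold cache: fetch synchronously (the only time the caller waits).
    const value = await fetcher(key);
    store.set(key, { value, fetchedAt: now });
    return value;
  };
}
```

In practice the `fetcher` would wrap a ButterCMS SDK call, e.g. `createSwrCache(slug => butter.page.retrieve('*', slug).then(r => r.data.data), 60000)`.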
Handling traffic spikes
Preparation checklist
Before a planned traffic event: pre-warm your caches, verify that fallback content renders correctly, confirm monitoring and alerting are in place, and check that expected API usage stays within your plan limits.
Graceful degradation pattern
```javascript
import Butter from 'buttercms';
import NodeCache from 'node-cache';

const cache = new NodeCache({ stdTTL: 3600 }); // 1-hour cache
const FALLBACK_KEY = 'fallback';

async function getPageWithFallback(slug) {
  const cacheKey = `page-${slug}`;

  // Try cache first
  let page = cache.get(cacheKey);
  if (page) return page;

  try {
    // Try API
    const butter = Butter(process.env.BUTTER_API_TOKEN);
    const response = await butter.page.retrieve('*', slug, { levels: 2 });
    page = response.data.data;

    // Cache the successful response
    cache.set(cacheKey, page);

    // Also save as a longer-lived fallback for emergencies
    cache.set(`${FALLBACK_KEY}-${slug}`, page, 86400); // 24-hour fallback
    return page;
  } catch (error) {
    console.error('API request failed:', error.message);

    // Try the fallback cache
    const fallback = cache.get(`${FALLBACK_KEY}-${slug}`);
    if (fallback) {
      console.log('Serving fallback content');
      return fallback;
    }

    // Last resort: static fallback
    return getStaticFallback(slug);
  }
}

function getStaticFallback(slug) {
  // Return pre-built static content
  return {
    slug,
    fields: {
      title: 'Content temporarily unavailable',
      body: '<p>We are experiencing technical difficulties. Please try again later.</p>'
    }
  };
}
```
Architecture patterns for scale
Pattern 1: edge-first architecture
Deploy your application to edge locations for minimum latency:
```javascript
// Cloudflare Workers / Vercel Edge Functions
export const config = { runtime: 'edge' };

export default async function handler(request) {
  const url = new URL(request.url);
  const slug = url.pathname.split('/').pop();

  // Check edge cache first
  const cache = caches.default;
  const cacheKey = new Request(url.toString(), request);
  let response = await cache.match(cacheKey);
  if (response) {
    return response;
  }

  // Fetch from ButterCMS
  const BUTTER_TOKEN = process.env.BUTTER_TOKEN;
  const butterResponse = await fetch(
    `https://api.buttercms.com/v2/pages/*/${slug}/?levels=2`,
    { headers: { Authorization: `Token ${BUTTER_TOKEN}` } }
  );
  const data = await butterResponse.json();

  // Build response
  response = new Response(JSON.stringify(data), {
    headers: {
      'Content-Type': 'application/json',
      'Cache-Control': 'public, max-age=60, stale-while-revalidate=300'
    }
  });

  // Cache at the edge
  await cache.put(cacheKey, response.clone());
  return response;
}
```
Pattern 2: backend-for-frontend (BFF)
Create a dedicated API layer that handles caching and aggregation:
```javascript
// BFF layer - aggregates ButterCMS calls
// (assumes `app` is an Express app and the `butter` / `redis` clients are already initialized)
app.get('/api/homepage', async (req, res) => {
  const cacheKey = 'homepage-data';

  // Check cache
  let data = await redis.get(cacheKey);
  if (data) {
    return res.json(JSON.parse(data));
  }

  // Fetch all homepage data in parallel
  const [hero, featuredPosts, testimonials] = await Promise.all([
    butter.page.retrieve('*', 'homepage', { levels: 2 }),
    butter.post.list({ page_size: 3, exclude_body: true }),
    butter.content.retrieve(['testimonials'])
  ]);

  data = {
    hero: hero.data.data,
    featuredPosts: featuredPosts.data.data,
    testimonials: testimonials.data.data.testimonials
  };

  // Cache for 5 minutes
  await redis.setex(cacheKey, 300, JSON.stringify(data));
  res.json(data);
});
```
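A cached homepage payload goes stale the moment editors publish. One way to keep the two in sync (a sketch, not an official ButterCMS integration; the event names, route, and cache keys are assumptions) is a webhook handler that purges the affected keys on publish:

```javascript
// Purge cached keys when a CMS publish webhook fires.
// `cacheStore` is any object with an async del(key) method (e.g. a Redis client).
// `keysByEvent` maps an assumed event name in the payload to the keys it invalidates.
function makePurgeHandler(cacheStore, keysByEvent) {
  return async function handlePublishWebhook(payload) {
    const event = payload.webhook && payload.webhook.event;
    const keys = keysByEvent[event] || [];
    for (const key of keys) {
      await cacheStore.del(key);
    }
    return keys; // which keys were purged, useful for logging
  };
}
```

Wired into the BFF it might look like `app.post('/webhooks/butter', async (req, res) => { await purge(req.body); res.sendStatus(204); })`; check the webhook payload format in your ButterCMS dashboard before relying on any field names.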
Pattern 3: pre-warming caches
Before expected traffic spikes, pre-warm your caches:
```javascript
// Cache warming script - run before traffic events
import Butter from 'buttercms';

const YOUR_SITE_URL = process.env.SITE_URL; // e.g. https://www.example.com

async function warmCaches() {
  const butter = Butter(process.env.BUTTER_API_TOKEN);

  // Get all page and post slugs
  const pages = await butter.page.list('*', { page_size: 100 });
  const posts = await butter.post.list({ page_size: 100 });

  console.log(`Warming ${pages.data.data.length} pages...`);
  console.log(`Warming ${posts.data.data.length} posts...`);

  // Fetch each page to populate caches
  for (const page of pages.data.data) {
    try {
      await fetch(`${YOUR_SITE_URL}/${page.slug}`);
      console.log(`Warmed: ${page.slug}`);
    } catch (error) {
      console.error(`Failed to warm: ${page.slug}`);
    }
    // Small delay to avoid overwhelming your own infrastructure
    await new Promise(r => setTimeout(r, 100));
  }

  for (const post of posts.data.data) {
    try {
      await fetch(`${YOUR_SITE_URL}/blog/${post.slug}`);
      console.log(`Warmed: ${post.slug}`);
    } catch (error) {
      console.error(`Failed to warm: ${post.slug}`);
    }
    await new Promise(r => setTimeout(r, 100));
  }

  console.log('Cache warming complete!');
}

warmCaches();
```
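The script above warms URLs one at a time with a fixed delay. If your infrastructure can absorb a few parallel requests, a small concurrency cap (a dependency-free sketch of the p-limit idea) finishes warming much faster while still bounding load:

```javascript
// Run async task functions with at most `limit` in flight at once.
// Results are returned in the same order as `tasks`.
async function runWithConcurrency(tasks, limit) {
  const results = [];
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // claim the next index synchronously
      results[i] = await tasks[i]();
    }
  }
  const workers = Array.from({ length: Math.min(limit, tasks.length) }, () => worker());
  await Promise.all(workers);
  return results;
}
```

For the warming script, that means replacing the loops with something like `runWithConcurrency(pages.data.data.map(p => () => fetch(`${YOUR_SITE_URL}/${p.slug}`)), 5)`.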
Monitoring during high traffic
Key metrics to watch
| Metric | Normal | Warning | Critical |
| --- | --- | --- | --- |
| Response time (p95) | < 200ms | 200-500ms | > 500ms |
| Error rate | < 0.1% | 0.1-1% | > 1% |
| Cache hit ratio | > 90% | 80-90% | < 80% |
| API usage | Within plan | 80% of limit | > 90% of limit |
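The p95 figure in the table can be computed from your own recorded request durations. A minimal nearest-rank percentile sketch (monitoring services normally compute this for you):

```javascript
// Nearest-rank percentile: sort the durations and pick the value
// at rank ceil(p/100 * n), 1-indexed.
function percentile(durationsMs, p) {
  if (durationsMs.length === 0) return NaN;
  const sorted = [...durationsMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}
```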
Real-time monitoring setup
```javascript
// Express middleware for monitoring
app.use((req, res, next) => {
  const start = Date.now();
  res.on('finish', () => {
    const duration = Date.now() - start;

    // Log slow requests
    if (duration > 500) {
      console.warn(`Slow request: ${req.path} - ${duration}ms`);
    }

    // Track metrics (send to your monitoring service)
    metrics.recordResponseTime(req.path, duration);
    metrics.incrementRequestCount(res.statusCode);
  });
  next();
});
```
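The middleware above assumes a `metrics` object exists. A minimal in-memory stand-in, matching the two calls the middleware makes (enough for local testing; swap in your real monitoring client, such as a StatsD or Prometheus library, in production):

```javascript
// In-memory metrics collector implementing the two methods the middleware uses.
const metrics = {
  responseTimes: {}, // path -> array of durations (ms)
  statusCounts: {},  // status code -> request count

  recordResponseTime(path, durationMs) {
    (this.responseTimes[path] = this.responseTimes[path] || []).push(durationMs);
  },
  incrementRequestCount(statusCode) {
    this.statusCounts[statusCode] = (this.statusCounts[statusCode] || 0) + 1;
  }
};
```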
Enterprise scaling considerations
For enterprise-level traffic:
Multi-site support - Manage multiple sites from a single account
Multi-environment - Separate staging and production
Custom CDN integration - Use your own CDN if needed
Enterprise support - Direct access to ButterCMS engineering team
Contact the ButterCMS sales team for enterprise scaling requirements and custom solutions.