Best Screenshot APIs in 2026: Complete Comparison
If you’re building a SaaS product, generating reports, or automating web workflows, chances are you need a screenshot API. Whether it’s for link previews, visual regression testing, PDF generation, or archiving web pages, the right API can save you hundreds of hours compared to managing headless browsers yourself. In this comprehensive comparison, we’ll evaluate the best screenshot APIs in 2026, break down their features, pricing, and limitations, and help you choose the right one for your project. ...
Extract Website Metadata with an API: Title, Description, OG Tags
Every website has metadata hidden in its HTML — titles, descriptions, Open Graph tags, favicons, Twitter cards, and more. Extracting this metadata is essential for building link previews, SEO analysis tools, competitive intelligence dashboards, and content aggregators. In this guide, we’ll show you how to use the ToolCenter Metadata API to extract structured metadata from any URL, with practical examples and real-world use cases.

What Is Website Metadata?

Website metadata is information embedded in a page’s HTML <head> section that describes the page’s content. It includes: ...
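To give a rough sense of what such a call can look like before diving into the full guide, here is a minimal Node.js sketch. The /v1/metadata endpoint path, the url parameter, and the response fields named in the comments are assumptions about the ToolCenter API for illustration, not confirmed from its documentation.

```javascript
const axios = require('axios');

// Hypothetical metadata extraction call; endpoint path, parameters,
// and response shape are illustrative assumptions.
async function extractMetadata(targetUrl) {
  const response = await axios.get('https://api.toolcenter.dev/v1/metadata', {
    params: { url: targetUrl },
    headers: { 'Authorization': 'Bearer YOUR_API_KEY' },
  });
  // Expected to include fields such as title, description, ogImage, favicon
  return response.data;
}

extractMetadata('https://example.com').then(meta => console.log(meta.title));
```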
Generate QR Codes with an API: The Developer's Guide
QR codes are everywhere — from restaurant menus to payment systems, event tickets to WiFi sharing. As a developer, you’ll eventually need to generate QR codes programmatically, and using a QR code API is the fastest, most reliable way to do it. In this guide, we’ll show you how to generate QR codes using the ToolCenter QR API, including customization options, real-world use cases, and code examples in multiple languages. ...
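As a quick preview of the programmatic approach, a request might look roughly like the sketch below. The /v1/qr endpoint and its data, size, and format parameters are assumptions for illustration, not the documented ToolCenter QR API surface.

```javascript
const axios = require('axios');
const fs = require('fs');

// Hypothetical QR generation call; endpoint and parameter names are assumed.
async function generateQrCode(data, outfile) {
  const response = await axios.post(
    'https://api.toolcenter.dev/v1/qr',
    { data, size: 512, format: 'png' },
    {
      headers: { 'Authorization': 'Bearer YOUR_API_KEY' },
      responseType: 'arraybuffer', // binary image bytes
    }
  );
  fs.writeFileSync(outfile, response.data);
}

generateQrCode('https://example.com/menu', 'menu-qr.png');
```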
How to Convert HTML to PDF with an API: Complete Guide
Converting HTML to PDF is one of the most common tasks in web development. Whether you’re generating invoices, creating reports, exporting tickets, or building a document management system, you need a reliable way to turn HTML content into pixel-perfect PDF documents. In this guide, we’ll walk through how to use the ToolCenter PDF API to convert HTML to PDF programmatically, with complete code examples in Python, Node.js, PHP, and cURL. ...
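To set expectations before the full walkthrough, a Node.js call could look like the sketch below. The /v1/pdf endpoint and the html, format, and margin fields are assumptions rather than the confirmed ToolCenter request schema; the guide itself also covers Python, PHP, and cURL.

```javascript
const axios = require('axios');
const fs = require('fs');

// Hypothetical HTML-to-PDF call; endpoint path and field names are assumed.
async function htmlToPdf(html, outfile) {
  const response = await axios.post(
    'https://api.toolcenter.dev/v1/pdf',
    {
      html,            // raw HTML to render
      format: 'A4',
      margin: '20mm',
    },
    {
      headers: { 'Authorization': 'Bearer YOUR_API_KEY' },
      responseType: 'arraybuffer', // PDF bytes
    }
  );
  fs.writeFileSync(outfile, response.data);
}

htmlToPdf('<h1>Invoice #1042</h1><p>Total: $99.00</p>', 'invoice.pdf');
```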
How to Generate Dynamic Open Graph Images for Social Media
When you share a link on Twitter, Facebook, LinkedIn, or Slack, the platform displays a preview card with an image, title, and description. That image is the Open Graph (OG) image, and it has a massive impact on whether people click your link. Static OG images are fine for your homepage, but what about blog posts, product pages, user profiles, or dynamic content? You need dynamic OG image generation — and in this guide, we’ll show you how to do it with the ToolCenter OG Image API. ...
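As a rough preview of the approach, dynamic OG images are typically produced from a template plus per-page values, along the lines of the sketch below. The /v1/og-image endpoint and its template, title, and subtitle parameters are illustrative assumptions about the ToolCenter API; the 1200×630 size is the conventional OG image dimension.

```javascript
const axios = require('axios');

// Hypothetical OG image call; endpoint and parameter names are assumed.
// The returned URL would then be placed in the page's og:image meta tag.
async function generateOgImage(title, subtitle) {
  const response = await axios.post(
    'https://api.toolcenter.dev/v1/og-image',
    {
      template: 'blog-post',   // assumed template identifier
      title,
      subtitle,
      width: 1200,
      height: 630,             // standard OG image dimensions
    },
    { headers: { 'Authorization': 'Bearer YOUR_API_KEY' } }
  );
  return response.data.imageUrl; // assumed response field
}

generateOgImage('How to Convert HTML to PDF', 'Complete guide with code examples')
  .then(url => console.log(`<meta property="og:image" content="${url}" />`));
```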
Screenshot API Pricing: How Much Should You Pay in 2026?
The Screenshot API Market in 2026

Screenshot APIs have become essential infrastructure for developers building link previews, monitoring tools, visual testing, and content archival. But pricing varies wildly — from free tiers to enterprise plans costing thousands per month. This guide breaks down how screenshot API pricing works, what you should expect to pay, and how to calculate the best option for your needs.

Common Pricing Models

Per-Screenshot Pricing

The most straightforward model. You pay for each screenshot captured. ...
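To make the comparison concrete, the effective cost per screenshot is simply the plan price divided by the screenshots you actually use each month. The sketch below walks through that arithmetic with made-up plan figures; the prices and quotas are purely illustrative, not real vendor pricing.

```javascript
// Illustrative plans only; real 2026 prices and quotas vary by vendor.
const plans = [
  { name: 'Free',    monthlyPrice: 0,  includedShots: 100 },
  { name: 'Starter', monthlyPrice: 29, includedShots: 5000 },
  { name: 'Growth',  monthlyPrice: 99, includedShots: 25000 },
];

// Effective cost per screenshot at your expected monthly volume.
function costPerScreenshot(plan, monthlyVolume) {
  const usable = Math.min(monthlyVolume, plan.includedShots);
  if (usable === 0) return Infinity;
  return plan.monthlyPrice / usable;
}

const expectedVolume = 8000; // screenshots per month
for (const plan of plans) {
  const covered = Math.min(expectedVolume, plan.includedShots);
  const cost = costPerScreenshot(plan, expectedVolume);
  console.log(`${plan.name}: $${cost.toFixed(4)}/screenshot, covers ${covered} of ${expectedVolume}`);
}
```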
Use Signed URLs to Embed Screenshots Directly in HTML
What Are Signed URLs?

A signed URL is a regular URL with a cryptographic signature appended as a query parameter. It proves the request was authorized without exposing your API key. The signature is generated server-side using your secret key, but the URL can be used client-side in <img> tags, emails, or anywhere that loads images.

```html
<!-- This just works — no backend proxy needed -->
<img src="https://api.toolcenter.dev/v1/screenshot?url=https://example.com&width=1280&height=800&sig=a1b2c3d4e5f6" />
```

Why Use Signed URLs?

The Problem with API Keys in Frontend

You can’t put API keys in client-side code: ...
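As an illustration of how that sig parameter might be produced server-side, here is a minimal Node.js sketch. The exact signing recipe (which parameters are covered, their ordering, and the digest used) is an assumption here, not the documented ToolCenter algorithm, so treat it as a shape rather than a drop-in implementation.

```javascript
const crypto = require('crypto');

// Hypothetical signing helper: HMAC-SHA256 over a canonically ordered
// query string. Parameter set, ordering, and digest are assumptions.
function signScreenshotUrl(params, secretKey) {
  const query = new URLSearchParams(params);
  query.sort(); // canonical ordering so server and verifier agree
  const signature = crypto
    .createHmac('sha256', secretKey)
    .update(query.toString())
    .digest('hex');
  return `https://api.toolcenter.dev/v1/screenshot?${query.toString()}&sig=${signature}`;
}

// Usage: generate once on the server, embed the result in an <img> tag.
// TOOLCENTER_SECRET_KEY is a hypothetical environment variable name.
const url = signScreenshotUrl(
  { url: 'https://example.com', width: '1280', height: '800' },
  process.env.TOOLCENTER_SECRET_KEY
);
console.log(url);
```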
Async Screenshot Processing with Webhooks: No More Timeouts
The Timeout Problem

Synchronous screenshot APIs have a fundamental issue: HTTP timeouts. When you capture a complex page that takes 15-30 seconds to render, your HTTP connection can time out. Load balancers, proxies, and client libraries all enforce timeout limits. The solution? Asynchronous processing with webhooks. Submit the job, get a job ID, and receive a notification when it’s done.

How Async Webhooks Work

The flow is simple:

1. Submit — Send a screenshot request with a webhookUrl
2. Receive job ID — API returns immediately with a job identifier
3. Processing — API captures the screenshot in the background
4. Notification — API sends the result to your webhook URL
5. Retrieve — Download the screenshot from the provided URL

(A sequence diagram in the original illustrates this exchange: the client submits the capture, the API immediately returns jobId abc123 with status queued, and once the screenshot is done the API POSTs the result to the webhook at example.com/hook.)

Submitting Async Requests

Node.js

```javascript
const axios = require('axios');

async function submitScreenshotJob(url, options = {}) {
  const response = await axios.post(
    'https://api.toolcenter.dev/v1/screenshot',
    {
      url: url,
      width: options.width || 1280,
      height: options.height || 800,
      format: options.format || 'png',
      fullPage: options.fullPage || false,
      webhookUrl: 'https://your-server.com/api/webhook/screenshot',
    },
    {
      headers: { 'Authorization': 'Bearer YOUR_API_KEY' },
    }
  );

  return response.data; // { jobId: 'abc123', status: 'queued' }
}
```

Python

```python
import requests

def submit_screenshot_job(url, webhook_url):
    response = requests.post(
        'https://api.toolcenter.dev/v1/screenshot',
        json={
            'url': url,
            'width': 1280,
            'height': 800,
            'format': 'png',
            'webhookUrl': webhook_url,
        },
        headers={'Authorization': 'Bearer YOUR_API_KEY'}
    )
    return response.json()  # {'jobId': 'abc123', 'status': 'queued'}
```

Building the Webhook Receiver

Express.js

```javascript
const express = require('express');
const crypto = require('crypto');
const axios = require('axios');
const fs = require('fs');

const app = express();
app.use(express.json());

// Store for pending jobs
const pendingJobs = new Map();

app.post('/api/webhook/screenshot', (req, res) => {
  const { jobId, status, screenshotUrl, error } = req.body;

  // Verify webhook signature
  // Note: this re-serializes the parsed body; if the provider signs the raw
  // request bytes, verify against the raw body instead.
  const signature = req.headers['x-webhook-signature'];
  const expectedSig = crypto
    .createHmac('sha256', process.env.WEBHOOK_SECRET)
    .update(JSON.stringify(req.body))
    .digest('hex');

  if (signature !== expectedSig) {
    return res.status(401).json({ error: 'Invalid signature' });
  }

  console.log(`Job ${jobId}: ${status}`);

  if (status === 'completed') {
    // Download and process the screenshot
    processCompletedScreenshot(jobId, screenshotUrl);
  } else if (status === 'failed') {
    console.error(`Job ${jobId} failed: ${error}`);
    handleFailedJob(jobId, error);
  }

  // Always respond 200 to acknowledge receipt
  res.status(200).json({ received: true });
});

async function processCompletedScreenshot(jobId, screenshotUrl) {
  const response = await axios.get(screenshotUrl, { responseType: 'arraybuffer' });
  const filename = `screenshots/${jobId}.png`;
  fs.writeFileSync(filename, response.data);
  console.log(`Saved: ${filename}`);

  // Resolve any pending promises
  const resolver = pendingJobs.get(jobId);
  if (resolver) {
    resolver.resolve(filename);
    pendingJobs.delete(jobId);
  }
}

app.listen(3000, () => console.log('Webhook server ready'));
```

Flask (Python)

```python
from flask import Flask, request, jsonify
import hmac
import hashlib
import os
import requests

app = Flask(__name__)
WEBHOOK_SECRET = os.environ['WEBHOOK_SECRET']

@app.route('/api/webhook/screenshot', methods=['POST'])
def handle_webhook():
    # Verify signature against the raw request body
    signature = request.headers.get('X-Webhook-Signature', '')
    expected = hmac.new(
        WEBHOOK_SECRET.encode(),
        request.data,
        hashlib.sha256
    ).hexdigest()

    if not hmac.compare_digest(signature, expected):
        return jsonify({'error': 'Invalid signature'}), 401

    data = request.json
    job_id = data['jobId']
    status = data['status']

    if status == 'completed':
        screenshot_url = data['screenshotUrl']
        process_screenshot(job_id, screenshot_url)
    elif status == 'failed':
        handle_failure(job_id, data.get('error'))

    return jsonify({'received': True}), 200

def process_screenshot(job_id, url):
    os.makedirs('screenshots', exist_ok=True)
    response = requests.get(url)
    with open(f'screenshots/{job_id}.png', 'wb') as f:
        f.write(response.content)
    print(f'Saved screenshot for job {job_id}')
```

Promise-Based Async Pattern

Create a clean interface that submits the job and resolves when the webhook fires: ...
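The excerpt ends here, but a wrapper along these lines is one way to fill in the promise-based pattern it describes: register a resolver in pendingJobs keyed by jobId, and let the webhook handler above settle it. The 60-second timeout and the helper names are illustrative assumptions, not part of the original article.

```javascript
// Sketch of a promise-based wrapper around the async flow above.
// submitScreenshotJob() and pendingJobs come from the earlier examples;
// the 60s timeout is an arbitrary illustrative choice.
async function screenshotAsync(url, options = {}, timeoutMs = 60000) {
  const { jobId } = await submitScreenshotJob(url, options);

  return new Promise((resolve, reject) => {
    // Fail the promise if the webhook never arrives
    const timer = setTimeout(() => {
      pendingJobs.delete(jobId);
      reject(new Error(`Screenshot job ${jobId} timed out`));
    }, timeoutMs);

    // The webhook handler calls resolver.resolve(filename) on completion
    pendingJobs.set(jobId, {
      resolve: (filename) => {
        clearTimeout(timer);
        resolve(filename);
      },
      reject,
    });
  });
}

// Usage
// const file = await screenshotAsync('https://example.com', { fullPage: true });
```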
Process Thousands of Screenshots: Bulk API Guide
When You Need Screenshots at Scale

Some use cases demand thousands or tens of thousands of screenshots: monitoring a large portfolio of websites, generating thumbnails for a directory, creating visual archives, or running visual regression tests across hundreds of pages. Processing this volume requires more than a simple for-loop. You need concurrency control, error handling, rate limiting, and progress tracking.

The Naive Approach (Don’t Do This)

```javascript
// ❌ Sequential — painfully slow
for (const url of urls) {
  const screenshot = await takeScreenshot(url);
  saveScreenshot(screenshot);
}
// 10,000 URLs × 3 seconds each = 8+ hours
```

The Right Approach: Controlled Concurrency

Node.js with p-limit

```javascript
const axios = require('axios');
const pLimit = require('p-limit');
const fs = require('fs');

const API_KEY = process.env.DEVTOOLBOX_API_KEY;
const CONCURRENCY = 10; // Parallel requests
const limit = pLimit(CONCURRENCY);

async function takeScreenshot(url, retries = 3) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const response = await axios.post(
        'https://api.toolcenter.dev/v1/screenshot',
        { url, width: 1280, height: 800, format: 'png' },
        {
          headers: { 'Authorization': `Bearer ${API_KEY}` },
          responseType: 'arraybuffer',
          timeout: 30000,
        }
      );
      return { url, success: true, data: response.data };
    } catch (error) {
      if (attempt === retries) {
        return { url, success: false, error: error.message };
      }
      // Exponential backoff
      await sleep(Math.pow(2, attempt) * 1000);
    }
  }
}

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function processUrls(urls) {
  fs.mkdirSync('./screenshots', { recursive: true });
  let completed = 0;

  const results = await Promise.all(
    urls.map(url =>
      limit(async () => {
        const result = await takeScreenshot(url);
        completed++;

        if (completed % 100 === 0) {
          console.log(`Progress: ${completed}/${urls.length} (${((completed / urls.length) * 100).toFixed(1)}%)`);
        }

        if (result.success) {
          const filename = urlToFilename(url);
          fs.writeFileSync(`./screenshots/${filename}`, result.data);
        }

        return result;
      })
    )
  );

  const succeeded = results.filter(r => r.success).length;
  const failed = results.filter(r => !r.success).length;
  console.log(`\nComplete: ${succeeded} succeeded, ${failed} failed`);

  return results;
}

function urlToFilename(url) {
  return url.replace(/https?:\/\//, '').replace(/[^a-zA-Z0-9]/g, '_').slice(0, 100) + '.png';
}
```

Python with asyncio

```python
import asyncio
import aiohttp
import os
from urllib.parse import urlparse

API_KEY = os.environ['DEVTOOLBOX_API_KEY']
CONCURRENCY = 10

semaphore = asyncio.Semaphore(CONCURRENCY)

async def take_screenshot(session, url, retries=3):
    async with semaphore:
        for attempt in range(retries):
            try:
                async with session.post(
                    'https://api.toolcenter.dev/v1/screenshot',
                    json={'url': url, 'width': 1280, 'height': 800, 'format': 'png'},
                    headers={'Authorization': f'Bearer {API_KEY}'},
                    timeout=aiohttp.ClientTimeout(total=30)
                ) as response:
                    if response.status == 200:
                        data = await response.read()
                        return {'url': url, 'success': True, 'data': data}
                    elif response.status == 429:
                        # Rate limited — wait and retry
                        await asyncio.sleep(2 ** (attempt + 1))
                        continue
                    else:
                        return {'url': url, 'success': False, 'error': f'HTTP {response.status}'}
            except Exception as e:
                if attempt == retries - 1:
                    return {'url': url, 'success': False, 'error': str(e)}
                await asyncio.sleep(2 ** attempt)

        # All attempts were rate limited without a successful response
        return {'url': url, 'success': False, 'error': 'Rate limited on every attempt'}

async def process_urls(urls):
    os.makedirs('screenshots', exist_ok=True)
    completed = 0

    async with aiohttp.ClientSession() as session:
        tasks = [take_screenshot(session, url) for url in urls]
        results = []

        for coro in asyncio.as_completed(tasks):
            result = await coro
            completed += 1

            if result['success']:
                filename = url_to_filename(result['url'])
                with open(f'screenshots/{filename}', 'wb') as f:
                    f.write(result['data'])

            if completed % 100 == 0:
                print(f'Progress: {completed}/{len(urls)}')

            results.append(result)

    succeeded = sum(1 for r in results if r['success'])
    print(f'Done: {succeeded}/{len(urls)} succeeded')
    return results

def url_to_filename(url):
    parsed = urlparse(url)
    name = f"{parsed.netloc}{parsed.path}".replace('/', '_')[:100]
    return f"{name}.png"

# Run it
urls = open('urls.txt').read().strip().split('\n')
asyncio.run(process_urls(urls))
```

Rate Limiting and Backoff

Respect API rate limits to avoid getting blocked: ...
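The excerpt cuts off here. As a rough illustration of what proactive rate limiting can look like on the client side (complementing the reactive 429 backoff above), the sketch below throttles outgoing requests to a fixed number per second before they are sent. The 5 requests/second figure and the helper names are assumptions for illustration, not ToolCenter's documented limits.

```javascript
// Minimal client-side throttle: allow at most MAX_PER_SECOND request slots
// per rolling one-second window. The 5 req/s figure is illustrative only.
const MAX_PER_SECOND = 5;
let windowStart = Date.now();
let usedInWindow = 0;

async function acquireSlot() {
  while (true) {
    const now = Date.now();
    if (now - windowStart >= 1000) {
      // Start a new one-second window
      windowStart = now;
      usedInWindow = 0;
    }
    if (usedInWindow < MAX_PER_SECOND) {
      usedInWindow++;
      return;
    }
    // Wait for the current window to expire before checking again
    await new Promise(resolve => setTimeout(resolve, 1000 - (now - windowStart)));
  }
}

// Usage: call acquireSlot() before each screenshot request,
// and keep the exponential backoff from takeScreenshot() for 429 responses.
async function throttledScreenshot(url) {
  await acquireSlot();
  return takeScreenshot(url); // defined in the p-limit example above
}
```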
Build an SEO Audit Tool: Extract Meta Tags at Scale
Why Automate SEO Audits?

SEO audits are tedious. Checking meta titles, descriptions, Open Graph tags, and structured data across hundreds of pages takes hours manually. An automated tool can scan your entire site in minutes and flag issues before they hurt your rankings. In this tutorial, we’ll build an SEO audit tool using the ToolCenter Metadata API to extract and analyze meta tags at scale.

What the Metadata API Returns

The ToolCenter Metadata API extracts comprehensive metadata from any URL: ...
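The excerpt stops before the field list, but the audit logic itself can be sketched independently: fetch metadata for each page, then flag anything missing or out of bounds. The /v1/metadata endpoint and the response fields below are assumptions carried over from the metadata sketch earlier in this list, and the 60/160 character thresholds are common SEO rules of thumb rather than hard limits.

```javascript
const axios = require('axios');

// Audit a single page: fetch metadata (hypothetical endpoint and fields),
// then flag common SEO issues. Thresholds are rules of thumb, not standards.
async function auditPage(pageUrl) {
  const { data: meta } = await axios.get('https://api.toolcenter.dev/v1/metadata', {
    params: { url: pageUrl },
    headers: { 'Authorization': 'Bearer YOUR_API_KEY' },
  });

  const issues = [];
  if (!meta.title) issues.push('Missing <title>');
  else if (meta.title.length > 60) issues.push(`Title too long (${meta.title.length} chars)`);

  if (!meta.description) issues.push('Missing meta description');
  else if (meta.description.length > 160) issues.push(`Description too long (${meta.description.length} chars)`);

  if (!meta.ogImage) issues.push('Missing og:image');

  return { url: pageUrl, issues };
}

// Usage: audit a list of pages and print only the ones with problems.
async function auditSite(urls) {
  for (const url of urls) {
    const report = await auditPage(url);
    if (report.issues.length > 0) {
      console.log(`${report.url}\n  - ${report.issues.join('\n  - ')}`);
    }
  }
}
```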