Web Scraping Stack for Cursor
Configuration
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_FIRECRAWL_API_KEY"
      }
    },
    "browserbase-mcp": {
      "command": "npx",
      "args": ["-y", "@browserbasehq/mcp-server-browserbase"],
      "env": {
        "BROWSERBASE_API_KEY": "YOUR_BROWSERBASE_API_KEY",
        "BROWSERBASE_PROJECT_ID": "YOUR_BROWSERBASE_PROJECT_ID"
      }
    },
    "playwright-mcp": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp"]
    },
    "puppeteer-mcp": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    },
    "filesystem-mcp": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/directory"]
    }
  }
}
Where to save
Paste the config above into:
.cursor/mcp.json
Environment Variables
Replace the YOUR_ placeholders with your actual values.
FIRECRAWL_API_KEY (required): Firecrawl API key. Used by: Firecrawl MCP.
BROWSERBASE_API_KEY (required): BrowserBase API key for authentication. Used by: BrowserBase MCP.
BROWSERBASE_PROJECT_ID (required): BrowserBase project identifier. Used by: BrowserBase MCP.
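Once the placeholders are filled in, it is easy to miss one. A minimal sanity check, sketched with only the standard library (the `.cursor/mcp.json` path and `YOUR_` prefix match the config above):

```python
import json
from pathlib import Path

def unfilled_placeholders(config_path: str = ".cursor/mcp.json") -> list:
    """Return 'server.VAR' names whose env values still start with YOUR_."""
    config = json.loads(Path(config_path).read_text())
    leftovers = []
    for server, spec in config.get("mcpServers", {}).items():
        for name, value in spec.get("env", {}).items():
            if value.startswith("YOUR_"):
                leftovers.append(f"{server}.{name}")
    return leftovers
```

Running it against your saved config should return an empty list once every key is set.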
What’s in this stack
Firecrawl MCP: Scrape and crawl websites, extract structured data, and perform batch web scraping with LLM-powered content analysis. Crawls entire websites and extracts clean, structured content from any page; JavaScript rendering and pagination are handled automatically.
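The same Firecrawl key can be exercised directly over HTTP, which is useful for verifying it outside the editor. A hedged, stdlib-only sketch: the `/v1/scrape` endpoint and request shape are assumptions based on Firecrawl's public API and may differ across versions, so check the current docs before relying on them:

```python
import json
import os
import urllib.request

def build_scrape_request(url: str) -> urllib.request.Request:
    """Build (but do not send) a Firecrawl scrape request for one page URL."""
    api_key = os.environ.get("FIRECRAWL_API_KEY", "YOUR_FIRECRAWL_API_KEY")
    payload = json.dumps({"url": url, "formats": ["markdown"]}).encode()
    return urllib.request.Request(
        "https://api.firecrawl.dev/v1/scrape",  # assumed endpoint; verify against current docs
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending is left to the caller, e.g.:
# with urllib.request.urlopen(build_scrape_request("https://example.com")) as resp:
#     print(json.load(resp))
```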
BrowserBase MCP: Automate cloud browsers with BrowserBase and Stagehand to navigate pages, extract content, take screenshots, and interact with web elements. Runs headless browsers in the cloud with built-in anti-detection, proxy rotation, and CAPTCHA solving for scraping at scale.
Playwright MCP: Automate browser interactions, take screenshots, fill forms, and test web applications using Microsoft Playwright from your AI editor. Handles complex flows such as login sequences, infinite scrolling, and multi-step navigation to reach deeply nested content.
Puppeteer MCP: Automate browsers with Puppeteer: navigate pages, take screenshots, fill forms, and generate PDFs from your AI editor. Controls Chromium directly for fine-grained scraping tasks like screenshot capture, PDF generation, and dynamic page interaction.
Filesystem MCP: Read, write, search, and manage files on your local filesystem with secure directory-scoped access for your AI editor. Save scraped data as JSON, CSV, or structured files locally; the final step in any data extraction pipeline.
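That final save step can be sketched end to end: once scraped records come back as dictionaries, writing them out as both JSON and CSV needs only the standard library (the `scraped` output directory and the record fields here are illustrative, not anything the stack itself prescribes):

```python
import csv
import json
from pathlib import Path

def save_records(records: list, out_dir: str = "scraped") -> None:
    """Write scraped records (a list of dicts) to records.json and records.csv."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / "records.json").write_text(json.dumps(records, indent=2))
    if records:
        with open(out / "records.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(records[0]))
            writer.writeheader()
            writer.writerows(records)
```

Keeping the output directory inside the path allow-listed for the filesystem server (`/path/to/allowed/directory` in the config) lets the editor read the results back in the same session.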