Data Science Stack for Claude Code
Configuration
{
  "mcpServers": {
    "postgres-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/mydb"
      ],
      "env": {
        "POSTGRES_CONNECTION_STRING": "YOUR_POSTGRES_CONNECTION_STRING"
      }
    },
    "sqlite-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "sqlite-mcp"
      ]
    },
    "filesystem-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/allowed/directory"
      ]
    },
    "exa-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "exa-mcp-server"
      ],
      "env": {
        "EXA_API_KEY": "YOUR_EXA_API_KEY"
      }
    },
    "sequential-thinking-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-sequential-thinking"
      ]
    }
  }
}
CLI Commands
Alternatively, add each server via the Claude Code CLI:
claude mcp add postgres-mcp -e POSTGRES_CONNECTION_STRING=YOUR_POSTGRES_CONNECTION_STRING -- npx -y @modelcontextprotocol/server-postgres postgresql://localhost/mydb
claude mcp add sqlite-mcp -- npx -y sqlite-mcp
claude mcp add filesystem-mcp -- npx -y @modelcontextprotocol/server-filesystem /path/to/allowed/directory
claude mcp add exa-mcp -e EXA_API_KEY=YOUR_EXA_API_KEY -- npx -y exa-mcp-server
claude mcp add sequential-thinking-mcp -- npx -y @modelcontextprotocol/server-sequential-thinking
Where to save
Paste the config above into:
~/.claude.json
Environment Variables
Replace the YOUR_ placeholders with your actual values.
POSTGRES_CONNECTION_STRING (required): PostgreSQL connection string (e.g. postgresql://user:pass@localhost:5432/mydb)
Used by: PostgreSQL MCP
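As an illustration of the expected format, the connection string can be assembled from its parts in the shell before being exported. Every value below is a placeholder, not a real credential:

```shell
# Assemble a PostgreSQL connection string from its parts.
# All values are placeholders -- substitute your own.
PG_USER="analyst"
PG_PASS="s3cret"
PG_HOST="localhost"
PG_PORT="5432"
PG_DB="mydb"
export POSTGRES_CONNECTION_STRING="postgresql://${PG_USER}:${PG_PASS}@${PG_HOST}:${PG_PORT}/${PG_DB}"
echo "$POSTGRES_CONNECTION_STRING"
# -> postgresql://analyst:s3cret@localhost:5432/mydb
```

Note that passwords containing characters such as @ or : must be percent-encoded before being placed in the URI.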
EXA_API_KEY (required): Exa API key
Used by: Exa Search MCP
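After filling in the placeholders, it is worth confirming that the edited file still parses as JSON, since a stray comma is the most common failure. A minimal check, demonstrated here on a throwaway file rather than your real ~/.claude.json:

```shell
# Validate that a config file parses as JSON.
# Demonstrated on a temporary file; point it at ~/.claude.json in practice.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{"mcpServers": {"sqlite-mcp": {"command": "npx", "args": ["-y", "sqlite-mcp"]}}}
EOF
if python3 -m json.tool "$cfg" > /dev/null; then
  echo "valid JSON"
fi
```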
What’s in this stack
PostgreSQL MCP
Query and interact with PostgreSQL databases, inspect schemas, and run SQL directly from your AI editor.
Query production databases directly from your AI assistant. Essential for exploratory data analysis without writing one-off scripts.
SQLite MCP
Query and manage SQLite databases. Create tables, run queries, and inspect data directly from your AI editor.
Spin up lightweight local databases for experiments and intermediate results. Perfect for prototyping data pipelines before scaling to Postgres.
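For a sense of how lightweight this is, a scratch database for experiment results takes one command per step with the sqlite3 CLI; the table and column names here are hypothetical:

```shell
# Prototype a scratch database for experiment results.
db="$(mktemp -u).db"
sqlite3 "$db" "CREATE TABLE runs (id INTEGER PRIMARY KEY, metric REAL);"
sqlite3 "$db" "INSERT INTO runs (metric) VALUES (0.91), (0.87);"
sqlite3 "$db" "SELECT COUNT(*) FROM runs;"
# -> 2
```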
Filesystem MCP
Read, write, search, and manage files on your local filesystem with secure directory-scoped access for your AI editor.
Read CSVs, write outputs, and manage Jupyter notebooks. Data science lives and dies by file I/O.
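The file I/O involved is ordinary shell-level work, for instance writing a CSV and summarizing one of its columns. The file and column names below are hypothetical:

```shell
# Write a small CSV, then compute the mean of its score column.
csv=$(mktemp)
printf 'name,score\nada,90\ngrace,80\n' > "$csv"
awk -F, 'NR > 1 { sum += $2; n++ } END { print sum / n }' "$csv"
# -> 85
```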
Exa Search MCP
AI-powered web search and crawling with Exa. Get semantically relevant results, extract content, and find similar pages.
Find research papers, datasets, and methodology references with semantic search. Speeds up the literature review phase significantly.
Sequential Thinking MCP
Enable structured, step-by-step reasoning for complex problem solving. Helps AI break down problems into logical sequences.
Statistical analysis and pipeline design require step-by-step reasoning. Prevents your AI from skipping critical assumptions in complex analyses.