A Model Context Protocol (MCP) server for PostgreSQL databases with enhanced capabilities for AI agents.
PG-MCP is a server implementation of the Model Context Protocol for PostgreSQL databases. It provides a comprehensive API for AI agents to discover, connect to, query, and understand PostgreSQL databases through MCP's resource-oriented architecture.
This implementation builds upon and extends the reference Postgres MCP implementation with several key enhancements:
- Full Server Implementation: Built as a complete server with SSE transport for production use
- Multi-database Support: Connect to multiple PostgreSQL databases simultaneously
- Rich Catalog Information: Extracts and exposes table/column descriptions from the database catalog
- Extension Context: Provides detailed YAML-based knowledge about PostgreSQL extensions like PostGIS and pgvector
- Query Explanation: Includes a dedicated tool for analyzing query execution plans
- Robust Connection Management: Proper lifecycle for database connections with secure connection ID handling
- Connect Tool: Register PostgreSQL connection strings and get a secure connection ID
- Disconnect Tool: Explicitly close database connections when done
- Connection Pooling: Efficient connection management with pooling
- pg_query: Execute read-only SQL queries using a connection ID (see the usage sketch after the resource list below)
- pg_explain: Analyze query execution plans in JSON format
- List schemas with descriptions
- List tables with descriptions and row counts
- Get column details with data types and descriptions
- View table constraints and indexes
- Explore database extensions
- Sample table data (with pagination)
- Get approximate row counts
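The exact wire-level contract lives in the server's tool and resource schemas, but a minimal client-side sketch using the official MCP Python SDK over SSE looks roughly like the following. The argument names (connection_string, conn_id, query) and the way the connection ID is pulled out of the tool result are assumptions for illustration, not the server's verified API:

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    # Open an SSE transport to a locally running pg-mcp server.
    async with sse_client("http://localhost:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Register a connection string and receive an opaque connection ID.
            result = await session.call_tool(
                "connect",
                {"connection_string": "postgresql://user:pass@localhost:5432/mydb"},
            )
            # Hypothetical: the real response shape may differ.
            conn_id = result.content[0].text

            # Execute a read-only query through the pg_query tool.
            rows = await session.call_tool(
                "pg_query",
                {"conn_id": conn_id, "query": "SELECT now() AS server_time"},
            )
            print(rows.content)

            # Explicitly release the connection when done.
            await session.call_tool("disconnect", {"conn_id": conn_id})


asyncio.run(main())
```

Run it against a server started as described in the installation steps below.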
Built-in contextual information for PostgreSQL extensions like:
- PostGIS: Spatial data types, functions, and examples
- pgvector: Vector similarity search functions and best practices
Additional extensions can be added via YAML config files (a hypothetical example is sketched below).
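The repository defines the actual schema of these context files; purely as an illustration of the kind of knowledge they carry, a pgvector file might look something like this (the field names here are made up, not the project's real format):

```yaml
# extensions/pgvector.yaml -- hypothetical structure, for illustration only
extension: pgvector
description: Vector similarity search inside PostgreSQL.
data_types:
  - name: vector
    description: Fixed-length float vector, e.g. vector(1536) for typical embedding sizes.
operators:
  - symbol: "<->"
    description: Euclidean (L2) distance.
  - symbol: "<=>"
    description: Cosine distance.
examples:
  - description: Five most similar documents to a query embedding.
    sql: |
      SELECT id, content
      FROM documents
      ORDER BY embedding <=> $1
      LIMIT 5;
```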
- Python 3.13+
- PostgreSQL database(s)
# Clone the repository
git clone https://github.com/stuzero/pg-mcp.git
cd pg-mcp

# Build and run with Docker Compose
docker-compose up -d
# Clone the repository
git clone https://github.com/stuzero/pg-mcp.git
cd pg-mcp

# Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install using uv
uv sync --frozen

# Run the server
python -m server.app
The repository includes test scripts to verify server functionality:
# Basic server functionality test
python test.py "postgresql://username:password@hostname:port/database"

# Claude-powered natural language to SQL conversion
python client/claude_cli.py "Show me the top 5 customers by total sales"
The claude_cli.py script requires the following environment variables:
# .env file
DATABASE_URL=postgresql://username:password@hostname:port/database
ANTHROPIC_API_KEY=your-anthropic-api-key
PG_MCP_URL=http://localhost:8000/sse
Example prompt for use with agents:
Use the PostgreSQL MCP server to analyze the database.
Available tools:
- connect: Register a database connection string and get a connection ID
- disconnect: Close a database connection
- pg_query: Execute SQL queries using a connection ID
- pg_explain: Get query execution plans
You can explore schema resources via:
pgmcp://{conn_id}/schemas
pgmcp://{conn_id}/schemas/{schema}/tables
pgmcp://{conn_id}/schemas/{schema}/tables/{table}/columns
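With a connection ID in hand, those URIs can also be read directly by a plain MCP client rather than an LLM agent. A sketch using the MCP Python SDK's read_resource, assuming an already-initialized session; the public schema and customers table are placeholder names:

```python
from pydantic import AnyUrl


async def explore_schema(session, conn_id: str) -> None:
    # Top-level schema listing, then drill into one schema and one table.
    schemas = await session.read_resource(AnyUrl(f"pgmcp://{conn_id}/schemas"))
    tables = await session.read_resource(
        AnyUrl(f"pgmcp://{conn_id}/schemas/public/tables")
    )
    columns = await session.read_resource(
        AnyUrl(f"pgmcp://{conn_id}/schemas/public/tables/customers/columns")
    )
    for result in (schemas, tables, columns):
        for item in result.contents:
            # Assumes text resource contents; binary resources would expose .blob instead.
            print(item.text)
```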
This server is built on:
- MCP: The Model Context Protocol foundation
- FastMCP: Python library for MCP
- asyncpg: Asynchronous PostgreSQL client
- YAML: For extension context information
- The server runs in read-only mode by default, enforced via transaction settings (see the sketch after this list)
- Connection details are never exposed in resource URLs, only opaque connection IDs
- Database credentials only need to be sent once during the initial connection
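The project's own pool and transaction handling isn't reproduced here, but with asyncpg the read-only guarantee can be enforced along these lines; this is a sketch under those assumptions, not the server's actual code:

```python
import asyncpg


async def run_read_only(dsn: str, sql: str, *args):
    # Credentials are supplied once when the pool is created; afterwards callers
    # only ever hold an opaque handle, never the DSN itself.
    pool = await asyncpg.create_pool(
        dsn,
        min_size=1,
        max_size=10,
        # Server-level safety net: every transaction on these connections
        # defaults to read-only.
        server_settings={"default_transaction_read_only": "on"},
    )
    try:
        async with pool.acquire() as conn:
            # Explicit read-only transaction: any write statement raises an error.
            async with conn.transaction(readonly=True):
                return await conn.fetch(sql, *args)
    finally:
        await pool.close()
```

A long-running server would keep one pool per registered connection for its lifetime; the pool is created and torn down inside the function here only to keep the sketch self-contained.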
Contributions are welcome! Areas for expansion:
- Additional PostgreSQL extension context files
- More schema introspection resources
- Query optimization suggestions