# Pulse Backend MCP Server

A Model Context Protocol (MCP) server that provides BigQuery access and specialized data tools for developers within our company to increase productivity.
## Overview

This MCP server implements the Model Context Protocol to enable LLM-powered applications to access company data and execute specialized data functions in a controlled manner. The server exposes several tools for interacting with BigQuery and company-specific data structures.
Project Structure
pulse-backend-mcp/
├── README.md # Project documentation
├── pyproject.toml # Python project configuration
├── uv.lock # Dependency lock file
├── .env # Environment variables (create this file)
└── src/ # Source code directory
└── server.py # Main MCP server implementation
## What is MCP?

The Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to LLMs. Similar to how USB-C provides a standardized way to connect devices to peripherals, MCP provides a standardized way to connect AI models to different data sources and tools.
### MCP Architecture

MCP follows a client-server architecture:
- MCP Hosts: Programs like Claude Desktop, IDEs, or AI tools that want to access data through MCP
- MCP Clients: Protocol clients that maintain 1:1 connections with servers
- MCP Servers: Lightweight programs (like this one) that expose specific capabilities through the standardized protocol
- Data Sources: Your databases, files, or services that MCP servers can securely access
### Communication Flow

1. The host application (e.g., Claude Desktop) initializes a connection to our MCP server
2. The client discovers the available tools through the `tools/list` endpoint
3. When prompted by a user, the LLM can use our tools to execute BigQuery queries or retrieve client information
4. Our server executes the requested operations and returns results to the client
5. The client presents the results to the user within the host application
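The flow above can be sketched as JSON-RPC 2.0 messages. The tool name and arguments below match this server's `execute_bigquery` tool; the `id` values are arbitrary, and real transports (stdio or HTTP) also exchange an initialization handshake first:

```python
import json

# Step 2: the client asks the server which tools it exposes.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Step 3: the LLM decides to run a query, so the client calls the tool.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "execute_bigquery",
        "arguments": {"query": "SELECT 1 AS ok", "project_id": "insightsprod"},
    },
}

# Messages are serialized as JSON on the wire.
wire = json.dumps(call_request)
print(wire)
```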
## Key Features

- BigQuery Integration: Execute SQL queries against company BigQuery datasets
- Client Data Access: Retrieve client details and datasets from our data warehouse
- Extensible Architecture: Add new tools to support additional use cases
## Prerequisites

- Python 3.13 or higher
- Google Cloud account with BigQuery access
- Service account credentials with appropriate permissions
- ClickUp API key (for task integration)
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/saras-mcp.git
   cd saras-mcp
   ```

2. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows, use: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```
## Configuration

1. Set up Google Cloud credentials by either:

   - Setting the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to point to your service account key file:

     ```bash
     export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
     ```

   - Passing the service account path directly to the tools when calling them

2. Create a `.env` file in the root directory with the following variables:

   ```
   GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
   GOOGLE_PROJECT_ID=your-project-id
   CLICKUP_API_KEY=your-clickup-api-key
   ```

3. (Optional) Adjust the default project ID in the tool definitions if needed
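In practice the server would load these variables with a library such as python-dotenv; the stdlib-only parser below is just a sketch illustrating the `KEY=VALUE` format and checking that all three variables this README requires are present:

```python
REQUIRED_VARS = ["GOOGLE_APPLICATION_CREDENTIALS", "GOOGLE_PROJECT_ID", "CLICKUP_API_KEY"]

def load_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines (a tiny subset of what python-dotenv supports)."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """\
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
GOOGLE_PROJECT_ID=your-project-id
CLICKUP_API_KEY=your-clickup-api-key
"""

env = load_env(sample)
missing = [v for v in REQUIRED_VARS if v not in env]
assert not missing, f"missing variables: {missing}"
```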
## Usage

Start the MCP server in the inspector:

```bash
mcp dev src/server.py
```

This launches the server for interactive testing. MCP-compatible clients can then connect to this server.
### Testing with MCP Inspector

To test your server implementation:
- Install the MCP Inspector
- Connect to your running server
- Explore available tools and test their functionality
## Available Tools

### 1. execute_bigquery

Execute BigQuery SQL queries and receive results as structured data.

**Parameters:**

- `query` (string, required): The SQL query to execute
- `project_id` (string, optional): Google Cloud project ID (default: `"insightsprod"`)
- `service_account_path` (string, optional): Path to service account JSON credentials

**Tool Annotations:**

- Read-only: Yes (doesn't modify data)
- Open World: Yes (interacts with external BigQuery service)
### 2. get_client_details

Retrieve client information from our data warehouse.

**Parameters:**

- `client_id` (string, optional): Specific client ID to filter by
- `client_name` (string, optional): Client name to search for (supports partial matches)
- `project_id` (string, optional): Google Cloud project ID (default: `"insightsprod"`)
- `service_account_path` (string, optional): Path to service account JSON credentials

**Tool Annotations:**

- Read-only: Yes (doesn't modify data)
- Open World: No (operates on internal data warehouse)
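A rough sketch of how a tool like this might assemble its query. The dataset and column names below are hypothetical placeholders (the real schema lives in `src/server.py`); the point is that optional filters become BigQuery named query parameters rather than interpolated strings, which avoids SQL injection:

```python
def build_client_query(client_id=None, client_name=None, project_id="insightsprod"):
    """Assemble a parameterized client-lookup query.

    `master.clients` and its columns are hypothetical, not the real
    warehouse schema. Filters use named parameters (@client_id,
    @client_name) that would be bound via the BigQuery client library.
    """
    sql = f"SELECT client_id, client_name FROM `{project_id}.master.clients`"
    conditions, params = [], {}
    if client_id is not None:
        conditions.append("client_id = @client_id")
        params["client_id"] = client_id
    if client_name is not None:
        # Partial, case-insensitive match, per the tool description above.
        conditions.append("LOWER(client_name) LIKE LOWER(@client_name)")
        params["client_name"] = f"%{client_name}%"
    if conditions:
        sql += " WHERE " + " AND ".join(conditions)
    return sql, params

sql, params = build_client_query(client_name="Acme")
```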
### 3. get_client_datasets

Retrieve available datasets for a specific client.

**Parameters:**

- `client_id` (string, optional): Specific client ID to filter by
- `client_name` (string, optional): Client name to search for (supports partial matches)
- `project_id` (string, optional): Google Cloud project ID (default: `"insightsprod"`)
- `service_account_path` (string, optional): Path to service account JSON credentials

**Tool Annotations:**

- Read-only: Yes (doesn't modify data)
- Open World: No (operates on internal data warehouse)
### 4. get_dataset_tables

List all tables in a specific BigQuery dataset with their metadata.

**Parameters:**

- `dataset_id` (string, required): The ID of the BigQuery dataset to list tables from
- `project_id` (string, optional): Google Cloud project ID (default: from environment)
- `service_account_path` (string, optional): Path to service account JSON credentials

**Tool Annotations:**

- Read-only: Yes (doesn't modify data)
- Open World: No (operates on internal data warehouse)
### 5. get_clickup_task

Retrieve detailed information about a specific ClickUp task.

**Parameters:**

- `task_id` (string, required): The unique identifier of the ClickUp task
- `api_key` (string, optional): ClickUp API key for authentication (default: from environment)
- `include_subtasks` (boolean, optional): Whether to include subtask information
- `include_comments` (boolean, optional): Whether to include task comments

**Tool Annotations:**

- Read-only: Yes (doesn't modify data)
- Open World: Yes (interacts with external ClickUp API)
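As a sketch of the HTTP request this tool would issue: ClickUp's public v2 API exposes `GET /api/v2/task/{task_id}` with the API key in the `Authorization` header (check ClickUp's API docs for the current parameter set). The helper below only assembles the URL and headers and sends nothing; the task ID and key are made up:

```python
from urllib.parse import urlencode

def build_clickup_request(task_id: str, api_key: str, include_subtasks: bool = False):
    """Assemble the URL and headers for a ClickUp v2 task lookup.

    No request is sent here; a real implementation would pass these to an
    HTTP client such as requests or httpx. Comment retrieval (a separate
    endpoint in the v2 API) is omitted for brevity.
    """
    url = f"https://api.clickup.com/api/v2/task/{task_id}"
    query = {}
    if include_subtasks:
        query["include_subtasks"] = "true"
    if query:
        url += "?" + urlencode(query)
    headers = {"Authorization": api_key}
    return url, headers

url, headers = build_clickup_request("abc123", "pk_hypothetical_key", include_subtasks=True)
```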
## Extending the Server

### Adding New Tools

To add a new tool to the MCP server:

1. Add a new function to `server.py` decorated with `@mcp.tool()`
2. Define the parameters and return type for your function
3. Add comprehensive docstrings to document the tool's purpose and usage
4. Implement error handling for a robust user experience
Example:

```python
@mcp.tool()
def my_new_tool(param1: str, param2: int = 0) -> dict:
    """Description of what the tool does.

    Args:
        param1: Description of param1
        param2: (Optional) Description of param2

    Returns:
        Dictionary containing the results or error information
    """
    try:
        # Implementation
        return {"success": True, "results": [...]}
    except Exception as e:
        return {
            "success": False,
            "error": "Error Type",
            "message": str(e),
            "code": 500,
        }
```
### Proper Error Handling

For tools that might encounter errors:
- Use the appropriate error structure
- Return specific error codes when possible
- Provide meaningful error messages
Example (inside a tool function body; `NotFound` comes from `google.api_core.exceptions`):

```python
try:
    # Tool operation
    result = perform_operation()
    return {"success": True, "results": result}
except NotFound as e:
    return {
        "success": False,
        "error": "Not Found",
        "message": str(e),
        "code": 404,
    }
except Exception as e:
    return {
        "success": False,
        "error": "Execution Error",
        "message": str(e),
        "code": 500,
    }
```
### Tool Annotations

When defining tools, consider adding annotations to help clients understand the tool's behavior:

- `readOnlyHint`: Indicates that the tool does not modify its environment
- `destructiveHint`: Indicates if the tool may perform destructive operations
- `idempotentHint`: Indicates if repeated calls with the same arguments have no additional effect
- `openWorldHint`: Indicates if the tool interacts with external entities
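For instance, the annotations this README lists for `execute_bigquery` could be expressed as the values below. How they are attached to a tool depends on the MCP SDK version in use, so treat this dict as illustrative rather than the SDK's API; the `idempotentHint` value is an assumption based on the tool being a read-only query:

```python
# Illustrative annotation values for execute_bigquery, matching the
# "Tool Annotations" listed for it above. Consult the MCP SDK docs for
# the actual mechanism used to attach these to a tool definition.
execute_bigquery_annotations = {
    "readOnlyHint": True,     # SELECT queries don't modify data
    "destructiveHint": False,
    "idempotentHint": True,   # assumption: re-running a query has no extra effect
    "openWorldHint": True,    # talks to the external BigQuery service
}
```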
## Security Considerations

When developing MCP servers, follow these security best practices:

1. **Input Validation**
   - Validate all parameters against their schemas
   - Sanitize SQL queries to prevent injection attacks
   - Check parameter sizes and ranges

2. **Access Control**
   - Implement appropriate authentication when needed
   - Use proper authorization for accessing sensitive data
   - Consider rate limiting for resource-intensive operations

3. **Error Handling**
   - Don't expose internal errors to clients
   - Log security-relevant errors
   - Clean up resources appropriately after errors
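A minimal sketch of the input-validation point, assuming only read queries should reach BigQuery. A keyword check like this is a naive defence-in-depth measure, not a SQL parser; the service account's IAM permissions remain the real access control. The table name in the usage line is hypothetical:

```python
import re

WRITE_KEYWORDS = re.compile(
    r"\b(INSERT|UPDATE|DELETE|MERGE|DROP|CREATE|ALTER|TRUNCATE|GRANT)\b",
    re.IGNORECASE,
)

def validate_read_only_query(query: str, max_length: int = 10_000) -> None:
    """Reject obviously mutating or oversized queries before they reach BigQuery.

    Raises ValueError on violation; returns None if the query looks safe.
    """
    if len(query) > max_length:
        raise ValueError("query exceeds maximum allowed length")
    if WRITE_KEYWORDS.search(query):
        raise ValueError("only read-only queries are allowed")

# Hypothetical table name, for illustration only:
validate_read_only_query("SELECT client_id FROM `insightsprod.master.clients`")
```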
## MCP Protocol Resources
## Contributing

- Create a new branch for your feature or bugfix
- Add appropriate tests for your changes
- Submit a pull request with a clear description of the changes
## License

[Your License Here]