VictoriaMetrics MCP Server
The implementation of Model Context Protocol (MCP) server for VictoriaMetrics.
It provides access to your VictoriaMetrics instance and seamless integration with VictoriaMetrics APIs and documentation. It offers a comprehensive interface for monitoring, observability, and debugging tasks related to your VictoriaMetrics instances, and enables advanced automation and interaction capabilities for engineers and tools.
Features
This MCP server allows you to use almost all read-only APIs of VictoriaMetrics, i.e. all functions available in VMUI:
- Querying metrics and exploring data (even drawing graphs if your client supports it)
- Listing and exporting available metrics, labels, label values and entire series
- Analyzing and testing your alerting and recording rules and alerts
- Showing parameters of your VictoriaMetrics instance
- Exploring cardinality of your data and metrics usage statistics
- Analyzing, tracing, prettifying and explaining your queries
- Debugging your relabeling rules, downsampling and retention policy configurations
- Integration with VictoriaMetrics Cloud
In addition, the MCP server contains embedded up-to-date documentation and is able to search it without online access.
More details about the exact available tools and prompts can be found in the Usage section.
You can combine the functionality of tools and documentation search in your prompts and invent great usage scenarios for your VictoriaMetrics instance. Check the Dialog example section to see how this can work. Please note that the quality of the MCP server and its responses depends heavily on the capabilities of your client and the quality of the model you are using.
You can also combine the MCP server with other observability or doc search related MCP Servers and get even more powerful results.
Try without installation
There is a publicly available instance of the VictoriaMetrics MCP Server that you can use to test the features without installing it:
https://play-mcp.victoriametrics.com/mcp
Attention: this URL is not meant to be opened in a browser; it is intended for use in MCP clients.
It's available in Streamable HTTP mode and configured to work with Public VictoriaMetrics Playground.
Here is an example of a configuration for Claude Desktop:
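A minimal Claude Desktop configuration for this public endpoint might look like the sketch below. It assumes the third-party mcp-remote bridge (run via npx) to connect Claude Desktop to a remote Streamable HTTP server; the server name victoriametrics-play is arbitrary:

```json
{
  "mcpServers": {
    "victoriametrics-play": {
      "command": "npx",
      "args": ["mcp-remote", "https://play-mcp.victoriametrics.com/mcp"]
    }
  }
}
```

If your client supports remote MCP servers natively, you can point it directly at the URL instead of using a bridge.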
Requirements
- VictoriaMetrics or VictoriaMetrics Cloud instance (single-node or cluster)
- Go 1.24 or higher (if you want to build from source)
Installation
Go
go install github.com/VictoriaMetrics-Community/mcp-victoriametrics/cmd/mcp-victoriametrics@latest
Binaries
Just download the latest release from the Releases page and put it in your PATH.
Example for Linux x86_64 (note that other architectures and platforms are also available):
latest=$(curl -s https://api.github.com/repos/VictoriaMetrics-Community/mcp-victoriametrics/releases/latest | grep 'tag_name' | cut -d\" -f4)
wget https://github.com/VictoriaMetrics-Community/mcp-victoriametrics/releases/download/$latest/mcp-victoriametrics_Linux_x86_64.tar.gz
tar axvf mcp-victoriametrics_Linux_x86_64.tar.gz
Docker
You can run VictoriaMetrics MCP Server using Docker.
This is the easiest way to get started without needing to install Go or build from source.
docker run -d --name mcp-victoriametrics \
-e VM_INSTANCE_ENTRYPOINT=https://play.victoriametrics.com \
-e VM_INSTANCE_TYPE=cluster \
-e MCP_SERVER_MODE=sse \
-e MCP_LISTEN_ADDR=:8080 \
-p 8080:8080 \
ghcr.io/victoriametrics-community/mcp-victoriametrics
You should replace environment variables with your own parameters.
Note that MCP_SERVER_MODE=sse is used here to enable SSE mode; set MCP_SERVER_MODE=http to enable Streamable HTTP mode instead.
More details about server modes can be found in the Configuration section.
See the available Docker images in the GitHub registry.
Also see the Using Docker instead of binary section for more details about using Docker with the MCP server for clients in stdio mode.
Source Code
To build the binary from the source code, use the following approach:
- Clone the repo:

```bash
git clone https://github.com/VictoriaMetrics-Community/mcp-victoriametrics.git
cd mcp-victoriametrics
```

- Build the binary from the cloned source code:

```bash
make build
# after that you can find the mcp-victoriametrics binary; copy it to your PATH or run it in place
```

- Build a Docker image from the cloned source code:

```bash
docker build -t mcp-victoriametrics .
# after that you can use the mcp-victoriametrics Docker image for running or pushing
```
Configuration
MCP Server for VictoriaMetrics is configured via environment variables:
| Variable | Description | Required | Default | Allowed values |
|---|---|---|---|---|
| `VM_INSTANCE_ENTRYPOINT` | URL of the VictoriaMetrics instance (the root URL of vmsingle or vmselect) | Yes (unless `VMC_API_KEY` is used) | - | - |
| `VM_INSTANCE_TYPE` | Type of VictoriaMetrics instance | Yes (unless `VMC_API_KEY` is used) | - | `single`, `cluster` |
| `VM_INSTANCE_BEARER_TOKEN` | Authentication token for the VictoriaMetrics API | No | - | - |
| `VM_INSTANCE_HEADERS` | Custom HTTP headers to send with requests (comma-separated `key=value` pairs) | No | - | - |
| `VM_DEFAULT_TENANT_ID` | Default tenant ID for cluster mode. Format: `accountID` or `accountID:projectID` (32-bit integers). See the VictoriaMetrics cluster docs | No | `0` | - |
| `VMC_API_KEY` | API key from the VictoriaMetrics Cloud Console | No | - | - |
| `MCP_SERVER_MODE` | Server operation mode. See Modes for details. | No | `stdio` | `stdio`, `sse`, `http` |
| `MCP_LISTEN_ADDR` | Address for the SSE or HTTP server to listen on | No | `localhost:8080` | - |
| `MCP_DISABLED_TOOLS` | Comma-separated list of tools to disable | No | `export,flags,metric_relabel_debug,downsampling_filters_debug,retention_filters_debug,test_rules` | - |
| `MCP_DISABLE_RESOURCES` | Disable all resources (the documentation tool will continue to work) | No | `false` | `false`, `true` |
| `MCP_HEARTBEAT_INTERVAL` | Heartbeat interval for the streamable-http protocol. The MCP server sends a heartbeat to the client through the GET connection to keep it from being closed by network infrastructure (e.g. gateways) | No | `30s` | - |
| `MCP_LOG_FORMAT` | Log output format | No | `text` | `text`, `json` |
| `MCP_LOG_LEVEL` | Minimum log level | No | `info` | `debug`, `info`, `warn`, `error` |
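As a rough illustration of the `VM_DEFAULT_TENANT_ID` format from the table above, here is a small shell sketch. It mirrors only the documented `accountID` or `accountID:projectID` shape (non-negative integers); it is not the server's own validation code:

```shell
# Validate the documented tenant ID shape: accountID or accountID:projectID.
is_valid_tenant_id() {
  printf '%s' "$1" | grep -Eq '^[0-9]+(:[0-9]+)?$'
}

is_valid_tenant_id "0"    && echo "0 is a valid tenant ID"      # default tenant
is_valid_tenant_id "42:7" && echo "42:7 is a valid tenant ID"   # account 42, project 7
is_valid_tenant_id "a:b"  || echo "a:b is rejected"             # not numeric
```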
You can use two options to connect to your VictoriaMetrics instance:

- Using the `VM_INSTANCE_ENTRYPOINT` + `VM_INSTANCE_TYPE` + `VM_INSTANCE_BEARER_TOKEN` (optional) environment variables to connect to any single-node or cluster instance of VictoriaMetrics.
- Using the `VMC_API_KEY` environment variable to work with your VictoriaMetrics Cloud instances.
Modes
MCP Server supports the following modes of operation (transports):

- `stdio` - Standard input/output mode: the server reads commands from standard input and writes responses to standard output. This is the default mode and is suitable for local clients.
- `sse` - Server-Sent Events: the server exposes the `/sse` and `/message` endpoints for SSE connections.
- `http` - Streamable HTTP: the server exposes the `/mcp` endpoint for HTTP connections.

More info about transports can be found in the MCP docs:
Configuration examples
# For a single-node instance
export VM_INSTANCE_ENTRYPOINT="http://localhost:8428"
export VM_INSTANCE_TYPE="single"
export VM_INSTANCE_BEARER_TOKEN="your-token"
# For a cluster
export VM_INSTANCE_ENTRYPOINT="https://play.victoriametrics.com"
export VM_INSTANCE_TYPE="cluster"
export MCP_DISABLED_TOOLS="export,metric_statistics,test_rules" # disable export, statistics and rules unit test tools
# For VictoriaMetrics Cloud
export VMC_API_KEY="<you-api-key>"
# Server mode
export MCP_SERVER_MODE="sse"
export MCP_LISTEN_ADDR="0.0.0.0:8080"
# Custom headers for authentication (e.g., behind a reverse proxy)
# Expected syntax is key=value separated by commas
export VM_INSTANCE_HEADERS="<HEADER>=<HEADER_VALUE>,<HEADER>=<HEADER_VALUE>"
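The `VM_INSTANCE_HEADERS` value above is a comma-separated list of `key=value` pairs. The sketch below shows how such a string splits into individual headers; it only illustrates the documented format, not the server's actual parser:

```shell
# Split a VM_INSTANCE_HEADERS-style string into individual key/value pairs.
VM_INSTANCE_HEADERS="X-Scope-OrgID=42,X-Api-Key=secret"
IFS=',' read -ra pairs <<< "$VM_INSTANCE_HEADERS"
for pair in "${pairs[@]}"; do
  key="${pair%%=*}"    # everything before the first '='
  value="${pair#*=}"   # everything after the first '='
  echo "$key: $value"
done
```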
Endpoints
In SSE and HTTP modes the MCP server provides the following endpoints:
| Endpoint | Description |
|---|---|
| `/sse` + `/message` | Endpoints for messages in SSE mode (for MCP clients that support SSE) |
| `/mcp` | HTTP endpoint for streaming messages in HTTP mode (for MCP clients that support Streamable HTTP) |
| `/metrics` | Metrics in Prometheus format for monitoring the MCP server |
| `/health/liveness` | Liveness check endpoint to ensure the server is running |
| `/health/readiness` | Readiness check endpoint to ensure the server is ready to accept requests |
Setup in clients
Cursor
Go to: Settings -> Cursor Settings -> MCP -> Add new global MCP server and paste the following configuration into your Cursor ~/.cursor/mcp.json file:
{
"mcpServers": {
"victoriametrics": {
"command": "/path/to/mcp-victoriametrics",
"env": {
"VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
"VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
"VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>",
"VM_INSTANCE_HEADERS": "<HEADER>=<HEADER_VALUE>,<HEADER>=<HEADER_VALUE>"
}
}
}
}
See Cursor MCP docs for more info.
Claude Desktop
Add this to your Claude Desktop claude_desktop_config.json file (you can find it by opening Settings -> Developer -> Edit config):
{
"mcpServers": {
"victoriametrics": {
"command": "/path/to/mcp-victoriametrics",
"env": {
"VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
"VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
"VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>",
"VM_INSTANCE_HEADERS": "<HEADER>=<HEADER_VALUE>,<HEADER>=<HEADER_VALUE>"
}
}
}
}
See Claude Desktop MCP docs for more info.
Claude Code
Run the command:
claude mcp add victoriametrics \
  -e VM_INSTANCE_ENTRYPOINT=<YOUR_VM_INSTANCE> \
  -e VM_INSTANCE_TYPE=<YOUR_VM_INSTANCE_TYPE> \
  -e VM_INSTANCE_BEARER_TOKEN=<YOUR_VM_BEARER_TOKEN> \
  -e VM_INSTANCE_HEADERS="<HEADER>=<HEADER_VALUE>,<HEADER>=<HEADER_VALUE>" \
  -- /path/to/mcp-victoriametrics
See Claude Code MCP docs for more info.
Visual Studio Code
Add this to your VS Code MCP config file:
{
"servers": {
"victoriametrics": {
"type": "stdio",
"command": "/path/to/mcp-victoriametrics",
"env": {
"VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
"VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
"VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>",
"VM_INSTANCE_HEADERS": "<HEADER>=<HEADER_VALUE>,<HEADER>=<HEADER_VALUE>"
}
}
}
}
See VS Code MCP docs for more info.
Zed
Add the following to your Zed config file:
"context_servers": {
"victoriametrics": {
"command": {
"path": "/path/to/mcp-victoriametrics",
"args": [],
"env": {
"VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
"VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
"VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>",
"VM_INSTANCE_HEADERS": "<HEADER>=<HEADER_VALUE>,<HEADER>=<HEADER_VALUE>"
}
},
"settings": {}
}
}
See Zed MCP docs for more info.
JetBrains IDEs
- Open `Settings` -> `Tools` -> `AI Assistant` -> `Model Context Protocol (MCP)`.
- Click `Add (+)`.
- Select `As JSON`.
- Paste the following into the input field:
{
"mcpServers": {
"victoriametrics": {
"command": "/path/to/mcp-victoriametrics",
"env": {
"VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
"VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
"VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>",
"VM_INSTANCE_HEADERS": "<HEADER>=<HEADER_VALUE>,<HEADER>=<HEADER_VALUE>"
}
}
}
}
Windsurf
Add the following to your Windsurf MCP config file:
{
"mcpServers": {
"victoriametrics": {
"command": "/path/to/mcp-victoriametrics",
"env": {
"VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
"VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
"VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>",
"VM_INSTANCE_HEADERS": "<HEADER>=<HEADER_VALUE>,<HEADER>=<HEADER_VALUE>"
}
}
}
}
See Windsurf MCP docs for more info.
Using Docker instead of binary
You can run the VictoriaMetrics MCP server using Docker instead of a local binary.
Replace the command in the configuration examples above as follows:
{
"mcpServers": {
"victoriametrics": {
"command": "docker",
"args": [
"run",
"-i", "--rm",
"-e", "VM_INSTANCE_ENTRYPOINT",
"-e", "VM_INSTANCE_TYPE",
"-e", "VM_INSTANCE_BEARER_TOKEN",
"-e", "VM_INSTANCE_HEADERS",
"ghcr.io/victoriametrics-community/mcp-victoriametrics"
],
"env": {
"VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
"VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
"VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>",
"VM_INSTANCE_HEADERS": "<HEADER>=<HEADER_VALUE>,<HEADER>=<HEADER_VALUE>"
}
}
}
}
Usage
After installing and configuring the MCP server, you can start using it with your favorite MCP client.
You can start a dialog with the AI assistant with the phrase:
Use MCP VictoriaMetrics in the following answers
But this is not required; you can just start asking questions, and the assistant will automatically use the tools and documentation to provide the best answers. Take a look at the Dialog example section to better understand what you can do with it.
Toolset
MCP VictoriaMetrics provides numerous tools for interacting with your VictoriaMetrics instance.
Here's a list of common available tools:
| Tool | Description | Enabled by default |
|---|---|---|
| `query` | Execute instant PromQL/MetricsQL queries | Yes |
| `query_range` | Execute range PromQL/MetricsQL queries over a time period | Yes |
| `metrics` | List available metrics | Yes |
| `metrics_metadata` | Stored metrics metadata (type, help and unit) | Yes |
| `labels` | List available label names | Yes |
| `label_values` | List values for a specific label | Yes |
| `series` | List available time series | Yes |
| `export` | Export raw time series data to JSON or CSV | No |
| `rules` | View alerting and recording rules | Yes |
| `alerts` | View current alerts (firing and pending) | Yes |
| `flags` | View non-default flags of the VictoriaMetrics instance | No |
| `metric_statistics` | Get metrics usage (in queries) statistics | Yes |
| `active_queries` | View currently executing queries | Yes |
| `top_queries` | View most frequent or slowest queries | Yes |
| `tsdb_status` | View TSDB cardinality statistics | Yes |
| `tenants` | List available tenants in a multi-tenant cluster setup | Yes |
| `documentation` | Search the embedded VictoriaMetrics documentation | Yes |
| `metric_relabel_debug` | Debug Prometheus-compatible relabeling rules | No |
| `downsampling_filters_debug` | Debug downsampling configuration | No |
| `retention_filters_debug` | Debug retention filters configuration | No |
| `prettify_query` | Prettify and format PromQL/MetricsQL queries | Yes |
| `explain_query` | Parse PromQL/MetricsQL queries and explain how they work | Yes |
| `test_rules` | Unit-test alerting and recording rules using the vmalert tool | No |
Here are some additional tools that are available for VictoriaMetrics Cloud users (when the VMC_API_KEY parameter is specified):
| Tool | Description | Enabled by default |
|---|---|---|
| `deployments` | List available deployments in VictoriaMetrics Cloud | Yes |
| `cloud_providers` | List available cloud providers in VictoriaMetrics Cloud | Yes |
| `regions` | List available cloud provider regions in VictoriaMetrics Cloud | Yes |
| `tiers` | List available deployment tiers in VictoriaMetrics Cloud | Yes |
| `access_tokens` | List available deployment access tokens in VictoriaMetrics Cloud | Yes |
| `rule_filenames` | List available alerting and recording rule filenames in VictoriaMetrics Cloud | Yes |
| `rule_file` | Content of a specific alerting or recording rule file in VictoriaMetrics Cloud | Yes |
Prompts
The server includes pre-defined prompts for common tasks.
These are just examples at the moment; the prompt library will be expanded in the future:

| Prompt | Description |
|---|---|
| `unused_metrics` | Find metrics that are never queried and create relabel configs to stop ingesting them |
| `documentation` | Search VictoriaMetrics documentation for specific topics |
| `rarely_used_metrics_with_high_cardinality` | Identify high-cardinality metrics that are rarely queried |
Dialog example
This is just an example of a dialog with an assistant using the MCP server, to help you understand how it can work (it is a real dialog with the VictoriaMetrics MCP server, using Claude Desktop configured against the public playground instance):
- You: What tenants are available in my VM?
- Assistant: In your VictoriaMetrics instance, the following tenants are available: `0:0` (default tenant), `1:0`, `10:0`, `100:0`.
- You: From now on, please use only the default tenant.
- Assistant: I understand. From now on, I'll use only the default tenant (0:0) for all operations with your VictoriaMetrics instance.
- You: Which metrics do we have?
- Assistant: There are many metrics available in your VictoriaMetrics instance. The list includes metrics from various systems and components, such as:
  - System metrics (`node_`, `machine_`, etc.)
  - Kubernetes metrics (`kube_`, `container_`, `apiserver_`, etc.)
  - VictoriaMetrics-specific metrics (`vm_`, `vmagent_`, `vmalert_`, etc.)
  - Go runtime metrics (`go_`)
  - Prometheus-related metrics (`scrape_`, `promhttp_`)
  - Various application metrics (`argocd_`, `alertmanager_`)

  This is a comprehensive list of over 2000 metrics that are available for querying. Is there a specific type of metric you're interested in exploring further?
- You: Ok, let's work with Kubernetes metrics from now on. Please show me which of these metrics are never queried?
- Assistant: Based on the results, I can see that there are two Kubernetes metrics that have never been queried (query count is 0):
  - `kube_persistentvolumeclaim_deletion_timestamp` - provides information about when PersistentVolumeClaims were marked for deletion.
  - `kube_pod_init_container_status_last_terminated_reason` - shows the reason why init containers in pods were last terminated.

  These metrics might be valuable for monitoring and troubleshooting purposes:
  - The PVC deletion timestamp metric could help track deletion patterns or identify stuck PVCs.
  - The init container termination reason metric could help identify common failure patterns in your initialization containers.

  Would you like me to explain more about what these metrics track or suggest some useful queries to start monitoring them?
- You: Yes, please. And also, can you create relabeling rules to stop ingesting these metrics?
- Assistant: I'll create a relabel configuration to exclude these never-queried metrics from collection. This will help reduce storage and processing overhead. Here's the relabel configuration you can add to your VictoriaMetrics scrape configuration:

  ```yaml
  relabel_configs:
    - action: drop
      source_labels: [__name__]
      regex: kube_persistentvolumeclaim_deletion_timestamp|kube_pod_init_container_status_last_terminated_reason
  ```

  This configuration will:
  1. Look at each metric's name (the `__name__` label)
  2. Check if it matches either of the specified metrics
  3. Drop the metrics that match, preventing them from being collected and stored

  You can add this to your VictoriaMetrics configuration file or to the relevant scrape job configurations in your Prometheus configuration if you're using Prometheus as a scraper for VictoriaMetrics. If you want to test this relabel configuration before implementing it, you can use the metric relabel debug tool to see how it would affect your metrics.
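The drop action in the dialog above effectively filters out series whose metric name matches the regex. A rough shell analogy of that effect on a hypothetical sample of metric names (this is not the actual Prometheus relabeling engine, only an illustration):

```shell
# Keep only metric names that do NOT match the drop regex,
# mimicking the effect of `action: drop` on the __name__ label.
printf '%s\n' \
  kube_persistentvolumeclaim_deletion_timestamp \
  kube_pod_init_container_status_last_terminated_reason \
  kube_pod_status_phase |
grep -Ev '^(kube_persistentvolumeclaim_deletion_timestamp|kube_pod_init_container_status_last_terminated_reason)$'
# only kube_pod_status_phase survives the filter
```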
- You: Yes, please debug this relabel config on real time series to see how it would affect my metrics.
- Assistant: Perfect! The relabel config works as expected:
  - The first two test cases show that both `kube_persistentvolumeclaim_deletion_timestamp` and `kube_pod_init_container_status_last_terminated_reason` metrics are correctly dropped (resulting labels are empty `{}`).
  - The third test case confirms that other metrics like `kube_pod_status_phase` are preserved and not affected by our drop rule.

  You can confidently add this relabel configuration to your VictoriaMetrics or Prometheus configuration to stop collecting these unused metrics. This will save storage space and reduce processing overhead without affecting your current monitoring capabilities.
- You: But I'm using the vmagent resource of the VictoriaMetrics operator, could you please give me the part of the specification for my agent with this relabel config?
- Assistant: You're right, since you're using the VictoriaMetrics Operator and the VMAgent custom resource, the configuration needs to be in the appropriate format for that CR. Here's how you can add the relabel configuration to your VMAgent specification:

  ```yaml
  apiVersion: operator.victoriametrics.com/v1beta1
  kind: VMAgent
  metadata:
    name: your-vmagent-name
  spec:
    # ... your existing configuration ...

    # Add this section to configure global metric relabeling
    inlineRelabelConfig:
      - action: drop
        sourceLabels: ["__name__"]
        regex: kube_persistentvolumeclaim_deletion_timestamp|kube_pod_init_container_status_last_terminated_reason

    # ... rest of your configuration ...
  ```

  Key points:
  - In the VMAgent spec, you use `inlineRelabelConfig` for inline relabeling rules
  - The field is called `sourceLabels` (not `source_labels` as in Prometheus config)
  - This is a global relabeling that will be applied to all metrics before they're stored
- You: I decided not to remove those metrics, but just lower the sample rate for them using downsampling. What downsampling parameters do I have now?
- Assistant: I can see that you currently have the following downsampling configuration:

  `-downsampling.period="7d:1m,30d:5m,180d:30m"`

  This means:
  - Data newer than 7 days is kept at its original resolution
  - Data between 7 and 30 days old is downsampled to 1-minute resolution
  - Data between 30 and 180 days old is downsampled to 5-minute resolution
  - Data older than 180 days is downsampled to 30-minute resolution

  Now, if you want to add specific downsampling rules for those rarely-queried metrics, you can use the downsampling filters debug tool to test how new rules would work. Let's test a configuration where we apply more aggressive downsampling to those specific metrics:

  ...getting series and using the downsampling filter debugger...

  Great! Now our test is working correctly. Based on the current configuration and our testing, here's a suggested downsampling configuration for your VMCluster CR that would apply more aggressive downsampling to those rarely-queried metrics:

  ```yaml
  apiVersion: operator.victoriametrics.com/v1beta1
  kind: VMCluster
  metadata:
    name: monitoring-vm-cluster
  spec:
    # ... your existing configuration ...

    # Update the vmstorage section with the new downsampling parameters
    vmstorage:
      extraArgs:
        downsampling.period: '{__name__="kube_persistentvolumeclaim_deletion_timestamp"}:1d:10m,{__name__="kube_pod_init_container_status_last_terminated_reason"}:1d:10m,7d:1m,30d:5m,180d:30m'

    # ... rest of your configuration ...
  ```

  This configuration:
  - Applies 10-minute resolution downsampling for the rarely-queried metrics after just 1 day
  - Maintains your existing downsampling policy for all other metrics

  The order of the downsampling rules is important: more specific rules should be listed first.
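The tiering implied by `-downsampling.period="7d:1m,30d:5m,180d:30m"` can be sketched as a simple age-to-resolution lookup. This is an illustration only; the real selection happens inside vmstorage:

```shell
# Given a sample age in days, report the resolution tier it falls into
# under -downsampling.period="7d:1m,30d:5m,180d:30m".
resolution_for_age_days() {
  local age="$1"
  if   [ "$age" -le 7 ];   then echo "raw"   # newest data keeps its original resolution
  elif [ "$age" -le 30 ];  then echo "1m"    # 7-30 days old
  elif [ "$age" -le 180 ]; then echo "5m"    # 30-180 days old
  else                          echo "30m"   # older than 180 days
  fi
}

resolution_for_age_days 3     # within the first 7 days -> raw
resolution_for_age_days 45    # between 30 and 180 days -> 5m
resolution_for_age_days 365   # older than 180 days -> 30m
```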
During this dialog, the assistant was using the following tools:
- `tenants` to get the list of available tenants
- `documentation` to get information about functionality and data formats
- `metrics` to get the list of available metrics
- `metric_statistics` to get information about metrics usage
- `series` to get the time series for debugging
- `metric_relabel_debug` to debug relabeling rules
- `flags` to get information about instance parameters
- `downsampling_filters_debug` to debug downsampling configuration
But you can use any other tools and combine them in your own way.
Monitoring
In SSE and HTTP modes the MCP server provides metrics in Prometheus format (see Endpoints), and a simple Grafana dashboard for these metrics can be found in the repo.
Roadmap
- [x] Support "Prettify query" tool (done in v0.0.5)
- [x] Support "Explain query" tool (done in v0.0.6)
- [x] Support CI pipeline for building and pushing multiarch docker images (done in v1.0.0)
- [ ] Support tool for analysis of query execution statistics
- [ ] Support vmanomaly
- [x] Support tool for unit-testing of alerting and recording rules (done in v0.0.7)
- [x] Support optional integration with VictoriaMetrics Cloud (via API keys) (done in v0.0.9)
- [ ] Add some extra knowledge to the server in addition to the current documentation tool:
  - [x] VictoriaMetrics blog posts (done in v1.1.0)
  - [ ] GitHub issues
  - [ ] Public Slack chat history
  - [ ] CRD schemas
  - [ ] Alerting and recording rule sets
- [ ] Implement multitenant version of MCP (that will support several deployments)
- [ ] Add flags/configs validation tool
- [ ] Support tools for vmagent API
- [ ] Support new vmalert API
- [x] Enabling/disabling tools via configuration (done in v0.0.8)
- [ ] Tools for Alertmanager APIs #6
- [ ] Support for metrics metadata in case of implementation in VictoriaMetrics
- [ ] Support authentication
- [ ] Add static index page with description and links to documentation
Mentions
Disclaimer
AI services and agents along with MCP servers like this cannot guarantee the accuracy, completeness and reliability of results. You should double check the results obtained with AI.
The quality of the MCP Server and its responses depends very much on the capabilities of your client and the quality of the model you are using.
Contributing
Contributions to the MCP VictoriaMetrics project are welcome!
Please feel free to submit issues, feature requests, or pull requests.
Security Review
Integration: VictoriaMetrics
Repository: https://github.com/VictoriaMetrics-Community/mcp-victoriametrics
Commit: latest
Scan Date: 2026-03-15 04:37 UTC
Security Score
80 / 100
Tier Classification
Silver
OWASP Alignment
OWASP Rubric
- Standard: OWASP Top 10 (2021) aligned review
- Core methodology: architecture context, trust boundaries, data-flow tracing, threat modeling, control verification, and evidence-backed validation
- Key characteristics considered: exploitability, impact, likelihood, attacker preconditions, and business context
OWASP Security Category Mapping
- A01 Broken Access Control: none
- A02 Cryptographic Failures: none
- A03 Injection: none
- A04 Insecure Design: none
- A05 Security Misconfiguration: none
- A06 Vulnerable and Outdated Components: none
- A07 Identification and Authentication Failures: none
- A08 Software and Data Integrity Failures: none
- A09 Security Logging and Monitoring Failures: none
- A10 Server-Side Request Forgery: none
Static Analysis Findings (Bandit)
High Severity
None
Medium Severity
None
Low Severity
None
Build Status
SKIPPED
Build step was skipped to avoid running untrusted build commands by default.
Tests
Not detected
Documentation
README: Present
Dependency file: Present
AI Security Review
Security Code Review: VictoriaMetrics (mcp-victoriametrics)
Reviewer: automated OWASP-aligned security code review
Date: 2026-03-15
Summary: I reviewed the MCP (Model Context Protocol) integration for VictoriaMetrics (cmd/mcp-victoriametrics) using an OWASP methodology: I inspected the main entrypoint, configuration parsing, HTTP client usage, tool implementations, logging/hooks, and utility helpers. I traced data flows from incoming MCP tool calls into HTTP requests to the VictoriaMetrics instance or VictoriaMetrics Cloud API, and I inspected how secrets and responses are handled and logged. Static scans included the vendorized client and mcp-go server code where relevant.
Contents
1. OWASP Review Methodology Applied
2. Findings mapped to OWASP Top 10 2021 categories
3. Critical vulnerabilities
4. High severity issues
5. Medium severity issues
6. Low severity issues
7. Key risk characteristics for each finding
8. Positive security practices observed
9. Concrete recommendations (file:line pointers where applicable)
10. Next tier upgrade plan (current tier and actions to reach next tier)
1) OWASP Review Methodology Applied
- Orientation: read cmd/mcp-victoriametrics/main.go and config/config.go to understand runtime modes and configuration sources (env vars). Reviewed prioritized tool files (cmd/mcp-victoriametrics/tools/*), utils, logging, hooks, and vendor client for cloud API.
- Entry points: main.go initializes MCP server and registers tools & prompts. Tools expose many HTTP-based operations which build internal HTTP requests to the configured VictoriaMetrics entrypoint or cloud API.
- Trust boundaries & inputs: MCP client-supplied tool arguments (mcp.CallToolRequest) are untrusted. Environment variables and cloud API responses are trusted at different levels (owner-provided config vs. remote API). I traced how tool args are used to build URLs, query params, and how request/response bodies and secrets flow into logs and caches.
- Data flow tracing: Followed tool handlers -> CreateSelectRequest/CreateAdminRequest -> getSelectURL/getRootURL -> getBearerToken -> http.NewRequestWithContext -> http.DefaultClient.Do -> response processing (GetTextBodyForRequest) and finally returning mcp.CallToolResult back to the client.
- Threat modeling: focused on SSRF, DoS/resource exhaustion, exposure of secrets in logs/responses, unbounded memory use, panics/DoS via slicing/indexing, insecure header handling, and dependency risk.
- Verification: inspected concrete source lines in files listed below and reproduced reasoning from code constructs to derive findings.
2) OWASP Top 10 2021 Category Mapping
- A01 Broken Access Control: Not observed as an immediate remote bypass, but caution recommended when returning cloud resources that may contain sensitive data (see logging and tool outputs).
- A02 Cryptographic Failures: Not directly observed (no custom crypto), but sensitive tokens are handled in memory and could be leaked (see A09/A06 overlaps).
- A03 Injection: Not applicable for traditional SQL/OS injection; no shell execution in primary integration code.
- A04 Insecure Design: Some design choices permit large responses to be fully read into memory and full request/response objects to be logged (design-level issues causing DoS and secret exposure).
- A05 Security Misconfiguration: Use of custom headers that may override Authorization header; lack of explicit maximum body size handling.
- A06 Vulnerable and Outdated Components: Dependency list present (go.mod); recommend regular vulnerability scanning and dependency updates.
- A07 Identification and Authentication Failures: Use of bearer tokens and cloud API keys is correct but needs careful handling to avoid exposure in logs and results.
- A08 Software and Data Integrity Failures: Not observed.
- A09 Security Logging and Monitoring Failures: Logging hooks log request and result payloads (could leak secrets and large payloads). Also lack of redaction/size limits.
- A10 Server-Side Request Forgery (SSRF): Low/medium risk due to building HTTP requests from a combination of config and user-supplied path/query values; entrypoint is primarily configured but some user-supplied path segments are appended.
3) Critical Vulnerabilities (RCE, unsafe deserialization, auth bypass)
- None discovered in the examined code.
4) High Severity Issues
1. Excessive/unbounded logging of request and result payloads (sensitive data leakage, information exposure)
- Files/locations:
- cmd/mcp-victoriametrics/hooks/hooks.go - AddBeforeAny logs the full request message: slog.Info("MCP request received", ..., "message", toJSON(message))
(approx file lines around the hooks.AddBeforeAny definition)
- cmd/mcp-victoriametrics/hooks/hooks.go - AddOnSuccess logs the full result: slog.Info("MCP request succeeded", ..., "result", toJSON(result))
- cmd/mcp-victoriametrics/hooks/hooks.go - AddOnError logs the error and message: slog.Error(..., "message", toJSON(message), "error", err.Error())
- Severity: High (A09 - Security Logging and Monitoring Failures)
- Why: toJSON() marshals the entire request/result object. These objects can include tool arguments and returned API payloads that may contain sensitive data (e.g., rule files, tokens revealed by cloud APIs, tenant identifiers, secrets embedded in config responses). Storing them in logs unredacted can leak secrets to any system consuming log output.
- Remediation: Redact sensitive fields and limit size. Replace toJSON() usage with a safe serializer that filters or redacts known sensitive keys (authorization headers, token secrets, access tokens, rule file contents), and enforce a maximum logged payload size (e.g., truncate > 4KB). Also add a configuration switch to disable request/response logging or switch to metadata-only logging.
2. Entire response body read into memory without limits (resource exhaustion / DoS)
- Files/locations: cmd/mcp-victoriametrics/tools/utils.go -> GetTextBodyForRequest reads the full body using io.ReadAll(resp.Body) (function definition at approx. lines 219-239 in that file)
- Severity: High (A05/A04)
- Why: Endpoints like /api/v1/export or other endpoints may return large responses. io.ReadAll will attempt to read the entire response into memory; a large response can cause excessive memory use, OOM or degraded service for the MCP process. Attackers with ability to control query params (match, start/end) or cloud deployment responses could trigger large payloads.
- Remediation: Replace io.ReadAll with a bounded reader (io.LimitReader) and/or stream results to the client, and enforce a maximum acceptable response size (configurable), returning an error if exceeded. For export endpoints, prefer streaming results back to the MCP client, or implement paged requests on the upstream API. Also apply per-request timeouts and monitoring.
5) Medium Severity Issues
1. Slice bounds/indexing panic in alerts pagination (DoS via panic)
- File/line: cmd/mcp-victoriametrics/tools/alerts.go, slice operation: filteredAlerts = filteredAlerts[int(offset):int(offset+limit)] (approx. lines 156-166)
- Severity: Medium (A04 / DoS): can lead to a panic inside a tool handler
- Why: offset and limit are float64 parameters that may be missing bounds checks. If offset or offset+limit exceeds len(filteredAlerts), or either value is negative, the explicit slice operation will panic. Although server.WithRecovery() is used in main.go to recover panics in tool handlers (converting the panic into an error), repeated triggering may cause log flooding and service instability.
- Remediation: Validate/normalize offset and limit before slicing: clamp to [0, len(filteredAlerts)], convert float64 safely to int after bounds checks, ensure limit==0 means no limit, and handle offset >= len -> return empty slice. Add mcp.Min(0) constraints on these parameters during tool definition (and treat limit==0 specially) or perform defensive code checks.
2. Large / unbounded API responses returned directly to clients (resource exhaustion & information exposure)
- Files: many tools call GetTextBodyForRequest (tools/utils.go) and then return its content directly. Examples: cmd/mcp-victoriametrics/tools/export.go (export endpoint) and many other tools (query, query_range, series, etc.); see the various GetTextBodyForRequest call sites.
- Severity: Medium (A04/A05)
- Why: multiple tools will forward API responses without size checks or streaming. The export endpoint (CSV/JSON) could be very large.
- Remediation: For endpoints that may return large content (export), implement streaming to the MCP client and/or enforce a read size limit with an explicit error/notice when the response is larger than acceptable.
3. Path/parameter validation too permissive for some path segments
- Files/lines:
- cmd/mcp-victoriametrics/tools/labelvalues.go: label_name is registered with mcp.Pattern("^.+$") and later used in the CreateSelectRequest path (as a URL path segment). The pattern is too permissive (the line where mcp.WithString("label_name", ...) is defined).
- Severity: Medium (A10 / SSRF surface and path manipulation)
- Why: An overly permissive pattern may allow unexpected characters in path segments. While the entrypoint URL is controlled by config and JoinPath is used, permitting arbitrary characters without encoding/validation could lead to malformed requests or unintended URLs.
- Remediation: Restrict label_name to a safe character class (e.g., [A-Za-z0-9_:-]+) or apply url.PathEscape to user-provided path segments before calling url.JoinPath. Prefer query parameters where possible. Add tests asserting proper URL construction for edge-case characters.
6) Low Severity Issues
1. Custom headers are applied after Authorization header and can overwrite it
- File/line: cmd/mcp-victoriametrics/tools/utils.go -> CreateSelectRequest and CreateAdminRequest: req.Header.Set("Authorization", ...) is followed by a loop adding cfg.CustomHeaders(), with no check for overriding Authorization (approx. lines 18-36 and 40-60)
- Severity: Low (A05)
- Why: If VM_INSTANCE_HEADERS contains an Authorization header, it overwrites the bearer token set earlier. This may be intended, but it should be explicit/documented, and accidental overrides should be prevented.
- Remediation: Either apply custom headers first and then set Authorization last to enforce configured/bearer token, or explicitly detect and disallow overriding Authorization in custom headers, or log when overriding occurs (with redaction). Document behavior.
2. In-memory cache of cloud access tokens lacks eviction
- File/line: cmd/mcp-victoriametrics/tools/utils.go -> cloudAccessTokenCache map with RWMutex (at its declaration)
- Severity: Low (resource/maintenance)
- Why: The cache grows per-deployment id without TTL or eviction; a malicious or misconfigured system might cause it to grow unbounded.
- Remediation: Add a TTL and/or a bounded size with an eviction policy to the cache.
3. Authorization header set even when bearer token is empty
- File/line: CreateSelectRequest/CreateAdminRequest set Authorization without checking for empty token.
- Severity: Low
- Why: Adds the header "Authorization: Bearer " when the token is missing; harmless but noisy, and it could confuse upstream servers and logs.
- Remediation: Only set the Authorization header when the token is non-empty.
4. Minor: variable naming inconsistency in rule_file.go (ruleFilenames vs content); not a security issue, but confusing.
7) Key Risk Characteristics (exploitability, impact, likelihood, preconditions)
- Excessive logging of payloads (hooks/hooks.go): Exploitability: trivial (the attacker just triggers tools that return sensitive data or sends crafted requests). Impact: high (secrets can leak to logs). Likelihood: high if logs are collected centrally. Preconditions: the attacker needs privileged access to call tools that return sensitive data (the MCP client must be authorized to talk to the server). Business context: logs may be aggregated and retained, which makes this a big risk.
- ReadAll of response bodies (utils.GetTextBodyForRequest): Exploitability: moderate (the attacker can trigger queries that produce large responses). Impact: medium/high (OOM or service disruption). Likelihood: moderate. Preconditions: the attacker can send tool calls that cause a large response (e.g., export with a broad match).
- Alerts pagination panic (alerts.go): Exploitability: trivial (the attacker passes offset/limit values causing out-of-bounds access). Impact: medium (the panic is recovered but causes an error, DoS of that tool, and log noise). Likelihood: moderate (the parameters are tool args). Preconditions: none beyond the ability to call the tool.
- Permissive path segment (label_name): Exploitability: low/moderate (could cause malformed requests or strange upstream behavior). Impact: low. Preconditions: control of the label_name parameter.
- Custom headers override Authorization: Exploitability: low (requires an environment override by the deployer). Impact: low/medium (unexpected auth behavior). Preconditions: an attacker able to set the VM_INSTANCE_HEADERS env var, or operator misconfiguration.
8) Positive Security Practices Observed
- Use of context.Context when creating requests (http.NewRequestWithContext), which allows upstream cancellations and server timeouts.
- Use of url.Parse and url.JoinPath for constructing upstream URLs (safer than string concatenation).
- Use of mutexes (RWMutex) to protect token and deployment info caches.
- Use of server.WithRecovery() in main.go to recover tool handler panics (defensive programming to avoid process crash).
- Parameter pattern constraints for many tool parameters (mcp.Pattern used to constrain many IDs, timestamps, etc.).
- Tests exist for many utility behaviors (good test coverage for routines like GetToolReqParam and GetSelectURL). Tests demonstrate careful behavior in typical cases.
9) Recommendations: Specific fixes with file:line references and OWASP context
(Use the repository line numbers as approximate references where exact line numbers may vary.)
High priority (fix within days):
1. Reduce logging of raw request/result payloads and add redaction
- Files: cmd/mcp-victoriametrics/hooks/hooks.go -> functions AddBeforeAny, AddOnSuccess, AddOnError (where toJSON(message) and toJSON(result) are used).
- Severity/Category: High / A09
- Remediation: Replace toJSON(...) with a safe serializer that:
- Removes or redacts known sensitive fields (e.g., Authorization headers, token secrets, file contents, big binary blobs, request/response bodies). Examples: any field names like "secret", "token", "authorization", "credentials", or rule file contents returned by cloud APIs.
- Limits the length of data to log (truncate at a safe limit, e.g., 4 KB), and log a size/indicator that content was truncated.
- Make request/response logging configurable (enable/disable) via env flags (e.g., MCP_LOG_REQUESTS=false) and default to disabled or metadata-only.
- Add automated tests that simulate responses with secrets to ensure they are redacted. (OWASP: A09)
2. Limit response body size and support streaming
- Files: cmd/mcp-victoriametrics/tools/utils.go -> GetTextBodyForRequest uses io.ReadAll (approx. lines 219-239).
- Severity/Category: High / A05, A04
- Remediation: Implement a max read size and/or a streaming path. Example remediation:
- Replace io.ReadAll(resp.Body) with io.ReadAll(io.LimitReader(resp.Body, MaxResponseBytes)) where MaxResponseBytes is configurable (e.g., 5MB), and return an explicit error if the upstream response exceeds this. For export endpoints where large content is expected, implement streaming via a different tool path that streams back to the client (not buffer entire body in memory).
- Ensure timeouts and request cancellation are enforced (the request already uses context; keep it).
3. Fix the slice bounds/pagination panic in the alerts tool
- File/line: cmd/mcp-victoriametrics/tools/alerts.go -> the line that slices filteredAlerts[int(offset):int(offset+limit)] (approx. where limit/offset are handled).
- Severity/Category: Medium / A04
- Remediation: Validate offset and limit before slicing and clamp to len(filteredAlerts). Example logic:
- Convert offset and limit safely: oi := max(0, int(offset)), li := int(limit)
- If limit <= 0 => treat as no limit
- Compute end := oi + li; if end > len(filteredAlerts) { end = len(filteredAlerts) }
- If oi >= len(filteredAlerts) => filteredAlerts = []
- Use these bounds to slice without risk of panic.
- Add mcp.Min(0) when registering the limit/offset tool params if supported, and reject negative values in the handler.
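The clamping logic above could be sketched as follows (clampPage is a hypothetical helper name; the repo's handler may shape this differently):

```go
package main

import "fmt"

// clampPage converts float64 offset/limit tool arguments into safe slice
// bounds for a list of n items. limit <= 0 means "no limit".
func clampPage(offset, limit float64, n int) (start, end int) {
	start = int(offset)
	if start < 0 {
		start = 0
	}
	if start > n {
		start = n // offset past the end yields an empty page
	}
	end = n
	if limit > 0 {
		end = start + int(limit)
		if end > n {
			end = n
		}
	}
	return start, end
}

func main() {
	alerts := []string{"a", "b", "c", "d", "e"}
	s, e := clampPage(1, 2, len(alerts))
	fmt.Println(alerts[s:e]) // safe slice, no panic even for bad inputs
	s, e = clampPage(10, 2, len(alerts))
	fmt.Println(alerts[s:e]) // offset past end yields an empty slice
}
```

Because both bounds are clamped to [0, n], the subsequent slice expression can never panic, regardless of what the client sends.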
Medium priority (fix within 1-2 weeks):
4. Restrict/encode path segments coming from user input
- Files/lines: cmd/mcp-victoriametrics/tools/labelvalues.go (mcp.WithString("label_name", ...) with pattern ^.+$) and all tools that append user-provided values to URL path segments (search for CreateSelectRequest usages where path components contain tool args).
- Severity/Category: Medium / A10
- Remediation: Restrict label_name and other path components to a safe character class (for labels: ^[A-Za-z0-9_:\-]+$), or explicitly call url.PathEscape on user-provided segments before using them in URL path construction. Add tests for characters such as /, .., spaces, large unicode.
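A sketch of the validate-then-join approach; the URL layout, helper name, and regexp here are assumptions for illustration, not the repo's actual code:

```go
package main

import (
	"fmt"
	"net/url"
	"regexp"
)

// labelNameRe is a conservative character class for Prometheus-style label
// names; widen it only if the upstream API genuinely accepts more.
var labelNameRe = regexp.MustCompile(`^[A-Za-z0-9_:\-]+$`)

// buildLabelValuesURL validates the label name before placing it in the
// URL path, so traversal sequences and reserved characters are rejected
// up front rather than reaching url.JoinPath.
func buildLabelValuesURL(base, labelName string) (string, error) {
	if !labelNameRe.MatchString(labelName) {
		return "", fmt.Errorf("invalid label name %q", labelName)
	}
	return url.JoinPath(base, "api", "v1", "label", labelName, "values")
}

func main() {
	u, err := buildLabelValuesURL("http://localhost:8428", "job")
	fmt.Println(u, err)
	_, err = buildLabelValuesURL("http://localhost:8428", "../admin")
	fmt.Println("rejected:", err)
}
```

Validating against a tight character class makes any later escaping a no-op, which sidesteps double-encoding questions entirely.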
5. Avoid allowing custom headers to accidentally override Authorization
- Files: cmd/mcp-victoriametrics/tools/utils.go -> CreateSelectRequest and CreateAdminRequest.
- Severity/Category: Low / A05
- Remediation: Either apply custom headers before setting Authorization (so Authorization wins), or explicitly filter Authorization out of the custom headers and log the (redacted) override. Document the behavior in the README and env var parsing.
6. Add eviction/TTL to the cloud access token cache
- File/line: cmd/mcp-victoriametrics/tools/utils.go -> cloudAccessTokenCache map (declaration area)
- Severity/Category: Low
- Remediation: Implement a TTL and/or an LRU/bounded cache for cloudAccessTokenCache to avoid unbounded growth. Consider a lock-protected map with timestamps or a small third-party bounded cache with TTL.
Other recommendations / hygiene
- Only set Authorization header when token non-empty: cmd/mcp-victoriametrics/tools/utils.go CreateSelectRequest/CreateAdminRequest.
- Add explicit per-tool parameter validation where mcp.Pattern is currently permissive (e.g., label_name uses ^.+$). Add mcp.Min/Max for numeric parameters where appropriate (offset/limit on alerts should have Min 0).
- Add monitoring/alerting for error rates and request sizes so the operator can detect attempted DoS via large responses.
- Run dependency vulnerability scanning (govulncheck; GitHub Dependabot is already present) and upgrade dependencies regularly. (A06)
10) Next Tier Upgrade Plan (security integration tier)
Interpretation: I use a small maturity model: Bronze = basic functionality with minimal security hygiene, Silver = reasonable protections (input validation, secret handling, logging hygiene), Gold = hardened (rate-limits, streaming, secrets redaction, secure defaults), Reject = critical issues.
- Current likely tier: Bronze/Silver boundary
- Rationale: The integration follows many good practices (use of context, validation patterns, recovery middleware, caching, mutexes) but contains high-impact issues (unredacted logs and unbounded response reads). These issues prevent confidently calling the integration Silver.
- Target next tier: Silver -> Gold
- Prioritized actions to reach Silver (short term)
- Implement logging redaction and size limits in hooks (hooks/hooks.go). This addresses the highest severity issue (secret exposure) quickly.
- Implement maximum response body size and streaming for export endpoints (tools/utils.go). This prevents OOM/DoS from large upstream responses.
- Fix alerts pagination bounds checks (tools/alerts.go) to remove panic risk.
- Add stricter input patterns / path-escape for user-provided path components (label names and filenames) and add tests around edge characters.
- Additional actions to reach Gold (medium term)
- Add TTL/eviction for cloud token cache and instrument cache metrics.
- Add configurable caps for per-request payload size and a server-wide resource limit per-request or per-session.
- Add optional server-side rate limiting and per-session quotas for expensive tools (export, query_range large ranges).
- Harden logging configuration defaults to metadata-only and make redaction default-on.
- Schedule regular dependency vulnerability scans and upgrade critical dependencies.
Appendix: Concrete code pointers
- Main server initialization: cmd/mcp-victoriametrics/main.go -> server.NewMCPServer(..., server.WithRecovery(), server.WithLogging(), ...). Recovery helps avoid a process crash but does not prevent DoS (near the top of main.go).
- GetTextBodyForRequest: cmd/mcp-victoriametrics/tools/utils.go -> io.ReadAll(resp.Body) with no size limit (func starts at approx. line 219).
- CreateSelectRequest/CreateAdminRequest: cmd/mcp-victoriametrics/tools/utils.go -> NewRequestWithContext and header handling (Authorization, then custom headers) (approx. lines 18-36 and 40-60).
- getBearerToken: cmd/mcp-victoriametrics/tools/utils.go -> cloud token discovery, RevealDeploymentAccessToken usage, and in-memory caching (approx. lines 100-160).
- Cloud cache declarations: cmd/mcp-victoriametrics/tools/utils.go -> cloudAccessTokenCache and cloudDeploymentInfoCache declarations near the top of the file.
- Alerts pagination slice: cmd/mcp-victoriametrics/tools/alerts.go -> filteredAlerts = filteredAlerts[int(offset):int(offset+limit)] (approx. lines 156-166).
- Logging hooks: cmd/mcp-victoriametrics/hooks/hooks.go -> AddBeforeAny/AddOnSuccess/AddOnError use toJSON(message) and toJSON(result) (around where the hooks are added).
- Label name pattern: cmd/mcp-victoriametrics/tools/labelvalues.go -> mcp.WithString("label_name", mcp.Pattern("^.+$")) (pattern too permissive).
Closing summary
Overall the MCP integration is well-organized and uses many good practices (context on requests, parameter patterns, mutex-protected caches, recovery middleware). The most urgent issues are two high-severity problems: (1) logging of entire requests/results (which may leak secrets) and (2) reading upstream response bodies into memory without any size limit (risk of OOM/DoS). Medium severity issues (panic via slice indexing, permissive path segments) should also be addressed promptly. Fixing these issues along with modest enhancements (redaction, streaming, caching eviction) will raise the integration to a robust Silver/Gold posture.
If you would like, I can produce suggested code patches for the highest-priority fixes (safe logging/redaction wrapper and bounded read for GetTextBodyForRequest) including tests and small API changes.
Summary
Security Score: 80/100 (Silver)
Static analysis found 0 high, 0 medium, and 0 low severity issues.
Build step skipped for safety.
No automated tests detected.
Configuration
VM_INSTANCE_BEARER_TOKEN
required
password
Configure the connection to VictoriaMetrics MCP Server
VM_INSTANCE_ENTRYPOINT
required
string
URL of VictoriaMetrics instance (it should be root URL of vmsingle or vmselect)
VM_INSTANCE_TYPE
required
string
Type of VictoriaMetrics instance (possible values: single, cluster)
VM_INSTANCE_HEADERS
string
Optional custom headers to include in requests to VictoriaMetrics instance, formatted as 'header1=value1,header2=value2'
MCP_SERVER_MODE
string
Mode in which the MCP server operates (possible values: stdio, http, sse)
MCP_LISTEN_ADDR
string
Address and port on which the MCP server listens for incoming connections (e.g., ':8080')
MCP_DISABLED_TOOLS
string
Comma-separated list of tools to disable (possible values: export, metric_statistics, test_rules)
MCP_DISABLE_RESOURCES
string
Set to 'true' to disable all resource endpoints
MCP_HEARTBEAT_INTERVAL
string
Defines the heartbeat interval for the streamable-http protocol. The MCP server will send a heartbeat to the client over the GET connection to keep it from being closed by network infrastructure (e.g., gateways)
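Putting the variables above together, a minimal stdio-mode setup for a single-node instance might look like this (all values are placeholders, not real credentials):

```shell
# Required connection settings
export VM_INSTANCE_ENTRYPOINT="http://localhost:8428"
export VM_INSTANCE_TYPE="single"
export VM_INSTANCE_BEARER_TOKEN="<your-token>"

# Optional extras
export VM_INSTANCE_HEADERS="X-Scope-OrgID=tenant-1"
export MCP_SERVER_MODE="stdio"
export MCP_DISABLED_TOOLS="export,test_rules"
```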
Docker Image
Docker Hub
Published by github.com/VictoriaMetrics-Community