# PolicyLayer MCP Server Catalogue

A risk-classified catalogue of every Model Context Protocol (MCP) server reachable through the public registries. Each tool exposed by each server is classified into one of six risk categories using a verb-based classifier with input-schema heuristics.

**1,787 servers · 25,329 tools · classified May 2026 · refreshed monthly.**

The catalogue is the underlying dataset for PolicyLayer's research report, *The State of MCP Security — May 2026*.
## Loading the dataset

```python
from datasets import load_dataset

ds = load_dataset("PolicyLayer/mcp-server-catalogue")
servers = ds["servers"]
tools = ds["tools"]
print(f"{len(servers)} servers, {len(tools)} tools")

# Find every destructive tool exposed by an identity-provider MCP server
ip_slugs = {s["slug"] for s in servers if "identity" in (s["description"] or "").lower()}
destructive = tools.filter(
    lambda t: t["category"] == "Destructive" and t["server_slug"] in ip_slugs
)
```
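Building on the loading example, per-server risk can be aggregated from the `tools` split by summing `risk_weight` over `server_slug`. A minimal sketch — the rows below are hypothetical stand-ins with the documented fields; against the real dataset you would iterate over `ds["tools"]` instead:

```python
from collections import defaultdict

# Hypothetical rows carrying the catalogue's documented fields
# (server_slug, risk_weight) — invented values for illustration only.
tools = [
    {"server_slug": "acme-db", "risk_weight": 0.8},
    {"server_slug": "acme-db", "risk_weight": 0.1},
    {"server_slug": "notes", "risk_weight": 0.1},
]

# Sum risk weights per server; max() would give a "worst tool" view instead.
risk = defaultdict(float)
for t in tools:
    risk[t["server_slug"]] += t["risk_weight"]

# Rank servers by total risk, highest first.
ranked = sorted(risk.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # acme-db ranks first
```

Whether to sum or take the maximum depends on the question: summing rewards breadth of risky surface, the maximum flags any single dangerous tool.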
## Schema

### `servers` split

One row per Model Context Protocol server with at least one classified tool.

| Field | Type | Description |
|---|---|---|
| `slug` | string | Stable identifier. Use this to join with `tools.server_slug`. |
| `name` | string | Human-readable server name as published by its author. |
| `packages` | array of strings | npm or other package identifiers exposing this server. |
| `repo_url` | string \| null | Source repository URL where available. |
| `description` | string \| null | Server-author-supplied description. |
| `tool_count` | int | Number of tools the server exposes. |
| `categories` | array of strings | Distinct risk categories present in this server's tools. Subset of {Read, Write, Execute, Destructive, Financial, Other}. |
| `source` | string | Discovery channel: `crawler` (npm auto-discovery), `smithery` (Smithery registry), `seed` (manually added official), `user_scan` (community contribution). |
| `confidence` | string | Classifier confidence for this server's labels: `high`, `verified`, `medium`, `low`. |
| `last_scanned_at` | string (ISO 8601) | Most recent scan timestamp. |
| `npm_weekly_downloads` | int \| null | Weekly downloads of the server's npm package as of the snapshot date. Null if the server is not on npm or the registry didn't return data. |
| `npm_latest_version` | string \| null | Latest published version on npm. |
| `npm_last_published` | string (ISO 8601) \| null | Timestamp of the latest npm publish. |
| `github_stars` | int \| null | GitHub stargazer count for the source repository. |
| `github_forks` | int \| null | GitHub fork count. |
| `github_open_issues` | int \| null | Currently open GitHub issues + pull requests. |
| `github_last_commit` | string (ISO 8601) \| null | `pushed_at` timestamp on the GitHub repository, a proxy for liveness. |
| `github_license` | string \| null | SPDX licence identifier reported by GitHub (e.g., MIT, Apache-2.0). |
| `github_archived` | bool \| null | Whether the GitHub repository is archived (read-only). |
### `tools` split

One row per tool exposed by a server in the `servers` split.

| Field | Type | Description |
|---|---|---|
| `server_slug` | string | Foreign key to `servers.slug`. |
| `name` | string | Tool name as exposed via `tools/list`. |
| `description` | string \| null | Tool-author-supplied description. |
| `category` | string | Risk category. One of Read, Write, Execute, Destructive, Financial, Other. |
| `severity` | string | Discrete severity bucket: Low, Medium, High, Critical. |
| `confidence` | string | Classifier confidence: `high`, `verified`, `medium`, `low`. |
| `risk_weight` | float | Risk weight in [0.0, 1.0]. 0.0 = read-only, 1.0 = destructive financial. |
| `risk_type` | string \| null | Specific risk type where determinable (e.g., `delete`, `drop`, `payment`). |
| `input_schema` | string \| null | JSON-encoded tool input schema. Null if the source provided no schema. |
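Because `input_schema` is stored as a JSON string (or null), it needs decoding before inspection. A short sketch — the row below is a hypothetical example, not a real catalogue entry:

```python
import json

# Hypothetical tool row; input_schema is a JSON-encoded string as in the catalogue.
row = {
    "name": "delete_user",
    "input_schema": '{"type": "object", "properties": {"user_id": {"type": "string"}}}',
}

# Decode the schema if present, then list the declared parameter names.
schema = json.loads(row["input_schema"]) if row["input_schema"] else None
params = list(schema["properties"]) if schema else []
print(params)  # ['user_id']
```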
## Risk categories

| Category | Definition | Typical risk-weight range |
|---|---|---|
| Read | Retrieves data without modification. | 0.05–0.15 |
| Write | Creates or modifies data and resources. | 0.20–0.45 |
| Execute | Runs code, scripts, or container commands. | 0.40–0.70 |
| Destructive | Permanently deletes or revokes resources. | 0.55–0.85 |
| Financial | Moves real money: charges, payments, refunds, transfers. | 0.65–1.00 |
| Other | Does not fit the above categories. | varies |
Classification is verb-based with input-schema heuristics. Methodology details and known failure modes are documented in the research report.
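As a rough illustration of what a verb-based classifier looks like — the verb lists and dispatch order below are invented for this sketch; the production classifier, its input-schema heuristics, and its failure modes are documented in the research report:

```python
# Toy verb-based classifier, for illustration only. Checks the most severe
# categories first so "delete" never falls through to a milder bucket.
VERB_CATEGORIES = {
    "Destructive": {"delete", "drop", "destroy", "revoke", "purge"},
    "Financial": {"charge", "pay", "refund", "transfer"},
    "Execute": {"run", "exec", "execute", "eval"},
    "Write": {"create", "update", "write", "set", "insert"},
    "Read": {"get", "list", "read", "fetch", "search"},
}

def classify_tool(name: str) -> str:
    """Classify a tool by the leading verb of its snake_case name."""
    verb = name.lower().split("_")[0]
    for category, verbs in VERB_CATEGORIES.items():
        if verb in verbs:
            return category
    return "Other"

print(classify_tool("delete_user"))    # Destructive
print(classify_tool("list_channels"))  # Read
```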
## Coverage and limitations

- **Public registries only.** The dataset covers servers reachable through the official Model Context Protocol registry, npm, Smithery, and Glama. Private and self-hosted servers are not represented.
- **Lower bound.** Some registry-listed servers are unreachable through the scan pipeline (broken installs, dependency failures, or auth-walled launchers) and are excluded. The dataset is therefore a lower bound on the real ecosystem.
- **Static snapshot.** This release is the May 2026 edition. New monthly versions are released on the 1st of each month.
- **Classifier confidence varies.** 72.3% of tool classifications are high-confidence and 15.4% are verified. The remaining 12.3% are medium or low confidence and should be treated as advisory rather than authoritative for downstream risk decisions.
- **Usage metrics are point-in-time.** `npm_weekly_downloads` reflects the week immediately before the snapshot; `github_*` fields reflect the GitHub repository state at fetch time. Metrics are null where the server is not on npm, has no GitHub repo, or the upstream API rate-limited the fetch.
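One practical consequence of the confidence caveat: downstream risk decisions can be restricted to high-confidence labels before acting on them. A sketch with hypothetical rows using the documented `confidence` field:

```python
# Hypothetical tool rows; only the documented `confidence` values are assumed.
tools = [
    {"name": "delete_user", "confidence": "high"},
    {"name": "fuzzy_tool", "confidence": "low"},
    {"name": "charge_card", "confidence": "verified"},
]

# Treat only high/verified labels as authoritative, per the limitation above.
AUTHORITATIVE = {"high", "verified"}
usable = [t for t in tools if t["confidence"] in AUTHORITATIVE]
print([t["name"] for t in usable])  # ['delete_user', 'charge_card']
```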
## Citation
If you use this dataset in research or commentary, please cite the research report:
PolicyLayer. (2026). The State of MCP Security — May 2026.
https://policylayer.com/research/state-of-mcp-2026
For the dataset itself:
PolicyLayer. (2026). PolicyLayer MCP Server Catalogue [Data set].
Hugging Face. https://huggingface.co/datasets/PolicyLayer/mcp-server-catalogue
## Updates

The catalogue is regenerated monthly from PolicyLayer's continuously updated scan pipeline. Each monthly release is tagged on Hugging Face. Subscribe to the dataset to be notified of new versions.
| Edition | Tag | Servers | Tools |
|---|---|---|---|
| May 2026 | `2026-05` | 1,787 | 25,329 |
## Related
- Research report: The State of MCP Security — May 2026
- Public tool catalogue: policylayer.com/tools — browse classifications by server.
- MCP attack database: policylayer.com/attacks — documented MCP incidents and attack patterns.
- PolicyLayer: policylayer.com — the MCP control plane (gateway, policy engine, audit log) for production agent fleets.
Questions, methodology details, or data requests: research@policylayer.com.
## Licence
This dataset is released under Creative Commons Attribution 4.0 International (CC-BY-4.0).
You are free to use, share, adapt, and redistribute the data, including for commercial purposes, with attribution to PolicyLayer. The classifier output is the work product of PolicyLayer's research team; tool descriptions and server metadata are the work of their respective authors and remain under whatever licence the source projects supply.