Skillibary

API reference

Six HTTP endpoints make up the public surface of the registry. The MCP server, the CLI, and any third-party integration all use this same set. Everything is read-only and unauthenticated. Submitting and editing skills is available only through the web app, which requires a GitHub session cookie.

Conventions

  • Base URL: https://skillibary.com (or your self-hosted instance).
  • All responses are JSON. Errors return { "error": "..." }.
  • Only status = "verified" skills are visible. Pending and rejected entries return 404 to anonymous callers.
  • Rate limits are per-IP and documented per endpoint.

GET /api/search

Semantic search across verified skills. Embeds the query with voyage-code-3 and matches against the stored vectors via cosine similarity.

| Param | Type   | Notes |
| ----- | ------ | ----- |
| q     | string | Optional. Max 200 chars. If omitted, returns the top skills by use_count. |
| tags  | string | Comma-separated tag filter. Applied on top of the semantic match (AND). |
```bash
curl 'https://skillibary.com/api/search?q=git+commit&tags=git'
```

```json
[
  {
    "id": "a1b2c3d4-...",
    "name": "git-commit-writer",
    "description": "Writes clear, conventional git commit messages from staged diffs.",
    "tags": ["git", "commits", "conventional-commits"],
    "use_count": 1284,
    "version": "1.0.0",
    "author": "skillibary-seed",
    "status": "verified",
    "similarity": 0.83
  }
]
```

Rate limit: 30 requests per IP per minute on the embed path (when q is set). The no-query path is not rate-limited; it's a plain DB read.

Errors: 400 on query too long. 429 on rate limit. 500 on registry-side failure.
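The search parameters above can be assembled client-side before making the request. A minimal sketch in Python, using only the standard library; the BASE_URL constant and the search_url helper are illustrative names, not part of the API, and the 200-char cap is enforced here purely to fail fast before the server's 400:

```python
from urllib.parse import urlencode

BASE_URL = "https://skillibary.com"  # or your self-hosted instance

def search_url(q=None, tags=None):
    """Build a /api/search URL. q is capped at 200 chars per the docs;
    tags is a list joined into the comma-separated filter."""
    if q is not None and len(q) > 200:
        raise ValueError("q exceeds the 200-char limit")
    params = {}
    if q:
        params["q"] = q
    if tags:
        params["tags"] = ",".join(tags)
    query = urlencode(params)
    return f"{BASE_URL}/api/search" + (f"?{query}" if query else "")
```

Calling search_url("git commit", ["git"]) reproduces the curl example's URL; calling it with no arguments yields the no-query path, which skips the embed step entirely.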

GET /api/list-skills

Paginated list of verified skills, ordered by use_count descending. No embeddings involved, so it's the cheap path when you don't have a search query yet.

| Param  | Type   | Notes |
| ------ | ------ | ----- |
| tags   | string | Comma-separated OR-match filter. |
| limit  | number | Default 20, max 100. |
| offset | number | Default 0. Use with limit for paging. |

```bash
curl 'https://skillibary.com/api/list-skills?tags=sql&limit=10'
```

Response shape is identical to /api/search but without the similarity field.
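Walking the full list means stepping offset by limit until a short page comes back. A sketch of that loop, with the HTTP call abstracted behind a hypothetical injected fetch_page callable (not part of the API) so any client library can be plugged in:

```python
def iter_skills(fetch_page, tags=None, limit=100):
    """Yield every verified skill by paging /api/list-skills.
    fetch_page(tags, limit, offset) must return the decoded JSON
    list for one request; limit=100 is the documented maximum."""
    offset = 0
    while True:
        page = fetch_page(tags, limit, offset)
        yield from page
        if len(page) < limit:  # a short page means we've reached the end
            break
        offset += limit
```

Because results are ordered by use_count descending, consuming the generator lazily also serves the "top N skills" case without fetching every page.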

GET /api/skills/[id]

Fetch a verified skill by UUID. Returns the full body content (content field) along with metadata. This is what the MCP server's get_skill tool calls.

```bash
curl 'https://skillibary.com/api/skills/<uuid>'
```

```json
{
  "id": "a1b2c3d4-...",
  "name": "git-commit-writer",
  "description": "Writes clear, conventional git commit messages from staged diffs.",
  "content": "## Description\n\nAnalyzes a git diff...",
  "tags": ["git", "commits"],
  "author": "skillibary-seed",
  "version": "1.0.0",
  "use_count": 1284,
  "status": "verified",
  "created_at": "2026-04-12T09:31:00.000Z"
}
```

Errors: 400 on malformed UUID. 404 on missing or unverified skill.

GET /api/skills/by-name/[name]

Look up a verified skill by its slug. Useful when you already know the name but not the id — the CLI's install command uses this.

```bash
curl 'https://skillibary.com/api/skills/by-name/git-commit-writer'
```

Response shape is identical to GET /api/skills/[id]. Returns 400 on names that don't match ^[a-z0-9]+(?:-[a-z0-9]+)*$.
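Since the name pattern is documented, clients can validate a slug locally and skip a guaranteed 400 round-trip. A small sketch using that exact regex; the helper name is illustrative:

```python
import re

# The pattern the API documents for skill names: lowercase alphanumeric
# runs separated by single hyphens, with no leading or trailing hyphen.
NAME_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

def is_valid_skill_name(name):
    """Client-side check before calling /api/skills/by-name/[name]."""
    return NAME_RE.fullmatch(name) is not None
```

Note the pattern rejects uppercase, underscores, doubled hyphens, and empty strings, so normalizing user input (for example lowercasing) before the check is often worthwhile.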

GET /api/skills/[id]/stats

Usage counters for a verified skill: total, 7-day, 30-day, and the skill's creation date.

```bash
curl 'https://skillibary.com/api/skills/<uuid>/stats'
```

```json
{
  "use_count": 1284,
  "uses_last_7_days": 87,
  "uses_last_30_days": 412,
  "created_at": "2026-04-12T09:31:00.000Z"
}
```

POST /api/skills/[id]/use

Logs a use event and atomically increments use_count. The MCP server's get_skill tool calls this in the background after fetching the skill body.

```bash
curl -X POST 'https://skillibary.com/api/skills/<uuid>/use'
```

Body: none required.
Response: { "ok": true, "use_count": <new total> }.
Rate limit: 60 requests per IP per minute.
Errors: 400 on malformed UUID. 404 on missing skill. 409 if the skill exists but isn't verified. 429 on rate limit.
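The non-obvious case here is 409: the skill exists but isn't verified, so retrying is pointless. A sketch of mapping the documented status codes to a client action; the function name and the action strings are illustrative, not part of the API:

```python
def classify_use_response(status):
    """Decide what to do with a POST /api/skills/[id]/use status code."""
    if status == 200:
        return "counted"       # use_count was incremented
    if status in (400, 404, 409):
        return "give-up"       # bad UUID, missing, or unverified: never retry
    if status == 429:
        return "retry-later"   # per-IP limit of 60/min hit: back off
    return "retry-later" if status >= 500 else "give-up"
```

Because the MCP server fires this call in the background after the skill body is already fetched, a failed use log is safe to drop; the "retry-later" branch is optional politeness, not a correctness requirement.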

Building on the API

If you build something on top of these endpoints (a Slack bot, an IDE extension, an alternative frontend), there are no API keys to request and no quotas beyond the per-IP rate limits above. If your use case is hitting the limits, let us know in an issue; raising them per domain is fine.

For client-side authoring, the CLI is the recommended starting point. See CLI reference.