Connector Contribution Guide¶
This guide tells you everything you need to know to write an ACES connector and submit it for community review. It is written for developers and assumes only that you know how to call a REST API and write a little JSON.
1. What is an ACES Connector?¶
A connector is a translator.
On one side: a vendor's API — messy, vendor-specific, inconsistently shaped. On the other side: clean, structured ACES metric rows that any compliant platform can read.
Think of it as an inverted bowtie:
CrowdStrike API response (messy) ──┐
NinjaRMM API response (different) ──┤──► [ ACES Connector ] ──► Standard metric rows
SentinelOne API response (unique) ──┘
Your connector doesn't need to understand compliance frameworks, scoring, or control mappings. That's the platform's job. Your connector has exactly one job: call the vendor API, normalize the data, return ACES metric rows.
If you can write a function that takes credentials and returns an array of metric rows, you can write an ACES connector.
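In code, that contract really is that small. Here is a minimal sketch in Python; the slug `example-vendor` and the agent counts are invented placeholders standing in for a real vendor API call, not part of any actual connector:

```python
from datetime import datetime, timezone


def collect(credentials: dict) -> dict:
    """Hypothetical ACES connector: credentials in, metric rows out."""
    # A real connector would call the vendor API here using `credentials`.
    # These counts are hardcoded placeholders for illustration only.
    agents_online, agents_total = 47, 50

    return {
        "connector_type_id": "example-vendor",
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "tenant": {"company_id": "msp-example", "client_id": "client-acme"},
        "metrics": [
            {
                "category": "endpoint_protection",
                "metric_key": "agents_online_pct",
                # Always a string, never a raw number (see §2).
                "metric_value": str(round(agents_online / agents_total * 100, 1)),
                "metric_type": "percentage",
                "unit": "%",
            }
        ],
        "metadata": {
            "api_calls_made": 1,
            "collection_duration_ms": 0,
            "collection_method": "collect",
        },
    }
```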
2. The Connector Contract¶
Every ACES connector must fulfill this exact input/output contract.
Input¶
A credential object defined by your connector's credential schema. The keys are your connector's own — ACES does not prescribe credential field names.
Output¶
A single JSON object:
{
"connector_type_id": "string — your connector slug (e.g. 'huntress')",
"collected_at": "string — ISO 8601 datetime when collection completed",
"tenant": {
"company_id": "string — MSP/top-level tenant identifier",
"client_id": "string — end-client identifier within MSP"
},
"metrics": [
{
"category": "string — metric category (see §5 for standard categories)",
"metric_key": "string — metric identifier (see §4 for naming rules)",
"metric_value": "string — always a string, regardless of type",
"metric_type": "number | percentage | boolean | count | duration | datetime | json | string",
"unit": "string (optional) — display unit e.g. '%', 'hours', 'devices'"
}
],
"metadata": {
"api_calls_made": "number — how many API requests were made",
"collection_duration_ms": "number — wall-clock time in milliseconds",
"collection_method": "string — name of collection function called"
}
}
metric_value is always a string
Even when the value is a number, boolean, or timestamp — it must be serialized as a string in metric_value. Consumers use metric_type to cast it. This is a hard requirement for compatibility across heterogeneous connector outputs.
✅ `"metric_value": "97.4"` with `"metric_type": "percentage"`

❌ `"metric_value": 97.4`
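On the producing side, a small helper can enforce this rule. A sketch; the lowercase `"true"`/`"false"` casing for booleans is an assumption on our part, since the contract above only requires that the value be a string:

```python
import json


def to_metric_value(value) -> str:
    """Serialize any supported value into the string form metric_value requires."""
    if isinstance(value, bool):
        # JSON-style lowercase booleans, not Python's "True"/"False"
        # (an assumption; the spec only mandates a string).
        return "true" if value else "false"
    if isinstance(value, (dict, list)):
        # Breakdown objects travel as serialized JSON (metric_type "json").
        return json.dumps(value)
    return str(value)
```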
What your connector must NOT do¶
- Do not map evidence to controls or frameworks — that is the platform's job
- Do not calculate compliance scores — produce raw metrics only
- Do not store credentials — read them from the input and discard
- Do not make metric keys tenant-specific: `agents_online`, not `acme_corp_agents_online`
3. Connector Slug Naming Rules¶
Your connector slug is its permanent identifier. Choose carefully — it cannot change after registration.
Rules:
- Lowercase only
- No spaces
- Hyphens allowed, underscores not
- Use the product name, not the company name where they differ
- Keep it short — 20 characters maximum
Examples:
| Product | Company | Correct Slug |
|---|---|---|
| Huntress | Huntress Labs | huntress |
| SentinelOne | SentinelOne Inc. | sentinelone |
| NinjaRMM | NinjaOne | ninjarmm |
| CIS-CAT Pro | CIS | cis-cat |
| KnowBe4 | KnowBe4 Inc. | knowbe4 |
| Microsoft 365 | Microsoft | msgraph (use the API name) |
| ConnectSecure | ConnectSecure | connectsecure |
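The slug rules above are easy to check mechanically. A sketch of a validator; the requirement that a slug start with a letter is our assumption, since the rules only specify lowercase, hyphens, and a 20-character maximum:

```python
import re

# §3 rules: lowercase, no spaces, hyphens allowed (not underscores), max 20 chars.
# Leading letter is an assumption, not stated in the spec.
SLUG_RE = re.compile(r"^[a-z][a-z0-9-]{0,19}$")


def is_valid_slug(slug: str) -> bool:
    """True if the connector slug follows the §3 naming rules."""
    return SLUG_RE.fullmatch(slug) is not None
```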
If your connector maps data to a compliance framework (e.g. a CIS benchmark scanner), also register your connector's framework mapping in Framework Keys.
4. Metric Key Naming Rules¶
Metric keys are the field names of your output. They must be consistent, readable, and unambiguous.
Rules:
| Rule | Correct | Wrong |
|---|---|---|
| Lowercase, underscore-separated | agents_online | AgentsOnline, agents-online |
| Format: {noun}_{descriptor} | mfa_enrollment_percent | percent_mfa, mfaEnrollment |
| Percentage metrics end in _pct or _percent | patch_compliance_pct | patch_compliance, pct_patched |
| Count metrics end in _count | incidents_critical_count | critical_incidents, num_incidents |
| Boolean metrics are positive assertions | mfa_enabled | mfa_disabled, no_mfa |
| Avoid abbreviations except established ones | agents_online | agts_onln |
| Never tenant-specific | agents_online | acme_agents_online |
Established abbreviations: pct (percentage), avg (average), mfa, edr, mttd, mttr, api, os, ip
Example well-named metric keys from production connectors:
agents_total → count of all agents
agents_online_pct → percentage online (ends in _pct)
incidents_critical_count → count of critical incidents (ends in _count)
mean_time_to_detect_hours → duration in hours
mfa_enrollment_percent → percentage enrolled in MFA
audit_log_enabled → boolean, positive assertion
last_collection_at → datetime of last collection
device_type_distribution → json breakdown object
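A linter for the §4 rules might look like the sketch below. The exact violation messages and the checked suffixes are illustrative, not part of the spec:

```python
import re

# Lowercase, underscore-separated segments (§4).
KEY_RE = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*$")


def check_metric_key(key: str, metric_type: str) -> list:
    """Return a list of §4 rule violations; an empty list means the key passes."""
    problems = []
    if KEY_RE.fullmatch(key) is None:
        problems.append("must be lowercase, underscore-separated")
    if metric_type == "percentage" and not key.endswith(("_pct", "_percent")):
        problems.append("percentage metrics should end in _pct or _percent")
    if metric_type == "count" and not key.endswith("_count"):
        problems.append("count metrics should end in _count")
    return problems
```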
5. Category Naming Rules¶
Categories group related metrics. Use a standard category before inventing a new one.
Standard categories:
| Category | Use For |
|---|---|
endpoint_protection | EDR/AV agent health, coverage, threat detection |
vulnerability_management | CVE data, scan coverage, remediation |
access_control | MFA, conditional access, privilege management |
identity | User counts, licensing, guest users |
patch_management | Patch compliance, pending patches, scan freshness |
security_awareness_training | SAT enrollment, completion rates |
phishing_simulation | Phishing campaign click/report rates |
network_monitoring | Asset/network inventory, coverage |
configuration_management | CIS benchmark, hardening compliance |
audit_logging | Log retention, log coverage, SIEM health |
device_management | Managed device compliance, OS distribution |
incident_response | Open/closed incidents, MTTD/MTTR |
application_control | Allow/deny actions, approval queue |
asset_inventory | Total assets, stale assets, scan coverage |
compliance | General compliance scores, check pass rates |
Proposing a new category: Open a GitHub Issue tagged category-proposal. Provide: proposed name, 2+ metric keys that belong to it, and why existing categories don't cover it.
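A reviewer or CI step can flag non-standard categories with a simple membership check; a sketch:

```python
# The standard categories from §5.
STANDARD_CATEGORIES = frozenset({
    "endpoint_protection", "vulnerability_management", "access_control",
    "identity", "patch_management", "security_awareness_training",
    "phishing_simulation", "network_monitoring", "configuration_management",
    "audit_logging", "device_management", "incident_response",
    "application_control", "asset_inventory", "compliance",
})


def needs_category_proposal(category: str) -> bool:
    """True if the category is non-standard and needs a category-proposal issue."""
    return category not in STANDARD_CATEGORIES
```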
6. Required vs Optional Metrics¶
Every connector MUST produce:¶
- At least one metric per category it claims to cover — do not register a category with zero metrics
- A coverage percentage metric for any countable resource — if you collect agents, produce `agents_coverage_pct`; if users, produce `users_coverage_pct`
- A `last_collection_at` datetime metric in every category — consumers use this to detect stale data
Recommended (not required):¶
- A `total_{resource}` count metric for any resource type you collect
- An `api_accessible` boolean metric for API tiers your connector may not have access to (set `"false"` with an explanation in notes if the endpoint isn't available)
- A `collection_duration_ms` metric for performance monitoring
Hardcoded values¶
If a metric is defined in your spec but your connector cannot collect it from this vendor's API, you MUST either:
- Omit it from the output entirely, OR
- Return it with `metric_value: "0"` and add a note in your connector spec page documenting the limitation
Do NOT silently return hardcoded placeholder values without documentation.
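The per-category `last_collection_at` requirement is straightforward to lint. A sketch that checks only that one required metric; coverage metrics depend on which resources the connector collects, so they are left out here:

```python
from collections import defaultdict


def missing_required_metrics(metrics: list) -> list:
    """Flag categories missing the last_collection_at metric required by §6."""
    keys_by_category = defaultdict(set)
    for metric in metrics:
        keys_by_category[metric["category"]].add(metric["metric_key"])
    return [
        f"{category}: missing last_collection_at"
        for category, keys in sorted(keys_by_category.items())
        if "last_collection_at" not in keys
    ]
```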
7. Credential Schema¶
Define what credentials your connector needs using a JSON Schema object. This schema is used by platforms to build the configuration UI and validate credentials before calling your connector.
Format:
{
"type": "object",
"required": ["fieldName1", "fieldName2"],
"properties": {
"fieldName1": {
"type": "string",
"title": "Human-Readable Label",
"description": "Where to find this value",
"secret": true
}
}
}
Example: API key only¶
{
"type": "object",
"required": ["apiKey"],
"properties": {
"apiKey": {
"type": "string",
"title": "API Key",
"description": "Generate in vendor dashboard under Settings → API",
"secret": true
}
}
}
Example: Dual API key (key + secret)¶
{
"type": "object",
"required": ["apiKey", "apiSecret"],
"properties": {
"apiKey": {
"type": "string",
"title": "API Key",
"description": "API key from vendor portal",
"secret": false
},
"apiSecret": {
"type": "string",
"title": "API Secret",
"description": "API secret — treat like a password",
"secret": true
}
}
}
Example: OAuth2 client credentials¶
{
"type": "object",
"required": ["clientId", "clientSecret", "tenantId"],
"properties": {
"clientId": {
"type": "string",
"title": "Client ID",
"description": "Application (client) ID from Azure App Registration",
"secret": false
},
"clientSecret": {
"type": "string",
"title": "Client Secret",
"description": "Client secret value — only shown once at creation",
"secret": true
},
"tenantId": {
"type": "string",
"title": "Azure Tenant ID",
"description": "Directory (tenant) ID from Azure AD → Properties",
"secret": false
}
}
}
secret: true tells platforms to store the value encrypted and mask it in UIs. Use it for passwords, secrets, tokens, and private keys. Do NOT use it for IDs, URLs, or usernames.
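Platforms typically validate credentials with a full JSON Schema validator such as the `jsonschema` library. For illustration, here is a minimal stdlib-only sketch that checks just required fields and string types; note that `secret` is a custom annotation that standard JSON Schema validators simply ignore:

```python
def credential_errors(credentials: dict, schema: dict) -> list:
    """Minimal credential check against a §7 schema (required fields + string types).

    A real platform would use a full JSON Schema validator instead.
    """
    errors = []
    for field in schema.get("required", []):
        if field not in credentials:
            errors.append(f"missing required field: {field}")
    for field, spec in schema.get("properties", {}).items():
        if field in credentials and spec.get("type") == "string":
            if not isinstance(credentials[field], str):
                errors.append(f"{field} must be a string")
    return errors
```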
8. Submitting Your Connector¶
1. Fork `ComplianceScorecard/aces` on GitHub
2. Create your connector spec page at `docs/connectors/{your-slug}.md` — use the template below
3. If your connector maps to a compliance framework, add your slug mapping to `docs/specification/framework-keys.md`
4. Submit a Pull Request with title `Connector: {Display Name}` — e.g. `Connector: Huntress`
5. Your PR must include:
    - Credential schema (JSON Schema format, §7)
    - Complete metrics table (all categories × all metric keys)
    - At least one real JSON output example with realistic values
    - Any known limitations or hardcoded values documented
6. Review period: minimum 7 days. Reviewers will check:
    - Metric key naming follows §4 rules
    - All required metrics present (§6)
    - Credential schema is complete and marks secrets correctly
    - JSON example is valid and matches the metric table
7. Merge: a maintainer merges after community review and approval
First-time contributors
Read the Huntress connector spec before writing yours — it is the canonical reference example showing exactly what a complete connector spec looks like.
9. Connector Spec Page Template¶
Copy this template to docs/connectors/{your-slug}.md and fill it in.
# {Display Name} Connector
**Slug:** `{slug}`
**Vendor:** {Vendor/Company Name}
**Category:** {one of: vulnerability | edr | dlp | siem | asset | benchmark | pii_discovery | access_audit}
**Auth type:** {api_key | oauth2 | basic | certificate}
**Status:** {draft | review | stable}
---
## Overview
{2-3 sentences: what does this tool do, what compliance data does it produce, what kind of MSP/security team uses it}
---
## Credential Schema
| Field | Required | Secret | Description |
|-------|----------|--------|-------------|
| `fieldName` | Yes/No | Yes/No | Where to find it, what it controls |
```json
{
"type": "object",
"required": [...],
"properties": { ... }
}
```
---
## Metrics
| Category | metric_key | metric_type | unit | Description |
|----------|-----------|-------------|------|-------------|
| `category` | `metric_key` | type | unit or — | What this measures |
---
## Example Output
```json
{
"connector_type_id": "{slug}",
"collected_at": "2026-03-22T12:00:00Z",
"tenant": {
"company_id": "msp-example",
"client_id": "client-acme"
},
"metrics": [
{
"category": "...",
"metric_key": "...",
"metric_value": "...",
"metric_type": "...",
"unit": "..."
}
],
"metadata": {
"api_calls_made": 4,
"collection_duration_ms": 1820,
"collection_method": "collectAll"
}
}
```
---
## Framework Mappings
| ACES Framework Key | Coverage | Notes |
|-------------------|----------|-------|
| `cis-v8` | Partial / Full / None | Which controls this connector provides evidence for |
---
## Notes
- Any known API limitations
- Hardcoded values and why
- Rate limits or pagination behaviour contributors should be aware of
- Links to vendor API documentation
10. Real Example — Huntress¶
See the complete reference implementation: Huntress Connector Spec
This page was written using real production data from the CSC MCP connector implementation and demonstrates every element the template requires.