GNAT
Cyber Threat Intel Made Simple
v1.9.0 • 159 Connectors • STIX 2.1 • Python 3.9+
The Problem
Every platform is an island
- Different APIs, auth schemes, data models
- No shared data contract between systems
- Custom code for every integration
- Fragile automation that breaks on API changes
The Solution
One Library. Every Platform.
- ✓ One unified interface (GNATClient)
- ✓ STIX 2.1 everywhere
- ✓ Ingest from any source
- ✓ Export to any destination
- ✓ Scheduled jobs, reports, investigations
Core Architecture
Analyst / Automation
↓
GNAT Core (Ingest, Export, AI, Reports, TAXII, Solr)
↓
STIX 2.1 ORM (Indicator, Actor, Vulnerability, Campaign, Malware)
↓
159 Connectors
↓
External Platforms
The Abstraction Advantage
- Portability: Switch platforms, pipeline stays the same
- Maintenance: API changes affect one connector, not every script
- Consistency: One interface works across 159 platforms
- Coherence: One scheduler, one log stream, one health endpoint
- Incremental: Each layer independently useful
- Testing: 5,100+ unit tests, 70% coverage minimum
159 Platform Connectors
Categories: Commercial TI • Dark Web/Cybercrime • Public Feeds • SIEM/Analytics • SOAR/DFIR • NDR/OT • Endpoint/XDR/MDR • Vulnerability Mgmt • Cloud/ASM • Identity/Email • Sandbox/BAS • AI/OSINT
Examples: ThreatQ • CrowdStrike • Recorded Future • VirusTotal • Splunk • Elastic • QRadar • Microsoft Sentinel • ... and 151 more
Unified contract: Every connector implements authenticate() • health_check() • get/list/upsert/delete_object() • to_stix() • from_stix()
STIX 2.1 ORM
Universal data contract — normalize every platform into the same object types
- Indicator — IOCs with STIX patterns: [ipv4-addr:value = '...']
- ThreatActor — Profiles, aliases, motivation, attribution
- Vulnerability — CVEs, CVSS scores, exploited flag, products
- AttackPattern — MITRE ATT&CK TTPs with tactic mapping
- Malware — Families, capabilities, kill-chain
- Relationship — Bidirectional links (indicates, uses, targets)
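The object types above can be sketched as plain dataclasses. This is an illustrative stand-in, not GNAT's actual ORM classes (which carry the full STIX 2.1 property set); only the ID convention and the indicator/relationship shapes are shown.

```python
from dataclasses import dataclass, field
import uuid

def _stix_id(stix_type: str) -> str:
    # STIX 2.1 identifiers are "<type>--<uuid>"
    return f"{stix_type}--{uuid.uuid4()}"

@dataclass
class Indicator:
    pattern: str                  # STIX pattern, e.g. "[ipv4-addr:value = '...']"
    confidence: int = 50
    id: str = field(default_factory=lambda: _stix_id("indicator"))

@dataclass
class Relationship:
    source_ref: str
    target_ref: str
    relationship_type: str        # "indicates", "uses", "targets", ...
    id: str = field(default_factory=lambda: _stix_id("relationship"))

ioc = Indicator(pattern="[ipv4-addr:value = '198.51.100.7']", confidence=80)
actor_id = _stix_id("threat-actor")
link = Relationship(source_ref=ioc.id, target_ref=actor_id,
                    relationship_type="indicates")
```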
Unified Interface Contract
Same method signature across all 159 connectors:
authenticate() — OAuth2, API key, Basic, mTLS, etc.
health_check() — Lightweight ping + status
get_object(stix_id) — Fetch one object
list_objects(stix_type, limit, offset) — Paginated results
upsert_object(stix_obj) — Create or update
delete_object(stix_id) — Soft or hard delete
to_stix(native_obj) — Platform → STIX
from_stix(stix_obj) — STIX → platform format
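The contract above can be expressed as an abstract base class. The signatures mirror the method list; the in-memory implementation is a toy stand-in for a real platform API, not GNAT code.

```python
from abc import ABC, abstractmethod

class BaseConnector(ABC):
    """The eight-method contract; names mirror the list above."""

    @abstractmethod
    def authenticate(self) -> bool: ...
    @abstractmethod
    def health_check(self) -> dict: ...
    @abstractmethod
    def get_object(self, stix_id: str) -> dict: ...
    @abstractmethod
    def list_objects(self, stix_type: str, limit: int = 100, offset: int = 0) -> list: ...
    @abstractmethod
    def upsert_object(self, stix_obj: dict) -> dict: ...
    @abstractmethod
    def delete_object(self, stix_id: str) -> bool: ...
    @abstractmethod
    def to_stix(self, native_obj: dict) -> dict: ...
    @abstractmethod
    def from_stix(self, stix_obj: dict) -> dict: ...

class InMemoryConnector(BaseConnector):
    """Toy backend standing in for a real platform API."""
    def __init__(self):
        self._store: dict = {}
    def authenticate(self): return True
    def health_check(self): return {"status": "ok"}
    def get_object(self, stix_id): return self._store[stix_id]
    def list_objects(self, stix_type, limit=100, offset=0):
        objs = [o for o in self._store.values() if o["type"] == stix_type]
        return objs[offset:offset + limit]
    def upsert_object(self, stix_obj):
        self._store[stix_obj["id"]] = stix_obj
        return stix_obj
    def delete_object(self, stix_id): return self._store.pop(stix_id, None) is not None
    def to_stix(self, native_obj): return native_obj    # already STIX-shaped here
    def from_stix(self, stix_obj): return stix_obj

conn = InMemoryConnector()
conn.upsert_object({"id": "indicator--1", "type": "indicator", "confidence": 80})
```

Because every connector satisfies the same ABC, pipeline code written against `BaseConnector` ports unchanged between platforms.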
Ingest Pipeline
SourceReader (15 types) → RecordMapper (13 types) → Workspace
- Readers: PlainText, CSV, JSON, JSONL, STIX Bundle, TAXII, SQL, MISP, Syslog, RSS, Email, OpenIOC, Splunk, Elastic, Kafka
- Mappers: FlatIOC, STIXPassthrough, MISP, CEF, SQLRow, CSV, RSSEntry, Email, OpenIOC, Splunk, Elastic, NVD CVE, Telemetry
- Processing: Deduplication, confidence scoring, x_target_sectors normalization, TTL tracking
- Output: STIX objects → Workspace (SQLite/Postgres/FlatFile)
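The reader → mapper → workspace flow can be sketched with hypothetical stand-ins (GNAT's real readers and mappers are configurable classes; only the data flow and the deduplication step are shown here):

```python
def csv_reader(lines):
    """SourceReader stand-in: yield raw records from 'value,kind' lines."""
    for line in lines:
        value, kind = line.strip().split(",")
        yield {"value": value, "kind": kind}

PATTERN_PATHS = {"ip": "ipv4-addr:value", "domain": "domain-name:value"}

def flat_ioc_mapper(record):
    """RecordMapper stand-in: raw record -> minimal STIX indicator dict."""
    return {
        "type": "indicator",
        "pattern": f"[{PATTERN_PATHS[record['kind']]} = '{record['value']}']",
        "confidence": 50,
    }

def ingest(lines, workspace):
    seen = set()
    for rec in csv_reader(lines):
        obj = flat_ioc_mapper(rec)
        if obj["pattern"] in seen:     # deduplication step
            continue
        seen.add(obj["pattern"])
        workspace.append(obj)
    return workspace

# Duplicate IP appears once in the workspace
ws = ingest(["1.2.3.4,ip", "evil.example,domain", "1.2.3.4,ip"], [])
```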
Export Pipeline
Workspace → ExportFilter → ExportTransform → Delivery
- Filters: TypeFilter (indicator|malware|campaign...), ConfidenceFilter (min/max %), TLPFilter (WHITE|GREEN|AMBER|RED), SectorFilter, IOCTypeFilter, LimitFilter
- Transforms: EDLTransform (IP/domain/hash lists), NetskopeCETransform (tenant format), STIXBundle, CSV
- Delivery: FileDelivery (disk), EDLServer (HTTP), PlatformDelivery (write back to connectors), MultiDelivery (parallel)
- Real example: ThreatQ → GNAT → Netskope CE (FQDN+URL+SHA256 every 15 min) → Firewall EDLs
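The filter → transform stage can be sketched as follows. These functions are simplified stand-ins for ConfidenceFilter and EDLTransform, which in GNAT are composable pipeline objects:

```python
def confidence_filter(objs, minimum=70):
    """ConfidenceFilter stand-in: keep only objects at or above `minimum`."""
    return [o for o in objs if o.get("confidence", 0) >= minimum]

def edl_transform(objs):
    """EDLTransform stand-in: one bare value per line, as firewall EDLs expect."""
    # "[ipv4-addr:value = '203.0.113.9']" -> "203.0.113.9"
    return "\n".join(o["pattern"].split("'")[1] for o in objs)

workspace = [
    {"pattern": "[ipv4-addr:value = '203.0.113.9']", "confidence": 90},
    {"pattern": "[ipv4-addr:value = '198.51.100.7']", "confidence": 40},
]
edl = edl_transform(confidence_filter(workspace))   # low-confidence IP dropped
```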
Scheduling & Job Orchestration
FeedScheduler — unified thread-safe scheduler for all job types
- Job Types: ExportJob (EDL/CE delivery), ReportJob (PDF/DOCX/HTML), CurationJob (library dedup), HealthCheckJob (connector monitoring), custom jobs
- Guarantees: Drift-corrected timing (hourly jobs stay hourly even if run takes 5min), overlap protection (skip or queue policy), on-success/on-failure hooks
- Adapters: APScheduler/Celery bridges for external schedulers; to_cron_lines() for crontab export
- API: FeedScheduler.summary() → single view of all running/queued/failed jobs
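The drift-correction guarantee comes down to how the next run time is computed. A minimal sketch (illustrative, not GNAT's implementation):

```python
def next_run(last_scheduled: float, interval: float, now: float) -> float:
    """Drift-corrected next-run time.

    Anchoring to last_scheduled rather than `now` means a job that takes
    5 minutes still fires on the hour; naive `now + interval` lets every
    slow run push the whole schedule later and later.
    """
    n = last_scheduled + interval
    while n <= now:        # if whole slots were missed, skip to the next one
        n += interval
    return n

# Hourly job scheduled at t=0; the run itself took 300 s (now = 300).
on_time = next_run(0, 3600, 300)    # still fires at t=3600, on the hour
```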
AI Agents
Multi-LLM threat research with human-in-the-loop confidence gating
- LLMClient — Unified facade for Claude, OpenAI, Grok (xAI), Gemini. Automatic provider fallback on errors/rate limits. Configured via [claude] [openai] [grok] [gemini] INI sections.
- ResearchAgent (SourceReader): Topic-driven research synthesis (e.g., "Lazarus Group infrastructure 2024"). Feed-driven monitoring of threat sources.
- ParsingAgent (RecordMapper): Extract structured STIX from unstructured text (reports, tweets, emails). All output capped at confidence ≤60, tagged x_source_type=ai_extracted.
- AI Trust Model: confidence_ceiling=60 (default, configurable) prevents AI intel from reaching EDLs without analyst review. Export pipelines default to ConfidenceFilter(min=70), excluding AI intel until promoted.
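The interplay of the confidence ceiling and the export filter can be sketched directly (function names here are illustrative; the ceiling and filter defaults come from the trust model above):

```python
AI_CONFIDENCE_CEILING = 60     # documented default; configurable

def gate_ai_output(stix_obj, model_confidence):
    """Clamp AI-extracted intel to the ceiling and tag its provenance."""
    stix_obj["confidence"] = min(model_confidence, AI_CONFIDENCE_CEILING)
    stix_obj["x_source_type"] = "ai_extracted"
    return stix_obj

def export_eligible(objs, minimum=70):
    """Default export gate (ConfidenceFilter min=70): capped AI intel cannot
    pass until an analyst reviews it and raises its confidence."""
    return [o for o in objs if o["confidence"] >= minimum]

# Even a very confident model output is capped below the export threshold
ai_obj = gate_ai_output({"type": "indicator"}, model_confidence=95)
```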
Natural Language Queries
Query threat intel using plain English. Two backends, same QuerySpec output
- BuiltinParser: Rule-based extraction — zero extra dependencies. Regex + keyword rules extract entities, IOC types, time ranges, platform filters.
- ClaudeParser: Claude API structured extraction. Returns same QuerySpec as builtin. Configured via [nlp] backend=claude.
- Usage: gnat nlq "Get all IPs associated with Lazarus in last 30 days" → dispatched to all configured connectors
- QuerySpec output: entities, ioc_types, since, until, platforms, limit
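A toy version of the rule-based backend shows how regex and keyword rules yield a QuerySpec-shaped result (the rules below are drastically simplified assumptions, not the BuiltinParser's actual rule set):

```python
import re

def builtin_parse(query: str) -> dict:
    """Toy BuiltinParser: regex + keyword rules -> QuerySpec-shaped dict."""
    spec = {"entities": [], "ioc_types": [], "since_days": None,
            "platforms": [], "limit": None}
    if re.search(r"\bIPs?\b", query):
        spec["ioc_types"].append("ipv4-addr")
    m = re.search(r"last (\d+) days", query, re.IGNORECASE)
    if m:
        spec["since_days"] = int(m.group(1))
    # naive entity rule: runs of capitalized words ("Lazarus Group")
    spec["entities"] = re.findall(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)+\b", query)
    return spec

spec = builtin_parse("Get all IPs associated with Lazarus Group in last 30 days")
```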
Research Library
Three-tier analyst knowledge base with TTL enforcement
- Personal Workspaces: Analyst-owned, active investigation. Isolated per analyst.
- Staging (_gnat_staging): lib.promote(ws, topic, researcher, note='...') — anyone can write, nothing auto-promotes. Manual review gate.
- Library (_gnat_library): Curated, read-only to analysts. CurationJob every 4h. Deduplication, TTL enforcement: indicators 24h, vulnerabilities 72h, campaigns 14d, actors 30d, other 7d.
- Workflow: Analyst checks lib.is_fresh("APT29") → ResearchAgent if stale → manual review → promotes to staging with note → CurationJob to library
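The per-type TTL check behind freshness can be sketched like this (TTL values are the ones from the curation policy above; the function shape is an assumption):

```python
import time

# TTLs from the curation policy, in seconds
TTL_SECONDS = {
    "indicator": 24 * 3600,
    "vulnerability": 72 * 3600,
    "campaign": 14 * 86400,
    "threat-actor": 30 * 86400,
}
DEFAULT_TTL = 7 * 86400        # "other"

def is_fresh(obj: dict, now: float = None) -> bool:
    """True while a library object is still within its type's TTL."""
    now = time.time() if now is None else now
    ttl = TTL_SECONDS.get(obj["type"], DEFAULT_TTL)
    return (now - obj["stored_at"]) < ttl

stale_ioc = {"type": "indicator", "stored_at": 0.0}   # stored 25h ago
fresh_actor = {"type": "threat-actor", "stored_at": 0.0}
```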
Automated Report Generation
Multi-audience, multi-format report production
- Daily Intel: 06:00 daily. PDF/HTML/Markdown. Audience: SOC analysts, shift handoff. Assisted AI mode (claude backend).
- Trends: Weekly. PDF/HTML. Audience: team leads, analysts. Assisted AI mode.
- Yearly Intel: Annual/manual. PDF/DOCX. Audience: management, compliance. Full AI mode (comprehensive synthesis).
- Pipeline: DataAggregator (no AI) → ReportSynthesizer (one LLM call per section) → Renderers (MD/HTML/PDF/DOCX) → Delivery (Email + SharePoint)
Data Lineage & Analyst Metrics
Append-only audit trail + ring-buffer aggregation
- Data Lineage (gnat.lineage): Append-only event log (INGESTED, ENRICHED, NORMALIZED, LINKED, EXPORTED, REPORTED, DELETED). UUID4 id, timestamp, object_id, actor, source, metadata (immutable). SQLAlchemy-backed lineage_events table.
- Analyst Metrics (gnat.metrics): Thread-safe ring-buffer. 9 metric types: INVESTIGATION_CREATED/CLOSED, ENRICHMENT_ATTEMPTED/SUCCESS, REPORT_PUBLISHED, GAP_DETECTED, FALSE_POSITIVE_FLAGGED, ANALYST_OVERRIDE, INTEL_PROMOTED.
- Queries: investigation_summary(days) → status breakdown + close rate. enrichment_effectiveness(platform, days) → hit rate + avg confidence. gap_frequency(days). false_positive_rate(days).
Search Sidecar
Solr 9.x full-text search across all 159 connectors simultaneously
- GNATIndexer: Add/update/delete Solr documents. Batch indexing with configurable batch_size. Field-mapped from STIX objects (type, value, confidence, x_target_sectors, tlp, source, timestamp).
- SearchMixin: Drop-in connector mixin. Auto-indexes on upsert_object() calls. Zero code change to existing connectors.
- Pipeline Patch: Routes ingest pipeline records through Solr indexer post-map.
- Library Patch: ResearchLibrary search-backed lookups. Cross-source correlation at query time.
- Config: [search] solr_url = http://localhost:8983/solr/gnat · enabled=true · batch_size=100
TAXII 2.1 Server
Full read + write protocol. Each workspace exposed as TAXII collection
- Discovery: GET /taxii2/ → server metadata
- Collections: GET /taxii2/roots/gnat/collections/ → list all workspace collections
- Objects: GET /taxii2/.../objects/ (paginated bundle) · POST /taxii2/.../objects/ (ingest with WRITE_TAXII permission) · DELETE /taxii2/.../objects/{id} (soft-delete by STIX ID)
- Manifest: GET /taxii2/.../manifest/ → object metadata · GET /taxii2/.../objects/{oid}/versions/ → version history
- CLI: gnat taxii --port 8090 --api-key s3cr3t · TLP:AMBER + RED collections allow write · 56 unit tests
STIX Pattern Validator
Two-tier validation: pure-Python recursive descent + optional ANTLR grammar
- Tier 1 (always): Pure-Python recursive descent. Zero dependencies. Validates syntax and basic semantics.
- Tier 2 (optional): stix2-patterns ANTLR grammar. Full OASIS STIX 2.1 patterning-language compliance. Requires gnat[stix-validate].
- API: from gnat.stix import validate_pattern, PatternValidationError · result = validate_pattern("[ipv4-addr:value = '1.2.3.4']")
- ORM opt-in: Indicator(pattern="[domain-name:value = 'evil.com']", validate=True)
- CLI: gnat validate pattern "[ipv4-addr:value = '1.2.3.4']" · gnat validate bundle indicators.json --strict
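A drastically simplified stand-in shows the API shape. The real Tier-1 validator is a recursive-descent parser over the full pattern grammar; the regex below accepts only a single comparison of the form [object-type:property OP 'value'] and exists purely to illustrate the validate/raise contract:

```python
import re

class PatternValidationError(ValueError):
    pass

# Single-comparison subset of the STIX pattern grammar (illustrative only)
_SIMPLE_PATTERN = re.compile(
    r"^\[\s*[a-z0-9-]+:[A-Za-z0-9_.-]+\s*"
    r"(?:!=|>=|<=|=|>|<|LIKE|MATCHES)\s*'[^']*'\s*\]$"
)

def validate_pattern(pattern: str) -> bool:
    if not _SIMPLE_PATTERN.match(pattern):
        raise PatternValidationError(f"invalid STIX pattern: {pattern!r}")
    return True

def is_valid(pattern: str) -> bool:
    """Boolean convenience wrapper around the raising validator."""
    try:
        return validate_pattern(pattern)
    except PatternValidationError:
        return False
```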
Health Monitoring
ConnectorHealthJob + schema drift detection
- ConnectorHealthJob (FeedJob subclass): Periodic health checks. Calls health_check() on all configured connectors. Samples list_objects(limit=1), stores field fingerprint.
- DriftReport: Alerts when changed fields exceed drift_threshold (default 20%). Slack webhook or email alerts on drift detection.
- Config [health_monitor]: enabled=true · interval_minutes=60 · alert_webhook=https://hooks.slack.com/... · drift_threshold=0.2
- CLI: gnat health check · gnat health baseline · 74 unit tests
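The fingerprint comparison behind drift detection can be sketched in a few lines (function names are assumptions; only the set-difference idea and the 20% default threshold come from the description above):

```python
def field_fingerprint(sample: dict) -> set:
    """Fingerprint = the set of field names a sampled object exposes."""
    return set(sample)

def drift_ratio(baseline: set, current: set) -> float:
    """Fraction of fields added or removed, relative to the baseline size."""
    return len(baseline ^ current) / max(len(baseline), 1)

baseline = field_fingerprint(
    {"id": "x", "type": "indicator", "pattern": "...", "confidence": 50})
current = field_fingerprint(
    {"id": "x", "type": "indicator", "pattern": "...", "score": 50})

DRIFT_THRESHOLD = 0.2          # documented default
# "confidence" vanished and "score" appeared: 2 of 4 fields changed -> 0.5
alert = drift_ratio(baseline, current) > DRIFT_THRESHOLD
```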
Capability Reflection
Runtime introspection of connector capabilities with guarded dispatch
- capabilities(): Returns [{name, signature, doc, type: 'auth|read|write|helper', platform_specific: bool}, ...]
- Guarded dispatch: client.call("list_objects", stix_type="indicator") — safe, reads only. client.call("upsert", obj, allow_write=True) — requires explicit allow_write flag.
- Use cases: Dynamic UI generation (which connectors support write?). Automated testing (are all methods implemented?). Validation (does this method exist?)
- CLI: gnat client capabilities --platform threatq · gnat client call --platform splunk --method list_objects · 31 unit tests
Terminal UI
Works over SSH — no browser, no display server required. Modern TUI built on Textual
- F1 Query: NLP search bar + scrollable STIX results table. Execute gnat nlq "..." in-place.
- F2 Library: Research Library browser. Promote (Ctrl+P) / reject (Ctrl+X) staging items. TTL visibility.
- F3 Scheduler: Live job status (running/queued/failed). Manual trigger with Ctrl+T. Next-run countdown.
- F4 Reports: PDF/HTML/DOCX browser. Open in browser (Ctrl+O). Export to disk (Ctrl+E).
- F5 Investigations: Investigation browser. Status transitions. Ctrl+N new, Ctrl+R refresh.
- F6 Review: AI Intel Review Queue. Approve/Reject/Modify with confidence override. Ctrl+A bulk approve.
- Launch: gnat tui · gnat tui review (start on review screen) · pip install "gnat[tui]" · 38 unit tests
Web Dashboard
Browser-based dashboard for server deployments. X-Api-Key auth, rate-limited 100 req/min
- REST API endpoints: GET/POST /api/library (research, promote, reject) · GET /api/reports (list, serve HTML inline) · GET/POST /api/scheduler/jobs (status, trigger) · GET /health (unauthenticated liveness)
- Single-page dashboard: GET / → dark theme, no build step. Live job monitoring, report browser, library search.
- Authentication: X-Api-Key header. Per-key role (VIEWER/ANALYST/OPERATOR/ADMIN) + permission enforcement.
- Launch: gnat serve --port 8088 --api-key $(openssl rand -hex 16) · Bind 127.0.0.1 by default · pip install "gnat[serve]" · 54 unit tests
XSOAR Content Pack Generator
Auto-generate valid XSOAR 6 content packs from connector introspection
- What it generates: pack_metadata.json · Integrations//.yml (command defs) · Integrations//.py (Python bridge) · ReleaseNotes/.md
- Intelligence: Write methods flagged dangerous=true · Auth type auto-detected from constructor · Full integration signature extraction
- Upstream Contribution Pipeline: Packages connector as draft GitHub PR through 7-step compliance gate: 1) opt-in guard 2) in CLIENT_REGISTRY 3) ComplianceMatrix (8 methods + tests) 4) test suite pass 5) branch:contribute/{platform}-{ts} 6) commit+push to fork 7) draft PR via GitHub API
- CLI: gnat codegen xsoar --connector threatq --output ./packs/ · gnat codegen openapi --spec api.yaml --ai (Claude-powered) · gnat codegen tests --connector crowdstrike
- Tests: 40 xsoar tests · 47 contribution pipeline tests
Docker Containerization
Production stack via docker-compose.yml with optional profile-based services
- Core services: gnat-scheduler (FeedScheduler, all ingest/export/AI jobs) · gnat-edl :8080 (EDL server, survives scheduler restart) · gnat-monitor :8090 (health endpoint, GET /status)
- Optional profiles: --profile search → solr :8983 (Solr search sidecar for full-text) · --profile monitoring → grafana :3000 (Solr + GNAT metrics dashboards)
- Volumes: Named volume gnat-workspace (persistent data) · .env.example → .env (credentials)
- Commands: make docker-build · make docker-up · make docker-search · make docker-full · make docker-down
- Integration tests: Elasticsearch 8.13.4 + Solr 9.6 (non-conflicting ports 19200, 18983) · make test-docker (up → run → down)
Multi-Tenant Workspace Isolation
Transparent namespace prefixing — no schema migration — works with SQLite and FlatFile
- API: manager = WorkspaceManager.default() · acme = manager.for_tenant("acme") · ws = acme.create("apt28-investigation") → stored as "acme::apt28-investigation" · beta = manager.for_tenant("beta") · ws2 = beta.create("apt28-investigation") → "beta::apt28-investigation" (no collision)
- TenantRegistry: Existing workspaces → "default" tenant · Per-tenant config_path in INI · list() scoped to tenant namespace
- Persistence: Works with SQLite, PostgreSQL, FlatFile backends. Config option GNAT_DB_URL → [database] INI → default.
- CLI: gnat tenant list · gnat tenant create acme --display-name "Acme Corp" --config /etc/gnat/acme.ini · gnat tenant workspaces acme · gnat tenant delete acme --yes
- Tests: 63 unit tests
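The namespace-prefixing idea is simple enough to sketch in full. This is a toy version of the API above, backed by a dict instead of SQLite/FlatFile:

```python
class WorkspaceManager:
    """Toy tenant-scoped workspace creation via transparent name prefixing."""
    def __init__(self, store=None, tenant="default"):
        self._store = {} if store is None else store
        self._tenant = tenant

    def for_tenant(self, tenant: str) -> "WorkspaceManager":
        # Same backing store, different namespace prefix
        return WorkspaceManager(self._store, tenant)

    def create(self, name: str) -> str:
        key = f"{self._tenant}::{name}"    # prefix only, no schema change
        self._store[key] = []
        return key

    def list(self):
        prefix = f"{self._tenant}::"
        return sorted(k for k in self._store if k.startswith(prefix))

manager = WorkspaceManager()
acme = manager.for_tenant("acme")
beta = manager.for_tenant("beta")
ws_a = acme.create("apt28-investigation")
ws_b = beta.create("apt28-investigation")   # same name, no collision
```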
Database Migrations
Alembic 1.13 schema management. No runtime dependency: Alembic installs as an optional extra and is needed only when running migrations
- Migration files: 0001_init_all_tables (investigations, reports, workspaces, workspace_objects, enrichment_log, context_globals) · 0002_add_lineage_events (lineage_events table + composite index) · 0003_add_metrics_events (metrics_events table + index)
- CLI: gnat-db upgrade head (apply all) · gnat-db current (show revision) · gnat-db history (show migration history) · gnat-db downgrade -1 (rollback)
- Configuration: GNAT_DB_URL env var → [database] INI section → default SQLite. get_combined_metadata() for Alembic auto-detect.
- Installation: pip install "gnat[migrations]"
Plugin System
Extensible architecture with entry_points discovery, filesystem scanning, HookBus pub/sub
- GNATPlugin ABC: name, version, capabilities (list[PluginCapability]), register(registry) method
- PluginCapability enum: CONNECTOR, READER, MAPPER, AGENT, REPORTER, HOOK
- HookBus: Thread-safe pub/sub. 14 built-in KNOWN_EVENTS (on_object_ingested, on_enrichment_complete, on_report_published, etc.). Async handlers. Exceptions never propagate.
- PluginRegistry: load/unload/get/list · entry_points discovery · GNAT_PLUGIN_DIRS env var · [plugins] dirs = /opt/gnat/plugins INI section
- Use case: Custom connector? Write GNATPlugin + register(). Custom reporter? Implement Reporter capability. Custom data sink? Hook into on_object_ingested.
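The HookBus contract — thread-safe pub/sub where handler exceptions never propagate — can be sketched minimally (this toy omits the async handlers and KNOWN_EVENTS registry):

```python
import threading

class HookBus:
    """Minimal thread-safe pub/sub; handler exceptions never propagate."""
    def __init__(self):
        self._lock = threading.Lock()
        self._handlers = {}

    def subscribe(self, event: str, handler) -> None:
        with self._lock:
            self._handlers.setdefault(event, []).append(handler)

    def emit(self, event: str, **payload) -> None:
        with self._lock:
            handlers = list(self._handlers.get(event, []))
        for handler in handlers:
            try:
                handler(**payload)
            except Exception:
                pass       # a broken plugin must never break the pipeline

bus = HookBus()
seen = []
bus.subscribe("on_object_ingested", lambda obj: seen.append(obj["id"]))
bus.subscribe("on_object_ingested", lambda obj: 1 / 0)   # faulty handler
bus.emit("on_object_ingested", obj={"id": "indicator--42"})
```

The faulty handler's ZeroDivisionError is swallowed; the well-behaved handler still runs.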
Policy Engine (RBAC)
Role-based access control orthogonal to TLP labels. Static permission matrix. FastAPI-native
- Roles: VIEWER (READ_OBJECTS, READ_REPORTS, QUERY_NLP) · ANALYST (+ WRITE_OBJECTS, ENRICH, PROMOTE_INTEL) · OPERATOR (+ MANAGE_SCHEDULES, WRITE_TAXII, EXPORT) · ADMIN (+ MANAGE_KEYS, MANAGE_TENANTS, all perms)
- STIX Object Validator: Validates required properties, vocabularies, confidence, refs, ID format. 19 SDO types (indicator, malware, campaign...) · 16 SCO types (ipv4-addr, domain-name, file...) · 2 SRO types (relationship, sighting) · 4 meta types (bundle, marking-definition...).
- Audit Middleware: Every API request emits api_request HookBus event + structured log (actor, endpoint, timestamp, result).
- FastAPI integration: engine.require(Permission.WRITE_TAXII, key_store) as Depends() guard
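The cumulative role matrix is naturally expressed as set unions. A sketch with a subset of the permissions (the enum members and role layering follow the list above; the helper name is an assumption):

```python
from enum import Enum, auto

class Permission(Enum):
    READ_OBJECTS = auto()
    WRITE_OBJECTS = auto()
    WRITE_TAXII = auto()
    MANAGE_KEYS = auto()

# Each role is its predecessor's permission set plus new grants
VIEWER = {Permission.READ_OBJECTS}
ANALYST = VIEWER | {Permission.WRITE_OBJECTS}
OPERATOR = ANALYST | {Permission.WRITE_TAXII}
ADMIN = OPERATOR | {Permission.MANAGE_KEYS}

ROLES = {"viewer": VIEWER, "analyst": ANALYST,
         "operator": OPERATOR, "admin": ADMIN}

def has_permission(role: str, perm: Permission) -> bool:
    """Static matrix lookup; a Depends() guard would raise 403 on False."""
    return perm in ROLES[role]
```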
Workflow DAG Engine
Sequential DAG executor with success/failure routing, cycle detection, elapsed timing
- WorkflowStep: name, fn (callable), on_success, on_failure. Optional routing based on step result.
- WorkflowContext: investigation, enriched_objects, gaps, draft, errors, metadata. Shared state across all steps.
- WorkflowResult: success, steps_run, errors, elapsed_ms. Full execution trace.
- Built-in step factories: enrich_step, correlate_step, gap_detect_step, draft_report_step, transition_step, fn_step
- Pre-built workflows: PhishingTriage (enrich → correlate → gap_detect → draft_report → transition(IN_PROGRESS)) · IncidentResponse (same but transition(REVIEW))
- Usage: from gnat.agents.workflows.phishing_triage import build_phishing_triage_workflow · result = build_phishing_triage_workflow(...).execute(ctx)
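The success/failure routing and cycle guard can be sketched with a minimal executor (dict-based steps stand in for WorkflowStep; the step-budget cycle guard is an illustrative simplification of real cycle detection):

```python
def run_workflow(steps, start, ctx, max_steps=50):
    """Sequential executor sketch: each step's callable returns True/False,
    routing to on_success / on_failure; a step budget guards against cycles."""
    by_name = {s["name"]: s for s in steps}
    current, trace = start, []
    while current is not None:
        if len(trace) >= max_steps:
            raise RuntimeError("step budget exhausted; cycle suspected")
        step = by_name[current]
        trace.append(current)
        ok = step["fn"](ctx)
        current = step["on_success"] if ok else step.get("on_failure")
    return trace

# A PhishingTriage-shaped toy: enrich -> correlate -> draft_report
steps = [
    {"name": "enrich", "fn": lambda c: True, "on_success": "correlate"},
    {"name": "correlate", "fn": lambda c: bool(c["enriched_objects"]),
     "on_success": "draft_report", "on_failure": None},
    {"name": "draft_report", "fn": lambda c: True, "on_success": None},
]
trace = run_workflow(steps, "enrich", {"enriched_objects": ["indicator--1"]})
```

With an empty `enriched_objects` context, correlation fails and the workflow stops before drafting a report.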
AI Intel Review Queue
Human-in-the-loop gate: PENDING → APPROVED → PROMOTED (or REJECTED / MODIFIED)
- ReviewService methods: submit(stix_id, stix_type, stix_data) → PENDING · approve(id, reviewer, notes, confidence) → APPROVED + optional confidence override · reject(id, reviewer, reason) → REJECTED · modify(id, reviewer, modified_properties) → capture analyst overrides · promote(id, reviewer, workspace_manager) → merge overrides + x_source_type=analyst_verified · bulk_approve/reject(ids) · stats() → per-status breakdown
- REST API: GET /api/review (list) · POST /api/review (submit) · GET /api/review/{id} (detail) · POST /api/review/{id}/approve · POST /api/review/{id}/reject · POST /api/review/{id}/modify · POST /api/review/{id}/promote · GET /api/review/stats
- CLI: gnat review list --status pending --type indicator · gnat review approve --by alice --confidence 85 · gnat review stats
- TUI: F6 ReviewScreen. Approve/Reject/Modify with confidence input. Ctrl+A bulk approve. Per-status counts.
Control, Reasoning & Safety
ExecutionContext, domain boundaries, hypothesis engine, agent governor, HITL
- ExecutionContext: Every operation stamped with context_id, domain, trust_level, workspace_id. Append-only execution_log. Full audit trail from connector call to report.
- Domain Boundaries: Ingestion ↔ Analysis ↔ Investigation ↔ Reporting enforced. @domain_boundary decorator raises DomainBoundaryViolation on illegal cross-domain calls.
- HypothesisEngine + NegativeEvidence: propose → evaluate → close lifecycle. Confidence scoring weighted by connector TRUST_LEVEL (0.9/0.6/0.3). NegativeEvidenceRecord suppresses redundant re-queries within TTL.
- Connector Trust Model: 31 trusted_internal (0.9) · 61 semi_trusted (0.6) · 7 untrusted_external (0.3)
- AgentGovernor + HITL: All agent actions routed through permission matrix ([agent_policy] INI). High/critical actions create PENDING ReviewItem. Critical triggers XSOAR playbook.
GNATHunt — Detection Rules & Hunting
Convert STIX to detection rules. Hunt packages. ATT&CK coverage mapping. Drift detection.
- STIX → Detection Rules: Malware/Campaign/Tool indicators → Sigma + YARA + Suricata + Snort rules. Confidence-weighted rule weight. Automated rule publication to EDL + SOC platform integrations.
- Hunt Packages: Bundled rules with metadata (adversary, campaign, techniques, severity). Importable into Splunk, Elastic, Chronicle, QRadar. Version control + diff tracking.
- ATT&CK Coverage Mapping: Every rule tagged with MITRE ATT&CK technique/sub-technique. Coverage heatmap by tactic. Gaps identified and prioritized.
- Drift Detection: Scheduled hunt job compares rule effectiveness (hit count, FP rate) against baseline. Drift >10% triggers analyst review. Rules auto-archived if no hits in 90 days.
- Integration: GNATHunt hooks into campaign tracking — when Campaign confirmed, emit detection rules + hunt package. Daily hunts run via FeedScheduler.
Security & CI
GitHub Actions, Ruff, mypy, pytest, Dependabot, Snyk, secret scanning
- GitHub Actions CI: pylint on every push. Python 3.9/3.10/3.11/3.12 matrix. Fails build on any lint error. Badge on README signals status.
- Ruff + mypy: E, F, W, I, UP, B, C4, SIM rules. Line length 100, Python 3.9 target. warn_return_any, strict configs. make check = lint + typecheck gate.
- Testing: 5,100+ unit tests across 40+ files. 70% minimum coverage enforced (fail_under=70 in pyproject.toml). Docker integration harness (Elasticsearch + Solr).
- Dependabot: Automated dependency update PRs. One PR per dependency for audit trail. Pinned versions reviewed before merge. Python + GitHub Actions deps.
- Security scanning: Snyk: dependency vulnerability scanning + code security analysis. GitHub secret scanning: blocks known patterns before history. AI confidence ceiling 60, draft PRs only, localhost binding by default, call() write guard, no credentials in source, HMAC constant-time validation.
Implementation Sequence
6-phase rollout timeline
- Phase 1 (Days 1-2): Install + configure config.ini · Test connectivity: gnat ping · First ingest dry-run
- Phase 2 (Days 3-5): All connector configs · Primary ingest jobs (TQ, RF, CS) · FeedScheduler running
- Phase 3 (Days 6-8): Netskope CE ExportJob (15 min) · EDL files hourly · Verify TQ → CE → firewall EDLs
- Phase 4 (Days 9-10): Configure [claude] in config.ini · First ResearchAgent query · CurationJob to scheduler
- Phase 5 (Days 11-14): Daily report — review PDF/HTML · Configure email delivery · ReportJob at 06:00 daily
- Phase 6 (Ongoing): Analyst config templates · Share EXAMPLES.md · Library review process
Deployment Architecture
Single Azure B2s VM (~$50/month baseline) with scale-out path
- Baseline deployment (single VM): gnat-scheduler.service (FeedScheduler: ingest, export, AI research, curation, reports) · gnat-edl.service :8080 (EDLServer: independent, survives scheduler restart) · gnat-health.service :8090 (Health endpoint: GET /status → JSON, Azure Monitor/Grafana ping)
- Storage: ~/.gnat/config.ini · workspaces/ (SQLite or Postgres) · /var/reports/ (PDF/DOCX output)
- Scale-out patterns: 100+ feeds → offload AI jobs to Azure Container Instances. EDL SLA <5 min → dedicate a B1s VM to gnat-edl. 10+ analysts → swap FlatFileStore for PostgreSQL (one config change). Multi-tenant → one workspace namespace per tenant, shared codebase.
- Monitoring: gnat health check for baseline · Prometheus + Grafana for scale · make docker-full --profile monitoring for local testing
All Roadmap Items Complete
✓ Every pending item has shipped — v1.9.0 complete
- ✅ 159 connectors · STIX 2.1 ORM · Ingest/export pipelines · Kafka telemetry ingestion
- ✅ GNATHunt: STIX → Sigma/YARA/Suricata/Snort · Hunt packages · ATT&CK coverage
- ✅ Attribution: Campaign tracking · Diamond Model · Kill-chain · Infrastructure classification
- ✅ AI agents (Claude/OpenAI/Grok/Gemini) · TAXII 2.1 · Policy RBAC · 5,100+ tests
- ✅ Terminal UI (Textual) · Web Dashboard (FastAPI) · Docker (core + search + monitoring)
- ✅ Migrations (Alembic) · Plugin system (HookBus) · Workflows (DAG engine) · Multi-tenant
- ✅ Health monitoring · Capability reflection · XSOAR codegen · Upstream contributions
- ✅ Data lineage · Analyst metrics · Review queue · Phase 4 safety (ExecutionContext, domain boundaries, hypothesis engine, agent governor, HITL)
GNAT
One library. Every platform. Total control.
Version 1.9.0 • Python 3.9+ • Apache 2.0
159 connectors • STIX 2.1 • AI-assisted • Threat intel made simple
GitHub: wrhalpin/GNAT · Docs: wrhalpin.github.io/GNAT · Read: EXAMPLES.md, IMPLEMENTATION_PLAN.md, README.md