QEMD-ENTRa Modules

Preview of three patent-pending modules. Join the early-access list to pilot in lab or field environments.

Dark Communicator (Coming soon)

Software-first secure channel with quantum-inspired key derivation and in-band fidelity scoring. Built for spectrum-constrained, GPS-denied, and contested environments.

  • Hardware optional: runs over existing radios, mesh, or wired links.
  • HKDF-based keying with tamper alerts and trust downgrades.
  • Fidelity telemetry to gate coordination boosts autonomously.
  • Edge-attestable logs with hash-chained ledger integration.
Join Early Access • Learn More

WrongUnraveling Detector (Coming soon)

Overconfidence and drift detection for autonomous systems and AI agents. Flags confidence-evidence gaps with adaptive thresholds and audit-grade telemetry.

  • Standalone library or edge service with JSONL telemetry.
  • Works on-device without model retraining or weight sharing.
  • One-click evidence packs for compliance and forensics.
  • SDK hooks for planners, simulators, CI pipelines, and LLM stacks.
Request Pilot • Learn More

Dark State Detector (Coming soon)

Quantum-inspired state sanity checker. Compares live state vectors against WKB-style references and updates a curvature tensor to expose deep misalignment.

  • Physics-plausibility score and anomaly magnitude per tick.
  • Damping hooks for planners and risk gates.
  • Pure software (optional hardware accel).
  • JSON / Protobuf API with edge-first logging.
Get on the List • Learn More

Specifications

Dark Communicator

  • Transport: any byte stream (UDP, TCP, serial, mesh radios); hardware optional.
  • Crypto: HKDF-derived session keys, per-packet nonces, authenticated framing (see the sketch below).
  • Fidelity scoring: In-band score per exchange; trust downgrades when anomalous.
  • Attestation: Hash-chained ledger writes with optional RFC3161 notarization.
  • Footprint: < 300 KB core library; runs in user space.

Outcome: Gate coordination boosts by real-time link fidelity instead of hope.
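
A minimal sketch, using the open-source cryptography package, of what HKDF-derived session keys, per-packet nonces, and authenticated framing can look like over any byte stream; the header layout, labels, and key-agreement stand-in are illustrative assumptions, not the module's shipped API.

# Illustrative sketch only: HKDF session key + per-packet nonce + authenticated (AEAD) framing
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

shared_secret = os.urandom(32)            # stand-in for whatever key agreement the link provides
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=b"qemd-entra-demo",           # illustrative labels
                   info=b"dark-communicator-session").derive(shared_secret)
aead = ChaCha20Poly1305(session_key)

def frame(seq: int, payload: bytes) -> bytes:
    nonce = seq.to_bytes(12, "big")       # per-packet nonce from a monotonic sequence number
    header = b"DC1" + nonce               # header is authenticated but not encrypted
    return header + aead.encrypt(nonce, payload, header)

def unframe(blob: bytes) -> bytes:
    header, ciphertext = blob[:15], blob[15:]
    return aead.decrypt(header[3:], ciphertext, header)  # raises InvalidTag on tampering

print(unframe(frame(1, b"coordination boost granted")))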

WrongUnraveling Detector

v1.0 • Model-agnostic SDK

What it does

  • Detects and steers away from overconfident failures in LLMs/VLMs.
  • Hybrid OCI: confidence-evidence gap + normalized semantic entropy (see the sketch after this list).
  • Superposition fusion: blend of policy and evidence.
  • VTI steering: nudges latents toward stability when risk spikes.
  • Auditability: JSONL telemetry + hash-chained evidence records.
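
As a rough illustration of the idea only, not the SDK's internal formula: a confidence-evidence gap can be blended with normalized semantic entropy into a single overconfidence score. The evidence-support mapping, the crude clustering of generations, and the weighting below are assumptions.

# Illustrative only: one plausible hybrid overconfidence score, not the SDK's exact formula
import numpy as np

def hybrid_oci(confidence, evidence, generations, alpha=0.5):
    # Confidence-evidence gap: how far stated confidence exceeds evidence support
    evidence_support = float(np.clip(np.mean(evidence), 0.0, 1.0))
    gap = max(0.0, confidence - evidence_support)

    # Normalized semantic entropy over crude sign-pattern clusters of sampled generations
    _, counts = np.unique(np.sign(generations).astype(int), axis=0, return_counts=True)
    p = counts / counts.sum()
    entropy = float(-(p * np.log(p + 1e-12)).sum() / np.log(len(generations)))

    return alpha * gap + (1 - alpha) * entropy        # assumed equal blend

print(hybrid_oci(0.90, np.random.rand(64), np.random.randn(8, 64)))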

Specifications

  • Latency: ~2 ms p95 (CPU) for 1k inputs; ~0.8 ms with GPU acceleration.
  • Inputs: Confidence / TRSD (or JSD), optional generations, policy distribution, evidence vector, fidelity/coherence.
  • Outputs: Hybrid OCI, severity tier, recommended action, optional fused distribution.
  • Telemetry: JSONL with timestamps, flags, thresholds, and hash chain.
  • Footprint: < 10 MB; CPU-first.
from wrong_unraveling_detector import WrongUnravelingDetector
import numpy as np

# Configure the detector: TRSD flag threshold and number of sampled generations
detector = WrongUnravelingDetector(trsd_threshold=0.30, semantic_n=8)

# Example inputs: stated confidence, drift score, policy distribution, evidence vector
confidence = 0.90
trsd = 0.20
p_classical = np.random.dirichlet(np.ones(64))
evidence = np.random.randn(64)
generations = np.random.randn(8, 64)  # optional: sampled generations for semantic entropy

result = detector.decide(confidence=confidence,
                         trsd_score=trsd,
                         p_classical=p_classical,
                         evidence=evidence,
                         generations=generations,
                         fidelity=0.95,
                         coherence=0.9,
                         entropy=0.1)

print(result["action"], result["scores"])

Dark State Detector

  • Method: WKB-style reference, curvature tensor updates, mismatch norm as anomaly.
  • Fallback: Classical mode for platforms without quantum libs; same API.
  • API: score(state_vec) → anomaly, physics_plausibility; emits JSONL telemetry (see the sketch after this list).
  • Use cases: Physics sanity for guidance, black-box monitor for ML policies, risk-gated planning.
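
A sketch of the score() contract under stated assumptions: here the reference vector is passed in explicitly and the curvature update is a simple decayed outer-product, whereas the module derives its reference from the WKB-style model. The plausibility mapping is illustrative.

# Sketch of the score() contract only; reference model and curvature update are placeholders
import json
import numpy as np

class DarkStateSketch:
    def __init__(self, dim, decay=0.9):
        self.curvature = np.zeros((dim, dim))   # running second moment of mismatches
        self.decay = decay

    def score(self, state_vec, reference_vec):
        mismatch = np.asarray(state_vec) - np.asarray(reference_vec)
        self.curvature = self.decay * self.curvature + np.outer(mismatch, mismatch)
        anomaly = float(np.linalg.norm(mismatch))          # mismatch norm as anomaly
        plausibility = float(np.exp(-anomaly))             # 1.0 = perfectly on-reference
        record = {"anomaly": anomaly, "physics_plausibility": plausibility}
        print(json.dumps(record))                          # JSONL-style telemetry line
        return record

d = DarkStateSketch(dim=4)
d.score([1.0, 0.2, 0.0, -0.1], [1.0, 0.0, 0.0, 0.0])
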
Get on the List

Integrations

OpenAI (GPT/ChatGPT)

Call detector.decide() post-logits, or batch_detect() for bulk.
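
A sketch of one way to wire Chat Completions token logprobs into detector.decide(); the model name, the logprob-to-confidence mapping, and the placeholder distributions are illustrative assumptions.

# Sketch: feed Chat Completions token logprobs into the detector
import numpy as np
from openai import OpenAI
from wrong_unraveling_detector import WrongUnravelingDetector

client = OpenAI()
detector = WrongUnravelingDetector(trsd_threshold=0.30, semantic_n=8)

resp = client.chat.completions.create(
    model="gpt-4o-mini",                                  # any logprob-capable model
    messages=[{"role": "user", "content": "Who wrote the Principia?"}],
    logprobs=True,
)
token_logprobs = [t.logprob for t in resp.choices[0].logprobs.content]
confidence = float(np.exp(np.mean(token_logprobs)))       # crude sequence-level confidence

result = detector.decide(confidence=confidence,
                         trsd_score=0.20,                 # supply your own drift metric
                         p_classical=np.ones(64) / 64,    # placeholder policy distribution
                         evidence=np.zeros(64),           # placeholder evidence vector
                         fidelity=0.95, coherence=0.9, entropy=0.1)
print(result["action"], result["scores"])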

Use cases: ChatGPT Enterprise add-on safety; internal tools with guardrails.

Buyer value: safety + auditability (hash-chained ledger) for regulated orgs.

Microsoft (Azure OpenAI / Copilot)

Insert as a Semantic Kernel step or as a link in a LangChain chain.
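
A sketch of the LangChain side, assuming the detector SDK is installed locally: wrap the check as a RunnableLambda and compose it after the LLM step. The input field names and fixed fidelity/coherence values are illustrative.

# Sketch: wrap the check as a post-processing step in a LangChain pipeline
import numpy as np
from langchain_core.runnables import RunnableLambda
from wrong_unraveling_detector import WrongUnravelingDetector

detector = WrongUnravelingDetector(trsd_threshold=0.30, semantic_n=8)

def guard(step_output: dict) -> dict:
    """Attach the detector verdict to whatever the previous chain step produced."""
    verdict = detector.decide(confidence=step_output["confidence"],
                              trsd_score=step_output["trsd"],
                              p_classical=step_output["p_classical"],
                              evidence=step_output["evidence"],
                              fidelity=0.95, coherence=0.9, entropy=0.1)
    return {**step_output, "verdict": verdict}

guard_step = RunnableLambda(guard)        # compose as: chain = llm_step | guard_step
out = guard_step.invoke({"confidence": 0.9, "trsd": 0.2,
                         "p_classical": np.ones(64) / 64,
                         "evidence": np.zeros(64)})
print(out["verdict"]["action"])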

Azure ML: batch-score outputs and block regressions via CI.

Buyer value: Responsible AI posture; easy AKS containerization.

Meta (Llama)

Transformers: plug in post-softmax using AutoModelForCausalLM.
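
A sketch of a post-softmax hook, assuming a Llama-family checkpoint is available locally; the model id, top-k truncation, and confidence mapping are illustrative assumptions.

# Sketch: post-softmax hook for a Llama-family causal LM
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from wrong_unraveling_detector import WrongUnravelingDetector

model_id = "meta-llama/Llama-3.2-1B"                      # any causal LM works here
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
detector = WrongUnravelingDetector(trsd_threshold=0.30, semantic_n=8)

inputs = tok("The capital of Australia is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits                       # [batch, seq, vocab]
probs = torch.softmax(logits[0, -1], dim=-1)              # next-token distribution
top = torch.topk(probs, 64)
p64 = (top.values / top.values.sum()).numpy()             # renormalized top-64 slice

result = detector.decide(confidence=float(top.values[0]), # top-token probability as confidence
                         trsd_score=0.20,                 # placeholder drift metric
                         p_classical=p64,
                         evidence=p64,                    # stand-in evidence vector
                         fidelity=0.95, coherence=0.9, entropy=0.1)
print(result["action"], result["scores"])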

Multi-agent dark comms with per-agent HKDF keys + fidelity scoring.

Buyer value: open-model flexibility with enterprise guardrails.

Frameworks & Deployment

LangChain • Haystack (RAG) • Semantic Kernel.

HTTP/gRPC microservice for language-agnostic stacks.
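
A minimal sketch of the HTTP wrapper, assuming FastAPI and the Python SDK; the route, request schema, and fixed fidelity/coherence values are illustrative, not the packaged service.

# Sketch: minimal HTTP wrapper so non-Python stacks can call the detector
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from wrong_unraveling_detector import WrongUnravelingDetector

app = FastAPI()
detector = WrongUnravelingDetector(trsd_threshold=0.30, semantic_n=8)

class DecideRequest(BaseModel):
    confidence: float
    trsd_score: float
    p_classical: list[float]
    evidence: list[float]

@app.post("/decide")
def decide(req: DecideRequest):
    result = detector.decide(confidence=req.confidence,
                             trsd_score=req.trsd_score,
                             p_classical=np.array(req.p_classical),
                             evidence=np.array(req.evidence),
                             fidelity=0.95, coherence=0.9, entropy=0.1)
    return {"action": result["action"], "scores": result["scores"]}

# Run with: uvicorn service:app --port 8080   (if saved as service.py)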

pip for app-embedded use; Docker for on-prem; serverless for bursty checks.

Exports JSONL telemetry to S3/Blob/GCS; SIEM-friendly.
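
A sketch of what a hash-chained JSONL telemetry record might look like before it is shipped to object storage; the field names are illustrative, not the shipped schema.

# Sketch: hash-chained JSONL telemetry records (illustrative field names)
import hashlib, json, time

def append_record(path, payload, prev_hash):
    record = {"ts": time.time(), "prev_hash": prev_hash, **payload}
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")                # one JSONL line per decision
    return record["hash"]                                 # becomes prev_hash for the next record

h = "0" * 64                                              # genesis hash
h = append_record("telemetry.jsonl", {"action": "flag", "oci": 0.72}, h)
h = append_record("telemetry.jsonl", {"action": "pass", "oci": 0.18}, h)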

Talk to Engineering

Commercial Readiness & Pricing

Performance

  • Latency: ~2 ms p95 (CPU) for 1k decisions; ~0.8 ms with GPU.
  • Hallucination reduction: Target 50–80% on TruthfulQA-style prompts (validation in progress).

Tiers

  • Indie: $99 one-time (local use, single project).
  • Pro: $199 one-time or $20/mo (priority updates, email support).
  • Enterprise: Custom (SLA, SSO, usage-based licensing, private support).
Request Pricing

Roadmap

  • PyPI package & CLI entrypoint
  • Public benchmarks (TruthfulQA/HaluEval)
  • Azure/AWS marketplace listings
  • GitHub demo notebook & docs site