
Architecture

Two-tier AI architecture: continuous anomaly detection + conversational reasoning.

Two-Tier AI

Each agent runs two AI models directly on the resource. Together, they provide both continuous monitoring and deep investigative capability — without sending any data off-resource.

TNN™ and TNN Mesh™ are Patent Pending technologies of TernaryPhysics LLC.

Tier 1: TNN™

Anomaly Detection

  • Ultra-compact neural network
  • Minimal memory footprint
  • Sub-millisecond inference
  • Runs continuously
  • Learns YOUR resource's patterns

Tier 2: TernaryPhysics-7B

Conversational Reasoning

  • 7 billion parameters
  • 4-bit quantized (Q4_K_M)
  • ~15 tokens/second on CPU
  • No GPU required
  • Apache 2.0 licensed

How They Work Together

1. Normal Operation

The TNN watches continuously, comparing current metrics to learned baselines. It uses minimal CPU and detects anomalies in under 1ms. During normal operation, TernaryPhysics-7B sleeps.
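The always-on check can be sketched as a comparison against learned per-metric statistics. This is illustrative only — the actual TNN architecture is proprietary, and the metric values and threshold below are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Baseline:
    """Learned per-resource statistics for one metric."""
    mean: float
    std: float

def is_anomalous(value: float, baseline: Baseline, threshold: float = 4.0) -> bool:
    """Flag a metric sample that deviates too far from the learned baseline.

    A z-score stands in here for the TNN's actual inference step.
    """
    if baseline.std == 0:
        return value != baseline.mean
    z = abs(value - baseline.mean) / baseline.std
    return z > threshold

# Example: CPU usage baseline of 40% +/- 5%.
cpu = Baseline(mean=40.0, std=5.0)
print(is_anomalous(42.0, cpu))  # False — within normal variation
print(is_anomalous(75.0, cpu))  # True — wakes Tier 2
```

The point of the check is that "anomalous" is defined relative to this resource's own baseline, not a generic threshold.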

2. Anomaly Detected

When the TNN detects deviation from baseline, it wakes TernaryPhysics-7B. The LLM analyzes logs, metrics, and system state to determine root cause. It reports findings and recommends actions.
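The wake-on-anomaly flow can be outlined as a loop in which Tier 1 runs continuously and Tier 2 is invoked only on deviation. All function names here are hypothetical stand-ins, not the agent's real interfaces:

```python
def monitor_loop(sample_metric, tnn_check, wake_llm):
    """Tier 1 screens every sample; Tier 2 investigates only anomalies.

    `sample_metric`, `tnn_check`, and `wake_llm` are placeholders for
    the agent's real components.
    """
    for value in sample_metric():      # continuous stream of metrics
        if tnn_check(value):           # sub-millisecond Tier-1 check
            report = wake_llm(value)   # Tier-2 LLM analyzes and reports
            yield report

# Demo with stubs: only the out-of-range sample triggers the LLM.
stream = lambda: iter([40.0, 41.0, 75.0, 39.0])
check = lambda v: v > 60.0
investigate = lambda v: f"anomaly at value {v}: analyzing logs and metrics"
print(list(monitor_loop(stream, check, investigate)))  # one report, for 75.0
```

The asymmetry is the design point: the cheap model runs on every sample, and the expensive model's cost is paid only when there is something to investigate.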

3. Human Asks Question

When you run tp-ops ask, TernaryPhysics-7B activates to answer. It reads live data, queries other agents in the mesh, and provides contextual answers based on this specific resource's history.

TNN™: The Sentinel

The TNN™ (Ternary Neural Network) is our proprietary anomaly detection system. It uses a novel architecture optimized for continuous monitoring with minimal resource overhead.

Key Benefits

  • Extremely efficient: Proprietary architecture enables ultra-fast inference with minimal compute.
  • Runs anywhere: No GPU required. Works on virtually any hardware.
  • Always-on monitoring: Continuous anomaly detection without impacting your workloads.

The TNN™ continuously learns what's "normal" for your specific resource. It adapts to your traffic patterns, your deployment schedule, your peak hours. This per-resource baseline is what makes anomaly detection accurate — it's not comparing to generic thresholds, but to YOUR resource's actual behavior.
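Per-resource baseline learning can be illustrated with an exponentially weighted moving average — a common adaptive-baseline technique, used here only as a sketch; the TNN's actual learning method is proprietary:

```python
class RunningBaseline:
    """Tracks a baseline that adapts to one resource's own traffic,
    using an exponentially weighted mean and variance."""

    def __init__(self, alpha: float = 0.05):
        self.alpha = alpha   # smaller alpha = slower-adapting baseline
        self.mean = None
        self.var = 0.0

    def update(self, value: float) -> None:
        if self.mean is None:
            self.mean = value   # first sample seeds the baseline
            return
        delta = value - self.mean
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)

b = RunningBaseline()
for v in [40, 42, 39, 41, 40, 43]:   # steady traffic around 40
    b.update(v)
print(round(b.mean, 1))  # ~40.2 — the baseline tracks this resource's level
```

Because each resource maintains its own running statistics, a value that is normal for one host can correctly be flagged as anomalous on another.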


TernaryPhysics-7B: The Brain

TernaryPhysics-7B is a quantized large language model optimized for infrastructure investigation. It powers the conversational interface — when you ask questions, this is what answers.

Model Size: 7 billion parameters (quantized)
Disk Space: ~4-5 GB
Inference Speed: Real-time conversational (CPU)
GPU Required: No

The model runs entirely on CPU. On modern hardware, you'll get real-time conversational responses. Older hardware still works, just with slightly longer response times.
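At the ~15 tokens/second cited above, response latency scales directly with answer length. Simple arithmetic, not a benchmark of any specific machine:

```python
def response_time_seconds(answer_tokens: int, tokens_per_second: float = 15.0) -> float:
    """Estimate wall-clock time to generate an answer on CPU."""
    return answer_tokens / tokens_per_second

# A ~150-token answer at 15 tok/s takes about 10 seconds.
print(response_time_seconds(150))  # 10.0
```

On slower hardware, the same formula applies with a lower tokens-per-second figure.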

Local Execution

Both AI models run directly on the resource. Your infrastructure data is processed locally and never transmitted externally.

What stays local:

  • All logs, metrics, and traces
  • Query results and investigation data
  • AI model inference
  • Credentials and secrets

What's sent externally:

Only billing metadata (GB processed) for usage tracking. No actual data content.
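The billing boundary can be sketched as follows — only a usage count ever leaves the resource. The field names are illustrative, not the agent's actual billing schema:

```python
import json

def billing_record(bytes_processed: int, resource_id: str) -> str:
    """Build the only payload sent externally: usage metadata.

    No log lines, metrics, traces, or secrets appear here — just a
    byte count converted to GB. Field names are hypothetical.
    """
    gb = bytes_processed / 1e9
    return json.dumps({"resource": resource_id, "gb_processed": round(gb, 3)})

print(billing_record(2_500_000_000, "web-01"))
```

Everything else — inference, query results, credentials — stays on the resource, as described above.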