
Drop agents. The more you drop, the smarter they get.

Each agent lives on a resource. Drop more, they discover each other and form a mesh. Ask one question — the investigation traces causality across your entire infrastructure.

For SRE teams and platform engineers tired of being the integration layer.

Founder

Jessie Hermosillo

Founder & CEO

Built TernaryPhysics from the conviction that AI agents should reason through uncertainty instead of guessing around it. Background in applied physics and distributed systems. Filed the foundational patent in 2026.

01

Your tools don't talk to each other.

The API is slow. K8s shows nothing. Postgres shows nothing. Dashboards say green across the board.

"error: no correlation"

Each tool sees one resource. Nobody sees the chain. You're the integration layer.

You open five tabs, SSH into three boxes, and spend 45 minutes finding what changed.

"time_to_root_cause: 45m"

The investigation is manual. The tools are passive. You do the reasoning.

Your on-call rotation wakes up at 3am to stare at the same dashboards you built to prevent this.

"alert: page │ action: look"

Alerts detect. They don't investigate. The human is still the runtime.

02

Drop. Mesh. Ask.

tp-ops drop
SSH to a resource. Agent auto-detects what it's running on.
agents mesh
They find each other via mDNS or K8s DNS. Cross-resource correlation is automatic.
tp-ops ask
Ask in plain English. The mesh traces causality through every resource.
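The drop/mesh/ask flow above can be pictured as a toy fan-out: each agent knows one resource, and once meshed, a single question reaches all of them. A minimal sketch with an in-process registry; every class and method name here is illustrative, not the tp-ops API:

```python
class Agent:
    """Toy agent: lives on one resource, answers questions about it."""
    def __init__(self, resource, findings):
        self.resource = resource
        self.findings = findings  # keyword -> observation

    def investigate(self, question):
        # Report any observation whose keyword appears in the question.
        return [f"{self.resource}: {obs}"
                for key, obs in self.findings.items() if key in question]

class Mesh:
    """Once agents discover each other, one question fans out to all."""
    def __init__(self):
        self.agents = []

    def drop(self, agent):
        self.agents.append(agent)

    def ask(self, question):
        results = []
        for agent in self.agents:
            results.extend(agent.investigate(question))
        return results

mesh = Mesh()
mesh.drop(Agent("k8s-agent", {"slow": "payment-api latency 3x baseline"}))
mesh.drop(Agent("postgres-agent", {"slow": "connection pool exhausted"}))
print(mesh.ask("why is the API slow?"))
```

The real discovery layer (mDNS or K8s DNS) replaces the explicit `drop` calls; the fan-out shape is the same.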
[Diagram: you ask one question; it fans out across k8s, gateway, VM, Postgres, and security agents]
Investigate

One question triggers agents across K8s, Postgres, VMs, and gateways. The mesh follows the thread.

Explain

Root cause chains with evidence. Not "something changed" — "deploy at 02:00 removed POOL_SIZE config."

Approve

Agents recommend specific commands. You say yes or no. They never act without you.
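The approve step is a hard gate, not a confirmation dialog that defaults to yes. A sketch of the idea; the function name and return strings are illustrative, not the tp-ops implementation:

```python
def apply_fix(command, answer):
    """Human-in-the-loop gate: the agent recommends, you decide.
    Anything other than an explicit 'yes' means nothing runs."""
    if answer.strip().lower() != "yes":
        return f"skipped: {command}"
    return f"executing: {command}"

print(apply_fix("restore POOL_SIZE=50 in payment-api config", "yes"))
```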

03

Resolved in 47 seconds.

$ tp-ops ask prod-cluster
prod-cluster > why is the API slow?
Investigating across mesh...
→ k8s-agent: payment-api response times 3x baseline since 02:03 UTC
→ postgres-agent: Connection pool exhausted (147/150 connections)
→ cicd-agent: Deploy at 02:00 UTC changed POOL_SIZE config
Root cause: Deploy removed pool size config, defaulting to 150.
Fix: Restore POOL_SIZE=50 in payment-api config.
Apply? [yes/no]
prod-cluster > yes
✓ Config updated. Rolling restart...
✓ Connections dropped to 48. Latency normal.
Resolved in 47 seconds. 3 agents contributed.
Processed: 0.8 GB │ Cost: $0.40
04

Four commands. That's the setup.

# 1. Install on your resource
$ ssh prod-cluster 'pip install tp-ops'

# 2. Login
$ tp-ops login --token tp_live_xxxxxxxxxxxx

# 3. Drop an agent (auto-detects resource type)
$ tp-ops drop

# 4. From anywhere, talk to it
$ tp-ops ask prod-cluster

Compute · Databases · Networking · Messaging · Storage · DevOps. 200+ agents. One CLI. Auto-detects on drop.

05

Six categories. Every resource covered.

Drop an agent onto any resource. It auto-detects its environment and specializes immediately. 200+ agents across six categories.

Compute · 20 agents
Kubernetes clusters, nodes, and namespaces. Linux and Windows VMs. Containers. Serverless (Lambda, Azure Functions, GCF). GPU instances. Bare metal. Spot instances. Batch compute.
$ tp-ops drop # auto-detects: k8s, vm, container, lambda...
Databases · 30 agents
PostgreSQL, MySQL, SQL Server, MongoDB, Redis, Elasticsearch, Cassandra, DynamoDB, ClickHouse, TimescaleDB, Snowflake, BigQuery, Supabase, PlanetScale, SQLite, and more.
$ tp-ops ask payments-db "suggest indexes I'm missing"
Networking · 25 agents
Load balancers (NGINX, HAProxy, ALB/NLB). API gateways (Kong, AWS, Azure). DNS. CDN (CloudFront, Front Door). Firewalls. Service mesh (Istio, Linkerd). VPN. BGP. gRPC. GraphQL.
$ tp-ops ask edge-gw "which APIs have the highest error rate?"
Messaging · 15 agents
Kafka, RabbitMQ, SQS, SNS, NATS, Kinesis, Redis Streams, Pulsar, EventBridge, Event Grid, Event Hubs, Celery, Sidekiq, BullMQ.
$ tp-ops ask orders-kafka "show me consumer lag by partition"
Storage · 15 agents
S3, Azure Blob, GCS, NFS/CIFS, MinIO, Ceph, PersistentVolumes, container registries, backup systems (Velero, Restic), LVM/ZFS, EBS, enterprise SAN.
$ tp-ops ask data-bucket "find unused objects costing us money"
DevOps · 100+ agents
CI/CD (GitHub Actions, Jenkins, ArgoCD, Flux). IaC (Terraform, CloudFormation). Observability (Datadog, Grafana, PagerDuty, Splunk). Cloud platforms (AWS, Azure, GCP). Security. Application.
$ tp-ops ask prod "find overprivileged service accounts"
Agents deployed │ Mesh capability
1 │ Expert on one resource.
5–10 │ Correlated insights across a service boundary.
20+ │ Full infrastructure mesh. Any question traces through everything.
200+ │ Complete coverage. Every resource type, every cloud, on-prem.
06

Two AI models. Both run on your hardware.

[Diagram: metrics, logs, configs, and events flow into the TNN (2,888 params · <1ms · always on); anomalies escalate to TernaryPhysics-7B (7B params · ~15 tok/s · no GPU), which reports root cause to you]
Tier 1

Ternary Neural Network

  • 2,888 parameters · <1KB
  • <1ms inference · integer math only
  • Weights: {-1, 0, +1}
  • Always on. Watches metrics, logs, connections.
  • Learns this resource's normal patterns.
  • Hot-swap weight updates. Zero downtime.
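Weights constrained to {-1, 0, +1} mean inference needs no multiplies at all: each weight either adds, subtracts, or skips its input, which is why integer-only, sub-millisecond inference is plausible at this size. A minimal sketch of a ternary matrix-vector product, purely illustrative and not the actual TNN:

```python
def ternary_matvec(weights, x):
    """Matrix-vector product where every weight is -1, 0, or +1.
    No multiplication: +1 adds the input, -1 subtracts it, 0 skips it."""
    out = []
    for row in weights:
        acc = 0  # integer accumulator
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi
            elif w == -1:
                acc -= xi
        out.append(acc)
    return out

W = [[1, 0, -1],
     [-1, 1, 0]]
x = [3, 5, 2]
print(ternary_matvec(W, x))  # [1, 2]
```

At 2,888 parameters the whole weight matrix fits in well under a kilobyte even at one byte per weight, consistent with the <1KB figure above.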
Tier 2

TernaryPhysics-7B

  • 7 billion parameters · 4-bit quantized
  • ~15 tok/s on commodity CPU
  • No GPU required
  • Powers conversation. Reasons about problems.
  • Reads logs, metrics, configs, events.
  • Builds root cause chains. Suggests commands.
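4-bit quantization stores each weight as a small signed integer plus a shared scale, cutting memory roughly 4x versus 16-bit floats. A toy symmetric quantizer showing the idea; the actual TernaryPhysics-7B packing and grouping scheme is not specified here:

```python
def quantize4(ws):
    """Symmetric 4-bit quantization: map floats onto the signed
    range -8..7 using one shared scale per tensor."""
    m = max(abs(w) for w in ws)
    scale = m / 7 if m else 1.0
    q = [max(-8, min(7, round(w / scale))) for w in ws]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from 4-bit codes."""
    return [qi * scale for qi in q]

codes, scale = quantize4([0.7, -0.35, 0.0])
print(codes)              # small signed integers in -8..7
print(dequantize(codes, scale))
```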
No cloud. No GPU. No internet required. Both models run locally on the resource. Your data never leaves your infrastructure. Only billing metadata exits.
07

Reads everything. Writes nothing without you.

Human-in-the-loop

Agents investigate autonomously. They never write, modify, delete, restart, or scale without your explicit yes. Every time. No exceptions.

Runs locally

Both AI models execute on your resource. Data stays on your infrastructure. No cloud dependency. No data in transit to third-party servers.

No credentials stored

Uses your existing kubeconfig, DB credentials, and cloud tokens. Never copies or caches them. No secret store to manage.

Audit trail

Every conversation logged. Every action tracked. Export to your SIEM. Full forensic chain from question to execution.
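A forensic chain from question to execution implies structured records a SIEM can ingest. A sketch of what one entry might carry; this schema is hypothetical, not the real tp-ops export format:

```python
import json

def audit_record(question, agents, command, approved):
    """Illustrative audit entry (hypothetical schema): the question
    asked, which agents contributed, what was recommended, and
    whether a human approved it."""
    return json.dumps({
        "question": question,
        "agents": agents,
        "recommended_command": command,
        "approved": approved,
    }, sort_keys=True)

print(audit_record("why is the API slow?",
                   ["k8s-agent", "postgres-agent", "cicd-agent"],
                   "restore POOL_SIZE=50 in payment-api config",
                   True))
```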

Works air-gapped and offline. No internet required after initial setup.
08

Pay per GB.

$0.50 / GB processed
First 1 GB free.
No subscription │ Talk to your agent when you need it.
No monthly fee │ Pay for what it processes. Nothing else.
Visible cost │ Running total in every conversation.
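The metering is simple arithmetic: billable gigabytes times $0.50. A sketch, assuming the free first gigabyte is an allowance applied before billing (how tp-ops actually tracks the allowance is not specified here):

```python
def session_cost(gb_processed, free_gb_remaining=0.0):
    """Metered pricing sketch: $0.50 per GB processed, with any
    remaining free allowance subtracted first. The allowance
    mechanics are an assumption; the rate is from the page."""
    billable = max(0.0, gb_processed - free_gb_remaining)
    return round(billable * 0.50, 2)

print(session_cost(0.8))       # 0.4 -- the figure in the transcript above
print(session_cost(0.8, 1.0))  # 0.0 -- fully covered by the free GB
```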
09

Start here.

Get your API token

pip install tp-ops

Open source · Human-in-the-loop · Air-gapped capable · Patent pending