SuperBot Architecture — How We Trained an AI to Coach Like a 25-Year Veteran

Justin Harris
2 min read
AI
Architecture
TroponinIQ
SuperBot

The technical architecture behind TroponinIQ: how we encode decades of coaching expertise into a system that actually thinks like Justin Harris.

Domain Replication

The most significant scaling property: the entire methodology is domain-agnostic. The extraction pipeline, test framework, and optimization cycle work identically for any domain where a human expert exists. A physical therapist, an immigration lawyer, a master sommelier — anyone with deep tacit knowledge that can be extracted through structured conversation. The investment is in the expert’s time, not in compute.

The knowledge base grew from ~100 lines to 3,278 lines across three optimization sessions in two days. At this rate, a single domain expert can encode decades of experience into a deployable AI system within weeks, not months.

---

Why Not Fine-Tuning?

Fine-tuning a model on coaching data would produce a system that is:

* Opaque — You cannot inspect which specific rules the model learned, or trace a bad output to a specific training example.

* Irreversible — You cannot selectively remove a bad heuristic without retraining.

* Expensive — Each iteration requires a full training run ($1,000–$50,000 depending on model size).

* Slow — Training cycles are measured in hours or days, not minutes.

The knowledge-base approach produces a system that is fully auditable (every rule is a named constant with a source annotation), instantly reversible (revert a line of code), effectively free to update (edit a file), and iterates in real time (minutes per optimization cycle).

The tradeoff is that you need a frontier model for inference, which costs more per query than a fine-tuned smaller model. For the foreseeable future, the auditability and iteration-speed advantages massively outweigh this cost differential in any domain where accuracy matters.
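To make the auditability claim concrete, here is a minimal sketch of what a knowledge-base rule might look like as a named constant with a source annotation. This is an illustration, not an excerpt from the actual TroponinIQ knowledge base; the `Rule` structure and the example rule are assumptions.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    """A single coaching heuristic, traceable to its extraction source."""
    name: str
    text: str    # the heuristic, in plain language
    source: str  # where it was extracted from (e.g. an expert session)


# Hypothetical example rule; the content is illustrative only.
REST_DAY_CARBS = Rule(
    name="REST_DAY_CARBS",
    text="Reduce carbohydrate intake on rest days relative to training days.",
    source="extraction-session-014",
)


def audit(rule: Rule) -> str:
    """Trace a rule back to its source -- the auditability property."""
    return f"{rule.name} <- {rule.source}"
```

Because every rule is a named constant in a file, removing a bad heuristic is a one-line revert, and a bad output can be traced to the specific rule that produced it.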

What’s Next

We’re building toward a fully automated nightly optimization cycle: source ingestion at midnight, knowledge extraction at 12:30 AM, staging deployment at 1:00 AM, automated verification at 1:15 AM, and production promotion at 1:30 AM. TroponinIQ wakes up smarter every morning.
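The schedule above can be sketched as a staged pipeline. The stage times and names follow the text; the dispatch logic is an assumption about how such a scheduler might be organized, not a description of the actual implementation.

```python
from datetime import time

# Nightly optimization cycle, as described in the text.
PIPELINE = [
    (time(0, 0),  "source_ingestion"),
    (time(0, 30), "knowledge_extraction"),
    (time(1, 0),  "staging_deployment"),
    (time(1, 15), "automated_verification"),
    (time(1, 30), "production_promotion"),
]


def stages_due(now: time) -> list[str]:
    """Return the pipeline stages scheduled at or before `now`, in order."""
    return [name for t, name in PIPELINE if t <= now]
```

The key ordering constraint is that production promotion only runs after automated verification has had its window, so a failed verification can halt the cycle before anything reaches users.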

The longer-term convergence is with on-device model training (the Apple Neural Engine work demonstrated by maderix/ANE). The knowledge base becomes the training dataset for a small, locally hosted inference model — a 3B-parameter distillation of the expert’s knowledge running on commodity hardware at negligible cost. The frontier model continues as the extraction engine. This is not speculation; the infrastructure exists today.

TroponinIQ is live at TroponinIQ.com. The knowledge base is open for inspection. The methodology is replicable. The future of domain-expert AI is not bigger models. It’s smarter knowledge.

Troponin Nutrition — troponinnutrition.com