Teach Model
Teach Model is how supervisors and administrators correct the AI's output after it has run. It lives at Admin → Knowledge → Teach Model.

What it writes
Corrections become rows in cl_knowledge_base with seeded_by='admin' (when you author them) or seeded_by='auto' (when the self-learning hook mirrors a non-REJECTED supervisor approval). Schema:
- `target_code` — the UCS code the correction is about
- `correction_text` — short text the LLM should bias toward next time
- `learning_weight` — `1.0` by default; admin can override
- `quarantined` — `0` while target_code is in the active Master, `1` if orphaned
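The schema above can be sketched as a type plus an example row. The type name, the sample UCS code, and the sample correction text are illustrative; the field names and defaults come from the schema:

```typescript
// Sketch of a cl_knowledge_base row, assuming the schema above.
// Type name and sample values are illustrative, not from the source.
type SeededBy = "admin" | "auto";

interface KnowledgeBaseRow {
  target_code: string;      // UCS code the correction is about
  correction_text: string;  // short text the LLM should bias toward next time
  learning_weight: number;  // 1.0 by default; admin can override
  quarantined: 0 | 1;       // 0 while target_code is in the active Master, 1 if orphaned
  seeded_by: SeededBy;      // 'admin' (hand-authored) or 'auto' (mirror hook)
}

// Example: a hand-authored correction with the default weight.
const row: KnowledgeBaseRow = {
  target_code: "UCS-0000",  // hypothetical code format
  correction_text: "Prefer the more specific code when both could apply.",
  learning_weight: 1.0,
  quarantined: 0,
  seeded_by: "admin",
};
```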
What it does NOT do
- It does not write to live PMS data
- It does not retrain a model
- It does not change retrieval weights — only the few-shot context injected into the next AI call
The KB is read-only from the LLM's perspective: the LLM never writes to it. Only humans (and the auto-mirror hook, for approved corrections) write to it.
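The auto-mirror rule (a non-REJECTED supervisor approval becomes a `seeded_by='auto'` row) can be sketched as follows. The status values other than REJECTED, the function name, and the decision shape are assumptions for illustration:

```typescript
// Sketch of the self-learning mirror hook: a supervisor decision is
// mirrored into the KB unless it was REJECTED. All names here except
// the KB column names are illustrative.
type ReviewStatus = "APPROVED" | "APPROVED_WITH_EDITS" | "REJECTED";

interface SupervisorDecision {
  status: ReviewStatus;
  targetCode: string;
  correctionText: string;
}

interface KbRow {
  target_code: string;
  correction_text: string;
  learning_weight: number;
  quarantined: 0 | 1;
  seeded_by: "admin" | "auto";
}

// Returns the KB row to insert, or null when nothing should be mirrored.
function mirrorDecision(d: SupervisorDecision): KbRow | null {
  if (d.status === "REJECTED") return null; // rejected reviews are never mirrored
  return {
    target_code: d.targetCode,
    correction_text: d.correctionText,
    learning_weight: 1.0, // default weight; admin can override later
    quarantined: 0,       // assumed in the active Master at mirror time
    seeded_by: "auto",
  };
}
```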
Token-budget cap (v2.31.0.19)
correctionsToFewShotByBudget caps the few-shot context at a token budget rather than a row count, so each correction is charged for its actual length: a long correction consumes more of the budget, a short one less. This avoids the v2.31.0.18 problem, where a row-count cap let 50 short rows crowd out 10 long, useful ones.
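Budget-based packing can be sketched as a greedy loop against a token estimate. The real tokenizer and the row ordering are not specified in the source; a character-based estimate and first-come ordering stand in:

```typescript
// Sketch of correctionsToFewShotByBudget: include corrections until a
// token budget is exhausted, instead of stopping at a fixed row count.
interface Correction {
  correction_text: string;
}

// Crude stand-in for a real tokenizer: roughly 4 characters per token.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function correctionsToFewShotByBudget(
  rows: Correction[],
  budgetTokens: number,
): Correction[] {
  const out: Correction[] = [];
  let used = 0;
  for (const row of rows) {
    const cost = estimateTokens(row.correction_text);
    if (used + cost > budgetTokens) break; // budget exhausted
    out.push(row);
    used += cost;
  }
  return out;
}
```

A row-count cap treats a 10-token correction and a 500-token correction as equal; the budget cap charges each row its actual size, which is the change described above.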