Context
`PromptProfile.max_personality_tokens` was introduced in #805 (capability-aware prompt profiles) as a validated field (`gt=0`) with per-tier values (large=500, medium=200, small=80). Currently it serves only as an ordering proxy -- no code reads the field to actually trim personality content.
The field is documented as "reserved for future token-based trimming", but the infrastructure is now in place to activate it.
Scope
Token-based personality trimming (`engine/_prompt_helpers.py` or `engine/prompt.py`)
- In `build_core_context()`, after assembling personality fields, estimate the token count of the personality section
- If the estimate exceeds `profile.max_personality_tokens`, progressively trim:
  - Drop the behavioral enum fields (`risk_tolerance`, `creativity`, `verbosity`, `decision_making`, `collaboration`, `conflict_approach`) -- these are already excluded when `personality_mode != "full"`, but trimming adds a token-budget safety net
  - Truncate `personality_description` to fit within the remaining budget
  - If still over budget, fall back to `communication_style` only (minimal mode)
- Log the trimming action with before/after token counts
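The cascade above could be sketched as follows. This is a minimal illustration assuming personality fields arrive as a plain dict of strings and a char/4 estimate; `trim_personality` and `estimate_tokens` are hypothetical names for this sketch, not existing code:

```python
from typing import Callable

# Behavioral enum fields dropped first (names taken from this issue).
BEHAVIORAL_FIELDS = frozenset({
    "risk_tolerance", "creativity", "verbosity",
    "decision_making", "collaboration", "conflict_approach",
})

def estimate_tokens(text: str) -> int:
    """char/4 heuristic (assumed to match the existing estimator)."""
    return max(1, len(text) // 4) if text else 0

def trim_personality(fields: dict[str, str], max_tokens: int,
                     estimate: Callable[[str], int] = estimate_tokens) -> dict[str, str]:
    """Progressively trim personality fields to fit a token budget."""
    def total(f: dict[str, str]) -> int:
        return sum(estimate(v) for v in f.values())

    before = total(fields)
    if before <= max_tokens:
        return dict(fields)

    # Step 1: drop the behavioral enum fields.
    fields = {k: v for k, v in fields.items() if k not in BEHAVIORAL_FIELDS}

    # Step 2: truncate personality_description to the remaining budget.
    if total(fields) > max_tokens and "personality_description" in fields:
        others = total({k: v for k, v in fields.items()
                        if k != "personality_description"})
        budget_chars = (max_tokens - others) * 4
        if budget_chars > 0:
            fields["personality_description"] = \
                fields["personality_description"][:budget_chars]
        else:
            del fields["personality_description"]

    # Step 3: if still over, fall back to communication_style only.
    if total(fields) > max_tokens:
        fields = {k: v for k, v in fields.items() if k == "communication_style"}

    # Placeholder for the real logging call: before/after token counts.
    print(f"personality trimmed: {before} -> {total(fields)} tokens")
    return fields
```

Each step only runs when the previous one left the section over budget, so the common case (already under the cap) returns the fields untouched.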
Integration with existing profile modes
- Token trimming should be a secondary control after `personality_mode` -- the mode selects which fields are included, and the token limit enforces a hard cap on the combined result
- For the `full` profile (500 tokens), trimming should rarely activate -- it is a safety net for unusually verbose personality descriptions
- For the `basic` profile (80 tokens), the minimal mode already excludes most fields, so the cap reinforces the structural reduction
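The two-stage control can be sketched as follows. The tier caps come from this issue; `mode_filter`, `over_budget`, and the exact mode semantics are assumptions for illustration only:

```python
# Per-tier caps stated in this issue (large=500, medium=200, small=80).
TIER_CAPS = {"large": 500, "medium": 200, "small": 80}

BEHAVIORAL = {"risk_tolerance", "creativity", "verbosity",
              "decision_making", "collaboration", "conflict_approach"}

def mode_filter(fields: dict[str, str], mode: str) -> dict[str, str]:
    """Stage 1: personality_mode selects which fields are included.
    (Assumed semantics: 'minimal' keeps communication_style only;
    any mode other than 'full' drops the behavioral enum fields.)"""
    if mode == "minimal":
        return {k: v for k, v in fields.items() if k == "communication_style"}
    if mode != "full":
        return {k: v for k, v in fields.items() if k not in BEHAVIORAL}
    return dict(fields)

def over_budget(fields: dict[str, str], cap: int) -> bool:
    """Stage 2: the token cap is a hard limit on the combined result;
    trimming only runs when this returns True."""
    return sum(len(v) // 4 for v in fields.values()) > cap
```

The ordering matters: the structural filter always runs first, so the cap only ever has to trim what the mode already admitted.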
Estimator reuse
- Reuse the existing `PromptTokenEstimator` (char/4 heuristic) or accept a configurable estimator
- The personality section is a small fraction of the total prompt, so precision matters less than consistency
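For reference, the char/4 heuristic is trivial to express. The class name `PromptTokenEstimator` comes from this issue, but the method shown here is an assumed shape, not the confirmed interface:

```python
class PromptTokenEstimator:
    """char/4 heuristic. Cheap and deterministic -- for a section this
    small, consistency across calls matters more than precision.
    (Assumed shape; the real estimator's interface may differ.)"""

    CHARS_PER_TOKEN = 4

    def estimate(self, text: str) -> int:
        # Ceiling division so short non-empty strings count as >= 1 token.
        return -(-len(text) // self.CHARS_PER_TOKEN)
```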
Deliverables
- Trimming logic in `build_core_context()` driven by `max_personality_tokens`
- Trimming log events with before/after token counts (`events.prompt`)
- Updated `PromptProfile.max_personality_tokens` docstring to remove the "reserved for future" caveat

References
- `src/synthorg/engine/prompt_profiles.py`
- `src/synthorg/engine/_prompt_helpers.py`: `build_core_context()`
- `src/synthorg/engine/token_estimation.py`