feat: add personality presets for new template archetypes #758
Conversation
Add three new personality presets to support new template archetypes:

- client_advisor: consultative, high agreeableness, team-oriented (for consultancy/agency client-facing roles)
- code_craftsman: meticulous, very high conscientiousness, low risk (quality-obsessed senior developer, distinct from pragmatic_builder)
- devil_advocate: contrarian, low agreeableness, competitive conflict (challenges consensus, addresses trendslop mitigation)

Each preset includes full Big Five dimensions, behavioral enums, and MODEL_AFFINITY entries. All are validated against PersonalityConfig at import time.

Closes #721

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
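The import-time validation mentioned above can be sketched roughly as follows. This is a hedged illustration, not the repository's actual code: `PersonalityConfig`, `_RAW_PRESETS`, and `_validate_presets()` are names taken from this PR, but their shapes here (a four-trait dataclass with range checks) are assumptions for the sake of the example.

```python
from dataclasses import dataclass, fields


@dataclass(frozen=True)
class PersonalityConfig:
    """Minimal stand-in for the real PersonalityConfig (shape assumed)."""

    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float

    def __post_init__(self) -> None:
        # Big Five scores are assumed to be on a 0.0-1.0 scale.
        for f in fields(self):
            value = getattr(self, f.name)
            if not 0.0 <= value <= 1.0:
                raise ValueError(f"{f.name}={value} is outside [0.0, 1.0]")


# One preset shown; the values are copied from the PR diff.
_RAW_PRESETS = {
    "devil_advocate": {
        "openness": 0.85,
        "conscientiousness": 0.7,
        "extraversion": 0.6,
        "agreeableness": 0.25,
    },
}


def _validate_presets() -> None:
    # Constructing a config raises immediately on out-of-range values,
    # so a malformed preset fails at import time rather than at first use.
    for name, preset in _RAW_PRESETS.items():
        PersonalityConfig(**preset)


_validate_presets()  # runs when the module is imported
```

The design point is that preset typos surface the moment the module is imported (e.g. in any test run), not deep inside agent construction.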
Remove pre-existing preset names from test_new_presets_produce_valid_personality_config so the parametrize list only contains the three actually-new presets. The removed entries are already covered by test_all_presets_produce_valid_personality_config. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Dependency Review: ✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found. Scanned files: none.
Summary of Changes (Gemini Code Assist)

This pull request enhances the system's personality archetype capabilities by introducing three new presets: client_advisor, code_craftsman, and devil_advocate. Each comes with carefully defined traits and model affinities, giving agent personalities greater flexibility and range. The changes integrate with the existing validation mechanisms and are supported by unit tests to ensure correctness and maintainability.
Code Review
This pull request adds three new personality presets: client_advisor, code_craftsman, and devil_advocate, along with their corresponding model affinities and tests. The changes are well-structured and include thorough testing. My feedback focuses on improving long-term maintainability by suggesting alphabetical sorting for the large preset and affinity dictionaries, and on making a test assertion more precise for better regression detection.
```python
    "code_craftsman": {"priority": "quality"},
    "devil_advocate": {"priority": "quality"},
```
For better maintainability and to make it easier to find entries in the future, it's a good practice to keep dictionary keys sorted alphabetically within their respective groups. These new entries for quality (and client_advisor in the balanced group) are currently appended. Could you please place them in alphabetical order within their groups?
```python
    "client_advisor": {
        "traits": ("consultative", "trustworthy", "structured"),
        "communication_style": "warm",
        "risk_tolerance": "medium",
        "creativity": "medium",
        "description": (
            "A consultative advisor who builds client trust and manages expectations."
        ),
        "openness": 0.6,
        "conscientiousness": 0.8,
        "extraversion": 0.7,
        "agreeableness": 0.75,
        "stress_response": 0.7,
        "decision_making": "consultative",
        "collaboration": "team",
        "verbosity": "balanced",
        "conflict_approach": "collaborate",
    },
    "code_craftsman": {
        "traits": ("meticulous", "principled", "patient"),
        "communication_style": "precise",
        "risk_tolerance": "low",
        "creativity": "medium",
        "description": (
            "A meticulous craftsman who prioritizes correctness and maintainability."
        ),
        "openness": 0.5,
        "conscientiousness": 0.9,
        "extraversion": 0.35,
        "agreeableness": 0.55,
        "stress_response": 0.75,
        "decision_making": "analytical",
        "collaboration": "pair",
        "verbosity": "balanced",
        "conflict_approach": "compete",
    },
    "devil_advocate": {
        "traits": ("contrarian", "rigorous", "provocative"),
        "communication_style": "direct",
        "risk_tolerance": "medium",
        "creativity": "high",
        "description": (
            "A contrarian thinker who challenges consensus and conventional wisdom."
        ),
        "openness": 0.85,
        "conscientiousness": 0.7,
        "extraversion": 0.6,
        "agreeableness": 0.25,
        "stress_response": 0.8,
        "decision_making": "analytical",
        "collaboration": "independent",
        "verbosity": "balanced",
        "conflict_approach": "compete",
    },
```
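To show how these presets plug into model selection, here is a hedged sketch of a MODEL_AFFINITY lookup. The dictionary name and the three entries come from this PR; the `affinity_priority` helper and its fallback behavior are illustrative assumptions, not the repository's actual API.

```python
# MODEL_AFFINITY maps each preset name to a model-selection priority.
# The entries below are from the PR; the helper is hypothetical.
MODEL_AFFINITY = {
    "client_advisor": {"priority": "balanced"},
    "code_craftsman": {"priority": "quality"},
    "devil_advocate": {"priority": "quality"},
}


def affinity_priority(preset_name: str) -> str:
    # Assumed fallback: unknown presets default to "balanced".
    return MODEL_AFFINITY.get(preset_name, {}).get("priority", "balanced")


print(affinity_priority("code_craftsman"))  # quality
```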
tests/unit/templates/test_presets.py (outdated)
```python
    def test_preset_count_at_least_23(self) -> None:
        assert len(PERSONALITY_PRESETS) >= 23
```
To make this test more precise, it's better to assert an exact count with == instead of >=. This ensures that no presets are accidentally removed or added without updating this test, making it a stronger regression guard. I'd also suggest renaming the test to something like test_preset_count_is_23 to reflect this more specific check.
```diff
-    def test_preset_count_at_least_23(self) -> None:
-        assert len(PERSONALITY_PRESETS) >= 23
+    def test_preset_count_is_23(self) -> None:
+        assert len(PERSONALITY_PRESETS) == 23
```
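Alongside the exact-count check, the per-preset affinity assertions discussed in this review could take a parametrized shape like the one below. This is a sketch only: the real tests in tests/unit/templates/test_presets.py import the actual `MODEL_AFFINITY`, whereas here a stand-in dictionary with the values from this PR is used.

```python
import pytest

# Stand-in data; the real tests import MODEL_AFFINITY from the
# templates module instead of defining it inline.
MODEL_AFFINITY = {
    "client_advisor": {"priority": "balanced"},
    "code_craftsman": {"priority": "quality"},
    "devil_advocate": {"priority": "quality"},
}


@pytest.mark.parametrize(
    ("preset", "priority"),
    [
        ("client_advisor", "balanced"),
        ("code_craftsman", "quality"),
        ("devil_advocate", "quality"),
    ],
)
def test_new_preset_affinity(preset: str, priority: str) -> None:
    # One test case per new preset; a wrong or missing entry fails loudly.
    assert MODEL_AFFINITY[preset]["priority"] == priority
```

Parametrizing keeps each preset's expectation visible in the test report, rather than hiding all three behind a single loop.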
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff           @@
##             main     #758   +/-   ##
=======================================
  Coverage   92.31%   92.31%
=======================================
  Files         573      573
  Lines       29727    29727
  Branches     2881     2881
=======================================
  Hits        27441    27441
  Misses       1806     1806
  Partials      480      480
```

☔ View full report in Codecov by Sentry.
- Replace .value string comparisons with enum constants for type safety
- Use exact count assertion (== 23) instead of floor bound (>= 23)
- Add parametrized test for specific MODEL_AFFINITY values of new presets
- Remove redundant parametrized test (subsumed by exhaustive loop)
- Add traits/communication_style assertions to profile tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
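The ".value string comparisons → enum constants" change from this commit can be illustrated with a short sketch. The enum name and members below are hypothetical stand-ins, not the repository's actual definitions; the point is the comparison style, which a type checker can verify.

```python
from enum import Enum


class ConflictApproach(Enum):
    # Hypothetical enum; the repo's actual enum names may differ.
    COLLABORATE = "collaborate"
    COMPETE = "compete"


profile = {"conflict_approach": ConflictApproach.COMPETE}

# Brittle: compares raw strings, so a typo like "compet" only fails at runtime.
assert profile["conflict_approach"].value == "compete"

# Safer: compares enum members, so mypy and IDEs catch misspelled names.
assert profile["conflict_approach"] is ConflictApproach.COMPETE
```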
🤖 I have created a release *beep* *boop*

## [0.4.9](v0.4.8...v0.4.9) (2026-03-23)

### Features

* add consultancy and data team template archetypes (#764) (81dc75f)
* add personality presets for new template archetypes (#758) (de4e661)
* improve wipe command UX with interactive prompts (#759) (bbd4d2d)

### Bug Fixes

* stable channel detects update for dev builds (#753) (f53da9f)

### Documentation

* add version banner to docs header (#761) (8f8c1f8)

### Maintenance

* adopt new features from web dependency upgrades (#763) (1bb6336)

This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Summary

- Add three presets to _RAW_PRESETS: client_advisor, code_craftsman, devil_advocate
- Add MODEL_AFFINITY entries: client_advisor (balanced), code_craftsman (quality), devil_advocate (quality)
- All presets validated against PersonalityConfig at import time via the existing _validate_presets()

New Presets

- client_advisor
- code_craftsman
- devil_advocate

Test plan

- All presets produce a valid PersonalityConfig at import time
- test_all_presets_have_affinity confirms all presets have MODEL_AFFINITY entries

Review coverage

Pre-reviewed by 6 agents: docs-consistency, code-reviewer, python-reviewer, pr-test-analyzer, conventions-enforcer, issue-resolution-verifier. 1 finding addressed (test parametrize list cleanup).
Closes #721