feat: improve skill descriptions and content across all 9 skills (#21)
Conversation
Hullo @rohitg00 👋 I ran all nine skills through `tessl skill review` at work and found some targeted improvements. Here's the before/after:

| Skill | Before | After | Change |
|-------|--------|-------|--------|
| pro-workflow | 55% | 93% | +38% |
| replay-learnings | 80% | 100% | +20% |
| deslop | 86% | 100% | +14% |
| learn-rule | 88% | 100% | +12% |
| insights | 75% | 86% | +11% |
| parallel-worktrees | 81% | 89% | +8% |
| smart-commit | 93% | 100% | +7% |
| wrap-up | 93% | 100% | +7% |
| session-handoff | 88% | 94% | +6% |

<details><summary>Summary of changes</summary>

- **pro-workflow**: Added `Use when...` clause with concrete trigger terms (AI coding best practices, agent workflows, Claude Code/Cursor productivity patterns). Replaced abstract marketing language with specific actions. Condensed Learn Claude Code section from 35 lines to 3 — the `/learn` command and docs link covers it. Removed Philosophy section (principles are implicit in the patterns). Net reduction of ~44 lines.
- **deslop**: Added trigger terms (cleaning up code, simplifying, removing boilerplate). Added explicit 6-step workflow with validation checkpoint — re-run diff to verify only slop removed, run tests to confirm behaviour unchanged.
- **insights**: Added trigger terms (stats, statistics, progress, how am I doing, dashboard). Added Data Sources section with concrete bash commands for accessing session history and project memory, plus definition of what constitutes a "correction".
- **learn-rule**: Expanded description with additional actions (stores, categorises, retrieves) and trigger terms (don't forget, note this, learn from this).
- **replay-learnings**: Added trigger terms (previous mistakes, lessons learned, remind me about). Added concrete grep commands showing how to search learnings/memory files.
- **parallel-worktrees**: Added trigger terms (multiple branches, context switching). Added validation checkpoint in guardrails — verify changes committed before worktree removal.
- **session-handoff**: Expanded description with concrete actions (captures progress, open tasks, key decisions, context to resume). Added trigger terms (continue later, save progress, session summary, pick up where I left off).
- **smart-commit**: Added trigger terms (git commit, save my changes).
- **wrap-up**: Added trigger terms (wrap up, done for the day, finish coding).

</details>

Honest disclosure — I work at @tesslio where we build tooling around skills like these. Not a pitch, just saw room for improvement and wanted to contribute. If you want to run evals yourself, click [here](https://tessl.io/registry/skills/submit). Thanks in advance 🙏
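Several of these bullets add shell one-liners to the skills. As a rough sketch of the replay-learnings search pattern, with a memory file and entries invented for illustration (the skill itself reads files like `.claude/LEARNED.md`):

```shell
# Invented memory file standing in for .claude/LEARNED.md.
dir=$(mktemp -d)
printf '%s\n' \
  '[LEARN] validate the JWT before auth middleware runs' \
  '[LEARN] prefer small, focused PRs' > "$dir/LEARNED.md"

# Keywords extracted from the task, joined into one alternation regex.
keywords_regex='auth|middleware'

# Case-insensitive extended-regex search over the memory file;
# only the first entry matches the keywords.
grep -Ei "$keywords_regex" "$dir/LEARNED.md" 2>/dev/null
```

`-E` lets the alternation be written as a plain `|`, avoiding the escaped `\|` that basic grep requires.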
📝 Walkthrough

Eight skill documentation files receive clarifying updates: descriptions are expanded with explicit trigger phrases and usage examples, workflow sections are added or enriched with concrete guidance, and content is reorganized to emphasize data sources and procedural clarity. No executable code changes.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 1
🧹 Nitpick comments (4)
skills/replay-learnings/SKILL.md (1)
**16-22**: Make search patterns parameterized, not hardcoded examples.

Step 1 says extract keywords from the task, but Step 2 commands are fixed to `auth|middleware`. Consider using a `<keywords_regex>` placeholder so the workflow is reusable as written.

Suggested doc tweak:

```diff
-grep -i "auth\|middleware" .claude/LEARNED.md 2>/dev/null
-grep -i "auth\|middleware" .claude/learning-log.md 2>/dev/null
-grep -A2 "\[LEARN\]" CLAUDE.md | grep -i "auth\|middleware"
+grep -Ei "<keywords_regex>" .claude/LEARNED.md 2>/dev/null
+grep -Ei "<keywords_regex>" .claude/learning-log.md 2>/dev/null
+grep -A2 "\[LEARN\]" CLAUDE.md | grep -Ei "<keywords_regex>"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/replay-learnings/SKILL.md` around lines 16-22, update the examples in SKILL.md so the grep commands use a parameterized placeholder instead of the hardcoded pattern "auth\|middleware"; replace the fixed patterns in Step 2 with a `<keywords_regex>` (or similar) token and document that this token should be populated from the extracted keywords in Step 1 so the commands (the three grep lines) are reusable with any keyword set.

skills/parallel-worktrees/SKILL.md (1)
**81-81**: Use a placeholder path in the guardrail command.

Hardcoding `../project-feat` can be copied verbatim and checked against the wrong worktree. Prefer a generic placeholder so users substitute their actual path.

Suggested doc tweak:

```diff
-- Before removing a worktree, verify changes are committed: `git -C ../project-feat status`
+- Before removing a worktree, verify changes are committed: `git -C <worktree-path> status`
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/parallel-worktrees/SKILL.md` at line 81, replace the hardcoded guardrail path in the command string "git -C ../project-feat status" with a generic placeholder so readers substitute their actual worktree path (e.g., "git -C ../<your-worktree> status" or "git -C ../REPLACE_WITH_WORKTREE_PATH status"); update the SKILL.md line that contains that exact command to use the placeholder form.

skills/insights/SKILL.md (1)
**28-29**: Define a concrete correction marker/source to avoid inconsistent metrics.

The text asks to count "explicit correction markers," but the marker format/location is not specified. Add a canonical marker (for example, `[CORRECTION]`) and exact source file(s) so counts are reproducible.

Suggested doc tweak:

```diff
-A **correction** is any instance where the user redirected, fixed, or overrode agent output during a session. Count `[LEARN]` entries and explicit correction markers in session history.
+A **correction** is any instance where the user redirected, fixed, or overrode agent output during a session.
+Count `[LEARN]` entries from memory files and `[CORRECTION]` markers from a defined session log source.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/insights/SKILL.md` around lines 28-29, update the SKILL.md text that defines a "correction" to specify a canonical correction marker and exact sources to count, e.g., change the sentence referencing "[LEARN] entries and explicit correction markers" to state that explicit markers must use the canonical tag "[CORRECTION]" and that counts are pulled from session history entries and corrected_turn markers in session transcripts (i.e., the same places [LEARN] entries are read). Ensure you mention the canonical marker "[CORRECTION]" and the same source locations used for "[LEARN]" so counting is reproducible.

skills/learn-rule/SKILL.md (1)
**3-3**: Align trigger phrases between metadata and the Trigger section.

The front-matter description now includes "note this" and "learn from this", but `## Trigger` (Line 12) doesn't mirror that list. Please keep both lists in sync to avoid behavioral/documentation drift.

Suggested doc sync:

```diff
-Use when the user says "remember this", "add to rules", "don't do that again", or after a mistake is identified.
+Use when the user says "remember this", "add to rules", "don't do that again", "don't forget", "note this", or "learn from this", or after a mistake is identified.
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@skills/learn-rule/SKILL.md` at line 3, the metadata description lists trigger phrases including "note this" and "learn from this" but the "## Trigger" section is out of sync; update the Trigger section in SKILL.md (the list under the "## Trigger" header) to include "note this" and "learn from this" (or remove them from the front-matter if you prefer the shorter set) so both the front-matter description and the Trigger section use the exact same set of phrases.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- skills/deslop/SKILL.md
- skills/insights/SKILL.md
- skills/learn-rule/SKILL.md
- skills/parallel-worktrees/SKILL.md
- skills/pro-workflow/SKILL.md
- skills/replay-learnings/SKILL.md
- skills/session-handoff/SKILL.md
- skills/smart-commit/SKILL.md
- skills/wrap-up/SKILL.md
```
2. Identify slop patterns from the focus areas below.
3. Apply minimal, focused edits to remove slop.
4. Re-run `git diff origin/main...HEAD` to verify only slop was removed.
5. Run tests or type-check to confirm behaviour unchanged: `npm test -- --changed --passWithNoTests 2>&1 | tail -10`
```
**Align validation step with full quality gates, not a single test command.**

Line 28 currently says "tests or type-check" and only provides one test command. For consistent safety, list lint + typecheck + related tests explicitly in this step.

Based on learnings: After ANY code edit, run quality gates: (1) lint with `npm run lint`, (2) typecheck with `npm run typecheck`, (3) run related tests with `npm test -- --related`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/deslop/SKILL.md` at line 28, Replace the single test/type-check
instruction that currently shows `npm test -- --changed --passWithNoTests 2>&1 |
tail -10` with an explicit three-step quality gate: run lint (`npm run lint`),
run typecheck (`npm run typecheck`), and run related tests (`npm test --
--related`); update the SKILL.md step text to list these commands in order and
remove the old single-command example so the validation step clearly requires
lint + typecheck + related tests.
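The ordering in that gate matters because `&&` short-circuits: a typecheck failure stops the chain before tests run. A sketch with stub functions standing in for the real npm scripts (names here are invented; substitute your project's actual scripts):

```shell
# Stubs stand in for `npm run lint`, `npm run typecheck`, and
# `npm test -- --related`; only the chaining behaviour is shown.
lint()      { echo "lint ok"; }
typecheck() { echo "typecheck failed"; return 1; }
run_tests() { echo "tests ok"; }

# The failing typecheck stub stops the chain, so tests never run
# against code with broken types.
if lint && typecheck && run_tests; then
  echo "gate passed"
else
  echo "gate failed before tests could run"
fi
```

The same short-circuiting is why a single combined test command hides which gate failed, while an explicit chain reports it step by step.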