# claude-rules

A self-auditing behavioral ruleset for LLM coding agents. Drop-in CLAUDE.md for any repository where Claude Code, Cursor, or other LLM agents will work.

Fifteen rules. One shared output discipline. Two addendums. No marketing math.

## Why this exists

Short rule files solve the most common LLM coding failures: silent assumptions, over-engineering, orthogonal damage, weak success criteria. They do not address everything.

This ruleset targets the failure modes that show up in real iteration: hallucinated APIs, conflict averaging, silent successes, workaround-as-resolution, lost corrections across turns, and the meta-failure where rule files iterate themselves into incoherence.

The Self-Audit addendum is the differentiator. The file's own rules apply to changes to the file. A revision that violates the rules is invalid by the file's standard. Silent rule removal is caught by Rule 11. Mislabeled diffs are caught by Rule 13. Unmotivated changes are reverted before review.

## Design choices

- **Token budgets use behavioral signals, not numeric thresholds.** LLMs do not have live token counters. A rule that says "stop at 4,000 tokens" has no enforcement mechanism. A rule that says "checkpoint when iterations on similar errors compound" does.
- **Test quality lives under Fail Loud, not as a standalone rule.** Shallow tests that pass while behavior is broken are silent failures. Putting the constraint in Rule 11 keeps the file bundled by concern instead of fragmented by topic.
- **Confirm Before Fabricating, Evidence Before Narrative, Corrections Are Contracts.** Three rules that address the highest-frequency failure modes in iterative LLM coding work.
- **Self-Audit.** Promotes the meta-property that the file's rules apply to changes to the file. This is the strongest invariant in the ruleset.

## What this is not

- **Not a benchmark.** No claims of "X% mistake reduction." Anyone who quotes a number without methodology is selling, not measuring.
- **Not universal.** The rules target LLM coding agents working in code repositories. They do not apply unchanged to other domains.
- **Not finished.** The Self-Audit addendum is the mechanism for proposing changes. Open a PR that cites the failure mode the change addresses.

## Install

```sh
curl -O https://raw.githubusercontent.com/willisdeving/claude-rules/master/CLAUDE.md
```

Drop the file at the root of any repository where an LLM coding agent will work. Claude Code reads CLAUDE.md automatically from the working directory and parent directories. Other agents may require explicit reference in their system prompts.
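A quick way to confirm the file is in place before starting a session is a filesystem check — a minimal sketch, assuming a POSIX shell run from the repository root:

```shell
# Check that CLAUDE.md sits at the repo root, where the agent looks for it.
if [ -f CLAUDE.md ]; then
  echo "CLAUDE.md present at repo root"
else
  echo "CLAUDE.md missing: the agent will not pick it up" >&2
fi
```

This only confirms placement on disk; whether the file actually reached the agent's context is a separate check, described below.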

To verify the file is loaded, ask the agent to recite Rule 1 in its first response. If it cannot, the file is not in context.

Add agent-local config to `.gitignore` so it does not leak into shared history:

```gitignore
.claude/
.claude/settings.local.json
```
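The ignore entries above can also be added from a setup script. A sketch, assuming a POSIX shell and a `.gitignore` at the repository root; `grep -qxF` matches each exact line without regex interpretation, so the script is safe to re-run:

```shell
# Append each agent-local ignore entry only if it is not already present.
for entry in '.claude/' '.claude/settings.local.json'; do
  grep -qxF "$entry" .gitignore 2>/dev/null || printf '%s\n' "$entry" >> .gitignore
done
```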

Commits authored with agent assistance should not include Co-Authored-By trailers unless your project explicitly wants them. The CLAUDE.md ruleset above enforces this on the agent side.

## Rules at a glance

| #  | Rule                              | Catches                                  |
|----|-----------------------------------|------------------------------------------|
| 1  | Understand Before Acting          | unsafe edits to unfamiliar code          |
| 2  | Minimum Viable Change             | speculative scope creep                  |
| 3  | Surgical Precision                | orthogonal damage to working code        |
| 4  | Preserve Working State            | regressions introduced by new work       |
| 5  | Verify, Then Declare              | "completed" claims without verification  |
| 6  | Use the Model for Judgment        | wasted tokens on deterministic work      |
| 7  | Confirm Before Fabricating        | hallucinated APIs and methods            |
| 8  | Surface Conflicts                 | code that averages two patterns badly    |
| 9  | Respect Token and Context Budgets | context drift in long sessions           |
| 10 | Diff-Friendly Output              | buried changes inside reformatted files  |
| 11 | Fail Loud                         | silent successes hiding silent failures  |
| 12 | Security by Default               | leaked credentials, disabled validation  |
| 13 | Evidence Before Narrative         | theories without data                    |
| 14 | Root Cause Over Workaround        | debugging threads closed prematurely     |
| 15 | Corrections Are Contracts         | repeated mistakes after correction       |

## Forking

Edit any rule. Add new rules under your own numbered slots. The Self-Audit addendum applies to your changes: cite the failure mode the change addresses, do not silently remove existing rules, do not mislabel substitutions as additions.

If a proposed addition does not pass the sniff test ("would this rule, applied to a change to this file, catch real errors?"), it is ornamental and does not belong.

## License

MIT. Fork freely. Attribution appreciated, not required.

## Author

Will Heeb ([@willisdeving](https://github.com/willisdeving))
