Desloppify quick test via OpenClaw: useful signal, with caveats
I ran a controlled trial of desloppify inside my OpenClaw workflow to test whether it provides useful, non-noisy quality signals on a real repository.
How I ran this via OpenClaw
- Selected a real repo for the trial: `/home/adrian/code/apipairy`.
- Created an isolated branch for safety: `test/desloppify-20260310`.
- Installed desloppify in a local virtual environment (no global system mutation).
- Ran `desloppify scan --path .`, then `desloppify status` and targeted views like `desloppify show security`.
- Reviewed results before applying any code changes. This pass was report-first.
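Assuming desloppify is pip-installable under that name and exposes the subcommands above, the isolated setup looked roughly like this (a sketch of the workflow, not an exact transcript):

```shell
# Work on a throwaway branch so rollback is a single branch delete.
cd /home/adrian/code/apipairy
git checkout -b test/desloppify-20260310

# A local virtualenv keeps the tool out of the system Python.
python3 -m venv .venv
. .venv/bin/activate
pip install desloppify   # assumption: the published package name matches the tool name

# Report-first pass: scan, then read summaries; no source files are modified.
desloppify scan --path .
desloppify status
desloppify show security

# Rollback, if ever needed, is just discarding the branch:
# git checkout main && git branch -D test/desloppify-20260310
```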
Headline results
- Total issues reported: 46
- Strict score: 31.2/100
- Objective score: 78.1/100
- Security findings: 4
Security findings (first pass)
- 1 high-severity signal: MD5 usage warning (Bandit B324) in `apipairy/llm_client.py`.
- 3 medium signals: possible sensitive logging patterns in `apipairy/main.py`, `examples/basic_usage.py`, and `original/main.py`.
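For context on the B324 finding: Bandit flags any `hashlib.md5` call because MD5 is broken for security purposes. A typical remediation, sketched here without knowledge of what `llm_client.py` actually hashes, is either to declare the non-security intent explicitly (Python 3.9+) or to switch to SHA-256:

```python
import hashlib

def cache_key(text: str) -> str:
    """Non-security fingerprint (e.g. a cache key): keep MD5 but tell
    the runtime and linters it is not used for security (Python 3.9+)."""
    return hashlib.md5(text.encode("utf-8"), usedforsecurity=False).hexdigest()

def content_digest(text: str) -> str:
    """Security-relevant digest: use SHA-256 instead of MD5."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()
```

Which branch applies depends on how the digest is consumed, which is exactly the kind of call a human reviewer should make before accepting the finding.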
What was useful vs noisy
- Useful: exception-handling smells (swallowed errors), large-file hotspot detection, and targeted security flags.
- Noisier: some overly conservative logging warnings, and subjective scoring dimensions that start at zero until a deeper review is recorded.
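To illustrate the exception-handling smell category (this snippet is illustrative, not copied from apipairy):

```python
import logging

logger = logging.getLogger(__name__)

def load_config_swallowed(path: str) -> dict:
    # Smell: the bare except/return hides every failure, including a typo in `path`.
    try:
        with open(path) as f:
            return dict(line.strip().split("=", 1) for line in f if "=" in line)
    except Exception:
        return {}

def load_config_explicit(path: str) -> dict:
    # Fix: handle the one expected failure, log it, and let anything else propagate.
    try:
        with open(path) as f:
            return dict(line.strip().split("=", 1) for line in f if "=" in line)
    except FileNotFoundError:
        logger.warning("config file missing, using defaults: %s", path)
        return {}
```

The second form is what the scanner nudges you toward: the same fallback behavior, but only for the failure you actually anticipated.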
Operational notes
- This run did not auto-apply source edits.
- Artifacts created were analysis-side only (for example `.desloppify/` and scorecard outputs).
- Branch isolation made rollback trivial.
Potential use cases
- Baseline quality triage for legacy repos.
- Security-first issue queue generation before major refactors.
- Recurring technical-debt sweeps with explicit human review gates.
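A security-first issue queue can be derived from any machine-readable scan output. The record shape below is hypothetical (desloppify's actual report format was not inspected for this trial); the point is only that severity-ranked triage is a few lines of glue:

```python
# Hypothetical finding records; desloppify's real output schema may differ.
findings = [
    {"file": "apipairy/main.py", "rule": "log-sensitive", "severity": "medium"},
    {"file": "apipairy/llm_client.py", "rule": "B324", "severity": "high"},
    {"file": "examples/basic_usage.py", "rule": "log-sensitive", "severity": "medium"},
]

SEVERITY_RANK = {"high": 0, "medium": 1, "low": 2}

def security_queue(findings: list[dict]) -> list[dict]:
    """Order findings so a human reviewer sees the riskiest items first."""
    return sorted(findings, key=lambda f: (SEVERITY_RANK[f["severity"]], f["file"]))
```

The human review gate then becomes: walk the queue top-down, accept or dismiss each finding, and only then start editing code.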
Bottom line: in this OpenClaw-driven trial, desloppify was useful as a prioritization and triage assistant. It is strongest when paired with human filtering rather than used as an automatic authority.