When Your AI Assistant Wastes 3 Attempts: Building Better Interfaces
I work closely with an AI assistant across many aspects of my business: strategy, operations, and technical development. It excels at architecture, debugging, and writing code. But I noticed a pattern: certain technical tasks kept producing multiple failed attempts before success.
The symptom: 3 consecutive errors trying to do one simple thing.
The diagnosis: My tools weren’t designed for AI collaboration.
The fix: 30 minutes of wrapper scripting and docs eliminated the entire problem class.
The Problem: 3 Failed Attempts
I asked the AI assistant to add some documents to my RAG (Retrieval Augmented Generation) system. Simple task. Here’s what happened:
```bash
# Attempt 1: Wrong Python command
python episodic_memory.py --add file.md
# Error: command not found

# Attempt 2: Missing virtual environment
python3 episodic_memory.py --add file.md
# Error: ModuleNotFoundError: No module named 'chromadb'

# Attempt 3: Wrong script entirely
source venv/bin/activate && python episodic_memory.py --add file.md
# Error: unrecognized arguments: --add
```
The actual command needed:
```bash
cd .ai/rag && source venv/bin/activate && \
python embed_documents.py --files ../../output/file.md
```
The AI assistant eventually got there, but it wasted significant time (and my API budget) failing first.
Why This Happens
AI assistants are excellent at:
- Understanding intent
- Writing new code
- Debugging logical errors
- Following patterns
AI assistants struggle with:
- Remembering exact CLI syntax (`which` vs. `where`)
- Environment setup (is it `python` or `python3`?)
- Subtle distinctions (two scripts with similar names, different args)
- Undocumented conventions (must `cd` to a specific directory first)
The issue isn’t the AI. The issue is that my interface was designed for humans, who learn once, not for an AI that starts fresh every conversation.
The Root Cause: Two Similar Scripts
My RAG system had two Python scripts:
`episodic_memory.py` - stores conversation history:

```bash
python episodic_memory.py --store \
    --state "User asked X" \
    --action "Did Y" \
    --outcome "Result Z"
```

`embed_documents.py` - embeds documents for search:

```bash
python embed_documents.py --files path/to/file.md
```
Both lived in `.ai/rag/`, both required venv activation, and both processed markdown files. The AI assistant couldn’t reliably pick the right one.
Add in:
- Ubuntu’s `python` vs. `python3` quirk
- The virtual-environment activation requirement
- Different working-directory expectations
- Similar but incompatible argument syntax
You get 3 failed attempts.
The Solution: Single Entry Point
I created a simple wrapper script:
```bash
#!/bin/bash
# ./rag-cli - Single entry point for all RAG operations

# Resolve the script's own directory so it works from anywhere
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
VENV_PYTHON="$SCRIPT_DIR/venv/bin/python3"

case "${1:-help}" in
    embed)
        shift
        "$VENV_PYTHON" "$SCRIPT_DIR/embed_documents.py" "$@"
        ;;
    episode)
        shift
        "$VENV_PYTHON" "$SCRIPT_DIR/episodic_memory.py" "$@"
        ;;
    search)
        shift
        "$VENV_PYTHON" "$SCRIPT_DIR/search.py" "$@"
        ;;
    health)
        "$VENV_PYTHON" "$SCRIPT_DIR/embed_documents.py" --health-check
        ;;
    *)
        echo "Usage: ./rag-cli [embed|episode|search|health]"
        ;;
esac
```
New interface:
```bash
./rag-cli embed --files output/file.md
./rag-cli episode --list
./rag-cli search "query"
./rag-cli health
```
What this eliminates:
- ❌ No more `python` vs. `python3` confusion
- ❌ No more venv activation required
- ❌ No more choosing between scripts
- ❌ No more working directory issues
- ❌ No more syntax guessing
Result: Zero failed attempts since implementation.
The Documentation Fix
I also added a quick reference file the AI reads first:
`QUICK-REF.md`:

````markdown
# Quick Reference for AI Assistant

**Use the wrapper script `./rag-cli` instead of calling Python directly.**

## Common Operations

### Embed Documents
```bash
cd .ai/rag && ./rag-cli embed --files ../../output/filename.md
```

### Search
```bash
cd .ai/rag && ./rag-cli search "your query"
```

## Common Mistakes to Avoid
- Don't call Python directly - use the wrapper
- Don't mix up the two systems - documents ≠ episodes
- Don't try an "add" command - episodes use `--store`, not `add`
````
Now when the AI assistant needs to use RAG, it:

1. Reads `QUICK-REF.md` (3 seconds)
2. Uses the correct command (first try)
3. Moves on
Broader Lessons: AI-Friendly Tooling
This pattern applies beyond RAG systems. If you’re building tools that AI assistants will use:
1. Single Entry Points Beat Multiple Scripts
Instead of:

```bash
python analyze_data.py --input file.csv
python transform_data.py --input file.csv --output transformed.csv
python validate_data.py transformed.csv
```

Use:

```bash
./data-tools analyze file.csv
./data-tools transform file.csv -o transformed.csv
./data-tools validate transformed.csv
```
2. Consistent Argument Patterns
AI struggles with subtle variations:
- `--file` vs. `--files` vs. `--input`
- `--state` vs. `--status` vs. `--condition`
Pick one convention. Stick to it everywhere.
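One way to hold the line is a single shared flag parser that every subcommand routes through, so a new spelling can’t sneak in. A sketch, using the hypothetical `data-tools` wrapper from above (the `parse_common` helper is mine, not from any real tool):

```bash
#!/bin/bash
# Sketch: one shared flag parser used by every subcommand, so --files is
# the only input flag that can ever exist. Helper name is hypothetical.
parse_common() {
    FILES=(); OUTPUT=""
    while (( $# )); do
        case "$1" in
            --files)
                shift
                # Consume values until the next flag
                while (( $# )) && [[ "$1" != --* && "$1" != -o ]]; do
                    FILES+=("$1"); shift
                done
                ;;
            -o) OUTPUT="$2"; shift 2 ;;
            *)  echo "unknown flag: $1 (only --files and -o exist)" >&2; return 1 ;;
        esac
    done
}

# Every subcommand calls the same parser -- no per-script drift
parse_common --files a.csv b.csv -o out.csv && echo "files: ${FILES[*]} -> $OUTPUT"
```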
3. Self-Documenting Help Text
```bash
./tool help
./tool [command] --help
```
Should show:
- Available commands
- Common examples
- Expected argument format
- Most common mistakes
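A minimal sketch of what that could look like for the wrapper above (the wording is illustrative, not the author’s actual help text):

```bash
#!/bin/bash
# Sketch of a self-documenting help command for rag-cli (illustrative text).
show_help() {
    cat <<'EOF'
Usage: ./rag-cli <command> [options]

Commands:
  embed     Embed documents for search   e.g. ./rag-cli embed --files output/file.md
  episode   Store/list episodes          e.g. ./rag-cli episode --list
  search    Search embedded documents    e.g. ./rag-cli search "query"
  health    Check the environment        e.g. ./rag-cli health

Common mistakes:
  There is no 'add' command: use 'embed' for documents, 'episode --store' for episodes.
EOF
}
show_help
```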
4. Environment Abstraction
Don’t make the AI remember:
- Virtual environment activation
- Working directory requirements
- PATH setup
- Environment variables
Wrapper handles all of it.
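A sketch of what that absorption can look like in the wrapper’s preamble, assuming a `requirements.txt` sits next to the script (this is illustrative, not the author’s exact code):

```bash
#!/bin/bash
# Sketch: the wrapper absorbs all environment setup so callers never do it.
set -euo pipefail

# Resolve our own location -- callers need no particular working directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
VENV_PYTHON="$SCRIPT_DIR/venv/bin/python3"

# Bootstrap the venv on first run -- no manual activation, ever
# (assumes requirements.txt lives next to the wrapper)
if [[ ! -x "$VENV_PYTHON" ]]; then
    python3 -m venv "$SCRIPT_DIR/venv"
    "$VENV_PYTHON" -m pip install -r "$SCRIPT_DIR/requirements.txt"
fi

cd "$SCRIPT_DIR"  # relative paths inside the Python scripts now just work
```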
5. Fail Fast with Clear Errors
Bad error:

```
Error: unrecognized arguments
```

Good error:

```
Error: unrecognized command 'add'
Did you mean: --store (for episodes) or embed (for documents)?
See ./rag-cli help
```
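In the wrapper, that’s just a smarter catch-all branch. A sketch:

```bash
#!/bin/bash
# Sketch: fail fast in the catch-all, naming both the bad input and the fix.
cmd="${1:-help}"
case "$cmd" in
    embed|episode|search|health)
        : # ... dispatch as in the wrapper above ...
        ;;
    *)
        echo "Error: unrecognized command '$cmd'" >&2
        echo "Did you mean: embed (for documents) or episode --store (for episodes)?" >&2
        echo "See ./rag-cli help" >&2
        exit 1
        ;;
esac
```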
Implementation Checklist
Building an AI-friendly CLI wrapper:
- Single executable entry point
- Subcommands instead of separate scripts
- Handle environment setup internally
- Consistent argument naming across commands
- Built-in help text with examples
- Quick reference doc (QUICK-REF.md)
- Clear error messages with suggestions
- Version checking/health commands
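The health item on that list can be a single case branch. A sketch of what it might verify, based on the failure modes earlier in this post (the `chromadb` import mirrors the error from attempt 2):

```bash
#!/bin/bash
# Sketch: a health command that verifies everything the wrapper depends on.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
VENV_PYTHON="$SCRIPT_DIR/venv/bin/python3"

fail=0
[[ -x "$VENV_PYTHON" ]] \
    || { echo "missing venv: run python3 -m venv venv" >&2; fail=1; }
"$VENV_PYTHON" -c 'import chromadb' 2>/dev/null \
    || { echo "chromadb not importable from the venv" >&2; fail=1; }
[[ -f "$SCRIPT_DIR/embed_documents.py" ]] \
    || { echo "embed_documents.py missing" >&2; fail=1; }

(( fail )) || echo "ok: environment healthy"
exit "$fail"
```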
Time investment: 30 minutes to build wrapper + docs
Time saved: Hours of debugging + reduced API costs
The Tools Addition
After fixing the wrapper issue, I asked what system packages would help. The AI suggested:
- `fd-find` - Better file finding (clearer than `find`)
- `yq` - YAML parsing (like `jq` for YAML)
- `bat` - Syntax highlighting
These aren’t for the AI. They’re for making bash commands more readable and reliable. Better tooling means fewer edge cases the AI has to handle.
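A few illustrative uses (on Ubuntu, apt names the binaries `fdfind` and `batcat`; the YAML key below is made up):

```bash
# fd: find every markdown file under output/ without find's flag soup
fdfind -e md . output/

# yq: pull one field out of a YAML config, jq-style (hypothetical key)
yq '.rag.collection' config.yaml

# bat: read a script with syntax highlighting
batcat rag-cli
```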
Installing them was trivial. Not installing them would be penny-wise, pound-foolish.
Results
Before wrapper:
- 3 failed attempts to embed 3 files
- ~2 minutes wasted per RAG operation
- Frequent context switching to debug
After wrapper:
- 0 failed attempts (100+ operations since)
- <5 seconds per operation
- No debugging needed
ROI:
- 30 minutes to build
- Saved 2+ hours in first week
- Eliminated entire error class
Takeaway
When you see your AI assistant making the same mistakes repeatedly, the problem isn’t the AI. The problem is your interface wasn’t designed for AI collaboration.
Simple wrappers, clear docs, consistent patterns. These aren’t “nice to have.” They’re force multipliers for AI-assisted development.
The best part? These same improvements make your tools better for humans too.
About the Author: Aaron Lamb is the founder of Hexaxia Technologies, specializing in cybersecurity consulting, infrastructure engineering, and AI product development.