Skill · community

promptfoo-evaluation

Configures and runs LLM evaluation using the Promptfoo framework. Use when setting up prompt testing, creating evaluation configs (promptfooconfig.yaml), writing Python custom assertions, implementing llm-rubric for LLM-as-judge grading, or managing few-shot examples in prompts. Triggers on keywords like "promptfoo", "eval", "LLM evaluation", "prompt testing", or "model comparison".
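As a sketch of the "Python custom assertions" workflow the skill covers: promptfoo can load an assertion from a Python file exposing a `get_assert(output, context)` function, which returns a pass/fail result with an optional score and reason. The file name and the refusal-detection criterion below are illustrative, not part of the skill itself.

```python
# assert_no_refusal.py -- hypothetical custom assertion file for promptfoo.
# For an assertion of type `python`, promptfoo calls get_assert(output, context);
# returning a dict lets you attach a score and a human-readable reason.

def get_assert(output, context):
    # Illustrative grading criterion: fail if the model output looks like a refusal.
    refusal_markers = ("i can't", "i cannot", "as an ai")
    lowered = output.lower()
    refused = any(marker in lowered for marker in refusal_markers)
    return {
        "pass": not refused,
        "score": 0.0 if refused else 1.0,
        "reason": "model refused" if refused else "no refusal detected",
    }
```

In promptfooconfig.yaml, such a file would typically be referenced from a test's assert list with `type: python` and a `file://` value pointing at the script.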

The particulars.

Kind: community
Shelf: directory
Install: github-repo

How to use it.

Clone daymade/claude-code-skills and copy the skill's directory into ~/.claude/skills/, or install it via your plugin manager if the repo is registered as a plugin marketplace.