Skip to main content

ai-prompt-engineering

Provides operational guidance for building production-ready prompts in LLM applications, focusing on structured outputs and safety.

Install this skill

or
92/100

Security score

The ai-prompt-engineering skill was audited on Mar 8, 2026 and we found 4 security issues across 2 threat categories. Review the findings below before installing.

Categories Tested

Security Issues

medium line 316

Access to hidden dotfiles in home directory

SourceSKILL.md
316- **AGENTS.md Integration**: Place project-specific prompt guidance in AGENTS.md files at global (~/.codex/AGENTS.md), project-level (./AGENTS.md), or subdirectory scope for layered instructions
low line 8

External URL reference

SourceSKILL.md
8**Modern Best Practices (January 2026)**: versioned prompts, explicit output contracts, regression tests, and safety threat modeling for tool/RAG prompts (OWASP LLM Top 10: https://owasp.org/www-proje
low line 171

External URL reference

SourceSKILL.md
171- **Security**: prompt injection, data exfiltration, and tool misuse are primary threats (OWASP LLM Top 10: https://owasp.org/www-project-top-10-for-large-language-model-applications/).
low line 176

External URL reference

SourceSKILL.md
176- Align tracing/metrics with OpenTelemetry GenAI semantic conventions (https://opentelemetry.io/docs/specs/semconv/gen-ai/).
Scanned on Mar 8, 2026
View Security Dashboard
Installation guide →