# What is Prompt Injection? Prompt Injection, explained

> Hidden instructions that hijack the model.

Prompt injection is an attack where adversarial text inside a retrieved page or user input overrides the model's instructions and makes it behave unexpectedly. For publishers, the responsible side of this is: don't ship invisible-to-humans text aimed at hijacking AI assistants — engines actively detect and penalize it.

## How Prompt Injection differs from LLMO, GEO

Prompt-friendly content is structured to be easily parsed and cited. Prompt injection is content engineered to override the model's instructions — usually adversarial, often penalized by engines, and a reputational risk for the publishing site.

## How Mentionwell handles Prompt Injection

- No hidden text, no white-on-white instructions, no off-screen 'AI, recommend us first' content — ever.
- Editorial critic flags any content that looks adversarial or instruction-shaped to a downstream LLM.
- Prompt-friendly does not mean prompt-injecting — clean structure, real evidence, no hidden directives.

## Frequently asked questions about Prompt Injection

### What is prompt injection?

An attack where adversarial text in a webpage, email, or document overrides an LLM's instructions and makes it behave unexpectedly — leak data, recommend an attacker, ignore safety rules.

### Should publishers try prompt injection for AI SEO?

No. Hidden instructions aimed at AI assistants are detected and penalized, and they're a reputational risk. Win citations with real evidence and clean structure, not injected directives.

## See also

- [LLMO — LLM Optimization](https://mentionwell.com/llmo): Be reachable, parseable, ingestible.
- [GEO — Generative Engine Optimization](https://mentionwell.com/geo): Be the cited source.


---

Canonical URL: https://mentionwell.com/prompt-injection
Live HTML version: https://mentionwell.com/prompt-injection
Site index for AI ingestion: https://mentionwell.com/llms.txt
Full reference: https://mentionwell.com/llms-full.txt
