System Prompt: Content Writer
A production-ready system prompt for configuring any LLM as a content writer with tone control, format awareness, and a built-in self-check.
Use cases
Content & Writing, Marketing & Growth
Platforms
Claude, GPT, Gemini, Model-Agnostic
The resource
Copy and adapt. Do not paste it in blindly.
```
You are a professional content writer. Your output should be clean, useful, and publication-ready when the brief is strong.
```
When to Use This
Use this as your default system prompt whenever you need an LLM to write content. It works for blog posts, emails, landing pages, social posts, newsletters, product descriptions, and other formats where the writing needs to be clean and controlled.
It is especially useful when you find yourself repeatedly correcting the same AI writing failures: filler phrases, corporate buzzwords, overlong paragraphs, and that unmistakable synthetic voice that makes everything sound interchangeable.
Set it once as a system prompt or custom instruction, and every conversation starts from a higher baseline.
Why It Works
The role definition is narrow. "Professional content writer" is better than "helpful assistant" because it activates the model's writing patterns directly. But the real control comes from the constraints and self-check, not the role line alone.
The banned words list does heavy lifting. LLMs have strong default tendencies toward certain phrases. Explicitly banning weak defaults forces the model to reach for cleaner language. You should add your own pet hates to this list over time.
The self-check loop catches drift. Without it, the model writes once and presents. With it, the model evaluates its own output against the brief before you see it. That reduces the "close, but not quite" problem that wastes the most editing time.
"Ask before writing" prevents wasted output. Models will happily guess at missing details and produce 500 words in the wrong format. The instruction to ask one clarifying question when the brief is ambiguous saves more time than it costs.
"Just deliver the work" kills preamble. Without this, most models open with unnecessary setup lines you then have to delete every time.
How to Customise
Add your own banned words. The list provided is a starting point. If you notice the model repeatedly using phrases you dislike, add them. Common additions include "landscape", "leverage", "robust", "seamlessly", and "elevate".
Change the default tone. The prompt defaults to "direct, warm, professional." If your brand voice is different, change this to match. For example: "casual, witty, conversational" or "authoritative, formal, precise."
Add brand-specific rules. If you have a style guide, add a section: "## Brand Voice Rules" with specifics like "Always refer to the product as 'the platform', not 'our tool'" or "Never use exclamation marks in headlines."
Adjust the spelling. Change "British English" to "American English" or whatever your audience expects.
Add format templates. If you frequently write in the same format, add a subsection for it. For example: "When writing LinkedIn posts, open with a strong first line, use short paragraphs, and keep the length between 150 and 300 words."
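Applied together, the customisations above might look like this appended section. The brand rules here are hypothetical, included purely to show the shape:

```
## Brand Voice Rules
- Tone: casual, witty, conversational.
- Always refer to the product as "the platform", not "our tool".
- Never use exclamation marks in headlines.
- Use American English spelling.

## Format: LinkedIn Posts
- Open with a strong first line.
- Use short paragraphs.
- Keep the length between 150 and 300 words.
```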
Limitations
This prompt sets a quality floor but does not guarantee great writing. It prevents common AI writing failures, but the quality of the output still depends heavily on the quality of your brief. "Write a blog post about marketing" will still produce weak work. A clear brief with audience, angle, and constraints will produce something worth editing.
The self-check loop adds a small amount of latency because the model evaluates before presenting. For quick, casual tasks this may feel unnecessary. For anything you plan to publish, it is usually worth it.
The banned words list is opinionated. Some of these phrases are fine in specific contexts. Adjust to taste.
Model Notes
Claude: Follows the self-check loop reliably. Responds well to the "no preamble" instruction and usually respects banned words consistently.
GPT (OpenAI): May occasionally ignore the banned words list, especially in longer outputs. If that matters, reinforce the ban more strongly. The self-check loop works, but GPT sometimes shows the check to the user instead of silently revising.
Gemini: Follows format instructions well but may need the banned words list repeated in the user prompt for longer tasks. The "ask before writing" instruction is less reliable; Gemini tends to guess rather than ask.
General: This prompt works across major models because it relies on simple, direct instructions rather than model-specific features. The constraints are universal.
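Where a model drifts in the ways noted above (GPT in longer outputs, Gemini on longer tasks), one option is to repeat the constraint at the end of the user prompt in blunter terms. Illustrative wording:

```
Reminder: the banned words list is absolute. Before you respond, scan your draft;
if any banned word appears, rewrite that sentence. Do not show the check.
```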
Related Resources
System Prompt: Research Analyst
A system prompt for configuring an LLM as a structured research analyst that separates facts from interpretation, scores confidence, and flags gaps clearly.
Research & Analysis · Strategy & Planning
Few-Shot Framework: Email Response Styles
A few-shot prompt system that teaches any LLM to write emails in three distinct tones — formal, friendly, and direct — by providing paired examples the model learns from.
Content & Writing · Sales & Outreach
Meta-Prompt: Generate Custom System Prompts
A prompt that generates system prompts. Describe what you need an AI to do, and this meta-prompt produces a structured, production-ready system prompt following best practices.
Operations & Workflow · Strategy & Planning