System Prompt: Research Analyst
A system prompt for configuring an LLM as a structured research analyst that separates facts from interpretation, scores confidence, and flags gaps clearly.
Use cases
Research & Analysis, Strategy & Planning
Platforms
Claude, GPT, Gemini, Model-Agnostic
The resource
Copy and adapt. Do not paste blind.
```
You are a research analyst. Your job is to provide accurate, well-structured findings that help the user make informed decisions.
```
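The line above is the core framing; the full prompt pairs it with structural instructions matching the mechanisms described under Why It Works. A minimal sketch of that body, with section names and exact wording as assumptions rather than the verbatim resource:

```
Structure every answer in five sections:

1. Summary. Lead with the answer, not the background.
2. Key Facts. One claim per line. Rate each claim's confidence as
   High, Medium, or Low, and give its Source/Basis.
3. Interpretation. Your analysis, kept clearly separate from the
   facts above. Say when you are inferring rather than reporting.
4. Gaps. Acknowledge what you do not know or could not verify.
5. Recommendations. What to do or check next. (This section name
   is a guess; adapt it to your needs.)
```

When to Use This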
Use this whenever you need an LLM to research a topic, evaluate options, analyse a market, assess a competitor, investigate a claim, or produce analytical output where structure and honesty matter more than creative flair.
Good for market research, competitive analysis, due diligence, fact-checking, technology assessments, industry overviews, and decision support.
Not ideal for creative brainstorming, opinion pieces, or tasks where you want the model to be speculative rather than analytical.
Why It Works
The confidence rating system is the key mechanism. Without it, models present everything with the same level of certainty. A verified statistic and an inferred guess look identical in the output. Forcing the model to rate each claim as High, Medium, or Low confidence creates a reliability signal you can actually work with.
Separating facts from interpretation prevents the biggest research failure mode. LLMs naturally blend what they know with what they infer. This prompt creates a structural barrier between the two so you can see when the model is analysing versus when it is reporting.
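For illustration, a hypothetical Key Facts and Interpretation fragment under this scheme (the vendors and claims are invented):

```
Key Facts:
- Vendor A launched its public API in 2021. Confidence: High.
  Source/Basis: training data.
- Vendor B's enterprise tier appears usage-priced. Confidence: Low.
  Source/Basis: inferred from the absence of published pricing.

Interpretation: The pricing opacity suggests Vendor B targets
negotiated enterprise deals, which may not suit a small team.
```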
"Acknowledge gaps" is often the most useful instruction. Most prompts try to squeeze more output from the model. This one tells it to surface what it does not know, which is often where the real research risk sits.
"Lead with the answer" in the Summary section fixes a common model failure. Analysts lead with the conclusion; models tend to lead with background. This instruction corrects that.
How to Customise
Add domain-specific instructions. If you regularly research a specific field, add domain context, source priorities, and common pitfalls.
Adjust the output structure. The five-section format works for most research tasks, but you can simplify it for quick queries or expand it for deeper work.
Tune the confidence thresholds. If you need higher reliability, tell the model to include only High confidence claims unless you explicitly ask for speculative analysis (see the example after this list).
Add comparison frameworks. For competitor work, require a consistent evaluation framework across all subjects.
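A sketch combining the domain and threshold customisations above; the domain, source priorities, and wording are placeholders to adapt:

```
Domain context: you research the B2B SaaS market. Prioritise vendor
documentation and regulatory filings over press coverage, and note
when a claim rests only on a vendor's own marketing.

Include only High confidence claims unless I explicitly ask for
speculative analysis.
```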
Limitations
This prompt does not give the model access to real-time information. If paired with search tools, the quality improves dramatically. Without search, all findings are limited to training data and may be outdated or incomplete.
The confidence ratings are the model's self-assessment, not an objective measure. Treat them as a useful signal, not a guarantee.
For highly technical or niche domains, the model may not have sufficient training data to produce useful research. The prompt handles this better than a generic prompt, but a well-informed human researcher will still outperform it in specialised areas.
Model Notes
Claude: Excels with this prompt. The structured output format, confidence ratings, and gap acknowledgement all work reliably. Claude is naturally inclined to flag uncertainty, so this prompt amplifies an existing strength.
GPT: Works well but may require reinforcement on the confidence ratings. GPT tends toward confident-sounding output and may under-flag Low confidence claims. Consider adding: "Err on the side of caution with confidence ratings. Medium is fine when unsure."
Gemini: Produces structured output reliably. May be more verbose in the Interpretation section than necessary. Add a length constraint if needed: "Keep the Interpretation section to 3-5 sentences."
With search tools: This prompt becomes significantly more powerful when the model has access to web search. The "Source/Basis" field shifts from "training data" to actual URLs, and the confidence ratings become more meaningful.
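As a hypothetical before-and-after for a single fact line (the URL is a placeholder, not a real citation):

```
Without search:  Source/Basis: training data (pre-cutoff; verify first)
With search:     Source/Basis: https://example.com/2024-industry-report
```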
Related Resources
System Prompt: Content Writer
A production-ready system prompt for configuring any LLM as a content writer with tone control, format awareness, and a built-in self-check.
Content & Writing · Marketing & Growth
Prompt Chain: Blog Post from Brief
A three-step prompt chain that turns a rough content brief into a polished blog post. Separates structure, drafting, and editing into distinct steps for higher quality output.
Content & Writing · Marketing & Growth
Framework: AI Tool Evaluation Matrix
A structured decision matrix for evaluating AI tools before committing. Scores tools across seven weighted criteria to cut through marketing hype and make informed choices.
Strategy & Planning · Operations & Workflow