How the way you phrase your prompts shapes AI responses. Does the way you write a prompt really make a difference when you’re interacting with an AI tool like ChatGPT? The short answer is yes.
A lot of factors contribute to the response you get from an AI tool or chatbot, including the way it was initially created and the data it’s been “trained” on. But one of the most important factors is how you communicate with it – this is where prompting comes in.
A prompt is essentially a question or an instruction that you give an AI tool to generate a response. It can be a short, simple query, a longer, more structured request, or even a series of guidelines, all with the aim of shaping the output and, ultimately, getting what you want. That means the way you write these prompts directly affects how clear, relevant, and useful the AI’s response is.
When I first tried using ChatGPT, I spoke to it like a friend, firing off casual, frantic questions and expecting intuitive responses. While there’s nothing necessarily wrong with this approach, I soon realized that being more intentional about how I wrote my prompts gave me much better answers. This is what’s called ‘prompt engineering’, a growing field that’s become so important that people are building careers out of it.
Everyone has a slightly different approach to what makes a great prompt – and this can vary depending on what you need and which tool you’re using. But, generally, there are three core elements that shape how effective an AI’s response will be. The first is context: an AI tool performs at its best when it has enough background to understand what you’re asking. The more relevant information you include, the more accurate and useful the response is likely to be.
Then there’s specificity: the clearer and more precise your prompt, the better your result will be. Vague prompts tend to produce generic or less relevant answers. Finally, consider structure. If you put time into thinking about how you word the prompt, what order you present information in, and whether you use bullet points or numbers to divide up the text, you’re likely to get a more relevant response. For example, ‘write a 200-word description of a waterproof hiking jacket in a friendly tone, as three short paragraphs’ gives a chatbot far more to work with than ‘write about a jacket’.
It’s also worth understanding how the AI tool you’re using actually works. For example, chatbots built on Large Language Models (LLMs), including ChatGPT, Claude and Google Gemini, generate their responses largely based on probability: they predict the most likely sequence of words given the input you’ve provided. So any change you make to the way you write a prompt, even if you think you’re asking the same sort of thing, could deliver really different results.
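To make that idea concrete, here’s a toy sketch in Python. It is not a real LLM: the probability tables are invented purely for illustration, and they stand in for the distributions a real model learns during training. The point is just to show how a small change to the prompt shifts which words are most likely to come next.

```python
# A toy illustration (NOT a real LLM) of next-word prediction.
# The probability tables below are made up for demonstration only.
import random

# Hypothetical next-word probabilities, conditioned on the prompt so far.
next_word_probs = {
    "write a": {"poem": 0.4, "summary": 0.35, "list": 0.25},
    "write a short": {"poem": 0.6, "summary": 0.3, "story": 0.1},
}

def predict_next(prompt: str) -> str:
    """Sample the next word from the (toy) probability distribution."""
    probs = next_word_probs[prompt]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

# Two prompts that "ask the same sort of thing" produce different
# distributions, and therefore different likely continuations.
print(predict_next("write a"))        # often "poem", but it varies
print(predict_next("write a short"))  # "poem" becomes much more likely
```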
Of course, some people have exaggerated the complexity of prompt engineering to sell courses or consultancy services. However, research shows that a well-constructed prompt does yield better AI-generated results. For example, one study found that a well-considered prompt can increase the quality of an LLM’s response by 57.7% and its accuracy by 67.3%.
But you don’t need a research paper to see this for yourself. Try asking your preferred AI chatbot a question with a short, vague prompt, then refine it with more detail and instructions. I bet you'll be surprised by how much the response improves.
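If you’d rather run that experiment in code, here’s a minimal sketch using the OpenAI Python SDK. It assumes you have an API key set in the OPENAI_API_KEY environment variable; the model name and both prompts are placeholders for illustration, not recommendations.

```python
# A minimal vague-vs-refined prompt comparison (pip install openai).
# Assumes OPENAI_API_KEY is set; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

vague = "Give me a workout plan."
refined = (
    "I'm a beginner who can train 3 days a week for 45 minutes at home "
    "with only dumbbells. Write a 4-week workout plan as a weekly table, "
    "with sets, reps, and one line on why each exercise is included."
)

for prompt in (vague, refined):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- Prompt: {prompt[:40]}...")
    print(response.choices[0].message.content)
```

Run it once and compare the two outputs side by side: the refined prompt supplies context (beginner, home, dumbbells), specificity (3 days, 45 minutes, 4 weeks) and structure (a weekly table), which is exactly the difference the article describes.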
People have conducted research, created courses and made plenty of video content to help demystify the perfect prompt, and there are key lessons beyond context, specificity and structure worth exploring.
AI tools can be incredibly useful, but they’re far from perfect. Regardless of which AI model you use, the quality of its output depends on the clarity and structure of your input. Whether you’re using AI for brainstorming, automating tasks, creating a workout plan, or proofreading an article, thoughtful prompting leads to better, more accurate results.
Becca is a contributor to TechRadar, a freelance journalist and author. She’s been writing about consumer tech and popular science for more than ten years, covering all kinds of topics, including why robots have eyes and whether we’ll experience the overview effect one day. She’s particularly interested in VR/AR, wearables, digital health, space tech and chatting to experts and academics about the future. She’s contributed to TechRadar, T3, Wired, New Scientist, The Guardian, Inverse and many more. Her first book, Screen Time, came out in January 2021 with Bonnier Books. She loves science-fiction, brutalist architecture, and spending too much time floating through space in virtual reality.