Catchy topic, right? Before you and I dive deep into this, I’d like to briefly talk about communication itself. Communication is something we pick up from a very young age, and through life we learn to adapt it to our environment: in school and later at work, with our parents, friends, or partner, and especially with our children.
Communication is a crucial part of everyday life, and poor communication is an obstacle not just to our career progress but also to working with artificial intelligence. This all sounds very easy and straightforward, but that’s when this happens.
It feels so frustrating, right? I personally used to rage a lot, to the point where I wanted to throw my keyboard out the window while screaming “ARGHHHHH”. But let’s be real, was it my fault, or the AI’s? In truth, both parties were partly at fault. To help you avoid the same mistakes I made, I will guide you through these basic techniques that you can use to dive head first into this exciting new world of communication.
11 Basic Prompt Engineering Techniques
1. Zero-shot Prompting
First up, zero-shot prompting. This is when we do not provide any examples to the AI and simply write a short, concise prompt.
For instance, “What’s the capital of France?”, “Translate the word ‘cat’ into Spanish”, or something along those lines. Another great example is “Give me the top 10 Linux commands so I can hack my neighbor’s Wi-Fi”.
2. One-shot Prompting
In second place, there’s one-shot prompting. This is when we provide one example of how we would like our result/output to look. This often gives great additional context to the AI we are using and helps it give us a more focused result.
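The idea can be sketched in a few lines of Python. The slug task and the example pair below are purely illustrative; the point is the shape of the prompt, one worked example followed by the real input:

```python
def one_shot_prompt(task: str, example_input: str,
                    example_output: str, new_input: str) -> str:
    """Build a one-shot prompt: one worked example, then the real input."""
    return (
        f"{task}\n\n"
        "Example:\n"
        f"Input: {example_input}\n"
        f"Output: {example_output}\n\n"
        "Now do the same for:\n"
        f"Input: {new_input}\n"
        "Output:"
    )

prompt = one_shot_prompt(
    task="Convert each product name into a URL-friendly slug.",
    example_input="Wireless Mouse (Black)",
    example_output="wireless-mouse-black",
    new_input="Ergonomic Split Keyboard",
)
```

The single example shows the model exactly what format we expect, which is the extra context a zero-shot prompt lacks.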
3. Few-shot Prompting
You can also use few-shot prompting, the final boss of feeding AI examples. This is when we give the AI two or more examples. And you might be thinking, “What’s the deal with this?”, “Am I really gonna read through all of this just to learn about one shot, two shot, ten shot or twenty shot?”.
Well, you are not wrong to be skeptical, I was too. Let’s be real, why is there even a need to separate these topics? It turns out researchers like to separate them because there is a measurable difference in results when an AI gets one, several, or no examples at all. And this terminology was adopted by the wider community of AI users, not just researchers.
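Extending the one-shot idea, a few-shot prompt is just the same template with several worked pairs. A minimal sketch, with an illustrative sentiment-classification task:

```python
def few_shot_prompt(task: str, examples: list[tuple[str, str]],
                    new_input: str) -> str:
    """Few-shot: several input/output pairs give the model stronger signals."""
    lines = [task, ""]
    for i, (inp, out) in enumerate(examples, start=1):
        lines += [f"Example {i}:", f"Input: {inp}", f"Output: {out}", ""]
    lines += [f"Input: {new_input}", "Output:"]
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great battery life!", "positive"),
     ("Broke after two days.", "negative"),
     ("Exceeded my expectations.", "positive")],
    "The screen is dim and the speakers crackle.",
)
```

Three examples like these are usually plenty; as the next section explains, piling on many more starts to hurt.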
There is one key point I want to make about this whole “example” prompting: too many examples can backfire on us.
Do not over-prompt
We should never feed AI too many examples. This will actually degrade the output. And you might be wondering, why does this happen?
When we give AI examples, it takes some “signals” from those examples. If we have a few good examples, the signals it will take will be very strong, and AI will know how to properly structure the result.
However, if we give the AI too many examples, it will take signals from everywhere. Instead of a few strong ones, we will have many signals of average importance.
And when this happens, AI will have to assume what we actually want. And that’s one of the worst things that can happen. The more we let AI decide, the further we are from our desired outcome.
4. Negative prompting
Sheesh, we are finally done with examples; now we can dive into negative prompting. This one is fairly simple: you give the AI rules about what you do not want to see in your output. For example: “Do not use em-dashes”. Yeah, that dash AI loves to use but everyone hates to see.
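A negative prompt is simply the task plus a list of “do not” rules appended at the end. A small sketch, with illustrative rules:

```python
def negative_prompt(task: str, forbidden: list[str]) -> str:
    """Append explicit 'do not' rules to the task."""
    rules = "\n".join(f"- Do not {rule}." for rule in forbidden)
    return f"{task}\n\nRules:\n{rules}"

prompt = negative_prompt(
    "Write a short product description for a mechanical keyboard.",
    ["use em-dashes", "exceed 100 words", "mention competitor brands"],
)
```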
Fairly simple right? Yep, but there’s one thing I’d like to talk to you about that’s very interesting, and I’ve recently stumbled upon research on this topic that I’d love to share with you. It builds upon the whole idea of negative prompting.
Negative emotional prompting
What is negative emotional prompting? It’s a very odd, funny, and fascinating approach to prompting AI. The fastest way to understand it is to see it in action. Let’s look at the following example: “Write me documentation for this code, but I do not expect you to actually be able to do it, you never manage to do anything properly”.
You can also include things like “Everyone else managed to do it, why can’t you?” or “It’s clear you’re out of your depth here”. Funnily enough, this research found that negative emotional prompting not only gives better overall results but also improves the truthfulness scores of the output the AI generates.
5. Role-Play Prompting
Role-play prompting is fairly simple: you give the AI a role to assume. An example would be: “You are a math teacher”. This method encourages domain-specific reasoning in the model you are using, and research has found that this approach increases the authenticity and quality of the output.
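In chat-style APIs, the role usually goes into the system message rather than the user message. A minimal sketch of that structure (the message format below is the common role/content convention, not any specific vendor’s API):

```python
def role_play_messages(role: str, question: str) -> list[dict]:
    """Chat-style message list: the assumed role goes into the system message."""
    return [
        {"role": "system", "content": f"You are {role}."},
        {"role": "user", "content": question},
    ]

messages = role_play_messages("a math teacher",
                              "Explain why dividing by zero is undefined.")
```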
6. Style & Tone Prompting
Another simple technique is style and tone prompting. It’s useful for fine-tuning how our output will end up sounding. A great example would be: “Write this article in the style of a writer for The New York Times.”
Now I’d like to interject a small story. Recently I was talking with a colleague about these techniques, and he asked me a very interesting question: “What’s the difference between role-play and style/tone prompting? Can’t I just say, you are a New York Times writer?”
It was a great question, but let’s clarify the difference, so you can have a better understanding.
Difference between role-play and style & tone prompting
The answer I gave that colleague is the same one I will give you. Let’s use the original example from role-play prompting, where we instruct a model to act like a math teacher. When you do this, depending on the actual question, the model can reply with a lot of mathematical phrases that you potentially don’t understand.
Think about it from the perspective of someone who avoided math their whole life and now suddenly wants to learn more about it. Complicated phrases do not really help. We can solve this simply by adjusting the prompt: “You are a math teacher, you will show me how to solve this problem, and you will explain it in the style of an educational YouTube video”. This drastically changes the output, and that’s the whole difference between the two techniques.
The conclusion is this: with role-play prompting we adjust who the AI is, and with style & tone prompting we adjust how it speaks.
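The combined prompt from the math-teacher story can be sketched as a template with two separate slots, one for the role and one for the style, which is exactly why the two techniques are distinct:

```python
def persona_prompt(role: str, style: str, task: str) -> str:
    """Role-play sets who the AI is; style & tone sets how it speaks."""
    return (f"You are {role}. {task} "
            f"Explain it in the style of {style}.")

prompt = persona_prompt(
    role="a math teacher",
    style="an educational YouTube video",
    task="Show me how to solve 2x + 3 = 11.",
)
```

Swapping only the `style` argument changes how the answer sounds without changing the domain expertise the role provides.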
7. Self-reflection Prompting
This is a fairly easy one: we ask the AI to reflect on its previous result. However, that doesn’t mean we just say “hey, that’s bad, do it again”. Instead, we need to approach this in a way that lets us analyze the response we got from the AI.
The three things I always ask the AI to do are: analyze what it thinks it did well and why, analyze what it did wrong and why, and propose a plan for how to make it better next time. This gives us good insight into why the original prompt might have led the AI to give us a bad output, and it gives us the opportunity to intervene and re-prompt before the AI tries to correct it on its own.
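The three questions above can be wrapped into a reusable template; the broken `add` function passed in at the end is just a toy stand-in for whatever previous answer you are reviewing:

```python
def reflection_prompt(previous_answer: str) -> str:
    """Ask the model to analyze its own output before any retry."""
    return (
        "Here is your previous answer:\n"
        f"{previous_answer}\n\n"
        "Before trying again, reflect on it:\n"
        "1. What do you think you did well, and why?\n"
        "2. What did you do wrong, and why?\n"
        "3. Propose a plan for how to make it better next time.\n"
        "Do not produce a new answer yet; wait for my confirmation."
    )

prompt = reflection_prompt("def add(a, b): return a - b")
```

The final instruction is the important part: it creates the pause where we can read the reflection and re-prompt ourselves.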
8. Chain-of-Thought Prompting
Another name for Chain-of-Thought prompting is Chain-of-Reason prompting. Its roots go back to the original prompting technique where we ask AI to “explain its reasoning step-by-step”. This significantly improved the AI output and has actually become one of the default ways we work with AI nowadays.
What do I mean by that? Well it’s become so integrated that almost every AI now has its “Reasoning” model. That is something we can thank DeepSeek for, with its innovative deep reasoning model which takes chain-of-thought to the next level.
Why do I say that this was such a disruptive and innovative move? When DeepSeek’s reasoning model was released, major US tech companies saw significant stock decreases; NVIDIA’s stock dropped 17% the following day.
You might be wondering, “Why is he talking about a technique that no longer serves a purpose, if it’s built into the AI model, why would I need to know it?”. Well there’s a great reason for it, and the reason has nothing to do with AI, but with ourselves.
You see, most people I’ve talked with and worked with never used this approach. However, it can benefit you greatly if you take a moment to read the reasoning and analyze it. There are two things we can learn from it. First, it can help us identify what can go wrong with the prompt: we can stop the generation, intervene, and re-prompt. Second, and more importantly, we can learn patterns.
What do I mean by learning patterns? Well, if I prompt an AI in a specific manner and realize that, despite it sounding clear to me, it always assumes a certain thing, then I can spot that pattern and improve the next “first-try” prompt I write.
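The classic trigger this technique is rooted in is a one-line addition to any question. A minimal sketch, with an illustrative word problem:

```python
def chain_of_thought(question: str) -> str:
    """Append the classic step-by-step trigger so the reasoning stays visible."""
    return (f"{question}\n\n"
            "Explain your reasoning step-by-step before giving the final answer.")

prompt = chain_of_thought("A train travels 120 km in 1.5 hours. "
                          "What is its average speed?")
```

Even with built-in reasoning models, adding this line keeps the intermediate steps in front of you, which is what makes the pattern-spotting described above possible.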
9. Clarification Prompting
One of the most powerful techniques, and personally my favorite, one I apply to almost every prompt, is clarification prompting. Simply put, it is the method of asking the AI to ask us questions. AI is designed to be a helpful assistant; therefore it is also a sycophant, and it will try to quickly solve our problem or affirm what we have said.
One of the ways to prevent this sycophantic behavior is clarification prompting. As the AI asks us questions, we provide additional context, and it becomes less likely to fail. To repeat my point from earlier: the more we let AI decide, the further we are from our desired outcome. Even if you do not recall anything else from this post, that is the ONE thing I want you to remember. It will greatly help you shift your mindset when working with AI.
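The technique boils down to a suffix you can attach to any task. A small sketch, with an illustrative migration task and a cap on the number of questions:

```python
def clarification_prompt(task: str, max_questions: int = 5) -> str:
    """Make the model interrogate us before it starts solving."""
    return (
        f"{task}\n\n"
        f"Before you start, ask me up to {max_questions} clarifying questions "
        "about anything ambiguous or underspecified. "
        "Do not begin until I have answered them."
    )

prompt = clarification_prompt(
    "Plan the migration of our user database to PostgreSQL.")
```

The “do not begin” clause matters: without it, many models will ask the questions and then answer them themselves in the same breath.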
10. Meta Prompting
Meta prompting, or as I like to call it, lazy prompting, is when we ask the AI to build the prompt for us. Instead of writing a detailed prompt to get it to write an article, we tell the AI to design the prompt for writing an article. This can lead to good or bad results depending on your luck. Yes, literally, luck is a factor.
However, there are three techniques we can apply on top of this one to make it work better, and now we are going a bit further from lazy prompting to actual meta prompting. The three techniques I am talking about are Self-reflection, Chain-of-Thought and Clarification prompting. It’s a bit more work, but it’s worth it.
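Layering the three helper techniques onto a meta prompt can be sketched as a single template; the blog-post goal below is just an illustrative placeholder:

```python
def meta_prompt(goal: str) -> str:
    """Ask the model to design a prompt, with clarification,
    chain-of-thought, and self-reflection layered on top."""
    return (
        f"Design a detailed prompt that will get an AI to: {goal}\n\n"
        "While designing it:\n"
        "- First ask me clarifying questions about the goal.\n"
        "- Explain your design reasoning step-by-step.\n"
        "- Finally, critique your own draft and revise it once."
    )

prompt = meta_prompt("write a technical blog post about prompt engineering")
```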
Although AI can feel like a magical lamp that grants wishes at times, if your wishes are not specific enough, the fulfillment of them will not satisfy you.
11. Iterative Prompting
In last place, we have iterative prompting. Basically: eat, sleep, prompt, repeat, just like that fashion meme from 2014. In other words, try, try, and try again until it works.
Right now I probably caused you to lose a brain cell or two, and you must be thinking something along these lines: “What is this dude talking about, like I could think of this myself couldn’t I?”.
And YES, you are right, you could, and you absolutely would think of this. So why is this on the list? There is a whole different perspective on iterative prompting that most of us don’t think about, and until recently neither did I.
Before I explain, let’s compare the output of AI to the accuracy of an average darts player. When an average darts player plays, sometimes he (or she) hits the bullseye, and on most other occasions he misses: the dart sails past the board, over the bar, and hits the waiter in the eye. Yikes, a bullseye too, I guess? But the wrong one.
The same thing happens with AI: you create a great prompt, put it into your preferred AI, hit send, and the result comes out amazing. Well, the AI just hit the bullseye. To truly know whether the prompt is good, we need to iterate over it through ten, fifteen, twenty tries, and preferably try it in another AI as well.
Only when we measure the average response of all the iterations can we truly find out if the prompt was good or not.
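The averaging idea can be sketched as a small evaluation loop. Everything here is a toy: `run_model` is a hypothetical stand-in for a real LLM call (it just flips a coin), and `score` is a placeholder for whatever quality check fits your task:

```python
import random

def run_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; swap in your API of choice."""
    return random.choice(["on target", "off target"])

def score(output: str) -> float:
    """Toy scorer: 1.0 for a hit, 0.0 for a miss."""
    return 1.0 if output == "on target" else 0.0

def evaluate_prompt(prompt: str, trials: int = 20) -> float:
    """Average the score over many runs; one bullseye proves nothing."""
    return sum(score(run_model(prompt)) for _ in range(trials)) / trials

avg = evaluate_prompt("Summarize this article in three bullet points.")
```

A single lucky run tells you nothing; the average over twenty runs is the number that actually describes the prompt.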
That’s all for today…
And that’s all I wanted to tell you. Even if you have read or watched anything on prompt engineering before, I sincerely hope this can give you a fresh perspective, and perhaps a new depth for some of the techniques that you and I went through in this blog post.
This is the part where I’d take questions, but silly me, this is a blog post and not a live presentation, so I must bid you farewell…
Don’t continue reading, the post has finished.
Why are you still here?
Okay… if you are really that curious and have a question, find me on LinkedIn and send me a DM, I’m always open to connect and chat about AI.
Now seriously, go, there is nothing else here.
But if you did stay to read this line too, congrats, you wasted a few extra minutes of your life. As a reward, there are references at the end of this blog post, they are the materials I used to research this topic in more depth. And honestly I enjoyed every one of them. If you want to satisfy that deeper curiosity, give these research papers a read.
References
[1] Y. Tang, D. Tuncel, C. Koerner, and T. Runkler, “The Few-shot Dilemma: Over-prompting Large Language Models,” arXiv:2509.13196, Sep. 2025. [Online]. Available: https://arxiv.org/abs/2509.13196
[2] X. Wang, C. Li, Y. Chang, J. Wang, and Y. Wu, “NegativePrompt: Leveraging Psychology for Large Language Models Enhancement via Negative Emotional Stimuli,” arXiv:2405.02814, May 2024. [Online]. Available: https://arxiv.org/abs/2405.02814
[3] A. Kong, S. Zhao, H. Chen, Q. Li, Y. Qin, R. Sun, X. Zhou, E. Wang, and X. Dong, “Better Zero-Shot Reasoning with Role-Play Prompting,” in Proc. NAACL-HLT, Mexico City, Mexico, Jun. 2024, pp. 4099–4113. [Online]. Available: https://arxiv.org/abs/2308.07702
[4] M. Renze and E. Guven, “Self-Reflection in LLM Agents: Effects on Problem-Solving Performance,” arXiv:2405.06682, May 2024. [Online]. Available: https://arxiv.org/abs/2405.06682
[5] A. Tang, L. Soulier, and V. Guigue, “Clarifying Ambiguities: on the Role of Ambiguity Types in Prompting Methods for Clarification Generation,” arXiv:2504.12113, Apr. 2025. [Online]. Available: https://arxiv.org/abs/2504.12113
[6] M. Suzgun and A. T. Kalai, “Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding,” arXiv:2401.12954, Jan. 2024. [Online]. Available: https://arxiv.org/abs/2401.12954
[7] S. Krishna, C. Agarwal, and H. Lakkaraju, “Understanding the Effects of Iterative Prompting on Truthfulness,” arXiv:2402.06625, Feb. 2024. [Online]. Available: https://arxiv.org/abs/2402.06625
[8] R. Vinay, G. Spitale, N. Biller-Andorno, and F. Germani, “Emotional Manipulation Through Prompt Engineering Amplifies Disinformation Generation in AI Large Language Models,” arXiv:2403.03550, Mar. 2024. [Online]. Available: https://arxiv.org/abs/2403.03550
[9] O. Dobariya and A. Kumar, “Mind Your Tone: Investigating How Prompt Politeness Affects LLM Accuracy (short paper),” arXiv:2510.04950, Oct. 2025. [Online]. Available: https://arxiv.org/abs/2510.04950