Once I was watching an interview of a famous Tamil poet who has written many songs for successful movies. In that interview, he mentioned that he always makes sure the words used in his songs don't carry any negativity.
A bad experience taught him this lesson. Every word, he says, has power, so he keeps negative words out of his poems entirely.
I had a similar thought process when I learned something interesting about system prompts.
The Problem with Negative Instructions
Here's the problem: When you use negative words to restrict AI models, they sometimes forget the "don't" part and just do what you said not to do!
Why? Because models tend to focus on the action word itself. When you say "don't use complex words," the model's attention is drawn to "complex words" — and it might end up using them anyway.
The Solution: Frame Your Instructions Positively
The solution is a simple thought process change — frame your instructions positively.
Example 1: Word Complexity
Instead of: "Don't use complex words"
Say: "Use words a fifth-grade student can understand"
Example 2: Response Length
Instead of: "Don't be verbose"
Say: "Keep responses under 100 words"
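The two examples above can be sketched as system prompts in the common {"role", "content"} chat message format (a minimal sketch; no specific API or model is assumed, and the check for the word "don't" is just an illustrative helper):

```python
# Two framings of the same restrictions, written as system prompts.
# The dict shape follows the common {"role", "content"} chat convention;
# no specific provider API is assumed.

negative_prompt = {
    "role": "system",
    "content": "Don't use complex words. Don't be verbose.",
}

positive_prompt = {
    "role": "system",
    "content": (
        "Use words a fifth-grade student can understand. "
        "Keep responses under 100 words."
    ),
}

def leans_on_negation(prompt: dict) -> bool:
    """Return True if the prompt restricts the model via "don't"."""
    return "don't" in prompt["content"].lower()

print(leans_on_negation(negative_prompt))  # True
print(leans_on_negation(positive_prompt))  # False
```

Note that the positive version never names the forbidden behavior at all, so there is nothing for the model's attention to latch onto.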
The Positivity Coach Mindset
So whenever you write a system prompt, wear the hat of a positivity coach: a person with abundant positivity who guides the model toward the behavior you want, rather than restricting it with negative words.
The Wisdom of Words
The Tamil poet understood the power of words. Turns out, it applies to prompts too.
This isn't just about prompt engineering — it's about understanding how language shapes behavior, whether in humans reading poetry or AI models parsing instructions. The same principle that makes songs memorable and uplifting also makes prompts more effective and reliable.
Have you noticed this pattern in your prompt engineering experiments?