Master AI Prompting With The Help Of Our Tool

Most people struggle with generic AI content because they treat an AI prompt like a Google search. And why wouldn’t we? We’ve been trained for over 20 years to use the internet as a vending machine: you put a keyword in and you get a result out.

It is only natural that when most people start using AI, they treat it the same way. This is why so much lazy content is filling our feeds. The difference is that AI performs best when you treat it like a colleague, not a search bar.

You wouldn't expect a new hire to deliver a board-ready report based on a five-word shout across the office. You would sit down, provide context, define the tone, and set clear boundaries. The key thing we have learned at KINTAL is that you shouldn't approach it as a prompt, but as a brief.

From Search Bar to Briefing

Rarely should you use the first response an AI provides. The real value is found in the second, third, and fourth turn of the conversation.

This is where you push back, refine the logic, and add the human nuance that a model cannot invent on its own. This is "prompt engineering". It is one of the most valuable skills humans can learn right now.

Don’t let the term "engineering" put you off. It is just a human skill that helps you refine what you ask.

Think of it like ordering a sandwich at lunch.

  • The Search approach: You ask for "lunch". The response will be random because you gave absolutely no context.

  • The Basic approach: You ask for "a sandwich with salad". You get closer, but it might still be on the wrong bread or have a filling you hate.

  • The Engineered approach: You ask for "a toasted sourdough club with chicken, extra mayo, and no tomatoes".

You get exactly what you wanted, and you get it faster.

Get Our Prompt Guardian

There are plenty of courses on prompt engineering available, and the UK government has launched an AI Skills Hub to help. Rather than create more learning content, we’ve created a search to signpost you to some useful courses on there instead.

However, we feel hands-on practice is much better than theory. So, we created a free tool that handles the "heavy lifting" for you and helps you learn in context.

Our Prompt Guardian is a tool we’ve been using internally for a long time, and it knows the KINTAL "AI Red Flag" list. It automatically strips out corporate fluff, enforces British English, and structures your request so the AI gives you better results.

Five steps to get you started with enhancing your prompts:

  1. Click the link above.

  2. Paste your prompt or give it a rough idea. (e.g., "Write a blog about video production").

  3. Review the audit. The Guardian will give you tips and point out where the AI is likely to "hallucinate" or use clichés.

  4. Get the upgrade. It will provide a rewritten, structured prompt including Role, Context, Constraints, and Task definitions.

  5. Run it. Copy this engineered prompt into your main AI workspace and watch the quality of the output shift.
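To make step 4 concrete, here is a minimal sketch in Python of what a structured brief looks like. The wording and section labels below are invented for illustration; the Guardian’s actual output will differ.

```python
# A minimal sketch of a structured brief: Role, Context, Constraints, Task.
# The wording here is invented for illustration only.

def build_brief(role: str, context: str, constraints: list[str], task: str) -> str:
    """Assemble the four sections into one prompt string."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Constraints:\n{constraint_lines}\n"
        f"Task: {task}"
    )

brief = build_brief(
    role="You are a video producer writing for small-business owners.",
    context="We publish practical, jargon-free guides in British English.",
    constraints=[
        "Use British English spelling.",
        "Avoid corporate fluff such as 'delve' and 'leverage'.",
        "Keep it under 800 words.",
    ],
    task="Write a blog post explaining how to plan a one-day video shoot.",
)
print(brief)
```

Notice how much of the work is simply writing down what you already know about the job before you ask.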

That’s it. It is free to use and structured to give you two things: hands-on tips and a prompt you can use straight away.

Access our Prompt Guardian

Privacy & Your Data

We know how nervous people are when it comes to AI and safety, so we wanted to be clear about where your data goes.

  • We cannot see your chats. Your conversation is private to your own ChatGPT account. KINTAL has no access to your history or your inputs.

  • We do not train on your data. KINTAL does not use your inputs to train our own models or tools. We don’t capture any data from you.

  • Safe for business. Because we don't store your data, you can use the Guardian to refine briefs for sensitive projects without worrying about leaks to us. Just remember not to share anything confidential in the prompt itself.

(Note: Your general data privacy is governed by your own OpenAI and ChatGPT user settings).

We’d love to hear what you think

This tool is a living project. We constantly update its logic based on what we learn in the field, and as the models and our overall approach evolve.

If you find our Prompt Guardian helpful, or if it misses a nuance you think is important, please let us know.

Email hello@kintal.co to share your experience.

We’ve been doing this for a while now, so take advantage of the hours we’ve invested in building out this skill.

Questions we’re asked all the time about prompting

  • Do I need to learn something technical?

    No. As useful as it is, "Prompt Engineering" is just a scary technical term for clear communication.

    If you can explain a task to a junior member of staff without them coming back five times to ask what you meant, you can use AI. The problem isn't that the tech is hard; it's that we are used to Googling things with keywords.

    AI needs context, constraints, and examples. The Guardian handles the structure so you can focus on the intent.

  • Why does the AI keep giving me generic answers?

    Because you are not asking the right questions.

    Large Language Models (LLMs) are designed to predict the most likely next word. If you give them a generic instruction ("Write a blog post about leadership"), they give you the most statistically average blog post on the internet.

    To fix it, you need constraints. You need to tell it who it is (Role), what it cannot say (Negative Constraints), and exactly who it is talking to. And don’t forget your own tone of voice.

    If you don't, the model defaults to "helpful American corporate assistant".

  • How do I stop the AI from hallucinating?

    You can never stop it 100%, but you can reduce the risk by changing how you ask.

    Hallucinations happen when a model tries to fill a gap in its knowledge to please you.

    To fix this:

    1. Provide the source material: Paste the text you want it to work from.

    2. Set a "Refusal" rule: Explicitly tell the model: "If you do not find the answer in the text provided, state 'I do not know'. Do not guess."

    Our Prompt Guardian adds these safety rails automatically.
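The two fixes above can be sketched as a single prompt. The source text and the question below are invented for illustration; the point is the shape: grounding text plus an explicit refusal rule.

```python
# Sketch: grounding a prompt in source material and adding a refusal rule.
# SOURCE_TEXT and the question are invented examples.
SOURCE_TEXT = "Our studio opened in 2019 and now employs twelve people."

prompt = (
    "Answer using only the text between the markers below.\n"
    "If you do not find the answer in the text provided, state "
    "'I do not know'. Do not guess.\n"
    "---\n"
    f"{SOURCE_TEXT}\n"
    "---\n"
    "Question: How many people does the studio employ?"
)
print(prompt)
```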

  • Why do I get a different answer every time?

    Because AI is probabilistic, not deterministic. It is rolling a dice for every word it chooses. This is a feature, not a bug: it is why AI can be creative. But it is annoying when you want consistency.

    If you need the same output every time, your prompt needs to be rigid. You need to lock down the structure and the format. The more open the prompt, the more random the result.
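The "dice roll" can be illustrated with a toy word sampler. Each unconstrained run may pick a different word from the probability distribution; pinning the choice down (here, by fixing the random seed) makes it repeatable. The word list and weights are invented for illustration.

```python
import random

# Toy illustration of probabilistic word choice: a model effectively
# samples the next word from a probability distribution each time.
# The words and weights below are invented.
next_words = ["innovative", "strategic", "practical", "bold"]
weights = [0.4, 0.3, 0.2, 0.1]

def pick_next_word(seed=None):
    rng = random.Random(seed)
    return rng.choices(next_words, weights=weights, k=1)[0]

# Unseeded runs can differ; a fixed seed always gives the same word,
# much as a rigid prompt narrows the range of possible outputs.
print(pick_next_word())        # may vary between runs
print(pick_next_word(seed=7))  # always the same word for seed 7
```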

  • What exactly is the Prompt Guardian?

    Think of it as a pre-flight check for your ideas. It is a custom tool we built to stop us from being lazy with AI. Most people, when they first get started, type a vague sentence into ChatGPT and hope for the best.

    The Guardian forces you to slow down. It reviews your draft, strips out corporate fluff (like "delve" or "leverage"), and rewrites it into a structured brief with a clear Role, Context, and Task.

    And it doesn’t just write the prompt; it teaches you how to structure your thinking so you get a result you can actually use.
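A crude sketch of the fluff-stripping idea is a find-and-replace over a red-flag list. The toy list below is invented; the Guardian’s real "AI Red Flag" list and rewrites are more involved.

```python
import re

# Crude sketch of stripping corporate fluff from a draft prompt.
# This toy list is illustrative only.
RED_FLAGS = {
    "delve into": "look at",
    "leverage": "use",
    "utilise": "use",
}

def strip_fluff(draft: str) -> str:
    for fluff, plain in RED_FLAGS.items():
        draft = re.sub(fluff, plain, draft, flags=re.IGNORECASE)
    return draft

print(strip_fluff("Let's delve into how we can leverage video."))
# "Let's look at how we can use video."
```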

  • Is my data safe?

    We know this is the biggest worry for businesses. For the Guardian Tool:

    • We (KINTAL) cannot see your chats.

    • We do not store your inputs, and we do not use them to train our models.

    For your general AI use:

    • Never paste sensitive client data, PII (Personally Identifiable Information), or unreleased financial figures into a public LLM.

    • If you need to edit a sensitive report, replace the client name with [CLIENT] and the numbers with placeholders before you start.

    Top tip: Python is brilliant at redacting your documents. And you don’t need to be a software engineer to do it these days; just ask AI to walk you through it.
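For example, a few lines of Python can do a first-pass redaction before you paste anything in. The client name and patterns below are invented examples; always check the result by eye before sharing.

```python
import re

# First-pass redaction before pasting text into a public LLM.
# "Acme Ltd" and the patterns below are invented examples; always
# check the output by eye before sharing it.
def redact(text: str, client_name: str) -> str:
    text = text.replace(client_name, "[CLIENT]")
    # Replace currency amounts and bare figures with a placeholder.
    text = re.sub(r"£?\d[\d,]*(\.\d+)?", "[NUMBER]", text)
    return text

report = "Acme Ltd forecasts revenue of £1,250,000 for 2025."
print(redact(report, "Acme Ltd"))
# "[CLIENT] forecasts revenue of [NUMBER] for [NUMBER]."
```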

  • I’m not technical. Is this still for me?

    Yes. In fact, it is specifically for you. The "Tech Bros" already have their own complex coding workflows.

    We built the Guardian for the rest of us: creatives, writers, and operators who just want to get a job done on a Tuesday morning without learning Python.