
There is a lot of bullshit content out there about LLM prompting, mostly driven by bottom feeders trying to make a quick buck by selling you a course, webinar, or similar on how to write prompts.
There's nothing wrong with exploring courses or YouTube tutorials; just don't mistake "hacks" for mastery. You learn by doing.
Getting results out of LLM technology typically isn't that complicated. If you are writing your prompts in a browser or phone app, then it's even simpler.
Everyone will try to tell you there is some sort of hack or trick to writing prompts for AI (LLM) systems. Most of this is garbage, for a couple of good reasons.
AI services update / change constantly. Any "hack" prompt that works today will be invalid before you can create the webinar selling it.
Every brand of AI reacts differently. What works on one is unlikely to work on another. Here is the response from ChatGPT and Claude to the same question.


Within a brand of AI, the different models also react differently. ChatGPT 4o won't respond to a prompt the same way ChatGPT 4.1 mini does.
The next two images show the same prompt sent to both models, with different responses.


Free services/models are typically less capable and won't generate the results you may see others get in videos and tutorials.
An AI (LLM) is not a search engine like Google. You have to provide it with different content from what you would put in a search.
Google search works by matching your words (keywords) against its index of known websites. An LLM works by relating your text to the patterns it learned from pretty much every website and book ever made.
This means you should provide the LLM a lot more to work with. The more you expect back, the more you need to put in.
For example, let's assume you want to know about climate change. If you were using Google search, you might type in something like this:
“What is climate change”
And Google will give you back a list of sites like this:

With an LLM, you should provide the context around what you actually need.
When using an LLM to get information about climate change, a better prompt would be:
“I'm preparing a report on my concrete-laying business's environmental footprint and need a university-level (or higher) overview of climate change, tailored to the construction industry. Please provide a comprehensive and technically accurate summary of the key drivers, mechanisms, and consequences of climate change, with specific emphasis on how construction-related activities (especially concrete production and use) contribute to greenhouse gas emissions and environmental degradation. Where appropriate, include references to relevant data, lifecycle emissions, and emerging sustainability practices in the sector.”
This will return you four A4 pages of relevant content.
Remember, the LLM is meant to be doing the work for you. Don't micromanage it by extracting each data point one prompt at a time.
Provide a comprehensive outline of what you need and specify the type of output you need.
Let the LLM know how well you understand (or don't) the topic you are discussing. If you have no idea what you are talking about, don't hide that from the LLM; just ask it to help. A prompt like "I know nothing about this topic, explain it to me like a beginner" works far better than pretending expertise.
Nobody is monitoring your AI chat to catch you out*. Be up front about what you need the result to be.
Remember, the AI is meant to do the work for you. You don't need to go through the workflow of:
Requirements > Question > Gather data > Learn > Produce result.
AI will happily help you cheat on your homework, corporate reports or otherwise. Just ask for the finished product.
* Unless you are using an AI provided by work, in which case they are 100% monitoring your usage of the service.
Don't throw a prompt at the LLM and give up if it doesn't work. Go through several iterations to get the best result.
Ask the LLM to improve your prompt! AI is great at writing prompts. Ask it to improve yours, then feed the improved prompt back in.
Whenever you send a message to an LLM via the browser/app, there are actually two prompts in play: the System Prompt and the User Prompt.
The System Prompt tells the LLM what role it is meant to fill and how it should respond. It provides the LLM with important context on how to reply to you.
The AI website/app automatically inserts a default System Prompt behind the scenes, but you can set it yourself. How much control you have over this varies based on the AI being used and the way you are using it.
At a minimum, you can simply enter the System Prompt as the first thing you send to a new chat.
If you are asking for something complex, or need the data provided in a specific way, then it is worth your time to write a more complex/comprehensive System Prompt.
For instance, you can send something like this as the first message:
You are a professional marketing copywriter with expertise in LinkedIn content strategy. Your role is to help generate engaging, informative, and persuasive LinkedIn posts for a tap washer manufacturing company launching a new product.
The audience is a mix of plumbing professionals, B2B buyers, procurement managers, and industry influencers. Posts must be:
- Professional but conversational (avoid overly technical jargon unless explained clearly)
- Trust-building and brand-enhancing
- Focused on the product's value, reliability, and any unique innovation
Then there is the User Prompt. That's the actual request you want to give the AI.
You can write just about anything in this prompt.
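If you ever move from the chat window to code, you can see the two prompts explicitly. Here is a minimal sketch assuming an OpenAI-style chat API; the model name is a placeholder, and the exact client library varies by provider:

```python
# Sketch of how the System Prompt and User Prompt travel together in an
# API-style request. The "messages" structure below is common to most
# chat APIs; the model name here is illustrative, not real.

def build_request(system_prompt: str, user_prompt: str) -> dict:
    """Package both prompts into a chat-completion style payload."""
    return {
        "model": "example-model",  # placeholder model name
        "messages": [
            {"role": "system", "content": system_prompt},  # who the AI is
            {"role": "user", "content": user_prompt},      # your actual request
        ],
    }

request = build_request(
    system_prompt="You are a professional marketing copywriter.",
    user_prompt="Write a LinkedIn post announcing our new tap washer.",
)
print(request["messages"][0]["role"])  # the system prompt always goes first
```

In the browser, the service fills in the system message for you; the text you type only ever lands in the user message, which is why setting your own "system prompt" as the first chat message is a useful workaround.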
If possible, try to write your long prompts in Markdown. Most LLMs return text in Markdown format, and the models are often trained on Markdown. It can provide better structure/context than something like a Word document or a PDF.
Most AI services allow you to upload documents to the AI to be read and used. This can be plain text, MS Word, PDF, code, images etc.
Documents are a great way to provide the AI with a lot of context, quickly. Importantly, documents let you preserve the headings, bullet points and tables etc. If you paste the content in, this structure is often lost.
The PDF document format is terrible and has many technical issues. While AI services can read them, you should use other formats such as MS Word if you have the option.
Some PDF documents are not text at all, but actually an image embedded in the document. This tends to be caused when someone creates the PDF with a scanner or their phone.
These kinds of documents are very unreliable, and there is a good chance the AI won't be able to read them.
Not all AI products are made equal.
It's important to understand what your AI can and can't do.
For example, there is no point asking an LLM for references to back up its claims, if the LLM can't search the internet or other database of relevant data.
When you ask an LLM to do something it can't, it is more likely to generate creative but incorrect content, i.e. make shit up / hallucinate.
You may see articles where people have gotten an LLM to write an entire software application or a university-level thesis. These are real, but they are not achieved by prompting the LLM via the website or apps.
To create complex results like this, you need to use AI integrations in code editors, or custom code that sends the requests.
You also need to send very extensive prompts. For example, a prompt I saw recently to create an application was 32 pages long, and over 10,000 words.
Make your life easier and create a template for your prompts. Something similar to:
# Role
- Who is the AI?
# Task
- What are you asking it to do?
# Context
- Why?
# Constraints
- Word count, style, tone
# Output format
- List? Table? Markdown?
If you are in a team, don't solve this problem in isolation. Create a central place to create and save your prompts, and let the team use and improve them. There is no point in everyone reinventing the wheel.
This doesn't have to be anything fancy; a shared Google Drive folder will do.