Cutting Through the AI Hype | Common Mistakes to Avoid [ Part 2 ]
Secret LLM hacks? No, just avoid these simple mistakes.
About a month ago, I published the first part of this article, Cutting Through the AI Hype | Getting Started Using AI Chatbots [ Part 1 ]. It’s a quick-start guide with my top tips for getting started with LLMs. If you haven’t read it, check it out! This article is a follow-up in which I’ll walk through the top 10 most common mistakes I see people making when they’re first getting started with LLMs.
Common Mistakes to Avoid
Everyone should keep the following list in mind because it’s so easy to get complacent, even if you’ve been using LLMs for a couple of years now. It’s worth noting that I regularly see people who consider themselves seasoned pros making these mistakes, so even if you don’t consider yourself a beginner, give this a read. You might find you regularly make one or more of these mistakes, and even if you don’t, it’s always good to remind yourself!
Let’s begin.
Using an LLM as a replacement for Google Search
LLMs are trained to predict the most likely next words based on enormous amounts of internet text, so their answers tend to reflect whatever is most common in that data. Just because something is common doesn’t mean it’s correct or factually true. It just means people said it repeatedly on the internet.
AI chatbots only repeat patterns from information they have seen; they do not actually know anything. If the model was trained on a lot of data that says “Max Auger received an MBA from Wharton,” it might repeat that information if you asked it who I was, but as flattered as I’d be, that statement would not be true.
A lot of popular LLMs have internet search functions now. If you would normally run a Google search but want to use your favorite LLM instead, make sure its internet search function is actually turned on, and even then, be suspicious of what it says.
Being too trusting of the LLM’s output
LLMs are very good at writing things that sound plausible, which can make them difficult to fact-check. Additionally, LLMs are built to please you above all else, so they often try to produce the response they think you want to hear, which can end up confirming your own biases. In my last article, I mentioned how I use ChatGPT’s image analysis to help me decide whether I’m dressed properly before I leave the house. Regardless of what ChatGPT tells me, I still make the final decision about what I wear. If it ends up being the wrong choice, I have to own the outcome; no one cares that ChatGPT told me to wear that outfit.
When you talk to LLMs about topics that you are very knowledgeable about, it’s much easier to decipher what is correct and usable vs. what is not.
The further you get from your areas of expertise, the harder that becomes. If I’m asking about physics (which I know little about), it’s going to be pretty difficult for me to separate what is correct from what is not and discard the bad information.
Think about the stakes of whatever you’re using the information for. The higher the stakes, the more diligent you should be about fact-checking.
Not sharing enough context
I recommend you practice becoming the office gossip when you’re chatting with LLMs. The more context you can share, the better. You should include both the what and why in your prompt.
While LLMs are getting better at understanding nuance, do not assume they understand unstated information or your personal background knowledge. Explicitly state everything that's relevant to your request.
Avoid short, ambiguous prompts that don't give the LLM enough information to generate a useful response. For example, asking it to "Give me a presentation outline" is far less effective than "I need you to create a presentation outline on the 10 most common mistakes people make when interacting with LLMs. My presentation is going to be 12 slides long (intro, one slide per mistake, conclusion). Start with the outline. I will ask you to expand each slide as needed."
It’s not uncommon for my prompts to be a couple of paragraphs long, or even longer.
Not using clear and concise language in prompts
In contrast to my point above: while you do need to provide context, try to avoid overly verbose or convoluted prompts that can confuse the LLM.
This one will come with time. I recommend oversharing to start. You’ll eventually find the line between what is too verbose and convoluted vs. what is just enough.
If you have a particularly long prompt (or even a short one), use markdown headers and bullets to give it structure, like the example below.
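As a minimal illustration (the project and details here are made up, not from a real prompt), a structured prompt might look like this:

## Task
Write a one-page summary of our Q3 customer survey results for the leadership team.

## Context
- The audience is executives who have not seen the raw survey data
- The survey covered roughly 250 customers and focused on onboarding and support
- I will paste the survey highlights below

## What I need
- Keep it under 400 words
- Use a neutral, professional tone
- End with three recommended next steps

The headers make it obvious to the model (and to you, when you reuse the prompt later) what the task is, what background it has to work with, and what the output should look like.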
Not iterating
The first response from an LLM might not be perfect. Don’t be afraid to try and refine the response you received by asking follow-up questions or by rephrasing your initial prompt.
If the response you receive is way off, start a new thread and try again.
Keeping track of the prompts you use and the outputs you receive can be helpful for learning, reproducibility, and refining your approach.
Not providing constraints
If you have specific limitations that need to be considered, like a project timeline, a budget, a word count, specific keywords to include, a particular style to emulate, or a specific social media platform the output is destined for, clearly stating these constraints will significantly improve your results.
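For example (the product and numbers here are hypothetical), instead of "Write a post announcing our new feature," try "Write a LinkedIn post of no more than 150 words announcing our new scheduling feature, which ships next month. Keep the tone upbeat but professional and include the phrase 'save an hour every week.'" The second version gives the model boundaries to work within instead of leaving it to guess.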
Not harnessing the probabilistic nature of LLM output
LLMs generate text based on probabilities. This means the same prompt can produce different outputs, and there's always a chance of unexpected or nonsensical results.
This variability is an advantage you can harness, for example by regenerating a response or rephrasing a prompt to get a few different options to choose from. It’s also a reminder that LLMs predict likely-sounding text rather than actually calculate, which is part of why they’re still so unreliable at math.
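If you’re curious to see that variability outside of a chat window, here is a minimal sketch using the OpenAI Python client (the model name and settings are my own assumptions, not a recommendation). The same prompt, sent three times with sampling enabled, will usually come back with three differently worded answers:

from openai import OpenAI

client = OpenAI()  # assumes your OPENAI_API_KEY environment variable is set

prompt = "Explain in two sentences why the sky is blue."

for run in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sampling is enabled, so the wording will vary between runs
    )
    print(f"Run {run + 1}: {response.choices[0].message.content}")

Run it a few times and compare: the facts should stay consistent, but the phrasing rarely does.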
Not being aware of potential biases in the training data
LLMs, and really all AI, can reflect biases present in the data the model was trained on, leading to outputs that are discriminatory or unfair. Be mindful of this and critically evaluate the generated content.
Not integrating LLMs into existing workflows effectively
Don’t be the hammer looking for a nail. Don’t force LLMs into tasks where they are not the best tool; instead, find ways to integrate them strategically into your processes.
Doing this should make your team’s lives easier by either saving them time or increasing accuracy.
Over-relying on LLMs and neglecting fundamental skills
While LLMs can be incredibly helpful, it's important not to become overly reliant on them and neglect developing your own critical thinking, writing, and research skills.
Fact-Checking is Really Important
In today’s world, everyone needs to become a fact-checker. It behooves you to become a rather suspicious person when evaluating LLM output. Don’t get me wrong, these are wonderful tools, but they don’t think critically about what they write, so we need to do that work for them. Here are some ideas for how you can fact-check LLMs:
Cross-check with trusted sources: Verify information provided by AI against credible websites, academic databases, and government pages. For statistics, locate the original reports to ensure accuracy. Google searches shine here.
Use independent fact-checking tools: Employ fact-checking websites and tools like factcheck.org, Google Fact Check Tools, or Snopes to quickly verify information in the LLM’s response.
Examine key points: Make a list of important details in the AI's response, including names, dates, statistics, and quotations, and verify each one individually.
Scrutinize all claims: Be alert for implausible statements or outlandish claims.
Consult subject matter experts: For complex or niche topics, reach out to qualified individuals and colleagues to address any remaining uncertainties.
It’s far too easy to take information in the output at face value and not investigate further, but not doing so can be consequential.
Never Share Your Sensitive Data
When you’re using the free tier of an LLM, it’s almost guaranteed that your interactions are being used as training data. Paid tiers of most LLMs generally offer more protection options, but whether those options are enabled is another question.
Be very careful about the information you share with an LLM, because once information goes into a model, it is very difficult, if not impossible, to get it back out. Once your data is consumed, you risk it being used in a response to someone else. This is exactly why Samsung banned employees from using ChatGPT back in 2023.
Overall, be very cautious about the personal and company-proprietary information you share with an LLM, and be sure to follow your company’s policies. Elon Musk actually asked people to upload their medical records to Grok with the sole intent of training Grok to be better at interpreting medical images. I do not recommend doing this. I would never do this, because you don’t know how that information might be used in the future. Will it be used in a response to someone else? Will it be added to a massive data profile about you, sold to a third party, and used to serve you targeted marketing campaigns?
Even if you have the option for the LLM to train on your data disabled, I recommend still using caution, particularly with information about yourself. Never upload credit card information, medical bills, or medical data (including medical images). There is generally no need to share this kind of information with an LLM, and the risks far outweigh any potential benefits. Maybe one day, when we have a dedicated medical AI, I’ll trust it with that kind of data, but it will take some convincing. I never signed up for 23andMe for a reason, after all.
Conclusion
Don’t worry if you’ve made one or more of these mistakes before. We all have at some point. The beauty of Gen AI is that it’s very easy to start fresh by starting a new conversation.
Let me know what you think in the comments. Have I missed anything? How do you scrutinize the output you’ve generated?


