Large Language Models (LLMs)
There are a few different types of AI, but in this article I’m specifically looking at the type known as LLMs (the ones you chat to), such as ChatGPT and Google Gemini. There are many more, and Wikipedia has a list of them.
These wait for you to ask a question but will not question you back, which is interesting, because I’d expect them to ask for clarification on some of the prompts they get.
The LLM will not contradict you and will always try to find a solution to the prompt, even when there is no question and you have simply made a statement as a follow-on.
LLMs use the data they have access to, and sometimes that data is wrong or out of date. AIs are trained over time to do a job, and these LLMs have been trained on the Internet and everything you might find there. The bad stuff is stripped out so they don’t swear or say anything inappropriate (though you could probably force them to if you wanted).
The interesting point is they can be wrong.
Next Level
These LLMs are great; they can help with all manner of issues, from changing broadband provider to coding an app to health problems and solutions. They are the next level of search engine. In fact, once you start using these things, search engines will become either totally redundant or an afterthought.
Today I’m looking at part of a large project that deals with security, and I’ll be working with Google Gemini to make sure I get everything done efficiently. But with that efficiency, what do we lose? Now we come to it: the real issues with LLMs.
Money makers
LLMs are great, but they have a few issues we need to be aware of before using them. Firstly, they are just one opinion. It sounds strange that LLMs may have an opinion, but they are coded, and at some point soon (if not already) they will be used to sell. If we use an LLM by Google and an LLM by Microsoft, do you think they will be impartial, or will they push their own products? I know that if I were on the Microsoft marketing team, I’d be pulling my hair out if I thought our product was out there telling people why Google is better, or vice versa.
Search engines added ads to the top of their results and sprinkle ads over everything they do. How long before the AI checking the grammar of your email then tells you, “You look hungry, have you been to McDonald’s lately and tried the £50 Big Mac deal”? And these ads will be extremely personalised. The AI will know who you shop with, where you like to holiday, your mortgage provider, your health problems and all sorts of personal info that can help target you specifically.
Now, this isn’t too bad, and it will happen soon if it’s not already happening; it may even be less intrusive than the ads at the top of search results and banner ads. The main issue with these systems is when they’re wrong and we don’t know.
Wrong, wrong, wrong!
LLMs are not infallible; they get things wrong. But how do we know when they have? They seem to know all the answers and are confident in everything they say, so how do we spot when the AI we are conversing with has got it wrong? We only have one source: the LLM we are working with. We don’t have anything to compare answers to.
For example, I was working with Google Gemini and it told me to add a file “in the pages directory” and the system would see it and use it. I did this and it failed.
We started debugging (‘we’ as in myself and Gemini; it’s amazing how quickly these become like co-workers), went through all sorts of tests and changes, and spent two hours on this issue. Gemini was telling me this was ‘exasperating’ and ‘strange’ (imagine an AI becoming exasperated?!), but it got to the point where Gemini told me to start a new project and begin moving my code across to it.
This was the point when I thought, “This isn’t right, I’ll do a search”, after two hours of being instructed to delete folders and bits of code because the AI I was speaking with seemed to know the answers.
So I searched, and the first result (after the ads) was the same question I had about the file, along with the fix. The file was in the wrong place. Simple fix. The new version of the system wanted the file in a different place, but even though Gemini knew the version I was using, it still got it wrong. It was so sure of itself that recreating the project was the next step, not fact-checking. I did a little more digging and found the actual documentation noted this change.
This wasn’t the first time something like this had happened. Everything from telling me to paste API passwords where they shouldn’t go, to animations that didn’t work correctly, to filters that would hit the database when they shouldn’t, had me re-question and check what Gemini was saying. These are things someone without experience would simply do, making real, costly mistakes.
What to do?
Today and in the near future, there will be a LOT of insecure and buggy apps going live because the person developing them has simply followed what the LLM told them to do (I’m fixing a project right now that I’m sure was built by “vibe coding” with ChatGPT). Companies will go bust because API keys have been stolen, used in places they shouldn’t be and racked up huge bills; people will lose websites because they are open to the most basic hacks; businesses will have data stolen and held for ransom because databases were left open. This is all happening now.
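Stolen API keys are usually the result of hardcoding them into source code that ends up somewhere public. As a minimal sketch (the function and variable names here are my own, for illustration only), keep secrets out of the code entirely and read them from the environment:

```python
import os

def get_api_key(name: str) -> str:
    """Read a secret from the environment instead of hardcoding it.

    Fails loudly if the variable is missing, so a misconfigured
    deployment stops rather than quietly running without a key.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing environment variable: {name}")
    return value

# Usage (PAYMENT_API_KEY is a made-up name):
# api_key = get_api_key("PAYMENT_API_KEY")
```

The key then lives in the server’s configuration, not in a repository or a chat log where it can be copied.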
So, what do we do if we want to use these AI systems?
- Understand the problem you are asking about and have a solution in mind. We still have search engines, do some research before you start.
- If the solution the LLM is giving you seems off, question it again; the LLM will change its answer if it was not correct.
- If you are developing and you are pasting in some code and the IDE (Integrated Development Environment) gives a warning, ask about it; the LLM will make changes.
- Once you have an answer from the LLM, check search engines and other LLMs, you might find the answer was wrong. Get a second opinion. You can go back to the LLM and ask about the answer you found.
- If coding, always ask if the code is secure; you will get either a reason it is secure or a change to make it secure.
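As an example of that last point, the “most basic hacks” that lose people websites often come down to SQL injection. Here is a minimal sketch, using Python’s built-in sqlite3 and a made-up users table, of the difference between the query an LLM might hand you and the version worth asking for:

```python
import sqlite3

# A hypothetical users table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user(conn, name: str):
    # Vulnerable version an LLM might produce (DO NOT USE):
    #   conn.execute(f"SELECT * FROM users WHERE name = '{name}'")
    # A crafted name like "' OR '1'='1" would return every row.
    #
    # Safe version: let the driver bind the value as a parameter.
    return conn.execute(
        "SELECT name, email FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user(conn, "alice"))        # [('alice', 'alice@example.com')]
print(find_user(conn, "' OR '1'='1"))  # [] - the injection attempt fails
```

Asking the LLM “is this query safe from SQL injection?” will usually surface exactly this parameterised rewrite.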
In summary
LLMs are great, we simply need to be aware they don’t know everything.
If you are interested in AI, LLMs, chatbots or any other topic here, leave a comment and get a conversation going. If you’d like any information on websites or web applications, contact us; we’re always looking for interesting projects and long-term clients.