The integration of artificial intelligence into modern society has skyrocketed in the past three years, and one of the main reasons is increased access to large language models (LLMs).
A Forbes report on LLMs found that 55% of corporations were piloting or deploying LLM projects, a figure expected to increase year-on-year. LLMs are also gaining popularity with the broader public: a national survey by Elon University reported that 52% of American adults have used an LLM like ChatGPT. To remain relevant in this rapidly evolving digital society, LLMs need to improve constantly, or they will quickly become obsolete.
One way they improve is by using AI itself to keep them accurate and in line with developing needs. This article examines what an LLM is and how AI makes LLMs better through continuous training, software testing, and addressing ethical concerns.
An LLM Defined
LLMs are much more than content creation applications like ChatGPT; they offer a wide range of functions, such as translation, coding, automating tasks, and providing detailed analysis. An article on AI by MongoDB explains that they can do this because they are foundation models trained on petabytes of data, including books, conversations, and other text content. This allows LLMs to be used for natural language processing (NLP) tasks like answering questions, text generation, text classification, and summarization, as well as for solving specific problems depending on the datasets they have been trained on. However, for these LLMs to stay up to date and improve, they need to utilize AI.
How AI is Improving LLMs
Constant Training
For an LLM to get better, it needs to be updated constantly to give accurate answers. The example MongoDB gives is that if you ask the model, "Give me a list of good comedy movies in the last 6 months," it cannot answer correctly if it was trained six months ago. Instead of retraining the LLM from scratch, AI can retrieve information from a database, usually a NoSQL or vector database, built specifically for that LLM, and automatically use these external knowledge sources to improve the accuracy and relevance of its answers, a technique known as retrieval-augmented generation (RAG).
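The retrieval step described above can be sketched in a few lines. This is a minimal, self-contained illustration using a toy in-memory store with bag-of-words vectors; production systems use learned embeddings and a real vector database, and all names here are hypothetical.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy embedding: bag-of-words counts (real systems use learned embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    """Stand-in for a vector database holding fresh external knowledge."""
    def __init__(self):
        self.docs = []

    def add(self, text):
        self.docs.append((embed(text), text))

    def search(self, query, k=2):
        # Rank stored documents by similarity to the query.
        ranked = sorted(self.docs, key=lambda d: cosine(embed(query), d[0]),
                        reverse=True)
        return [text for _, text in ranked[:k]]

store = ToyVectorStore()
store.add("Comedy releases from the last six months: Film A, Film B.")
store.add("Drama releases from last year: Film C.")

query = "good comedy movies in the last 6 months"
context = store.search(query, k=1)[0]
# The retrieved context is prepended to the prompt so the model can answer
# with information newer than its training data.
prompt = f"Context: {context}\nQuestion: {query}"
print(prompt)
```

The key design point is that the model itself is never retrained; only the external store is updated, which is why this approach is far cheaper than rebuilding the LLM.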
Training LLMs for Software Testing
As we noted in a previous article, AI is becoming the standard for software development and testing, with a specific emphasis on code review. One way this is happening is by using LLMs to test new software. While NLP technology is best known for powering chatbots, voice-activated interfaces, and virtual assistants, it is also opening up new opportunities for software development. This year, Meta introduced an LLM-based test generation system called the Automated Compliance Hardening (ACH) tool, which is designed to enhance software reliability and security. ACH employs three key LLM-based agents: a fault generator that introduces simulated faults into the code, an equivalence detector that prevents redundant fault generation, and a test generator that produces test cases specifically designed to catch the introduced faults. By training LLMs to perform these tests, software designers can improve code reliability and efficiency.
Improving the Ethics of LLMs
As more people use LLMs, the question of how ethical they are keeps being asked. The key concern is algorithmic bias, where the LLM unknowingly trains on data that introduces a bias against certain groups. LLMs are also vulnerable to intentional misuse, where humans use generative AI to create harmful or false information. The good news is that AI is being used to improve LLMs by acting as a domain expert that spots unethical output and alerts the LLM. A Scientific Reports paper on AI and perceived moral expertise noted that trained LLMs were perceived to explain their moral judgments slightly better than the average American and on a par with ethical experts. While it is still too early to rely on this fully, it shows how AI can help LLMs make moral judgments about what is ethical and, therefore, ensure that fewer biases are present in the content they produce.
With LLMs now a fully integrated part of our society, AI is making them better through constant training, software testing, and improved ethical decisions.

Software Testing Lead providing quality content related to software testing, security testing, agile testing, quality assurance, and beta testing.