An article written by Kacper Matelski and Monika Wójtowicz
Large Language Models (LLMs) have gained significant worldwide interest in the past few years with the rise of groundbreaking models such as GPT-3 (Generative Pre-trained Transformer 3) and BERT (Bidirectional Encoder Representations from Transformers). LLMs have fundamentally altered the way computers interact with human language, which has led to huge progress in artificial intelligence (AI), machine translation and virtual assistants. The advantageous solutions provided by LLMs have been spreading across multiple industries around the world, from healthcare to finance, logistics, energy management and real estate.
In a very short time they have revolutionised business operations and customer interactions within many industries, changing the way we work and look for information. There are, of course, still some limitations, but new solutions are being developed every day, and seeing how much we already depend on Large Language Models, we may as well assume that they are not only here to stay, but also that we should excel in their usage and implementation.
In this article, we will talk about what Large Language Models are, how they can be extended with your own data, and how we apply them in real projects.
Large language models are massive deep learning models trained to understand text and to generate text as close to human writing as possible. They learn context and meaning by tracking relationships in sequential data, like the words in this sentence, and then forecast what the next word should be.
In its training stage, an LLM is fed vast amounts of data from the internet, and it generalises patterns from the texts provided to it. That's why these models are so sensitive to context, and why the manner in which you ask a question matters so much: if you approach the model with scientifically phrased sentences, it will use more sophisticated language in its answer, and if you speak to it in a more laid-back, colloquial way, it will respond in the same style.
To make LLMs more functional and beneficial, human feedback is used at all stages of development and implementation, which in the end makes them even more accurate and effective.
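To make the next-word idea concrete, here is a minimal sketch using the Hugging Face transformers library. Note that the small, freely available GPT-2 model stands in here for a production LLM; the model choice and prompt are ours, for illustration only:

```python
# Minimal next-word-prediction demo using Hugging Face transformers.
# GPT-2 stands in for a production LLM; the principle is the same.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models learn context and meaning by"
# The model repeatedly predicts the most likely next token,
# extending the prompt into a fluent continuation.
result = generator(prompt, max_new_tokens=20, num_return_sequences=1)
print(result[0]["generated_text"])
```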
Thanks to our experience in developing LLM-based solutions, our team can enhance the search process by creating vector databases based on provided documents and data. This approach helps in developing solutions that find information far more intelligently than a simple keyword search. It can work for insurance, real estate companies, wealthtech, city halls or large organisations that hold big databases of documents and for whom searching through those documents is crucial. In fact, this type of solution can be created for numerous industries and businesses. How do we ensure the desired quality and implement it in a manner suited to each business individually?
Our team develops solutions based on LLMs that allow for intelligent searching of large sets of unstructured data. Imagine millions of documents of the most diverse content and format, and a solution that enables you to find the right document not only by its content, but also through natural-language inquiries. You can ask the search engine a question and it will return a specific answer based on the content of the documents in the database. This approach offers enormous potential for exploring extensive knowledge repositories and developing intelligent assistants founded on them. By utilising RAG-type applications and vector databases, we can perform semantic searches within the data and, leveraging an LLM, generate responses that are straightforward and comprehensible for the user.
The more relevant information you provide to an LLM, the more accurate it becomes, but feeding it that information isn't always straightforward. There are two different ways in which we can extend an LLM with closed (private) data. One of them is classic Retrieval Augmented Generation (RAG for short).
What does it entail?
We upload all the documents that the LLM needs to know to a vector database, and every time we want to use them we send a query to that database. We then pass the search results to the LLM, which enables it to operate on the closed knowledge base.
What is it best for?
This method is ideal when there is a database of documents from which information needs to be retrieved quickly and efficiently. For instance, using RAG as an alternative to traditional search methods, we can get responses based on the materials we have analysed and uploaded to the database. This significantly shortens the process of accessing the knowledge base within an organisation. If the LLM is supplied with information specific to our organisation, such as a company's databases or codes of conduct, it can easily generate content based on that information.
To sum up, the RAG process looks like this: we pass the documents to the vector database → a query arrives for the LLM → we find the best-matching content in our vector database → we pass it to the LLM as context.
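Here is a minimal sketch of that flow, assuming the sentence-transformers library for embeddings and a plain in-memory list in place of a real vector database; the documents and the helper name answer_question are made up for illustration:

```python
# Minimal RAG sketch: embed documents, retrieve the best match for a
# query, and pass it to an LLM as context. An in-memory list stands
# in for a real vector database here.
from sentence_transformers import SentenceTransformer, util
from openai import OpenAI

embedder = SentenceTransformer("all-MiniLM-L6-v2")
client = OpenAI()  # expects OPENAI_API_KEY in the environment

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9:00-17:00 CET.",
]
doc_embeddings = embedder.encode(documents, convert_to_tensor=True)

def answer_question(question: str) -> str:
    # Step 1-2: embed the query and find the most similar document.
    query_embedding = embedder.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, doc_embeddings)[0]
    best_doc = documents[int(scores.argmax())]
    # Step 3: pass the retrieved content to the LLM as context.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Answer using only this context:\n{best_doc}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_question("When can I return a product?"))
```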
A ready-made model has already done roughly 99% of the training; the remaining 1% is ours to fill with our own data, which lets us adjust the LLM to our preferred style. That is what fine-tuning is essentially about: we take a ready-to-use model and continue training it on our own data.
Integrating new data directly into the model through additional training gives us more control over the language and style the LLM uses. During fine-tuning we can change the manner in which the LLM answers our queries and adapt it more closely to our specific needs. Fine-tuning also makes it possible to use shorter prompts.
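As a rough illustration of what that looks like in practice, here is a sketch using OpenAI's fine-tuning API. The file name is a placeholder, and a real project would need far more training examples than the two-line file shown in the comment:

```python
# Fine-tuning sketch with the OpenAI API: upload chat-formatted
# examples written in our preferred style, then start a training job.
from openai import OpenAI

client = OpenAI()

# training_data.jsonl holds examples like:
# {"messages": [{"role": "user", "content": "Describe our venue."},
#               {"role": "assistant", "content": "A reply in our house style..."}]}
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # a base model that supports fine-tuning
)
print(job.id)  # poll the job until it finishes, then call the new model by name
```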
In one of our latest cases we used OpenAI GPT models to autofill the profiles of various facilities, such as restaurants, registering in a web service. The feature creates content out of the information about establishments such as restaurants or hotels. However, it can be implemented for any other type of facility with a profile to set up, and employed for many purposes, including, for example, consulting. It's an extremely useful feature that can be integrated in a very time- and cost-efficient manner.
In one of our recent cases, we used GPT family models to create functionality that supported the collection of data in a larger service. The process involved generating relevant text and information based on data from other sources. It also included enhancing the text written by the user to edit and clean up the information submitted to the site. Then the user was able to influence the generated content through a feedback loop with the model. The solution helped to increase the quality of the content and speed up the process of completing information by users, significantly enhancing the end-user experience.
We created that feature, among others, for our Hospitality project, but restaurants and hotels are only one of the many kinds of facilities that could benefit from this type of solution.
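A simplified sketch of the idea, assuming structured facts about a venue and the OpenAI chat API; the field names and prompt are illustrative, not our production code:

```python
# Profile autofill sketch: turn structured facts about a venue into
# ready-to-publish profile copy. Field names are illustrative only.
from openai import OpenAI

client = OpenAI()

facility = {
    "name": "Trattoria Verde",
    "type": "restaurant",
    "cuisine": "Italian",
    "features": ["garden seating", "vegan options", "live music on Fridays"],
}

prompt = (
    "Write a short, inviting profile description for this facility:\n"
    f"{facility}"
)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```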
We're also using LLMs and other AI solutions in our Shareholder Management software for document analysis purposes. We chose to implement AI-powered data extraction, which automatically pulls data from financial reports, regulatory filings and other documents, making it easier to populate the shareholder register and generate reports. We also introduced a feature that uses AI to automatically classify and organise documents based on their content and relevance, together with search capabilities powered by NLP that help users find specific information within documents more efficiently. A step further is predictive analytics for shareholder engagement, which uses AI to analyse historical data and recommend actions that enhance engagement, for example by identifying shareholders likely to attend meetings or those who may need further communication.
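A hedged sketch of the classification piece, constraining the model to a fixed label set; the categories and the helper name classify_document are made up for illustration:

```python
# Document classification sketch: ask the model to pick exactly one
# label from a fixed set. Categories here are illustrative only.
from openai import OpenAI

client = OpenAI()

CATEGORIES = ["financial report", "regulatory filing", "shareholder letter", "other"]

def classify_document(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"Classify this document as one of {CATEGORIES}. "
                f"Reply with the label only.\n\n{text[:4000]}"
            ),
        }],
    )
    return response.choices[0].message.content.strip()

print(classify_document("Annual report for fiscal year 2023..."))
```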
Advanced document analysis that combines OCR with LLMs can be implemented, like most of our solutions, in various projects, not only those created for Shareholder Management. We used the power of AI to analyse complex, non-standardised documents uploaded by app users in one of our recent aviation projects. The process was divided into two stages: data extraction and analysis. Using OCR (Optical Character Recognition) tools, we processed the data from the documents and then, after initial analysis, fed it into LLM models for key data extraction. The service was responsible for the core functionality of the application, which is why the whole system was also optimised for performance so that it could ultimately handle very large document packets.
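In outline, such a two-stage pipeline could look like the sketch below, assuming pytesseract for the OCR stage and an OpenAI model for extraction; the requested fields are examples, not the actual schema from that project:

```python
# Two-stage document pipeline sketch: OCR the scan, then ask an LLM
# to pull out key fields as JSON. Requested fields are examples only.
import json
import pytesseract
from PIL import Image
from openai import OpenAI

client = OpenAI()

# Stage 1: data extraction via OCR.
raw_text = pytesseract.image_to_string(Image.open("scanned_document.png"))

# Stage 2: key-data extraction via LLM.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},
    messages=[{
        "role": "user",
        "content": (
            "Extract issuer, document_date and document_type as JSON "
            f"from this OCR output:\n\n{raw_text}"
        ),
    }],
)
print(json.loads(response.choices[0].message.content))
```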
Thanks to our vast experience in the matter, we know that we can implement these solutions in every project that needs to speed up the process of analysing documents. As of today, we've already implemented these kinds of solutions for Shareholder Management, Equity and Board Management software, as well as for the aviation industry. Our teams are ready to take up new projects from other industries and shape our proven solutions in a way that works for them specifically.
LLMs can also be used to complete various other tasks, such as performing code reviews. How? By connecting a webhook to the repository that responds to code review requests, formats them appropriately and sends them to the model along with the prompt. We ask the model to respond not with free text but with JSON, which gives the output a more detailed structure that programmers can process further. The tool also offers an extension with vector search capability, which enables searching and checking documentation on the internet. This allows the system to view a change, search the internet for the information it needs, and check other related parts of the code.
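A minimal sketch of such a webhook, using Flask and asking the model for JSON rather than free text; the endpoint name and payload shape are assumptions, as real repository hosts send much richer payloads:

```python
# Code review webhook sketch: receive a review request, format the
# diff into a prompt, and ask the LLM to answer in JSON rather than
# free text. The payload shape is an assumption for illustration.
import json
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()

@app.route("/code-review", methods=["POST"])
def code_review():
    diff = request.json.get("diff", "")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                "Review this diff. Respond as JSON with keys "
                '"issues" (a list of {file, line, comment}) and "summary".\n\n'
                f"{diff}"
                # appending the step-by-step phrase quoted below the code
                # tends to improve results further
            ),
        }],
    )
    return jsonify(json.loads(response.choices[0].message.content))

if __name__ == "__main__":
    app.run(port=8000)
```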
The best part is that these solutions are constantly developed, tested and refined to achieve the best possible accuracy and efficiency. All the changes need to be properly formatted for the LLM to work with them smoothly, and the prompt on which we base our inquiry has to be polished as well. For example, having this exact phrase at the end of the prompt improves results:
Let's work this out in a step by step way to be sure we have the right answer
A tool like this makes code reviews more time-efficient for tech leads and other professionals in need of a code review. It acts as a second pair of eyes, scanning the whole changeset and pinpointing the parts that require improvement.
As we know, Large Language Models aren't one hundred percent reliable. Because they are fed data from limited, human-constructed databases, there is always a possibility of an error or of a gap in the information we need. That's why it's so important to keep expanding those databases and teaching the AI continuously in order to improve its accuracy. There are a few ways to achieve that, including the RAG and fine-tuning approaches described above.
Even though most people think of ChatGPT and Midjourney when the subject of LLMs arises, there is much more to them than that. As shown above, Large Language Models can be utilised in document searching and verification to speed up processes within organisations or to make information searching more time-efficient for outside users. There are many areas within various industries that allow for automation; all we need to do is look for the right solution. We can also use LLMs for consulting and for improving our work with the feedback they provide, based on the databases we structure for them. Writing content curated for our needs, providing reviews of various kinds, such as code reviews, and searching large databases are only some of the remarkable capabilities of Large Language Models.
LLMs offer new possibilities that are time- and cost-efficient and can therefore turn your vision into a SaaS solution quickly and easily, enabling you to gain more revenue and get your product to market faster than ever. ChatGPT and Midjourney are just the tip of the iceberg, and we should not restrict our view to them alone. As shown in this article, there are many useful applications for LLMs. One just needs the right set of experts to facilitate these solutions and ensure they work impeccably. The technology is there; we just have to reach for it.
ChatGPT and Midjourney have become so widely used in work and academic environments that they sparked loud, fear-driven discussions about the possibility of AI taking our jobs away quicker than we ever anticipated. However, just as quickly as those discussions arose, others appeared, suggesting that the human element is still crucial for any technology to run smoothly and that we should strive for even greater automation. AI and LLMs are not only reshaping the manner in which we perform our jobs; they also offer a wide range of possibilities in software development. This is why we need to look beyond ChatGPT and Midjourney to see other emerging products and tools that serve as better examples of LLM usage.
One such mainstream product based on Large Language Models is GitHub Copilot, an LLM combined with RAG that is used daily by software developers around the world. It's an automatic code generator that auto-completes your code as you write. As researchers have argued, GitHub Copilot shows that LLMs are not only fascinating but can be truly useful when applied to specific tasks. And as this article demonstrates, the possibilities go well beyond GitHub Copilot.
So what is going to happen with LLMs now? Where do we go from here? We can agree on one thing: we will see further growth of Large Language Models. Companies will try to develop even more accurate and efficient ones, such as Google's Gemini 1.5. This model uses a Mixture-of-Experts architecture, in which sub-models each have their own range of competences. What does that mean for anyone who wants to use it? When you send a query, the model routes it to the specific expert trained on the subject of that query. This is supposed to minimise the probability of a mistake and ensure the best possible accuracy. But that's not all Gemini offers: it can also analyse audio and video, which can be a huge advantage in the future for services built around audio and video content.
There is also a significant focus on lightweight multimodal models as opposed to Large Language Models. In the future we may see growth in models designed to be more lightweight and efficient while still being capable of handling multiple tasks or domains.