Are Large Language Models (LLMs) the future of AI?

Parthiban Marimuthu


People have information and communication needs, both in business and in their social lives, and they largely meet those needs through humanity's most marvelous invention: language. For computers to communicate easily with us, and for computer systems to access and understand the information available to the humans who make decisions, we need computers that can understand human language.

The Story behind LLMs

There is a common statistic in the big-data industry that 80% of business-related information is unstructured, and primarily text. It is not entirely clear who came up with this figure; it may have originated with a Merrill Lynch consultant, or it may have come from an IBM study in the 1990s. Either way, we can all agree that email is a primary form of business communication today.

Scientists, businesspeople, and tech enthusiasts are fascinated by Large Language Models (LLMs): machine learning models that can recognize, predict, and generate human language after training on very large text-based datasets. Although the technology can make automated question-answering, machine translation, and text-summarization systems more effective and efficient, and might even enable super-intelligent machines, preliminary studies already suggest that LLMs may suffer from the same drawbacks as other AI-based decision-making systems and digital technologies.

Feature-based machine learning models

For almost 30 years, feature-based machine learning models, which are effective for a wide range of tasks, were the primary method used for fundamental natural language processing tasks, such as finding the names of people and businesses in a text message, or running sentiment analysis to decide whether a text is positive or negative.
The Stanford NLP Group has developed a number of useful toolkits for these kinds of tasks, including CoreNLP and Stanza.
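To make that concrete, here is a minimal sketch of the classic toolkit-based workflow using Stanza, the Stanford library named above. The example text and the exact processor list are illustrative assumptions, not something taken from the article.

```python
# Minimal sketch: named-entity recognition and sentiment with Stanza
# (assumes `pip install stanza` and a one-time model download).
import stanza

stanza.download("en")  # fetch the English models once
nlp = stanza.Pipeline("en", processors="tokenize,ner,sentiment")

doc = nlp("Apple is opening a new office in Austin, and the team is thrilled.")

# People, organizations, and places the model found
for ent in doc.ents:
    print(ent.text, ent.type)

# Sentence-level sentiment: 0 = negative, 1 = neutral, 2 = positive
for sentence in doc.sentences:
    print(sentence.text, sentence.sentiment)
```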

To understand human language, the traditional answer was that you had to build a lot more structure and meaning behind the text. That might be done with handwritten grammars mapping to formal semantics and then perhaps to SQL, or with probabilistic grammar rules learned from data that had been hand-annotated with logical meaning representations. Either way, building such systems and resources was extremely costly, and the resulting systems were not very robust, not very scalable, and not very widely deployed.

NLP breakthrough with Large language models – 2018

There has been an enormous revolution in natural language understanding in the last five years, driven by a new generation of large language models that learn both the structure and the meaning of text through self-supervised learning on huge quantities of text data. BERT and GPT-3 are the best-known large language models, but there are now many others, with more coming out each week. While the first small attempts at this used existing, long-standing neural network architectures, all of the recent models build on a neural network technology invented in 2017: the transformer.
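As a small illustration of the kind of pre-trained model described here, the sketch below uses the Hugging Face transformers library (an assumption; the article does not name a library) to ask a pre-trained BERT model to fill in a masked word from its context.

```python
# Minimal sketch of masked-word prediction with a pre-trained BERT model,
# via the Hugging Face `transformers` library (assumed installed with
# `pip install transformers torch`).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT was pre-trained to guess a hidden word from the surrounding text.
for prediction in fill_mask("The meeting has been moved to next [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```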

The model is given a bit of context and tries to predict the word that appears next, or within that context, in a piece of text. You train it simply by rewarding correct guesses and penalizing wrong ones via the usual stochastic gradient descent. At first sight, this hardly seems like a path to artificial intelligence, but it turns out to be a very effective task precisely because predicting a word can depend on any aspect of the text's meaning and on any knowledge of the world around you. It therefore works very well as a universal pre-training method, giving models a broad understanding that can then be deployed for all sorts of specific tasks.
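The training recipe in the paragraph above can be sketched as a toy next-token prediction loop. This is only illustrative, assuming PyTorch and a tiny placeholder model rather than a real transformer LLM; the part that matches the description is the objective: cross-entropy on the next word, optimized with stochastic gradient descent.

```python
# Toy sketch of the next-word prediction objective described above.
# The model, vocabulary, and data are placeholders, not a real LLM.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32
embed = nn.Embedding(vocab_size, embed_dim)
encoder = nn.LSTM(embed_dim, embed_dim, batch_first=True)
head = nn.Linear(embed_dim, vocab_size)        # hidden states -> word scores

params = list(embed.parameters()) + list(encoder.parameters()) + list(head.parameters())
optimizer = torch.optim.SGD(params, lr=0.1)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (8, 21))   # fake batch of token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict each next token

hidden, _ = encoder(embed(inputs))               # contextual representations
logits = head(hidden)                            # scores for every vocabulary word
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))

loss.backward()       # "punish wrong guesses": gradients of the prediction error
optimizer.step()      # one stochastic gradient descent update
optimizer.zero_grad()
```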

Self-supervised vision model training

Transformer models and self-supervised learning by prediction have been so successful that they are now being applied to other modalities as well. For example, vision transformers can learn visual representations without supervision by filling in blanked-out parts of images, rather than requiring huge hand-labeled datasets like traditional ImageNet-style models. The result is a family of powerful, reconfigurable models that can be trained by self-supervision on unannotated data across modalities, and even multimodally.
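To illustrate the "fill in the blanked-out parts of images" idea, here is a small sketch, assuming PyTorch, of how an image might be split into patches and a random subset hidden before reconstruction. It shows only the patching-and-masking step, not a full vision transformer.

```python
# Illustrative sketch of the masking step behind self-supervised vision
# transformers (e.g. masked autoencoders). The encoder/decoder is omitted.
import torch

image = torch.rand(3, 224, 224)                 # a single RGB image
patch = 16                                      # 16x16 pixel patches

# Split the image into a sequence of flattened patches: (196, 768)
patches = image.unfold(1, patch, patch).unfold(2, patch, patch)
patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, 3 * patch * patch)

# Randomly hide 75% of the patches; the model must reconstruct them
num_patches = patches.shape[0]
mask = torch.rand(num_patches) < 0.75
visible = patches[~mask]                        # only these reach the encoder

print(f"{mask.sum().item()} of {num_patches} patches masked; "
      f"encoder sees {visible.shape[0]} patches of dim {visible.shape[1]}")
```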

Great results transferring from large pre-trained language models on the GLUE language-understanding benchmark.

The Future of LLMs

The importance of language cannot be overstated. It is how we learn about the world and make contributions to it (through agreements, laws, or messages, for example), and it is how people connect and communicate. Although software has improved quickly, machines still have limited linguistic capabilities. Software does a great job of matching text word for word, but it struggles with the more subtle linguistic devices people use every day. There is unquestionably a need for more advanced tools with better language comprehension.

Artificial intelligence (AI) has advanced significantly with the development of language processing technologies, allowing us to build smarter computers with a deeper understanding of human language than ever before. Large, sophisticated, cutting-edge language models keep improving thanks to ongoing research, but they still have a long way to go before they are widely used. Despite their value, these models need data, computing capacity, and technical know-how to be trained and deployed effectively.

Conclusion

Large language models are a particularly fascinating case study because they have shown such distinct emergent traits. LLMs are very large transformer neural networks, trained on hundreds of terabytes of text and often spanning hundreds of billions of parameters. They can be used for many different things, including text generation, question answering, summarization, and more.
LLMs should not be confused with chatbot development frameworks; they are not the same thing. Conversational AI has its own specialized LLM use cases, and chatbot and voice-bot implementations can unquestionably gain from using LLMs.
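As a small illustration of one of the use cases listed above, the sketch below runs summarization through the Hugging Face transformers pipeline; the library and the specific model name are assumptions for the example, not something prescribed by the article.

```python
# Minimal sketch of a summarization use case with a pre-trained model,
# via the Hugging Face `transformers` library (assumed installed).
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

report = (
    "Large language models are transformer networks trained on vast amounts of "
    "text. They can generate text, answer questions, and summarize documents, "
    "and they are increasingly used inside chatbots and voice assistants."
)
print(summarizer(report, max_length=40, min_length=10)[0]["summary_text"])
```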

Originally published at machinehack & machinehack blog.
