Note: This article is also available at the author's DOI:10.13140/RG.2.2.32564.05768
Generative Pre-trained Transformers (GPT) are neural networks with a specific transformer architecture that serve many kinds of applications; here we are concerned with language modeling. We are all familiar with Large Language Models based on the GPT architecture. These models build on the encoder-decoder principle: several candidate outputs can be produced for a particular input, and the one that seems most accurate is returned as the result.
This article presents a novel two-tier client-server technique for a GPT-based chatbot, in which the server learns about the client while interacting with the client's own transformer-based language model. Yes, the aim is to create a language model on the client side. This is completely different from my previous article, in which I proposed a client sieve that performs query expansion for better LLM interactions. The client sieve model has drawbacks: sieve filtering requires feedback, and the user may not always have time to provide it, as in supervised query expansion. There are ways to do query expansion without feedback, but such models may not perform as well and are not as competitive as the model proposed in this article.
Hence, in this article we present a model in which the client side maintains its own transformer-based language model. If the user allows it, the model learns from what the user writes, what the user browses, the clicks per second, the feedback, the browsing history, and the cookies: all of these are fed into the client-side language model.
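To make the idea concrete, here is a minimal sketch of on-device learning from such signals. A bigram counter stands in for the transformer so the example stays self-contained; the class and method names (`ClientSideLM`, `learn`, `suggest`) are hypothetical, not an existing API.

```python
from collections import defaultdict, Counter

class ClientSideLM:
    """Tiny stand-in for a client-side language model.
    A real system would train a small transformer; a bigram
    counter keeps this sketch runnable without any server."""
    def __init__(self):
        self.bigrams = defaultdict(Counter)

    def learn(self, text):
        # Local signals (typed text, browsing history, feedback)
        # are ingested on the device; nothing leaves the client.
        tokens = text.lower().split()
        for a, b in zip(tokens, tokens[1:]):
            self.bigrams[a][b] += 1

    def suggest(self, word):
        # Personalized auto-fill: most frequent continuation seen locally.
        nxt = self.bigrams.get(word.lower())
        return nxt.most_common(1)[0][0] if nxt else None

lm = ClientSideLM()
lm.learn("book a table for dinner")      # something the user typed
lm.learn("dinner reservations near me")  # something the user browsed
print(lm.suggest("dinner"))  # -> reservations
```

The same pattern extends to richer signals (clicks, cookies, feedback): each is just another stream of text fed to `learn`, and the model's suggestions become more personal over time.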
The benefits of this technique over previous techniques are:
1. This client-side language model does not go to the server side.
2. The privacy of the user is maintained, and more personalized results can be obtained when the client-side language model interacts with LLMs. The client-side language model works alongside the user's query.
3. One language model can interact with another language model. Imagine two friends' language models negotiating the expenses for a dinner while both users are busy swimming in a pool with their young children, laughing at what the machines are planning for them.
4. The second language model can belong to a friend of the person, or it can be a server model, such as an LLM used in popular chatbots like ChatGPT, or any other model. The server model can be used to get answers to questions as big as the Universe.
5. It may be loaded into messengers to allow personalized auto-fills. By choice, it can also be loaded for email, allowing more robust automated emails without the server knowing about you. It can be stored on the local system and carried in a digital locker.
6. It can be loaded into a person's personal and professional robotic alliance for better results when dealing with robots: robots that are closer to humans, robots that are more personalized than anything else.
7. It can help in cases of dire need that come with age, such as dementia. Patients can remember their things with the help of these client-side, transformer-based language models. An interface can be built that keeps reminding the patient through auto-fills, which may help in managing the disease and fill the person's life with all they need.
8. It can be used when a person is away from Earth, for whatever reason, and cannot communicate; the person's language model can then be used, if given the permissions.
9. It can also be used with pets in the house in your absence.
10. It can interact with Large Language Models at servers to improve itself, gather more information, become more robust, draw, paint, solve problems, and more. All these results would be updated in the client-side language model as well.
11. That said, getting feedback from users is good in any kind of LLM.
12. These are language models, not LLMs, and hence require neither as much energy nor as much space as LLMs do.
13. These language models can be transferred within the family of personal and professional networks after the person is no more.
14. One day, when language models are free of carbon footprints, even client-side models can be turned into LLMs: personalized LLMs! But that is too far off to think of now.
For now, we need client-side personalized language models to interact with LLMs, emails, and people, and to manage things.
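The core loop described above, in which the client model enriches the user's query with locally learned context before it reaches the server LLM and then updates itself afterwards, can be sketched as follows. The `ClientAgent` class and the `server_llm` callable are hypothetical stand-ins; a real system would call an actual LLM API in place of `server_llm`.

```python
class ClientAgent:
    """Hypothetical client-side agent: holds a profile learned
    on-device and uses it to enrich queries to a server LLM."""
    def __init__(self):
        self.profile = {}  # locally learned preferences; never uploaded

    def observe(self, key, value):
        # Learned from typing, browsing, feedback; stays on the device.
        self.profile[key] = value

    def ask(self, query, server_llm):
        # Enrich the query with relevant local context.
        # Only the enriched text crosses to the server,
        # never the client model itself.
        context = "; ".join(f"{k}: {v}" for k, v in self.profile.items())
        enriched = f"[context: {context}] {query}" if context else query
        reply = server_llm(enriched)
        # The client model also updates itself from the exchange (point 10).
        self.observe("last_topic", query)
        return reply

# Stand-in for a server LLM; a real system would call an API here.
def server_llm(prompt):
    return f"answering: {prompt}"

agent = ClientAgent()
agent.observe("diet", "vegetarian")
print(agent.ask("suggest a dinner recipe", server_llm))
# -> answering: [context: diet: vegetarian] suggest a dinner recipe
```

Note the design choice: personalization happens by rewriting the query text on the client, so the server sees only what the client chooses to send, which is what preserves the privacy property claimed in point 2.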
More on this in upcoming articles…