Note: This article also appears on the author’s other social media accounts.
A recent article by a lead data scientist on LinkedIn (reference [1] below) reports that LLM outputs converge to the same results irrespective of the size of the data presented to the model. If these are the experimental findings, the natural question is: what comes next? Here are some thoughts on that.
- Given such substantial experimental evidence, there is little point in repeating the same approaches.
- It is possible that some data could still change the results, though reference [1] suggests this is unlikely.
- What this research now needs is a change of direction.
- The question is where to branch off from the current road of LLM building and use.
- Another question is what to do next, and why we still need to continue at all.
- We need to continue because many improvements are still required. For example, as the article ([1] below) notes, asking a model to make a cap green does not actually make it green; this shows the model lacks an underlying concept of understanding.
- We need models that can both understand and execute; until that is achieved, this line of research cannot be closed.
- The aim is not to take over from humans, but to assist humans in places where they cannot do the work themselves.
- Not everyone can open Photoshop and produce such artwork, and no model can fully replace a Photoshop editor either; I can explain that in a later article.
- The point here is to build hybrid models for specific aims: for example, pairing a basic LLM with a second model based on graph theory (or something more relevant) and hybridizing the two.
- It will take some time to frame new hybrid models that can truly understand language.
- This would change the structure, code, and implementation of how LLMs are used today. But it is time we put understanding into these models.
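To make the hybrid idea above a little more concrete, here is a minimal sketch of one possible shape: a stand-in "LLM" parses an instruction like "make the cap green" into a structured edit, and a simple attribute graph stores the result and lets the system verify its own output. Everything here is hypothetical and illustrative: the class names, the toy parser (a real system would use a language model), and the graph representation are all assumptions, not any specific library's API.

```python
# Hypothetical hybrid sketch: language parsing + a graph that holds state,
# so the system can check whether "make the cap green" actually took effect.

class AttributeGraph:
    """Tiny graph mapping (object, attribute) pairs to values, e.g. cap -> color."""
    def __init__(self):
        self.edges = {}  # (object, attribute) -> value

    def set_attr(self, obj, attr, value):
        self.edges[(obj, attr)] = value

    def get_attr(self, obj, attr):
        return self.edges.get((obj, attr))


def parse_instruction(text):
    """Stand-in for an LLM: turns 'make the cap green' into (obj, attr, value).
    A real hybrid system would use a language model for this step."""
    words = text.lower().split()
    if words[:2] == ["make", "the"] and len(words) == 4:
        return words[2], "color", words[3]
    raise ValueError(f"cannot parse: {text}")


def apply_and_verify(graph, instruction):
    """Hybrid step: parse with the 'LLM', apply the edit to the graph, then
    read the graph back to confirm the instruction was actually executed."""
    obj, attr, value = parse_instruction(instruction)
    graph.set_attr(obj, attr, value)
    return graph.get_attr(obj, attr) == value


graph = AttributeGraph()
print(apply_and_verify(graph, "make the cap green"))  # True
print(graph.get_attr("cap", "color"))  # green
```

The design point is the verification step: because the graph holds explicit state, the system can confirm that the requested change happened, rather than merely generating text that claims it did.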
I shall share more of my views in the coming articles.
Have a great day!
Reference