Here are some points:
1. GPT can be understood.
Many people say they can’t understand a GPT model. Then consider it a black box that gives answers; even a black box can be understood, given the time and resources to do so. A starting point for understanding it is that the inputs are predefined text articles from the internet, and the output is newly generated text, composed from the input by a pre-trained model. Pre-trained models learn relationships between words from the composition of their training data.
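As a minimal sketch of "learning relationships between words," here is a toy bigram model, not a real GPT; the corpus and the counting scheme are illustrative assumptions. It "pre-trains" by recording which word follows which, then generates new text by composing those learned relationships:

```python
import random
from collections import defaultdict

# Toy illustration (NOT a real GPT): a bigram model that learns
# word-to-word relationships from a tiny made-up training corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Pre-training": record every word observed after each word.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=0):
    """Generate text by repeatedly picking a word seen after the current one."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(rng.choice(followers))
    return " ".join(words)

print(generate("the", 5))
```

A real GPT replaces the bigram table with a transformer network over long contexts, but the shape of the process is the same: relationships learned from training text, then composition of a new output.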
2. What is this composition of input?
There are two kinds of inputs: one is the data the GPT is trained on, and the other is the query that is given to be expanded or answered. The query is fed into the transformer, and the pre-trained model interacts with it to produce the output. These are the compositions applied to the query that, with the help of the pre-trained model, produce the output.
3. Then why different answers each time?
There is some level of randomness present in the model. Implementors of the model can pinpoint where that randomness lives: is it in initializing the weights, or in the embedding layers? The engineers who developed it are the best placed to answer.
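One place randomness commonly enters is at generation time: the model's scores for candidate next tokens are fixed, but the next token is sampled from the resulting probability distribution, so repeated runs can differ. The sketch below assumes made-up scores (the `logits` values are hypothetical, not from any real model):

```python
import math
import random

# Hypothetical model scores for three candidate next tokens.
logits = {"Paris": 3.0, "London": 1.5, "Berlin": 0.5}

def sample_next(logits, temperature=1.0, rng=random):
    """Sample one token from the softmax of temperature-scaled scores."""
    scaled = {t: v / temperature for t, v in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    exps = {t: math.exp(v - m) for t, v in scaled.items()}
    total = sum(exps.values())
    probs = {t: e / total for t, e in exps.items()}
    r = rng.random()
    cum = 0.0
    for tok, p in probs.items():
        cum += p
        if r < cum:
            return tok
    return tok  # guard against floating-point rounding

# Low temperature makes the choice effectively deterministic;
# higher temperatures let less-likely tokens appear more often.
print(sample_next(logits, temperature=0.01))  # almost always "Paris"
print(sample_next(logits, temperature=1.5))   # varies between runs
```

This is why the same question can yield different answers each time, even with identical model weights.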
4. For most of us it is a black box, and even a black box can be evaluated, so why are people not evaluating it?
Yes, for most it is like a black box: there is an input, some black-box mechanism, and then an output, which amazes most people. These models need proper testing that is documented.
5. Are GPT models dangerous?
It is as safe as the internet, and as dangerous as the dangerous data on the internet. Yes, at times it produces output that is not from the internet and mixes several sources into one text, but that can be corrected too. Everything here is based on computer programs, and hence can be dealt with programmatically.
6. How can dangerous GPT text be corrected?
By cross-verifying it. Cross-verify that what GPT produced is correct, and pull out references that support the validity of the output. This can help.
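A hedged sketch of this idea: accept a generated claim only if it matches a trusted reference. The reference set and the exact-match rule here are illustrative assumptions; a real system would match against sources far more robustly.

```python
# Hypothetical trusted reference set (normalized to lowercase, no period).
trusted_facts = {
    "water boils at 100 c at sea level",
    "the earth orbits the sun",
}

def cross_verify(claim: str) -> bool:
    """Accept a generated claim only if its normalized form is in the references."""
    return claim.strip().lower().rstrip(".") in trusted_facts

print(cross_verify("The Earth orbits the Sun."))  # True: backed by a reference
print(cross_verify("The Sun orbits the Earth."))  # False: no reference supports it
```

The design point is simply that verification happens outside the model, against sources the user already trusts.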
7. Is GPT still dangerous?
Point 6 above helps ensure that dangerous content does not get through. Other content can be verified by looking it up in the pre-trained model's database again, and then in the input data it was trained on.
8. Is an age limit for children important for GPT use?
Yes, one can be put in place until GPT improves in the quality of its outputs.
9. Is it easy to understand and predict the next outputs?
No. These are done programmatically and require huge computations and huge lookups into pre-trained models. A pre-trained model learns interrelations between words, and when an input is presented it produces an output based on what it has learned.
10. How can we look into GPT and predict its answers?
These can be simulated when white-box testing is performed on the models. More debugging of GPT needs to be learned, and outputs can then be simulated with the help of white-box testing.
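To make the white-box idea concrete: if we could read a model's learned tables directly, its output for a given input would be predictable. The sketch below assumes a toy, made-up transition table (not a real GPT's weights) and uses greedy decoding, which always follows the highest-probability continuation, so there is no sampling and the output is fully determined:

```python
# Hypothetical learned table: probability of each next word, fully visible
# to the tester (the "white box" view). Values are made up for illustration.
transitions = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 1.0},
    "sat": {"down": 0.7, "up": 0.3},
}

def predict_greedy(start, steps):
    """Always follow the highest-probability continuation (no sampling)."""
    out = [start]
    for _ in range(steps):
        nxt = transitions.get(out[-1])
        if not nxt:
            break
        out.append(max(nxt, key=nxt.get))
    return " ".join(out)

print(predict_greedy("the", 3))  # → "the cat sat down"
```

With the table visible, a tester can trace exactly why each word was chosen; the difficulty with real GPT models is that the "table" is billions of opaque weights rather than a readable structure.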
11. Can wrong relations learned in GPT training be unlearned?
These could be unlearned with editable GPT models. This would require understanding how GPT pre-trained models are saved. Once we know how the models are saved, editing ability can be enabled in future GPT models. Done correctly, this could also make models suitable for certain age groups of children.