Let’s find solutions with Sentiment Analysis, not just criticize AI chatbots

There are articles after articles claiming that AI is anti-woke, tells lies, and makes up its own fake stories. Well, it is good to point out the exact mistakes AI can make, for until we know what the bug is, how can it be corrected? These things need to be corrected.

Think of the positive side of AI chatbot problems: how the problems originated and how they can be resolved by the provider.

Origin

The origin of the problems that Musk said can be not right is this: these models were trained on textual data found on the web. We all know the internet is full of all kinds of data, good and bad. If we do a Google search with some controversial words, the output of the search would be much the same. But search engines do not reproduce or paraphrase the contents; they just return the same contents as they are. Google search is a retrieval engine, not a generative pre-trained model; this is the difference. Hence the question answering in GPT needs to be modified as well.

Problem

Whatever GPT was fed, similar results will be reproduced. However, GPT cannot simply be retrained, but post-processing can be performed.

How to solve it?

The solution is post-processing! Sentiments are one such solution!

This can be done with the help of sentiment analysis: reject the output and regenerate it until the right sentiment score is reached, as in the sketch below.
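To make this concrete, here is a minimal sketch in Python of such a reject-and-regenerate loop. The functions generate_response and sentiment_score are placeholder names assumed here for illustration; in a real system they would be the provider's own model call and a trained sentiment scorer, not the toy stand-ins shown.

```python
# Minimal sketch of the reject-and-regenerate loop.
# generate_response() and sentiment_score() are assumed placeholders,
# not any chatbot provider's real API.
import random

def generate_response(prompt: str) -> str:
    # Stand-in for the chatbot's generation call; a real system would
    # invoke the deployed model here.
    return random.choice(["a neutral answer", "a hurtful answer"])

def sentiment_score(text: str) -> float:
    # Stand-in scorer: strongly negative if a flagged word appears.
    return -1.0 if "hurtful" in text else 0.0

def safe_answer(prompt: str, threshold: float = -0.2, max_tries: int = 5) -> str:
    """Regenerate until the sentiment score reaches the threshold."""
    answer = generate_response(prompt)
    tries = 1
    while sentiment_score(answer) < threshold and tries < max_tries:
        answer = generate_response(prompt)   # reject and regenerate
        tries += 1
    return answer   # last attempt; a real system might refuse instead

print(safe_answer("some controversial prompt"))
```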

Here one needs to train the sentiment of words as per usage! The first step is detecting what users find wrong, either manually or from actual user feedback. Then a positivity, negativity, and neutrality score shall be computed for most words. Word patterns related to bad user feedback will be learned in this way, as in the counting sketch below.
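As a rough sketch, assuming each feedback item arrives as a piece of text plus a positive/negative/neutral label (an assumed format, not any provider's actual feedback schema), per-word scores could be estimated by simple counting:

```python
# Learn per-word positivity/negativity/neutrality proportions
# from labeled user feedback (format assumed for illustration).
from collections import Counter, defaultdict

def word_scores(feedback):
    """Return per-word label proportions from (text, label) pairs."""
    counts = defaultdict(Counter)          # word -> Counter of labels
    for text, label in feedback:
        for word in text.lower().split():
            counts[word][label] += 1
    scores = {}
    for word, c in counts.items():
        total = sum(c.values())
        scores[word] = {
            "positive": c["positive"] / total,
            "negative": c["negative"] / total,
            "neutral":  c["neutral"] / total,
        }
    return scores

feedback = [
    ("that fruit was bad", "negative"),
    ("the answer was helpful and kind", "positive"),
    ("it listed the capital cities", "neutral"),
]
print(word_scores(feedback)["bad"])
# {'positive': 0.0, 'negative': 1.0, 'neutral': 0.0}
```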

The sentiment of “bad fruit” won’t be more negative than that of a dangerous, hurtful word.

This seems simple, but leave space in the sentiment analysis to add more words; to start with, this can be a man-made file as well. Yes, start with man-made word files and man-made sentiment phrases. A small sketch of such a starter file is given below.
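Here is a sketch of what starting from a man-made phrase file could look like. The file name and the tab-separated "phrase, score" format are assumptions for illustration; the point is that the file stays open for new entries, and a mildly negative phrase like "bad fruit" can be given a much smaller weight than a truly hurtful one.

```python
# Load a hand-made sentiment lexicon and score text against it.
# File format assumed: one "phrase<TAB>score" entry per line, e.g.
#   bad fruit    -0.2
#   <a hurtful phrase>    -0.9
LEXICON_FILE = "sentiment_phrases.tsv"   # hypothetical hand-made file

def load_lexicon(path):
    """Load phrase -> score, leaving room to append new entries later."""
    lexicon = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue                       # skip blanks and comments
            phrase, score = line.rsplit("\t", 1)
            lexicon[phrase.lower()] = float(score)
    return lexicon

def score_text(text, lexicon):
    """Sum the scores of lexicon phrases found in the text."""
    text = text.lower()
    return sum(score for phrase, score in lexicon.items() if phrase in text)
```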

This will go a long way.

As noted above, Google search is a retrieval engine, not a generative pre-trained model, and hence the question answering in GPT needs to be modified as well. This shall be discussed in coming articles.

On top of this, AI is only in its youth and can become a full-fledged science in the coming years.

AI is in the youth of becoming a full-fledged science

