Not just Facebook: any news application can, intentionally or unintentionally, spread misinformation or wrong content. As mentioned in my previous article, foreign actors have been reported to interfere with news content, to give just one example. Further, this is not only about social media; it is about any platform that is meant to spread unbiased information.
The following are different things and need to be taken up separately:
- Intentional news spread
- Wrong content
- Hacked accounts
- Fake accounts
Here is a short description of each of them.
1. Intentional news spread
This can be perpetrated for some intended purpose or to publicize certain information. If AI is used here, intentional content can be reduced considerably. The reason is the statistical nature of AI learning. Deep Neural Networks can, in most cases, present learned content tailored to personal preferences in the form of a well-developed Recommender System, and there is a well-established body of research on recommender systems. To doubt this is to doubt high-quality published research.
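To make the recommender-system idea concrete, here is a minimal sketch of user-based collaborative filtering in plain Python. The user names, article IDs, and ratings are all hypothetical toy data; a real system would learn from millions of interactions, often with deep neural networks, rather than this tiny dictionary.

```python
from math import sqrt

# Hypothetical toy ratings: user -> {item: score}. Illustrative only.
ratings = {
    "alice": {"article1": 5, "article2": 3, "article3": 4},
    "bob":   {"article1": 4, "article2": 2, "article4": 5},
    "carol": {"article2": 5, "article3": 1, "article4": 2},
}

def cosine_similarity(u, v):
    """Cosine similarity over the items two users have both rated."""
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in shared)
    norm_u = sqrt(sum(ratings[u][i] ** 2 for i in shared))
    norm_v = sqrt(sum(ratings[v][i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def recommend(user):
    """Rank items the user has not seen, weighted by user similarity."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = cosine_similarity(user, other)
        for item, rating in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # → ['article4']
```

The statistical nature of such systems is the point made above: recommendations emerge from aggregate behavior, not from any single injected message.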
2. Wrong content
Again, this can be intentional or may be caused by a hacker for a deliberate purpose. My own WhatsApp account was hacked recently, and the hacker was trying to misuse it; why, I don't know.
The other case is intentionally wrong content showing unethical material. Companies should reserve the right to publish only ethical content. This can be enforced with the help of the following AI tools.
a. Natural Language Processing
For textual contents. Adult content should be banned from social media. People have an equal right to express likes and dislikes; saying that you don't like spinach with lentils is not hate content, and people have the right to express their views. This is the "Right to Free Speech" that any mature democracy should possess. An adult should have the conscience and maturity to choose whether to read another person's negative or positive content, and to help that person or avoid it; it is an adult's decision. For non-adults, yes, the platform needs to maintain a child-lock system, and Natural Language Processing can screen the text content a child sees for hate speech. Children are sensitive to such talk and should be shielded from hate and negativity. A parent's fight in the family can negatively affect the ideas a child forms and the way a child grows into a mature adult; how much more can reading such negative responses affect the child? Save our children from wrong content. Overexposure to negative content can hamper a child's healthy brain development. As we have always heard, fighting parents have sad children; then what about fighting media content?
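A minimal sketch of such a child-lock text filter is shown below. A production system would use a trained NLP classifier; here a hypothetical keyword list stands in for the model, purely to illustrate the control flow of filtering a child's feed while leaving an adult's feed untouched.

```python
# Assumed, illustrative blocklist -- a real system would use a trained
# hate-speech classifier, not a fixed keyword set.
BLOCKED_TERMS = {"hate", "stupid", "ugly"}

def is_safe_for_child(text: str) -> bool:
    """Return True if no blocked term appears in the text."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not (words & BLOCKED_TERMS)

def filter_feed(posts, child_account: bool):
    """Adults see everything; child accounts get the filtered feed."""
    if not child_account:
        return list(posts)
    return [p for p in posts if is_safe_for_child(p)]

posts = ["I don't like spinach with lentils", "You are stupid and ugly!"]
print(filter_feed(posts, child_account=True))
# → ["I don't like spinach with lentils"]  (the harmless dislike passes)
```

Note that the harmless expression of dislike passes the filter, matching the free-speech point above: the goal is blocking hate, not blocking opinions.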
b. Image Processing
For image and video contents. Social media should have AI-based software to block any edited video, and to block any inappropriate image or video file, whether on an adult's account or a child's account. No adult imagery should be available on social media. Many adults' lives have been destroyed when, after a relationship broke down, their images were used maliciously on social media in revenge. There should be personal-safety laws covering all of this.
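One common building block for detecting edited imagery is perceptual hashing: hash the original upload, hash the incoming copy, and flag large differences. The sketch below illustrates the idea with toy 4x4 grayscale grids standing in for real pixel data; the threshold and the grids are assumptions for illustration only.

```python
def average_hash(pixels):
    """One bit per pixel: 1 if above the mean brightness, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of bit positions where the two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def looks_edited(original, uploaded, threshold=2):
    """Flag the upload if its hash drifts too far from the original's."""
    return hamming(average_hash(original), average_hash(uploaded)) > threshold

# Toy stand-ins for grayscale images (values are pixel brightness).
original = [[10, 10, 200, 200],
            [10, 10, 200, 200],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]
edited   = [[200, 200, 10, 10],   # bright region moved/overwritten
            [200, 200, 10, 10],
            [10, 10, 10, 10],
            [10, 10, 10, 10]]
print(looks_edited(original, edited))  # → True
```

Real platforms use far more robust hashes and learned detectors, but the principle of comparing a fingerprint of the upload against a trusted original is the same.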
c. Speech Recognition, for speech contents
The same considerations hold as in (a) and (b) above. Content for adults and children should be separated, with child locks that keep any negative speech away from children, since it affects a child's brain development in the wrong way. Children should grow up free of tension, not coping with the negativity going on around them. That is not their job, nor is a child's brain strong enough to carry the burden of handling adult responsibilities.
d. Machine Learning, for mixed contents
Further, machine learning should be used for any combination of (a), (b) and (c) above, to make sure no wrong content reaches children, or even adults.
3. Hacked accounts
Even if a hacker tries, he or she should not be able to use another person's account. Measures should be taken to safeguard people's accounts for their own use, even when a phone or system is compromised. I have personally been a victim of hacking, so I know how bad it feels.
4. Fake accounts
Finally, a lot of negative information and wrong videos are openly posted on social media through fake accounts. Fake accounts need to be monitored for such malicious activity, which is not about the "Right to Free Speech" but about spreading intended content, meaning content pushed by someone who wants to promote a certain topic in disguise. If not stopped, this can become a parallel channel for the deliberate spread of perpetuated messages.
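One very simple signal for monitoring fake accounts is repetitive posting: an account that blasts the same message over and over looks more like a promotion bot than a person exercising free speech. The sketch below uses an assumed duplicate-count threshold; a real system would combine many more signals (account age, network patterns, reports).

```python
from collections import Counter

def looks_fake(posts, max_duplicates=3):
    """Flag the account if any single message is repeated too often.
    The threshold is an assumption for illustration, not a tuned value."""
    counts = Counter(posts)
    return max(counts.values()) > max_duplicates

spam_account   = ["Buy now!"] * 10 + ["hello"]
normal_account = ["morning walk", "lunch", "Buy now!"]
print(looks_fake(spam_account), looks_fake(normal_account))  # → True False
```

This matches the distinction drawn above: repeating one promotional message in disguise is intended content, while varied personal posts are ordinary expression.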
Hence there is a need for the rise of ethical content, and AI can help with this.