Future challenges in fully entrusting AI and Robotics?
Note: The original article is on ResearchGate; this is a duplicate copy. DOI: 10.13140/RG.2.2.21249.11367
Abstract: This note emphasizes the challenges humans would face if they fully entrusted AI and robotics. We are beginning to rely heavily on AI, even though robotics has not fully arrived. AI was unobtrusive when it powered washing machines; now AI fridges have appeared. Such helpers are fine, but what about unconditional reliance on AI for anything we can think of? This is not limited to children: on average, adults now turn to AI chatbots to learn about the world. There are better ways to learn, yet adults ask AI for everything. That is acceptable only as long as AI is unbiased. What if AI becomes biased in whatever direction its owner wants? How much power does the head of a popular AI company hold? A great deal. Does he or she deserve that power? Did the owner of the AI company earn the standing to show adults the way? Being a software expert does not qualify a person to guide millions, if not billions, of people on the environment, politics, or social order, to mention a few. Nor is this only about question answering; it extends to recommendation systems, advertisements, content links, and the references chatbots provide. It also concerns copyrighted artwork and other copyrighted material. Are we on the right track? No. Either we build AI that learns on its own, without humans feeding bias into it (and we are far from that), or we make competition among AI engines equal. What about the massive amounts of energy we use? What about the theft of adults' data? What role can adults play, and how can the AI industry let them play it? And how do we tell adult users that AI must not transfer wrong knowledge to them, just as good touch and bad touch are taught to little children?
Keywords: AI, Robotics, Ethics, Trust
Introduction
AI is gradually becoming part of our lives. From washing machines to fridges to search engines, AI is leaving no ground untouched. When AI spreads its wings fully, we will need to give something in return: trust. Are we ready to trust AI? The answer is no, not fully. For some applications we can say yes, a washing machine for example. We have used AI washing machines for a long time; we trust them, they are bias-free, and they are safe from hackers too. But what about conversational AI? Why do so many adults log in to conversational AI daily? How might it steal their data, shift their opinions, mould their adult personalities, or play with their psychology? This is not happening yet, but once AI reaches a critical point, the next step would be to call on people to take its side, advertise to earn money, and sway them toward one wing or the other, right or left. Can we control it? Yes, we can, but it will require great effort, trust, and the right people.
We all look to conversational AI for answers to our questions, indeed for quick answers to our intriguing questions. Can we trust those answers? The same goes for robotics: we may soon be giving robots tasks such as picking up a parcel of weekly groceries from a shopping complex. Would the robot do it right? Can we trust AI and robotics at all? This is all belief! Belief in our own making, in what we have made. To the best of my knowledge, AI relies on pre-trained data; bias comes from there, and that bias is now being removed. But can incorrect information be injected into AI? If so, humanity could be at risk. We must consider this scenario as well.
AI and robotics are great masterpieces of human civilization. They were not made by one or two people; an enormous amount of work has gone into them. AI and robotic systems read a great deal, both copyrighted and uncopyrighted, and infer from it. They learn from others' conversations and decide what to conclude based on ratings and feedback. AI is not just an algorithm; it is completed by the training data used to build it.
In this paper, we discuss how to entrust AI, the challenges humans face in doing so, and how to overcome them. These challenges do not apply to all AI: some AI is safe, while other AI is unsafe; some is right, while other AI is wrong; some is good, while other AI is bad. Developing the maturity to recognise these distinctions is essential to moving forward.
Entrusting AI and Current Challenges Humans Face with AI
Can we trust AI? Can we trust robotics? More to the point, can we entrust them? There remains a hiccup with the idea of unconditional trust: there is no unconditional trust in AI, because just as humans can make mistakes, AI can too. It is always wise to recheck the answers and solutions AI provides. There are many areas where AI poses trust challenges, including the following:
- Bias. AI learns from data, and past data have carried bias, such as gender or racial bias. We need to check AI-generated outputs for bias, as a wrong answer can hurt anyone. Bias can also be man-made, injected to serve specific goals; this is the mark of a bad AI.
- Child care. Today's AI does not consider whether its output will be read and consumed by a child; child-safety essentials must be built into AI. Alternatively, AI must detect child users and prevent them from receiving the replies or solutions it can generate.
- Action-based AI. These are AI models that produce an action, as opposed to conversational AI, which only provides answers. Examples include washing machines and fridges, to mention a few. Here our trust matters greatly: such AI can only work when humans trust it; otherwise its actions are pre-empted. Robotics, too, belongs in this category as long as it does not have to rely on real-world analytics. Such AI has been helping humans in the right ways; yes, it does take away some mechanical jobs, but it is safe AI.
- Privacy. AI systems take shape from our inputs, such as prompts and prefixed or suffixed questionnaires. This amounts to transferring information to another party. How an AI company uses this information becomes another issue. Would it scrub and delete the information, or store it? And if the company uses it, would it end up on a third-party platform?
- Would the data AI companies collect be sold on for analysis and analytics?
- Would this lead to surveillance? Is AI surveillance safe?
- Will the robots we use in the future be loyal to us? Will they be hacking-proof, or at least hacking-safe? What protections will future robots have?
- Impact on psychology. How might AI prompts affect people's psychology, and not only that of children, who should ideally be barred from using AI? Adults can be made to like a particular product through AI advertisements full of pomp and circumstance. The AI industry can make or break someone's red-carpet career. This means playing not just with people's psychology but also with their opinions. People can be led to believe something simply by being told that whoever buys bread will buy butter too, and then having their attention steered to butter rather than bread, for example.
- AI as the next political campaigner. AI systems can campaign for the candidate they deem the right one to be prime minister or president of a country. This can happen not just through ads but through related biased content, so bias here is not limited to gender and race; it extends to chosen propaganda. We must be well aware of the threat we shall face if AI goes unchecked.
- AI can change the world according to its owners. There are only a few AI owners at present; some chatbot owners have specific objectives, and, given the rivalry among them, it seems they will do anything to win the AI battle. Selling AI should not be the battle it is now. It is not just a money game; it is a power game. Whoever holds the power wins not only money but command over the world. This power is different from mere automation.
- There must be checks on AI. AI power should be non-political: no ad should push a political campaign, no ad should demean another leader, and no content suggested by AI should be biased toward a political party or a global opinion. AI should know its limits. And adults should keep the habit of forming their own opinions: draw your own conclusions, find your own way, make up your own mind. Why look to AI for everything?
- Recommendation systems. Adults are becoming addicted to AI-generated answers, so they stop learning from Wikipedia or other registered, trustworthy sources; instead, AI recommends sources to them. AI is not biased at present, but it can become biased toward driving traffic to another site or platform through ads, as part of some future modus operandi. Note that right now the only aim of AI companies is to gain popularity and reach more people, but once those aims are met, some companies may try to expand into personal grooming and political interference as well. This cannot be ruled out. Recommendation systems should be unbiased, and advertisements must follow rules: not just pay and run, but be analysed, vetted, and only then placed on a platform.
- The environmental impact of AI is huge. Large AI companies should be taxed twice on their profits, while small companies should be allowed to grow with a lower environmental tax burden; this would produce healthy competition. The environment pays dearly for AI. During the internet boom, people found information on their own and paid for their own electricity; now both the user and the supplier (the AI company) must pay for using elite AI.
- Ethics should not revolve only around the right treatment of gender and other biases, but also around the use of copyrighted artwork by great artists and naïve artists alike. Art is a mode of thinking and should not go unrecognised. It is often said that AI creates unique art, but we must develop algorithms to determine which artworks were used to make each new 21st-century AI masterpiece. We need to pay the original artist back, if not in money then in acknowledgement. One day, may that artist also receive the cheque for underpinning an AI masterpiece.
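To make the attribution idea above concrete, here is a minimal, hypothetical sketch of how a source-attribution check might look for text. It is a toy illustration only, not a real provenance algorithm: it uses Python's standard-library `difflib.SequenceMatcher` to score a generated passage against a small corpus of "original works" (the corpus entries and names here are invented for the example), and real systems would need far more robust techniques such as perceptual hashes or embedding similarity.

```python
import difflib

def attribute_source(generated, corpus):
    """Return (name, score) for the corpus entry most similar to the
    generated text, where score is a similarity ratio in [0, 1]."""
    best_name, best_score = "", 0.0
    for name, text in corpus.items():
        # SequenceMatcher.ratio() measures character-level overlap.
        score = difflib.SequenceMatcher(None, generated, text).ratio()
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Toy corpus of "original works" (hypothetical titles and text).
corpus = {
    "sunrise_poem": "golden light spills over the quiet hills at dawn",
    "sea_sketch": "grey waves fold endlessly against the black rocks",
}

name, score = attribute_source("golden light spills over quiet hills", corpus)
print(name, round(score, 2))  # best match: sunrise_poem
```

A check like this could, at minimum, flag outputs that overlap heavily with a known work so the original creator can be acknowledged.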
There are many more points, but these suffice to show that we are not doing enough to manage the growing AI spectrum. AI must be monitored for the well-being of adults and children alike. Software must be built to determine which sources an AI-generated output or artwork was made from, and to credit the originator. At the same time, we should let AI help solve scientific problems so that we can progress further on the journey of taking humans to the next level of development.
Conclusion
We are not yet at a time when we can entrust AI with all our hopes for a better tomorrow. AI is still in human hands, and humans have been known to bend to either side of the aisle. AI cannot yet be independent of humans; it is not yet intelligent enough to take its own course in the history of the world or in shaping its future. Nor can AI be unbiased as of now, since it takes commands to learn bias, whether racial, political, or gender-based. The owners of AI could misuse its widespread reach in the future, and it could be exploited by vested interests. We must therefore hold tight the helm of the ship the AI makers are sailing; the ship is not fully built, yet it is already riding high on big waves. People do not know where this unwavering trust we place in AI may lead. All we can say here is this: AI can automate things, speed up drug discovery, and help mathematicians solve problems, but it must not be allowed to play games with the opinions of children and adults. We should live in a free world where adults understand things and make the right decisions from their own souls, not from AI recommendation systems or ads driven by vested interests. So AI should display a logo reminding adults to think, learn, and decide on their own, not merely follow what AI told them.