Abstract: This article focuses on giving AI a human component. Ethical and bias questions in facial recognition tasks are one example where systems are pushed to exceed human expectations. Taken too far, this would make AI the master of humans, defeating the very purpose of AI, which was built to assist humans, not to rule them. Hence, both algorithms and robotics need to account for human factors when computing results; otherwise, we may end up with algorithmic and robotic systems that rule humans, turning people into machines rather than treating them as humans. A simple human example is reaching for a snack when feeling hungry: a robot's algorithm can never grasp this, because it has no concept of hunger or of cravings for particular foods. It is time we built human-friendly AI and robotics. The article is also motivated by bias in image-processing tasks.
1. Introduction
Machines now perform on par with humans in terms of accuracy on face verification tasks, although identification tasks still need improvement (Dooley et al., 2021). When a task nears its finish line, with accuracies at human-comparable levels, what remains is the next set of challenges, which need to be discussed and closed. It is therefore a logical step to ask about bias in image verification, given that accuracies have reached human-level values. Dooley et al. (2021) ask whether machines perform on par with humans on bias-related problems, and whether a machine can exceed human sensitivity to bias; this article addresses that question in a more generalized fashion. Why only bias in image verification tasks? There are many major areas where machines compete with humans, and the problem needs to be addressed wherever human and machine evaluations differ. The growth of machines over humans also has to be kept in check: there should be no dominance of machines over humans, and at least the average human decision should be taken into account, much like an election submitted by humans. If we do not stop the wrongful intrusion of AI-driven algorithms and robotics asserting dominance over humans, humans will become robots in the hands of AI. Humans blindly following AI is not acceptable; instead, techniques akin to verified online elections need to be devised and incorporated so that an AI system can learn what the average person actually wants from it. For this, one must not forget why machines were made: to assist humankind, and an essential feature of humankind is sensitivity to being human. Hence we also need to focus on human-friendly AI. This task is different from the Artificial General Intelligence (AGI) task; there are tasks where a machine algorithm simply needs to be humane. The next section contrasts AI-algorithm choices with human choices.
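To make the notion of bias in verification concrete, the sketch below computes per-group verification accuracy and the gap between the best- and worst-served groups. It is a minimal sketch only: the data layout, the fixed threshold, and the function name are illustrative assumptions, not taken from Dooley et al. (2021).

```python
# Minimal sketch: measuring the accuracy disparity of a face-verification
# model across demographic groups. Data layout and threshold are assumed
# for illustration, not drawn from Dooley et al. (2021).
from collections import defaultdict

def group_accuracies(records, threshold=0.5):
    """records: iterable of (similarity_score, same_person, group) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for score, same_person, group in records:
        prediction = score >= threshold          # verify if the score clears the threshold
        correct[group] += int(prediction == same_person)
        total[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Tiny made-up example: group_a is served perfectly, group_b is not.
records = [
    (0.91, True, "group_a"), (0.35, False, "group_a"),
    (0.72, True, "group_b"), (0.55, False, "group_b"),
]
acc = group_accuracies(records)
disparity = max(acc.values()) - min(acc.values())   # a simple bias gap
print(acc, "gap:", round(disparity, 3))
```

A gap near zero on such a metric is one way to operationalize the claim that a system "does not show large differences across groups"; richer audits would also compare false-match and false-non-match rates per group.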
2. Human-Friendly AI: Some Examples
The AI used in algorithms or in robotics needs to be humane whenever it deals with human matters. Consider the basic example of an AI-driven alarm clock. If the AI waking a person is not friendly, it will shake the person out of bed exactly at the 6 am alarm time. Most of us like to switch off the first alarm and laze around for a few more minutes, perhaps half an hour on some days and even two hours on a Sunday. Hence humane behaviour is another requirement of robotic AI (see the sketch at the end of this section).

Another example is a bicycle ride. An AI gadget fitted to the bicycle takes the rider's instruction to reach Place B from Place A by the shortest path, and guides the rider throughout on both direction and speed. But mid-ride the rider needs rest, a break to drink from a water bottle, and this the gadget cannot understand, because it was never trained to be humane. We need human-friendly AI. A further example is examination preparation: many AI tools will soon be guiding the coming generation of students, and how students cope amidst all the deadlines posed by those tools needs to be analyzed carefully. These AI agents need to understand that we are humans, and that AI, machines, and robots are here to assist us, not to dominate us. Humans made machines to help them, not to rule them. The only way out is to make human-friendly AI.

As per Dooley et al. (2021), commercial APIs are more accurate in face recognition tasks and do not show large performance differences across gender or race. Hence the problem of bias in AI cannot be attributed to algorithms alone, but also to the funding put into training them. Until datasets are refined enough to perform on par with commercial AI models, we need to cope with bias manually, using human techniques; suggestions for dealing with this are given in Section 3 below.
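The alarm-clock example above can be made concrete with a small sketch of a snooze-tolerant policy. Everything here is hypothetical: the class name, the snooze limits, and the weekend rule are assumptions chosen only to illustrate an AI that defers to human preference within bounds.

```python
# Minimal sketch of a human-friendly alarm policy: the alarm honours
# snoozing up to a limit, with a more generous limit on weekends.
# All names and limits are illustrative assumptions.
from datetime import datetime, timedelta

class FriendlyAlarm:
    def __init__(self, wake_time, max_snooze=timedelta(minutes=30),
                 weekend_snooze=timedelta(hours=2)):
        self.wake_time = wake_time
        self.max_snooze = max_snooze
        self.weekend_snooze = weekend_snooze
        self.snoozed_until = wake_time

    def snooze(self, minutes=10):
        """Honour the human's wish to laze a little longer, within a limit."""
        limit = self.weekend_snooze if self.wake_time.weekday() >= 5 else self.max_snooze
        candidate = self.snoozed_until + timedelta(minutes=minutes)
        self.snoozed_until = min(candidate, self.wake_time + limit)

    def should_ring(self, now):
        return now >= self.snoozed_until

alarm = FriendlyAlarm(datetime(2024, 6, 2, 6, 0))       # a Sunday 6 am alarm
alarm.snooze(30)
print(alarm.should_ring(datetime(2024, 6, 2, 6, 20)))   # False: human still resting
```

The design point is that the human's signal (pressing snooze) overrides the machine's schedule, while the limit keeps the assistance meaningful; the same pattern would let the bicycle gadget accept a "water break" signal before resuming guidance.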
3. Conclusion & Future Work
This article was motivated by the paper on bias in image verification tasks (Dooley et al., 2021). Future work should primarily focus on AI algorithms that are more humane in making human-like decisions. This does not mean that such an algorithm would sometimes make calculation mistakes the way humans do; it holds only for algorithms that need human-sensitivity information, such as the examples given in Section 2 above. For bias in image-processing tasks, certain things need human-like, sensitive algorithms. And if AI can learn so much else, it can also learn human sensitivity.
References
[1] The Problem of Bias in Facial Recognition. CSIS Technology Policy Blog. https://www.csis.org/blogs/technology-policy-blog/problem-bias-facial-recognition
[2] Dooley, S., et al. (2021). Comparing Human and Machine Bias in Face Recognition.
[3] Nidhika Yadav. Reading Face Using Artificial Intelligence (AI). LinkedIn. https://www.linkedin.com/pulse/reading-face-using-artificial-intelligence-ai-nidhika-yadav/