Abstract: AI is built on the aim of mimicking the human brain, but the human brain also imagines fiction, such as the Terminator and Transformers Hollywood films seen by millions. Not everything AI generates is useful in real life, so certain directions in the world's AI automation need to be stopped. We do not want an automated world in which some Terminator-like AI steps in. The leaders of AI companies themselves say they do not know whether AI can cause harm, so how can we trust such AI? Which parts of AI should continue, and which must be stopped? We live in an era of AI-driven disruption across almost every profession, yet some fields, such as research and scientific development, genuinely need AI, while others, such as analytics and music, do not. How do we decide where to allow AI and where to stop it? AI can be dangerous to the young, it can be dangerous to jobs, and it is not always correct. We need a balance and a way to certify that AI is used in the right way. This is where AI licensing comes in.
Introduction
Think of Terminator and Transformers, science fiction films born of strange human imagination; The Maze Runner is another example. These stories are now inside AI: AI has read them, AI has gone through them, AI knows them all. Human thinking is out of this world at times; nothing like the Terminator or the Transformers exists, yet humans created them and sold them to millions of viewers. That human thinking now lives inside AI, and AI can recreate such scenarios. AI can therefore be harmful, and it is meant to work under human control. AI must not lead on its own. AI is the provider of the knowledge and summaries we need; AI is the medium, not the end we are searching for. The end we are searching for is the right humans to take charge of the world's affairs. AI is, once again, just a facilitator; it cannot be a leader. So we need licenses of all kinds: some for research, some for analytics, and some for the everyday use of "AI as a right".
AI is everywhere; educated in AI or not, everyone uses it. Everyone wants to be an AI prompt engineer or an AI engineer, and there is nothing wrong with that: everyone wants to earn for themselves and their families, everyone wants to be known, and AI is the route, because nearly every job now involves AI. As some say, "you will not lose your job to AI, but to a person who knows AI." Online learning has been booming for a decade, and now it is booming with AI courses. How long will this last? Jobs are changing, and the young do not know what to do. They are confused about what to specialize in: some consider mathematics, others quantum computing, others AI itself. The youth feel lost in this muddle. So what do we, as adults, need to do? This article describes that in short.
The Problem
The AI giants interviewed in the well-known documentary "The AI Doc: Or How I Became an Apocaloptimist" are lost themselves: they admit they do not know whether AI can harm humanity. So why do we need such an AI when even its makers are confused? If the people who build AI cannot confirm its safety, why allow it? Why grant it a license? An AI license should be granted only on the maker's assurance of safety. If you are not sure of what you are selling, why are you selling it? AI in the future must come with a license. The license is the assurance that the AI will behave correctly and will not cause harm. If you cannot give that assurance, why should your AI be used outside your testing labs? It could be as dangerous as a leaked virus: it could hurt people, stop infrastructure, and cut off people's basic needs.
Yes, I myself have written a lot about AI, but always on the assumption that the makers of AI would build it right. Do we want AI in bureaucracy? Yes, but only under a license would it work well. Doing it right and doing it fast are two different things, and today's AI companies are only doing it fast, not thinking about doing it right.
We do not want a world run by AI, where our taps go dry the moment AI goes wrong, and it goes wrong simply because it heard someone say "drought" and switched into water-saving mode. Yes, that is roughly the level of AI right now. I am not saying stop all AI, but be cautious: AI is still a child, and it is just as receptive to users' naivety as the users themselves are right now. We must draw clear lines between where AI development may continue and where it must be stopped. The next section describes these lines.
Which AI should continue?
We should continue the AI that works under the guidance of mature human scientists to advance research: scientists searching for cancer cures, developing medicines, and finding ways to prevent species extinction. A mathematician, for example, can now attack age-old problems far faster with AI, which in turn can help missions such as Mars exploration and wider space exploration. At the same time, we must protect this AI from the wrong scientists who could use it to create biological weapons; such AI must be licensed for use by registered practitioners only. Other AI tools that people around the world use, such as chatbots, would continue to function as they do. "AI as a right" would continue, but AI that replaces humans in jobs must be examined.
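To make the tiered-licensing idea concrete, here is a minimal sketch in Python. The license tiers, the practitioner registry, and the authorize_request function are all hypothetical illustrations invented for this article, not part of any existing system; in practice the registry would be maintained by a licensing authority rather than hard-coded.

```python
from enum import Enum, auto
from dataclasses import dataclass

class LicenseTier(Enum):
    GENERAL_USE = auto()   # "AI as a right": chatbots and everyday assistance
    RESEARCH = auto()      # licensed researchers: medicine, mathematics, space
    RESTRICTED = auto()    # dual-use capabilities that stay inside the lab

@dataclass
class Practitioner:
    name: str
    registration_id: str | None  # None for an unregistered public user

# Hypothetical registry of vetted researchers (placeholder IDs).
REGISTERED_IDS = {"REG-0042", "REG-0107"}

def authorize_request(user: Practitioner, tier: LicenseTier) -> bool:
    """Return True only if the user holds the license the tier demands."""
    if tier is LicenseTier.GENERAL_USE:
        return True                      # open to everyone, "AI as a right"
    if tier is LicenseTier.RESEARCH:
        return user.registration_id in REGISTERED_IDS
    return False                         # RESTRICTED: never cleared for outside use

# A registered scientist may use the research tier; an anonymous user may not.
scientist = Practitioner("A. Researcher", "REG-0042")
public_user = Practitioner("Anonymous", None)
assert authorize_request(scientist, LicenseTier.RESEARCH)
assert not authorize_request(public_user, LicenseTier.RESEARCH)
```

The point of the sketch is only the shape of the rule: general-purpose use stays open to all, research-grade capability requires registration, and restricted capability is denied by default.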
As long as China does not sit at the table to stop harmful uses of AI, none of us can truly stop. Even if we ban AI for the wrong uses, if China keeps its AI labs running without licensing, it can still produce harmful bots, and our stopping would achieve nothing. Global coordination must therefore be reached so that China stops alongside the rest of us.
We do not want a system where even our water supply is AI-controlled. Do you want AI deciding when water reaches your home? No? Then why not wake up? We need licenses for each type of AI specialist. Say "no" to such AI.
Future Work
We must ensure that for every job AI takes over, the corresponding salary is set aside in a fund for those who lost that job to AI, knowing that to some extent AI will also lose work back to humans. What I mean is that AI will be a facilitator in the job market too: humans will use AI to reach a conclusion, say in analytics jobs, rather than AI dictating the conclusion to humans. That secures our jobs; the work becomes verifying what AI has found and authenticating it with human skill.

The same holds for AI in government. AI should produce reports, those reports should be analyzed by human ministerial teams, and once verified they should be presented to the minister and examined for the purpose they were sought. This would improve the speed at which problems are solved. The same applies to the Department of Justice: AI can analyze a case and provide the legal arguments to the lawyers, but the judge must still sit and weigh both the lawyer and the AI. Not everything should be placed in the hands of AI.

Once again, humans have built a model that mimics the human brain, but that does not mean humans should place their safety in its hands. AI must be guarded and protected from misuse, and different licenses must be provided for different kinds of AI: a generic AI that everyone can use, and a research AI available only to licensed researchers running licensed AI. The human brain can imagine many kinds of things, but not all of them should be realized; Terminator, Transformers, and The Maze Runner are examples. We must protect our AI from Terminator- and Transformers-like scenarios by building a stop gate into the AI thinking machine.
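As a rough illustration of the stop gate and human-verification workflow described above, here is a minimal sketch in Python. The generate_report stand-in, the DraftReport structure, and the release gate are hypothetical names made up for this article; the only claim is the workflow itself: AI drafts, a human verifies, and nothing is acted on without explicit approval.

```python
from dataclasses import dataclass, field

@dataclass
class DraftReport:
    topic: str
    content: str
    approved: bool = False
    reviewer_notes: list[str] = field(default_factory=list)

def generate_report(topic: str) -> DraftReport:
    """Stand-in for an AI model call; the content here is a placeholder."""
    return DraftReport(topic=topic, content=f"AI-drafted analysis of {topic}")

def human_review(report: DraftReport, verified: bool, note: str) -> DraftReport:
    """The stop gate: a human records a note and decides whether to approve."""
    report.reviewer_notes.append(note)
    report.approved = verified
    return report

def release(report: DraftReport) -> str:
    # Nothing leaves the gate without explicit human approval.
    if not report.approved:
        raise PermissionError("Report has not passed human verification.")
    return report.content

# Example: AI drafts, a ministerial team verifies, and only then is it released.
draft = generate_report("regional water allocation")
checked = human_review(draft, verified=True,
                       note="Figures cross-checked by the human analyst team.")
print(release(checked))
```

The design choice is simply that approval defaults to False and the release step refuses unapproved output, which is the "AI as facilitator, human as decision-maker" arrangement argued for throughout this article.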