Regulating AI: the time is now. We missed earlier opportunities, but it is never too late. Here are some points on how, why, what, and when to regulate AI.
- AI of all kinds needs regulation.
- This includes AI-enabled robots such as autonomous vehicles, smart cleaners, smart fridges, smart AI-IoT-based healthcare systems, AI-based medical devices, and aircraft, to mention a few.
- Cybersecurity is a must, and as AI usage grows, cybersecurity comes under increasing threat. The recent attack on ChatGPT is one example. Some cybersecurity threats and attacks can even be fatal.
- Producers and consumers should join hands to make initial regulation a success, as it is consumers who are best placed to find flaws: they experience threats to privacy, cybersecurity, and data or process integrity first-hand.
- Privacy violations, cybersecurity incidents, AI-based miscalculations, and data breaches are not the kind of bugs software engineers are used to. These are modern-day bugs that data engineers and AI engineers encounter, and they can henceforth be called AI-bugs.
- How do people end up exposed to AI threats of the kinds above? The answer is a lack of knowledge about the product and a lack of information about the processes the product follows.
- Hence, essential workflows must be explained to users, and any dataflow or workflow that involves risk should be highlighted in red ink.
- Yes, all software comes with a terms-and-agreement document, but hardly anyone reads it. The same holds for AI products, including AI robots of all kinds.
- Hence, regulations should be put in place; for example, the most critical and weakest points of an AI product must be stated explicitly in red, bold words.
- This should be the responsibility of producers, as they know the exact code behind the product.
- Meanwhile, consumers should be encouraged to point out any weak points they encounter while exploring, using, or analyzing AI products.
- Patches must be sent to consumers once an AI-bug of the kind described above is encountered.
- A typical software bug goes to a software engineer, while an AI-bug goes to an AI engineer and escalates all the way up to the product manager and engineering manager of the AI product.
- The escalation of AI-bugs is crucial because AI products are linked to data security, cybersecurity, privacy, and many similar issues. Hence even a small concern in an AI product deserves the attention of a company's head of AI.
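The escalation path described in the two points above can be sketched as a simple triage rule. A minimal sketch follows; the category names, role titles, and the `escalation_path` function are all hypothetical illustrations, not an established standard or any company's actual process:

```python
# Hypothetical triage sketch: "AI-bugs" escalate up the management chain,
# while a typical software bug stays with the software engineer.
# All names below are illustrative assumptions.

AI_BUG_CATEGORIES = {"privacy", "cybersecurity", "data-breach", "ai-miscalculation"}

def escalation_path(category: str) -> list[str]:
    """Return who should see a bug report, in order of escalation."""
    if category in AI_BUG_CATEGORIES:
        # AI-bugs escalate beyond the engineer who owns the code,
        # ending with the company's head of AI.
        return ["ai_engineer", "product_manager",
                "engineering_manager", "head_of_ai"]
    # An ordinary software bug is handled by a software engineer.
    return ["software_engineer"]

print(escalation_path("privacy"))
print(escalation_path("ui-glitch"))
```

The point of the sketch is only that routing is category-driven: once a report is tagged as one of the AI-bug kinds, it cannot terminate at the individual engineer.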
- Users must understand these issues when choosing one AI product over a rival.
- One example of such AI systems is connected smart robots.
- Prior European regulation of AI covers autonomy through sensors, exchange of data, workflows, self-learning robots, AI systems for recommendation, and auto-generation of relevant content. It also addresses the security and transparency of products throughout the whole life cycle.
- AI-based miscalculations, such as those in autonomous cars, cannot be taken lightly, even when the error is minor.
- A major problem area arises when a company stops supporting a product and brings a new or enhanced version to market. At that point the company stops providing updates and patches for the old version, typically within 3–5 years of the product's deployment. Users must then weigh the cost of buying the new product against becoming more susceptible to problems once support ends. Even when a company wants to support an old product, doing so can be difficult: the programming language may have changed, third-party tools may no longer be available, trained support staff may be lacking, and so on. The onus then falls on consumers, who must either buy the new version or remain susceptible to AI-bugs, or on the company, which may offer paid assistance to select consumers.
- Another area is legacy code. As time passes, the amount of code required to do a task grows unless it is organized into well-tested packages. New programmers may find legacy code difficult to maintain, and the language it is written in may be obsolete by the time new patches are due. Open-source software can allow multiple countries to work together on a patch; the responsibility for the code, the product, and its security then lies with the member countries participating in the open-source product.
- Hence, regulating AI is a good step in this area.
- The shortcomings of AI systems must be understood so that the next and similar products can improve.
- Responsibility spans the whole life cycle of an AI product, not just the programmers and producers, as stated in [1]. Removing an AI-bug affects the full life cycle, so the whole life cycle should be accountable for it; such a bug can lead to anything from data theft to privacy violations.
- Some basic protocols must be cleared before an AI product is released for general use outside the AI labs.
- These regulations are still at a nascent stage: AI is progressing at immense speed, and much work remains to be done.
References
[1] Nobile, C. G. (2023). Regulating Smart Robots and Artificial Intelligence in the European Union. Journal of Digital Technologies and Law, 1(1), 33–61.