Abstract: Today it is Anthropic’s Mythos; tomorrow, a Chinese equivalent will be on the market. How safely can Anthropic contain it? How secure is its development? How many people can you realistically stop from using it? You are deploying it to banks and top companies to find and fix their bugs, but you cannot stop the plethora of foreign actors who want to hack your people’s data. Given that you have protected your companies’ data from the threat that Mythos itself poses, how will you protect your people’s data?
Anthropic’s Mythos is a threat to cybersecurity, as Anthropic’s own leaders have said. But how do you stop it? Put it in a cage? No, that is no solution: someone else would simply build a similar model. These capabilities cannot stay hidden. Technical AI teams around the globe are building models that can beat today’s best, namely ChatGPT and Claude. Anthropic is an American company, so it can be made to abide by American law. Mythos can be made dormant, but what happens when your favourite beautifying apps ship their own version of Mythos embedded inside them? What if the favourite Chinese games you play carry theirs?
This is not as simple as just blocking Mythos. Mythos must be studied behind closed doors, so that competitors such as China do not learn about it, and so that we understand exactly how it can pose cybersecurity threats. Governments today are already analyzing their own cybersecurity flaws in order to correct their systems and protect them from hacking, which is a good move by current administrations. They should use Mythos itself to identify those threats, so they are not hacked, and to fix the bugs that could lead to a breach. For that matter, even companies like Facebook should have their potential weak points exposed by Mythos.
But is it safe to expose your code and your systems to Mythos? If we do not, someone else will inject their version of Mythos into our systems and hack us. So today’s providers can use Mythos to heal their systems: have Mythos identify the flaws, then have it explain how to fix them. Still, it is an anxious moment for companies being asked to expose their code to Anthropic. Anthropic must therefore sign an agreement with these companies that whatever Mythos analyzes, it forgets; nothing goes back home with it. In practice, that means Mythos must run on a system on a secure local network owned by the company itself, not on the global internet. We cannot otherwise expose our code to Anthropic; who knows what Mythos would report back home after reading the codebase of a top American bank? No, we cannot trust Mythos to read our banks’ code and carry it away like a spy. Done any other way, it is a threat.
Yes, it can be good to run Mythos in this local mode to scan your systems and identify vulnerabilities before a Chinese copycat enters the market, possibly next month. And indeed, Anthropic can ask companies to pay for fixing every cybersecurity threat they have. Mythos must not go public until all banks and every organization whose data is worth saving have completed several rounds with it to verify that they are cybersecure. After that, Mythos should be ready to fight any foreign injection of cyber threats on the internet; that part can be done online. OpenAI, what is your version of Mythos? Do not give it to common users for free. Get it evaluated first.
Going forward, many companies will develop models far superior to current ones. We must hire Mythos, or something like it, to analyze those models and predict whether they can safely be deployed in common-user domains. Even after you have run Mythos through multiple rounds to eliminate your cybersecurity flaws and have shredded the copy of Mythos that read your code, the potential for a cyber threat remains. We must be ready for that.
The AI race cannot end unilaterally. China, its allies, and other competitors must all agree; it is not solely up to China whether private companies there offer such interfaces openly in the market for people to try. We must work in trust with global economies, expecting that they will not develop such models, let alone test or deploy them. Any agreement must be in everyone’s favor: common people must not be harmed by such developments. We cannot simply say we will not develop, because others are developing anyway. But a global negotiating table must stay open for these talks, and advanced models must not be handed to people in ways that harm them. These models can hurt common people in many ways. So let us develop as long as the whole world is developing, keep the negotiation table open globally, and refrain from publishing these models for use by common people.