Reader

From the Editor
Welcome to the first issue of Signals & Soapboxes. Every week covers a signal shaping our future, a candid opinion, a risk on the horizon and a question you should be asking. Short, sharp and straight to the point.

📡 The Signal
On Tuesday 7 April, Anthropic announced Claude Mythos Preview, its most capable model to date, and simultaneously declared that it won't be released to the public. The reasoning: Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors committed to deploying them safely.

Rather than a commercial launch, Anthropic formed Project Glasswing, consisting of 12 founding partners, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA and Palo Alto Networks, to attempt to secure the "world's most critical software".

The announcement also claimed the model will reshape cybersecurity as we know it, stating a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities. Deployed safely, that capability is an advantage; in the hands of bad actors, its repercussions could have grave global consequences.

Sources and Further Reading:

🎙️ The Soapbox
Credit where credit is due: Anthropic deserves it for not releasing Mythos to the open market. That's a given; it signals responsible deployment. However, self-appointment as the authority on which organisations get access to a model capable of finding exploits in critical global infrastructure raises serious concerns. The organisations chosen to access Mythos are not regulators or independent oversight bodies; they are commercial partners.
The criteria for selection, the oversight mechanisms and the accountability structures haven't been made public (at the time of writing). When AI capability reaches this level, who has access, and on whose authority, are governance questions, not technical ones. The organisations best placed to scrutinise the decision (regulators, independent oversight bodies and, dare I say, governments) are not on the list. Anthropic has, though, shared that there are "ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities". In the current geopolitical climate, that doesn't instil much confidence. Nor does the timing: the discussions are happening after the fact, about a decision that has already been made. It feels like a crisis waiting to happen. If this model is as significant as Anthropic says it is, the governance framework should have preceded the announcement. Capability without accountability is still a risk, irrespective of the intentions behind it.

Do you have a soapbox you stand on? Or an opinion worth airing? The soapbox is open to contributors. Get in touch: soapbox@kerrybknight.co.uk

⚙️ The Losses and Liabilities Brief
Someone on your team just handed in their notice. Do you know which AI agents they built and whether any of them are still running? Unmonitored AI use is a risk in its own right, but the management of AI tools used by exiting employees needs separate consideration. Most employee offboarding processes cover devices, credentials and system access; how many in your organisation cover AI workflows? If your AI governance framework doesn't cover this, it should. An orphaned AI agent running on stale instructions and outdated context is a liability - even if nobody meant it to be.

The second risk, which receives even less attention, is the knowledge that leaves with the person.
For long-tenured employees, that can mean decades of institutional memory, client context and undocumented process walking straight out of the door. AI now makes it possible to capture that knowledge before it leaves: structured knowledge interviews, prompt-driven documentation and context libraries can all be built whilst the person is still part of the organisation. Governing the AI that gets left behind, and using AI to capture what walks out of the door, are challenges most organisations haven't thought about yet.

Source: Julia Porter's LinkedIn post on AI workflows and AI governance.

🤔 The Question
If your organisation can't account for the AI workflows an employee built before they left, what else is running that nobody owns?

The next Signals & Soapboxes hits your inbox next week.
Subscribe for weekly intelligence and candid opinion briefings on the risks most organisations aren't ready for (yet).