Infosec professional for almost 30 years here. I can confirm that the latest iterations of AI models are finding high quality bugs and vulnerabilities in the code we work with. If Daniel has access to Mythos, I suspect his experience would be even more shocking.
The problem I have is that the AI tools can find bugs faster than they can be patched, which is eventually going to prompt companies to use AI to patch the bugs found by AI. Before long, no living being will be able to make heads or tails of the code we run. Just my 2¢.
> AI tools can find bugs faster than they can be patched
Not a security expert, but wasn’t that already the case? Even before AI, backlogs were full of bugs, security-related or not. That’s precisely why metrics like severity exist.
> no living being will be able to make heads or tails of the code we run.
Which is fine, because somebody will just vibe-code a replacement when it gets too unwieldy, and then we’ll start the cycle of unmaintainability all over again. Welcome to the era of disposable, limited-use software.
While you’re all working on dealing with that, don’t mind me, I’m just going to be over here admiring all this artisanal, hand-crafted software running in a carefully arranged and manually curated legacy virtual machine with loving attention to detail and thoughtful Feng Shui, where it will be safe and protected from the horrors of the open internet until someday NetWatch finally fires up the blackwall to protect us.