A set of perspectives I hold on cybersecurity that are also highly relevant in the AI world.
offensive security:
A chain is as strong as its weakest link. Modern systems have countless entry points; adversaries only need one.
Adversaries are always at an advantage, because your blue team is only as strong as its weakest link; missing one critical area breaks the whole chain.
Both attackers and defenders are resource-constrained. The question is rarely “are there bugs?” but “who has the time and skill to find and exploit them first?”
For most small and medium companies, security is reactive, not proactive. I would argue the same holds for most large companies; only a few genuinely try.
Attackers have clear incentives; defenders usually don’t. What I mean is: assume resources are poured into defense-in-depth, secure coding, and hardening, and assume an attack is avoided because of this. Unfortunately, no one knows; it just looks like an extra expense.
The point is, if an attack never happens because of good practices, no one can see the counterfactual. It’s the “tree falls in a forest” problem. This leads to reactive measures and weaker incentives for defenders.
Throw shit at an open-source project (or a set of applications) for a long enough period of time and vulnerabilities will follow; it’s highly possible a supply-chain link breaks and affects one of the Fortune 500.
Bug bounty hunters on H1 still find vulnerabilities even after “top” pentest firms have gone through a target. Project Zero is one of the best security teams in the world, yet Chrome still pays ~$5M a year in bounties. Add more eyes and more time, and the number of bugs found keeps growing.
Most real-world breaches are basic: misconfigurations, weak credentials, known vulns, bad hygiene. AI makes entry into these kinds of attacks easier and cheaper.
The asymmetry comes from the weakest link. Defense has to harden every system; attackers only need one weak link. AI boosts offensive productivity in that search, so I’d bet we see more breaches, not fewer. Defenders should start using AI capabilities.
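The weakest-link asymmetry can be made concrete with a toy probability model (made-up numbers, and it assumes links fail independently, which real estates don’t): even when every individual system is hardened to 99% reliability, the chance that *something* in a large estate gives way grows quickly with scale.

```python
# Toy weakest-link model: the defender must hold every link;
# the attacker only needs one to fail.
def breach_probability(p_link_holds: float, n_links: int) -> float:
    """Probability that at least one of n independent links fails."""
    return 1 - p_link_holds ** n_links

# Even with 99%-reliable links, scale works against the defender.
print(round(breach_probability(0.99, 10), 3))   # small estate  -> 0.096
print(round(breach_probability(0.99, 200), 3))  # larger estate -> 0.866
```

Same per-link hardening, twenty times the attack surface, and the defender goes from roughly one-in-ten odds of a breach to near-certainty. Anything (AI included) that lets attackers search more links per unit of effort tilts this further.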
LLMs are only as strong as the person wielding them. Someone who doesn’t know anything about V8 won’t start finding 0-days in Chrome. They don’t magically create skilled new adversaries; they accelerate existing ones. The acceleration is proportional to model capability.
cybersecurity market
In cybersecurity, the value of well-modeled risk and proactive measures is non-obvious. The signals are noisy or completely absent.
As I said, it’s hard to measure the value of an attack that doesn’t happen. [Read Repenning, Sterman CMR]
In the cybersecurity market, both buyers and sellers often have poor information about how good a product actually is, thanks to uncertainty in risk modeling and the lack of clean feedback loops. [Read market of silver bullets from ian grigg]
So companies end up buying or selling without really knowing what they’re getting. They: a) spend the security budget to “do something,” b) adopt tools because the CISO already invested in them, or c) buy whatever product the CEO’s friend is selling, and similar variants; you get what I mean.
Everyone wants to keep playing the game. Compliance demands certifications, so you game security. SOC 2 compliance pentest reports, and the companies that provide them, are decadent shit; when a real adversary shows up, you have a higher chance of paying the price.
Essentially, it is easier to be snake oil than something that works; that’s why we see a lot of this hyperbolic marketing lately.
Anything I missed? Let me know. Will add more as I learn.