Research · February 25, 2026 · 3 min read

What $10.77 Billion in Hacks Reveals About Audit Effectiveness


Alex Rybalko, Co-Founder

Updated on February 25, 2026

Between 2014 and 2024, the 100 largest security breaches in decentralized applications totaled $10.77 billion in losses. Of those, only 20% occurred in protocols that had undergone a professional security audit. Audited protocols accounted for just 10.8% of total value lost.

The data is from Halborn's Top 100 DeFi Hacks Report (2025 edition), the most comprehensive public analysis of protocol breaches by dollar volume.

The headline conclusion: audits work. A protocol that undergoes a professional security audit is dramatically less likely to appear on this list, and when it does, the losses are proportionally smaller.

But the 20% that were audited and still exploited share a pattern worth examining — because it reveals what audits currently do well, what they consistently miss, and what the next generation of security assurance needs to cover.

The Numbers

Annual losses from crypto hacking (Chainalysis):

| Year | Total Stolen | Incidents | Source |
|------|--------------|-----------|--------|
| 2022 | $3.0B+ | 214+ | Chainalysis |
| 2023 | ~$1.8B | 282 | Chainalysis |
| 2024 | $2.2B | 303 | Chainalysis |
| 2025 | $3.4B | — | Chainalysis |

North Korea's Lazarus Group accounted for $1.34 billion across 47 incidents in 2024 alone — 61% of total value stolen that year. In 2025, DPRK-attributed theft reached $2.02 billion, with the Bybit hack representing $1.46 billion of that figure.

Attack vectors in 2024 (Halborn Top 100 updated data):

| Vector | Share of Incidents | Share of Losses |
|--------|--------------------|-----------------|
| Off-chain attacks (key compromise, social engineering, UI manipulation) | 56.5% | 80.5% |
| On-chain attacks (contract exploits) | 43.5% | 19.5% |
| Compromised accounts specifically | — | 47% of total losses |
| Flash loan exploits | 83.3% of eligible on-chain exploits | Lower dollar volume |

Audit status of exploited protocols (Halborn Top 100, 2014-2024):

| Status | Share of Hacked Protocols | Share of Value Lost |
|--------|---------------------------|---------------------|
| Audited | 20% | 10.8% |
| Not audited | 80% | 89.2% |

Additional infrastructure findings: only 19% of hacked protocols used multisig or MPC wallets. Only 2.4% used cold storage for private keys.

What Audits Get Right

The Halborn data makes a clear case: audits substantially reduce both the probability and severity of a breach. Unaudited protocols account for 80% of hacks and 89.2% of losses. The 20% of exploited protocols that were audited lost proportionally far less.

This is consistent with what code-level auditing is designed to catch. Static analysis tools like Slither (built by Trail of Bits) detect 40+ known vulnerability patterns — reentrancy, integer overflow, access control misconfigurations. Manual code review identifies implementation errors that automated tools miss. Fuzz testing surfaces edge cases in complex state transitions.

For on-chain code correctness, audits work.

The problem is that on-chain code correctness accounts for only 19.5% of losses.

Where Audits Fail: Case Studies

The audited protocols that were exploited share a consistent pattern: the audit covered the code, but the exploit targeted how the code interacted with the business process, the operational infrastructure, or code deployed after the audit ended.

Euler Finance

$197 Million — March 13, 2023

Euler was reviewed by six firms across ten audit engagements: Halborn, Sherlock, Omniscia, Solidified, ZK Labs, and Certora. The exploited function — donateToReserves() — was in scope for only one of those engagements (Omniscia).

The function was syntactically correct. It did what it was supposed to do. The vulnerability existed in how that function interacted with the broader lending mechanism: an attacker could use flash loans to create an overleveraged position, donate to reserves to make the position appear insolvent, then self-liquidate for profit.

This is a business logic exploit. No static analyzer would flag donateToReserves() as vulnerable because the function itself is not vulnerable. The vulnerability is in the economic interaction between that function and the liquidation mechanism — a relationship that requires understanding the lending business model, not just the code.
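The mechanics can be sketched in a few lines. This is a toy model with illustrative numbers and a hypothetical `Position` class — not Euler's actual accounting — but it shows how a function that is individually correct degrades position health when combined with the liquidation rule:

```python
LIQ_THRESHOLD = 0.85   # position is liquidatable below this collateral ratio
LIQ_DISCOUNT = 0.20    # liquidator seizes collateral at a discount

class Position:
    def __init__(self, collateral, debt):
        self.collateral = collateral
        self.debt = debt

    def health(self):
        return self.collateral * LIQ_THRESHOLD / self.debt

    def donate_to_reserves(self, amount):
        # The function is "correct": it burns the caller's collateral claim.
        # The exploit is that debt is untouched, so health drops.
        self.collateral -= amount

# Attacker mints a large leveraged position with flash-loaned funds...
pos = Position(collateral=1000, debt=800)
assert pos.health() > 1.0          # healthy before the donation

pos.donate_to_reserves(300)        # burn collateral, keep debt
assert pos.health() < 1.0          # now "insolvent" and liquidatable

# Self-liquidating at the discount lets the attacker reclaim the remaining
# collateral for less than its value. Each step passes every code-level
# check; only the combination is exploitable.
```

No static analyzer flags `donate_to_reserves` here either — the finding only falls out of modeling the lending business process end to end.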

Sherlock, which had provided audit coverage, paid a $4.5 million claim — acknowledging the coverage gap.

Sources: Cyfrin hack analysis, Chainalysis

Bybit

$1.46 Billion — February 21, 2025

The largest single theft in crypto history. The Lazarus Group compromised a Safe{Wallet} developer's macOS workstation on February 4. Malicious JavaScript was injected into the multisig signing interface, masking a transaction that transferred wallet ownership to the attacker. When Bybit signers approved what appeared to be a routine transaction on February 21, they were actually approving an ownership transfer.

The smart contracts were not exploited. The wallet interface was. This is an operational security failure — supply chain compromise of a developer machine leading to UI manipulation — that falls entirely outside traditional audit scope.
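The control that addresses this failure mode is out-of-band transaction verification: recompute the transaction digest from raw data on an independent device and compare it against what the signing UI displays. The sketch below uses SHA-256 over concatenated fields as a stand-in for Safe's actual EIP-712 hashing, and the addresses and payloads are hypothetical — the point is the process, not the hash scheme.

```python
import hashlib

def tx_digest(to: str, value: int, data: bytes) -> str:
    # Simplified digest: a real implementation would follow the wallet's
    # EIP-712 typed-data hashing exactly.
    payload = to.lower().encode() + value.to_bytes(32, "big") + data
    return hashlib.sha256(payload).hexdigest()

# What the compromised UI claims the signer is approving:
displayed = tx_digest("0xSafeTreasury", 1000, b"transfer(...)")

# What is actually in the payload (here, a call that swaps the wallet
# implementation, as in the Bybit incident):
actual = tx_digest("0xAttackerImpl", 0, b"delegatecall:setImplementation")

# An independent recomputation catches the mismatch before any key signs.
if displayed != actual:
    print("MISMATCH - do not sign")
```

A compromised interface can render anything; an air-gapped recomputation of the digest cannot be masked by injected JavaScript.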

The FBI attributed the attack to Lazarus Group on February 26.

Sources: Halborn analysis, Chainalysis

Radiant Capital

$4.5M (January 2024) + $50-53M (October 2024)

Radiant was exploited twice in the same year for completely unrelated reasons.

The January exploit was a flash loan attack exploiting a rounding error in a new USDC market on Arbitrum. The pool was drained six seconds after activation through repeated deposit/withdraw cycles. This is a code-level bug — the kind audits are designed to catch.
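A rounding drain of this shape can be sketched abstractly. This is not Radiant's actual rayDiv arithmetic — the functions and numbers below are illustrative assumptions — but it shows how share math that rounds in the caller's favor, combined with an inflated index on a nearly empty market, lets repeated cycles extract value:

```python
def to_shares(assets: int, index: int) -> int:
    # Rounds UP: a fraction of a share in the caller's favor.
    return (assets + index - 1) // index

def to_assets(shares: int, index: int) -> int:
    return shares * index

index = 2_000_000          # inflated liquidity index on a fresh market
pool = 10_000_000          # victim liquidity sitting in the pool

profit = 0
for _ in range(5):         # repeated deposit/withdraw cycles
    deposit = 1                          # dust-sized deposit...
    shares = to_shares(deposit, index)   # ...rounds up to one full share
    out = to_assets(shares, index)       # redeems for a full index's worth
    profit += out - deposit
    pool -= out - deposit

print(profit)  # each cycle nets (index - 1) units from the pool
```

Rounding direction is exactly the kind of property audits and fuzzers are built to check, which is why this incident — unlike the October one — sits squarely inside traditional audit scope.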

The October exploit was a social engineering attack attributed to North Korean group UNC4736 (Citrine Sleet). A Radiant team member received a spoofed Telegram message from someone impersonating a former contractor. The attached file deployed INLETDRIFT macOS malware, compromising the signer's device. Three of eleven multisig keys were obtained, enough to authorize transactions.

One exploit was a code bug. The other was a business process failure — specifically, a multisig threshold (3 of 11) that was too low and an operational security process that did not prevent social engineering. An audit that reviewed only the smart contracts would have been relevant to the first incident and irrelevant to the second.

Sources: Halborn analysis, Rekt, CoinDesk

WazirX

$234.9 Million — July 18, 2024

The attacker manipulated WazirX's Liminal custody interface so that multisig signers approved a transaction that appeared benign but contained a malicious payload replacing the wallet's smart contract implementation. The smart contracts were not the vulnerability. The signing interface was.

Attribution: Lazarus Group, confirmed by a joint US/Japan/South Korea statement.

Sources: Halborn analysis

Nomad Bridge

$190 Million — August 1, 2022

Audited by Quantstamp. The critical vulnerability was introduced in a code change on May 26, during the audit period, and deployed on June 21. A routine upgrade set the trusted root to 0x00, causing the process() function to accept any message as valid.
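The failure mode reduces to a few lines. This is a simplified sketch, not Nomad's Solidity: unproven messages map to a zero root by default, and the upgrade marked the zero root as trusted, so the acceptance check approved messages that had never been proven.

```python
ZERO_ROOT = b"\x00" * 32

confirmed_roots = {ZERO_ROOT}    # the fatal initialization from the upgrade
message_roots = {}               # proven message -> root; empty by default

def acceptable_root(root: bytes) -> bool:
    return root in confirmed_roots

def process(message: bytes) -> bool:
    # An unproven message falls through to the zero root...
    root = message_roots.get(message, ZERO_ROOT)
    # ...which the upgrade made acceptable.
    return acceptable_root(root)

# Any arbitrary, never-proven message passes:
assert process(b"withdraw everything to attacker")
```

This is why the exploit was famously copy-pasteable: anyone could replay the transaction with their own message, because every message validated.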

Only 18.6% of the critical contract file (Replica.sol) matched what auditors had reviewed. The deployed code diverged from the audited code.

This is a deployment process failure. The audit was conducted on one version of the code. A different version was deployed. No post-audit verification confirmed that the deployed code matched what was reviewed.
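The missing control is mechanical: hash the audited build artifact and compare it against the bytecode actually on chain. The sketch below is illustrative — the fetch function is a placeholder for a live RPC call such as eth_getCode, the bytes are dummies, and a real comparison must also strip Solidity's appended metadata hash before hashing.

```python
import hashlib

def code_hash(bytecode: bytes) -> str:
    return hashlib.sha256(bytecode).hexdigest()

def fetch_deployed_bytecode(address: str) -> bytes:
    # Placeholder: stands in for eth_getCode(address) against a live node.
    return b"\x60\x80\x60\x40...patched"

# The artifact produced by the build the auditors actually reviewed:
audited_artifact = b"\x60\x80\x60\x40...reviewed"

deployed = fetch_deployed_bytecode("0xReplica")
if code_hash(deployed) != code_hash(audited_artifact):
    print("DRIFT: deployed code does not match the audited build")
```

Run as a release gate, a check like this turns "only 18.6% of the file matched" from a postmortem finding into a blocked deployment.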

Sources: Zellic audit drift analysis, Nomad postmortem

CertiK Audit Cases

Three CertiK-audited protocols suffered rug pulls:

Merlin DEX ($1.8 million, April 25, 2023): A rogue developer used private key privileges to drain liquidity. CertiK had flagged the centralization risk but did not treat it as a blocking finding. CertiK subsequently froze ~$160K and launched a $2 million compensation plan. Source: CoinDesk

Swaprum ($3 million, May 18, 2023): The deployer upgraded the audited MasterChef contract to an unaudited malicious version and drained LP tokens. CertiK had flagged proxy upgradability as a major risk. Funds were laundered via Tornado Cash. Sources: Decrypt, CertiK postmortem

Arbix Finance ($10 million, January 2022): The exploited contract — containing onlyOwner mint functions — was not included in the audit scope. CertiK itself flagged it as a rug pull after the incident. Source: Cointelegraph

The pattern across all three: CertiK's audits identified the technical risk (centralization, upgradability, scope limitations) but treated business-level threats — operator intent, admin key abuse, post-audit contract replacement — as informational rather than critical. The code was reviewed. The business risk was noted and not acted on.

The Comprehension Gap

Across every case above, the failure is the same: the audit evaluated the code without sufficiently evaluating the business process the code implements.

  • Euler: code was correct; the economic interaction between functions was exploitable
  • Bybit: contracts were sound; the operational signing process was compromised
  • Radiant (October): contracts were sound; the multisig threshold and social engineering controls were inadequate
  • WazirX: contracts were sound; the custody interface was compromised
  • Nomad: audited code was sound; deployed code was different
  • CertiK cases: code risks were identified; business risks were deprioritized

An auditor who does not understand how a lending protocol calculates liquidation cannot identify when that calculation is economically exploitable. An auditor who reviews a smart contract but not the custody infrastructure, the multisig configuration, the deployment pipeline, or the governance mechanism is examining one component of a system and calling it security.

The 80.5% of losses that came from off-chain attack vectors in 2024 cannot be addressed by code review alone. The business process — key management, signing workflows, deployment verification, operational security — is where the largest losses occur.

Firm Comparison

The following comparison evaluates six firms with emphasis on business process comprehension. Data is from published reports, disclosed methodologies, public tooling, and market feedback.

| Firm | Approach | Business Process Depth | Services | Turnaround | Post-Engagement |
|------|----------|------------------------|----------|------------|-----------------|
| SigIntZero | Manual review, static analysis, fuzzing — led by architecture and business logic assessment | Architecture review and threat modeling before code audit; evaluates economic design, governance mechanics, operational risk | Audit, architecture review, technical due diligence, compliance advisory | 2-4 weeks | Ongoing advisory |
| Trail of Bits | Custom tooling (Slither, Echidna), deep manual review | Strong technical depth; primarily code-focused | Audit, tool development, research | 4-8 weeks | Tool access |
| OpenZeppelin | Automated + manual, invariant testing | Deep protocol-level understanding for EVM systems | Audit, Defender platform, ZK audits | 4-6 weeks | Defender platform |
| CertiK | Formal verification, AI scanning, three-tier review | Broad coverage; business risks documented as informational rather than critical (see Merlin, Swaprum, Arbix cases above) | Audit, Skynet monitoring, KYC | 2-4 weeks | Skynet alerts |
| Consensys Diligence | Mythril (symbolic execution), Harvey (fuzzer), Scribble annotations | Strong EVM mechanism understanding; narrower chain scope | Audit, fuzzing, tooling | 4-8 weeks | MythX access |
| Halborn | Audit + penetration testing + infrastructure + red team | Broadest operational scope; includes off-chain attack surfaces | Full-stack: contracts, pen testing, red team | 2-4 weeks | Retesting |

What Each Firm Does Well

Trail of Bits built the most widely used static analyzer (Slither) and property-based fuzzer (Echidna) in the industry. Their audit methodology combines proprietary tooling with thorough manual review. Trade-off: smaller team capacity, longer timelines. Strength is deep code-level analysis.

OpenZeppelin maintains the most adopted smart contract library in existence (OpenZeppelin Contracts), giving their auditors unmatched familiarity with standard patterns. They have expanded into invariant testing and ZK-proof auditing. Limitation: deep Ethereum specialization means less experience with non-EVM systems.

CertiK operates at the highest volume of any audit firm. Founded by Yale and Columbia researchers, formal verification is their stated differentiator. The scale creates a documented tension: the Merlin DEX, Swaprum, and Arbix cases demonstrate that business-level risks are identified but not weighted as blocking findings. This has generated sustained industry criticism.

Consensys Diligence brings EVM depth through Mythril (symbolic execution) and Harvey (bytecode-level fuzzer). Scribble annotations allow teams to formalize business rules and verify them against code — the closest any competitor's tooling comes to business logic verification. Limitation: Ethereum-first.

Halborn has the broadest operational scope. Combining smart contract audit with penetration testing, infrastructure security, and red team exercises means they examine off-chain attack surfaces that code-only auditors miss. Their published analyses of major hacks (Ronin, Radiant, WazirX, Bybit) demonstrate strong understanding of operational failure modes.

SigIntZero leads engagements with architecture review and business logic assessment before code audit begins. The methodology is designed around the comprehension gap identified in this report: understanding what the system does as a business, mapping where that process can be manipulated, and then verifying the code against that threat model. The service extends to technical due diligence and compliance advisory — addressing the full system lifecycle rather than a code snapshot.

Competitive Audits and Bug Bounties

Traditional firm audits are not the only option. Competitive platforms surface vulnerabilities that single-team engagements miss.

Immunefi has paid over $100 million in bounties across 3,000+ reports (source: The Block), with $163 million currently available. The platform functions as continuous adversarial testing.

Code4rena paid approximately $4.8 million to wardens in 2023, running contests where hundreds of independent researchers examine the same codebase simultaneously.

Sherlock combines fixed-pay lead auditors with contest pools. Notably, Sherlock provided coverage for Euler Finance and paid a $4.5 million claim after the exploit — demonstrating accountability that traditional audits do not provide.

The strongest security programs layer approaches: a traditional audit for systematic business process review, followed by a competitive audit or bounty program for adversarial stress testing.

What an Audit Costs

| Complexity | Price Range | Source |
|------------|-------------|--------|
| Simple token contracts (ERC-20, ERC-721) | $5,000-$15,000 | Softstack 2025 |
| Protocol-level systems (lending, exchange, yield) | $40,000-$100,000 | Softstack 2025 |
| Enterprise / cross-chain infrastructure | $100,000-$200,000+ | Softstack 2025 |
| Auditor day rate | $500-$1,200 per day | Softstack 2025 |

Published "starting from $5K" pricing typically excludes remediation review and re-audit. Budget for initial audit, remediation verification, and at minimum one annual re-engagement.
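A rough first-year budget can be derived from the ranges above. The remediation and re-audit multipliers below are assumptions for illustration, not published rates:

```python
def first_year_range(audit_low: int, audit_high: int,
                     remediation_factor: float = 0.25,  # assumed: ~25% of audit cost
                     reaudit_factor: float = 0.5):      # assumed: ~50% for one re-engagement
    """Total first-year security spend as (low, high) given an audit quote."""
    total = lambda base: base * (1 + remediation_factor + reaudit_factor)
    return total(audit_low), total(audit_high)

# Protocol-level system from the table: $40k-$100k base quote
low, high = first_year_range(40_000, 100_000)
print(f"${low:,.0f} - ${high:,.0f}")  # $70,000 - $175,000
```

Whatever multipliers a team assumes, the point stands: the headline quote is a floor, not the budget.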

The security audit market is valued at approximately $5 billion in 2025, projected to grow at 57-66% CAGR through 2034 (Fortune Business Insights).

Regulation

The EU's Markets in Crypto-Assets (MiCA) regulation, effective 2025, introduces smart contract enforceability requirements that move auditing from voluntary best practice to compliance obligation (Hacken analysis). Protocols serving European markets need audit documentation that evaluates business logic integrity, not only code correctness — which favors firms providing compliance advisory alongside technical audit.

How to Evaluate an Audit Firm

Ask how the firm learns your business before reviewing your code. The first question should not be "send us the repo." It should be "walk us through how the system works, who uses it, what happens if each component fails, and who holds the keys." A firm that starts with code and never models the business process will produce findings about code while missing the operational and economic threats that cause 80.5% of losses.

Demand methodology transparency. Trail of Bits publishes their tooling. CertiK documents their three-tier process. Consensys Diligence open-sources Mythril. Firms that cannot articulate their methodology at this level of specificity are selling a credential, not a service.

Evaluate scope against your actual threat model. If your system involves multisig governance (Halborn found only 19% of exploited protocols used multisig), oracle dependencies, upgradeable proxies, or external custody infrastructure, a pure code review covers a fraction of the attack surface.

Check post-engagement support. Nomad Bridge was exploited because deployed code diverged from audited code — 18.6% match. Radiant Capital was exploited months after its first breach for an entirely different reason. Ask whether the firm offers ongoing advisory, deployment verification, re-engagement for upgrades, and incident response.

Consider layering audit approaches. A traditional audit for systematic coverage, then a competitive audit (Code4rena, Sherlock, Immunefi contest) for adversarial diversity. Sherlock's $4.5 million Euler payout demonstrates that competitive audit platforms provide accountability traditional audits do not.

Review the firm's track record on both hits and misses. Every major firm has audited protocols that were subsequently exploited. The relevant question is whether the exploit was inside or outside the audit scope — and whether the firm evolved its approach in response.

Conclusion

The data tells a clear story. Audits work: 80% of the largest hacks hit unaudited protocols, and audited protocols account for only 10.8% of losses. But when audited protocols do get exploited, the cause is consistently the same — the audit evaluated the code without understanding the business process it implements.

The 80.5% of 2024 losses that came from off-chain attack vectors cannot be addressed by code review. The business process — how keys are managed, how transactions are signed, how governance decisions are executed, how code moves from audit to deployment — is where the largest losses occur.

For teams evaluating security partners, the question is not which tools a firm runs on your code. It is whether the firm understands your business well enough to know where the real threats are. SigIntZero is built on that principle.


SigIntZero provides security audits, architecture reviews, technical due diligence, and compliance advisory for protocols and distributed systems worldwide. Contact us to discuss your security requirements.

Alex Rybalko
Co-Founder of SigIntZero. Security architecture and threat modeling for protocols and distributed systems.