Choosing the wrong audit firm isn't a minor procurement mistake. It's a decision that can cost your protocol, and your users, hundreds of millions of dollars.
Euler Finance lost $197 million in March 2023 despite ten audit engagements across six firms. The exploited function was in scope for one of them. Nomad Bridge lost $190 million because the deployed code diverged from what auditors reviewed. Both protocols did what the industry told them to do. They still got drained.
This keeps happening. In September 2025, Bunni DEX lost $8.4 million to a rounding error in its withdrawal function. Trail of Bits had flagged precision issues in January. Cyfrin found 50+ issues in June and recommended more fuzz testing before scaling. Bunni ignored both, scaled TVL from $2.4M to $23.9M overnight, and the protocol died. Two months later, Yearn Finance lost $9 million from a legacy stableswap pool that remained live on-chain after the protocol had moved on. The cached storage system desynchronized from the supply counter. An attacker deposited 16 wei and minted 235 septillion tokens.
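The Yearn failure is easier to see in miniature. Here is a minimal sketch, in Python rather than Solidity, of how a desynchronized accounting cache inflates share minting. All names and numbers are illustrative, not Yearn's actual storage layout:

```python
# Sketch: share price is computed from a cached balance. If that cache
# drifts far below the real supply counter, dust deposits mint
# astronomically many shares. Illustrative only.

class Pool:
    def __init__(self, total_supply: int, tracked_balance: int):
        self.total_supply = total_supply        # LP token supply counter
        self.tracked_balance = tracked_balance  # cached asset balance

    def deposit(self, amount: int) -> int:
        # Shares are priced off the cached balance, not reality.
        shares = amount * self.total_supply // self.tracked_balance
        self.total_supply += shares
        self.tracked_balance += amount
        return shares

# Healthy pool: supply and cache agree.
healthy = Pool(total_supply=10**24, tracked_balance=10**24)
print(healthy.deposit(16))   # 16 wei -> 16 shares, as expected

# Desynced pool: cache collapsed to near zero, supply counter did not.
broken = Pool(total_supply=10**24, tracked_balance=1)
print(broken.deposit(16))    # 16 wei -> 1.6e25 shares
```

The arithmetic is correct line by line. The bug is that nothing keeps `tracked_balance` and `total_supply` consistent, which is exactly the kind of invariant a code-level scan won't flag.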
Our analysis of the 100 largest DeFi hacks found that 20% of exploited protocols had been audited. $10.77 billion in total value lost. Audits reduce risk. But they're not all created equal, and the 20% that were audited and still hacked share a pattern.
This post lays out that pattern and a framework for avoiding it. If you want the data-heavy comparison of specific firms, see our audit firm comparison analysis. What follows is the process for assessing any firm, including us, before you sign.

Why the Firm Matters More Than the Audit
The industry still frames audits as binary: audited or not audited. That framing is dangerously incomplete. The real question isn't whether a protocol was audited. It's whether the firm had the right specialization, methodology, and scope for the specific risks that protocol faces.
Euler wasn't under-audited by volume. Ten engagements. The exploited function, donateToReserves(), was syntactically correct. The vulnerability was in how it interacted with the lending and liquidation mechanism: an economic exploit that required understanding the business model, not just the code. Sherlock paid a $4.5 million claim on the coverage gap.
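The Euler failure mode generalizes: a state-changing function that skips the solvency check every other function performs. A hedged Python sketch (names, numbers, and mechanics are illustrative, not Euler's actual code):

```python
# Sketch: one function mutates collateral without the health check
# that guards every other collateral-reducing path. Illustrative only.

class LendingAccount:
    def __init__(self, collateral: int, debt: int):
        self.collateral = collateral
        self.debt = debt

    def check_health(self):
        # Every function that reduces collateral should call this.
        if self.collateral < self.debt:
            raise RuntimeError("operation would leave account insolvent")

    def withdraw(self, amount: int):
        self.collateral -= amount
        self.check_health()          # withdraw is guarded

    def donate_to_reserves(self, amount: int):
        self.collateral -= amount    # BUG: no check_health() call
        # Syntactically correct; the exploit lives in what the
        # function *doesn't* do after changing state.

acct = LendingAccount(collateral=100, debt=80)
acct.donate_to_reserves(50)          # succeeds, account now underwater
print(acct.collateral < acct.debt)   # True: ready for self-liquidation
```

Each function is individually correct. The vulnerability only appears when you reason about which invariants must hold across the whole protocol, which is why economic review is a distinct skill from code review.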
Nomad is even more instructive. Quantstamp audited the code. Then a routine upgrade deployed a version where only 18.6% of the critical contract matched the audited code. A trusted root was set to 0x00, and the verification function accepted any input as valid. The audit may have been competent. The deployment pipeline made it irrelevant.
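The Nomad bug class is worth seeing directly: the default value for an unproven message (zero) was also initialized as a trusted root, so any message passed verification. A simplified sketch, not Nomad's actual code:

```python
# Sketch: an unproven message defaults to the zero root, and a bad
# initialization marked the zero root as trusted. Illustrative only.

ZERO_ROOT = b"\x00" * 32

class Replica:
    def __init__(self):
        self.trusted_roots = set()
        self.message_roots = {}          # message -> proven root

    def initialize(self, committed_root: bytes):
        # BUG: the upgrade initialized with committed_root = 0x00,
        # marking the "unproven" sentinel value as trusted.
        self.trusted_roots.add(committed_root)

    def process(self, message: bytes) -> bool:
        # Unproven messages default to the zero root...
        root = self.message_roots.get(message, ZERO_ROOT)
        # ...which the broken initialization made acceptable.
        return root in self.trusted_roots

replica = Replica()
replica.initialize(ZERO_ROOT)        # the fatal deployment config
print(replica.process(b"never-proven spoofed withdrawal"))  # True
```

Notice that the bug lives in the interaction between a default value and a deployment parameter. An audit of the code alone, without the deployment pipeline in scope, can pass this cleanly.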
Bunni shows the third failure mode: auditors flag the risk, team ignores the recommendation. Trail of Bits warned about arithmetic precision. Cyfrin said to fuzz more before scaling. The team scaled anyway. The bug was exactly the type both firms had warned about.
These aren't edge cases. When an audited protocol gets hacked, the cause is almost never "the auditors were bad at reading Solidity." It's scope gaps, deployment drift, ignored recommendations, or business logic that fell outside the review. Choosing a firm that understands these boundaries, and is honest about them, is the most important decision in the audit process.
The Evaluation Framework
Ten criteria. Not all carry equal weight for every project, but skipping any of them creates blind spots.
Before we get into detail, here's the checklist version. Use it as a screening tool. If a firm fails on more than two or three, that should raise concerns.
- Can the firm explain their methodology in specific, technical detail?
- Do they have demonstrated experience in your language (Solidity, Rust, Move)?
- Do they have experience auditing your protocol type (lending, DEX, bridge)?
- Can they tell you who specifically will review your code?
- Do they publish audit reports you can review independently?
- Is the scope definition precise, with explicit inclusions and exclusions?
- Do they communicate findings during the engagement, not just at the end?
- Is re-audit of remediated code included?
- Is the proposed timeline realistic for your codebase size?
- Do they offer post-audit support or ongoing engagement?
Now the detail behind each one.
1. Methodology Transparency
The first question: how do you actually audit? Not "we use manual review and automated tools." That's a non-answer. What static analysis tools? What fuzzing frameworks? How do they handle business logic versus code-level scanning? What does triage look like?
A firm that can walk you through their process in detail understands what they're doing well enough to explain it. A firm that treats it as a black box is either protecting a competitive advantage or hiding the absence of a rigorous process. More often it's the second.
2. Language and Chain Specialization
Solidity, Rust, Move, and Cairo aren't interchangeable skill sets. Solidity's reentrancy patterns don't exist in Rust's ownership model. Solana's account model introduces entirely different attack surfaces around account validation and cross-program invocation.
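To make the point concrete, here is the classic Solidity reentrancy shape simulated in Python: an external call hands control to the caller before state is updated. This attack surface simply doesn't exist in Rust's ownership model. A sketch with illustrative names:

```python
# Simulating Solidity-style reentrancy: withdraw() pays out via an
# external callback BEFORE zeroing the balance, so the callback can
# re-enter and withdraw again. Illustrative sketch only.

class Vault:
    def __init__(self):
        self.balances = {}

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def withdraw(self, who):
        amount = self.balances.get(who, 0)
        if amount > 0:
            who.receive(amount)        # external call first...
            self.balances[who] = 0     # ...state update too late

class Attacker:
    def __init__(self, vault):
        self.vault = vault
        self.stolen = 0

    def receive(self, amount):
        self.stolen += amount
        if self.stolen < 3 * amount:   # re-enter twice more
            self.vault.withdraw(self)  # balance not yet zeroed

vault = Vault()
attacker = Attacker(vault)
vault.deposit(attacker, 100)
vault.withdraw(attacker)
print(attacker.stolen)                 # 300: one deposit, drained thrice
```

An auditor who spends their days in Rust or Move may know this pattern academically; an auditor who lives in Solidity spots the misordered state update on sight. That's the difference specialization buys you.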
Ask specifically: who on your team will review my code, and what's their experience in my language and chain? Multi-chain firms can be an advantage, especially if your protocol spans Solidity and Rust, but only if they have dedicated specialists for each, not generalists stretching across both. The red flag isn't a firm that covers multiple languages. It's a firm where the same auditors cover all of them.
3. Team Composition
The audit industry has a principal-agent problem. A firm's reputation is built on its best auditors, but your code may be reviewed by someone who joined three months ago.
You need to know: who specifically will review my code? What's their background? Have they audited similar protocols? Some firms assign auditors based on availability. The best firms match expertise to project type. A lending protocol should be reviewed by someone who's audited lending protocols before, not whoever has capacity this week.
4. Track Record Verification
Any firm can list logos on their website. Can you independently verify their work? Do they publish complete reports with severity-rated findings?
Check whether protocols they've audited have been exploited. Not as a disqualifier (Euler had six firms) but to see how the firm communicates scope limitations and whether they flag risks that later become exploits. Bunni's auditors flagged the exact risk category. That's actually a mark in their favor. The decision to scale before addressing the findings was a protocol-side call.
If a firm can't provide references or public reports, that's a significant data gap.
5. Scope Definition
One of the most common failure modes. Euler had ten engagements, but the exploited function was in scope for one. Nomad's post-audit deployment diverged from the reviewed code.
Before signing: which contracts are in scope? Which functions? Is the deployment pipeline reviewed? Are external dependencies included or excluded? What about cross-protocol interactions?
A good firm pushes back on vague scope. They ask clarifying questions. They document what's included, what's excluded, and the security implications of those exclusions. A firm that accepts "audit everything" without questions isn't being accommodating. They're being imprecise.
6. DeFi and Protocol-Specific Expertise
If you're building a DeFi protocol, you need auditors who understand flash loan exploitation, oracle manipulation, MEV, governance attacks, and economic model failures. These aren't generic smart contract bugs.
Test them: ask the firm to explain an exploit relevant to your protocol type. If you're building a lending protocol, ask them to walk through Euler. If you're building a DEX, ask about the Bunni rounding attack. Their depth in that conversation tells you more than any marketing page.
7. Communication During Engagement
Some firms are black boxes: submit code, wait four weeks, receive a PDF. Others surface findings as they go and flag critical issues immediately.
If a critical vulnerability is found on day three of a four-week audit, you need to know on day three. Early communication lets you start remediation in parallel. Ask: how are findings communicated? What's the update cadence? Is there a shared channel for real-time questions? What's the escalation process for critical findings?
8. Re-Audit Policy
Every audit produces findings. Findings require fixes. Fixes require verification.
Some firms include re-audit in the price. Others charge separately. Some have no formal process at all. You need to know: what does re-audit cover? Just the changed lines, or the surrounding context? What's the turnaround? Fixes frequently introduce new edge cases, so reviewing only the diff isn't enough.
9. Timeline Realism
A rushed audit misses bugs. If a firm promises to audit 10,000 lines of complex DeFi code in five days, they're either staffing it heavily (ask how many auditors) or cutting corners.
Rough benchmarks: 1,000-2,000 lines of focused Solidity takes one to two weeks. A complex multi-contract protocol with 5,000-10,000 lines takes three to five weeks. If their timeline is dramatically shorter, ask what they're trading off.
Be honest with yourself too. If you're launching in two weeks and need an audit now, you're in a difficult position. The best outcome is a firm that's transparent about what a compressed timeline can and cannot cover. For more on how timelines affect pricing, see our audit cost guide.
10. Post-Audit Relationship
Code changes after audit. Features get added. Dependencies update. Nomad got hacked because post-audit changes weren't re-verified. Yearn's legacy pool sat on-chain long after the protocol had evolved past it.
Ask whether the firm offers ongoing engagement: retainer-based review, continuous monitoring, or a clear process for re-engaging when your codebase evolves. A firm that views the engagement as a one-time transaction isn't thinking about your security the way you need them to.
When to Walk Away
These aren't judgment calls. If you see any of the following, walk away.
"We guarantee no vulnerabilities." No honest auditor says this. The state space of a non-trivial contract is enormous. Formal verification can prove specific properties but not the absence of all possible exploits, especially economic attacks that depend on context. A firm making this claim is either lying or doesn't understand what an audit delivers.
No public reports. Some engagements have legitimate confidentiality constraints. But a firm with zero public reports across their entire history has no verifiable track record.
Flat pricing with no scoping call. A legitimate engagement requires scoping: understanding the codebase, its complexity, its dependencies, the specific risk areas. If a firm quotes a price without reviewing your code, they're not pricing an audit. They're pricing a commodity.
Can't explain their methodology. The tools are open source (Slither, Mythril, Echidna, Foundry). What matters is how they're applied, in what order, and how findings are validated. If a firm can't articulate this, they're not rigorous.
Outsourcing to anonymous reviewers. Competitive platforms like Code4rena and Sherlock use this model productively. But if you're paying for a firm engagement and they route your review to unvetted contractors, you're not getting what you're paying for. Ask directly: is my code reviewed by your employees?
Questions to Ask Before Signing
These separate a team doing due diligence from one checking a box. Ask them in your scoping call.
"Who specifically will review my code?" Names and backgrounds. If the firm won't commit to specific reviewers, ask why.
"What tools do you use alongside manual review?" Specific tools (Slither, Foundry, Echidna, Certora Prover, custom tooling), not generic categories. Ask what their tools don't catch, and how they compensate.
"How do you handle findings during the audit?" You want immediate escalation of critical findings, not everything bundled in the final report.
"What does your re-audit process look like?" How many rounds? Turnaround time? Same auditor or different? Changed lines only, or surrounding context?
"Can I speak to a previous client?" Any confident firm provides references.
"What is explicitly out of scope, and what are the security implications?" Forces the firm to be transparent about what they won't cover and the risk of those exclusions.
Solo Auditor vs. Firm
Independent security researchers from competitive audit platforms have created a genuine alternative. The decision isn't about quality in the abstract. It's about fit.
Solo auditors make sense when: the codebase is focused (under 2,000 lines), the auditor has specific verifiable expertise in your protocol type, budget is constrained, you need speed, or you're seeking a second opinion after a firm engagement.
Firms make sense when: the codebase is large or architecturally complex, multiple specializations are needed (e.g., Solidity plus Rust), you need structured deliverables, accountability matters, or the protocol manages significant TVL.
The strongest approach for high-value protocols is layered: a firm engagement for primary coverage, a solo auditor or competitive audit for a second perspective. Different reviewers catch different things.
AI Tooling in Audits
Smart contract deployments hit 8.7 million in Q4 2025. Code volume has outpaced the industry's capacity to review it manually. AI-assisted tooling is the only realistic way to close that gap.
What matters is how it's integrated. AI as a full replacement for human review is security theater. Current models identify known patterns with reasonable accuracy, but they can't reason about business logic, economic attacks, or novel exploits that require understanding why a protocol exists. Our breakdown of Claude Code Security vs Codex Security covers this in detail: the best AI scanners still produce secure code roughly half the time.
AI as augmentation is different. Automated analysis handles pattern detection, coverage mapping, and known vulnerability scanning at machine speed. Human auditors focus on what they do best: business logic, economic interactions, adversarial scenarios, the gap between what code does and what it should do.
When evaluating a firm's AI claims, one question: does the AI replace human review, or enhance it?
How to Decide
The framework above is ten criteria, but the core is three things.
Honesty about limitations. The best firms tell you what they can't do. They define scope precisely. They explain what their methodology doesn't cover. They don't promise zero vulnerabilities. This is the strongest signal that a firm takes security seriously rather than treating audits as a revenue line.
Depth where it counts. The auditors reviewing your code need deep expertise in your specific language, chain, and protocol type. Whether that comes from a specialist firm or a multi-chain firm with dedicated specialists per language, the point is the same: the person reading your Rust code should live and breathe Rust.
Process rigor. Methodology, communication, re-audit, scope definition, post-audit support. Not exciting topics. They're the difference between an audit that reduces your risk and one that produces a PDF you can show investors.
Use the checklist. Ask the questions. Verify the track record independently. If you want to see how we approach any of this, we'll walk you through it.