
Smart Contract Audit Checklist: What to Prepare Before Your Engagement

Dmitry Serdyuk, Chief Digital Officer (CDO)

Updated on March 20, 2026

Most audit delays have nothing to do with the auditors. They start on day one, when a team submits a repository that doesn't compile, points to documentation that doesn't exist, and expects the engagement to stay on schedule anyway.

We've seen it enough times to know the pattern. A team spends months building a protocol, rushes to book an audit before a launch window, then loses a week (or more) because the codebase wasn't actually ready for external review. The audit clock is ticking. The auditors are billing. And instead of hunting for vulnerabilities, everyone's troubleshooting build failures and chasing down missing context.

This is the smart contract audit checklist we wish every team had before their engagement starts. It's the difference between a smooth, high-signal audit and one that burns time on avoidable friction. If you're preparing for a smart contract audit, work through this before your auditors touch the code.


TL;DR: The Complete Pre-Audit Checklist

Bookmark this. Check every box before your engagement kicks off. Each item is covered in its own detailed section below.

Code Readiness

  • Code freeze enforced, no feature pushes after scope lock
  • Git repository access granted with specific commit hash for audit scope
  • All dependencies pinned to exact versions (no ^ or ~ ranges)
  • Build compiles on a clean machine with documented instructions
  • Dead code, unused imports, and commented-out blocks removed

Documentation

  • Architecture overview explaining what the protocol does
  • User stories for critical flows (deposit, withdraw, liquidate, etc.)
  • Contract interaction diagram showing call flows between contracts
  • Known risks, trust assumptions, and design trade-offs documented
  • Previous audit reports shared (if any exist)
  • Deployment configuration documented (constructor args, admin roles, proxy setup)

Testing

  • Unit test suite passes cleanly
  • Integration tests cover critical user flows
  • Test coverage report generated (80%+ on critical contracts)
  • Fuzzing results included if available
  • Known failing tests documented with explanations

Access & Communication

  • Dedicated technical point of contact assigned
  • Shared communication channel set up (Telegram, Slack, Discord)
  • NDA executed before any code sharing
  • Timeline with milestones agreed upon
  • Severity classification framework aligned

Scope Definition

  • Exact files and contracts in scope listed
  • Lines of code count provided
  • External dependencies and integrations documented
  • Target chains and networks specified
  • Upgradeability patterns identified (proxy, UUPS, diamond, etc.)

If you checked everything above, you're ahead of 90% of teams we've worked with. If not, keep reading.


Code Readiness

This is where most teams underestimate the work required. Your codebase needs to be in a state where someone with zero context about your project can clone the repo, run the build, and get a working compilation on the first try.

Enforce a Code Freeze

The single most common source of audit friction: teams keep pushing code after the audit scope has been defined.

  • Stop all feature development on the audited contracts once scope is locked. Bug fixes only, and even those should be coordinated with your auditors.
  • Tag the exact commit that represents the audit scope. This is the immutable reference point. Every finding, every line reference, every severity classification ties back to this commit.
  • Communicate the freeze internally. Every developer on the team needs to know: do not merge to main until the audit is complete. If your team pushes a refactor mid-engagement, you've just invalidated work your auditors already completed.
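The tag-and-record step takes two git commands. The sketch below spins up a throwaway repository purely so the example is self-contained; in practice you run the `tag` and `rev-parse` lines in your own repo, and the tag name is just an illustrative convention:

```shell
# Sketch: lock the audit scope to one commit with an annotated tag.
# The temp repo and the tag name (audit-scope-v1) are illustrative only.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "chore: final pre-audit state"
# The annotated tag is the immutable reference point for every finding
git -c user.name=dev -c user.email=dev@example.com \
    tag -a audit-scope-v1 -m "Audit scope lock"
scope_commit="$(git rev-parse "audit-scope-v1^{commit}")"
echo "audit scope commit: $scope_commit"
```

Share both the tag name and the full commit hash with your auditors; every line reference in the final report should resolve against that hash.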

A code freeze isn't bureaucratic overhead. It's the foundation that makes the entire engagement tractable. Without it, auditors are chasing a moving target, and your final report may not reflect the code you actually deploy.

Repository Access and Build Integrity

  • Grant repository access early, at least 2-3 days before the engagement start date. Don't let day one be spent waiting on GitHub invitations.
  • Pin every dependency to an exact version. No ^2.0.0. No ~1.3.0. No latest. If your package.json, Cargo.toml, or foundry.toml has floating version ranges, an auditor's build may pull different dependency versions than yours. That's not a theoretical problem. It creates real discrepancies in compiled bytecode and behavior.
  • Test the build on a clean machine. Clone your repo into a fresh directory. Follow your own build instructions. If it doesn't compile, fix it before submission. This catches implicit dependencies on local environment variables, globally installed tools, or undocumented setup steps.
  • Include a working build script or Makefile. Something as simple as make build or forge build that handles the full compilation. Don't make auditors reverse-engineer your toolchain.
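A build entry point can be as small as this Makefile sketch (Foundry assumed; the target names are a convention, so adapt the recipes to your own toolchain):

```make
# Minimal sketch: one command each for build, test, and coverage.
.PHONY: build test coverage
build:
	forge build
test:
	forge test
coverage:
	forge coverage --report summary
```

Recipe lines must be indented with tabs, not spaces. The bar to clear: an auditor clones the repo, runs `make build`, and gets a clean compile with no other setup.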

Clean the Codebase

  • Remove dead code. Unused functions, deprecated modules, experimental branches merged into main. All of it. Dead code is noise that auditors have to evaluate before determining it's not in scope. That's time pulled from actual vulnerability research.
  • Remove unused imports. They clutter the dependency graph and create false signals about what the code actually depends on.
  • Eliminate commented-out code blocks. If it's not active, it shouldn't be in the audit scope commit. Use version control to preserve history; that's what it's for.

Think of it this way: every line in the audit scope commit should be code you intend to deploy. If you wouldn't deploy it, don't make auditors review it.


Documentation

"The code is the documentation" is a statement we hear often. It's also a reliable predictor that the engagement will take longer than planned.

Auditors are not building your protocol from scratch. They need to understand intent so they can spot where the code diverges from it. Without documentation, they're reverse-engineering your design decisions from implementation details.

Architecture Overview

  • Write a plain-language summary of what your protocol does. Two to three paragraphs. What problem does it solve? What are the core user flows? What assets does it custody or manage? This is not a whitepaper. It's an orientation document.
  • Include user stories for critical flows. "As a depositor, I deposit ETH and receive vault shares proportional to my deposit." "As a liquidator, I can liquidate positions below the health factor threshold." These give auditors the intended behavior to test against. When the code diverges from a user story, that's a finding.
  • Describe the system's trust model. Who has admin/owner privileges? What can they do? What's the upgrade path? If there's a multisig, how many signers and what's the threshold? If there's a timelock, what's the delay?
  • List all external protocol integrations. If your contracts interact with Uniswap, Aave, Chainlink, or any other external protocol, document the integration points. Auditors need to understand the trust assumptions you're inheriting from those dependencies.

Contract Interaction Diagram

  • Map out which contracts call which. This doesn't need to be a formal UML diagram. A simple box-and-arrow diagram showing the call relationships between your contracts is sufficient. Tools like Mermaid, draw.io, or even a hand-drawn diagram photographed and included in the repo work fine.
  • Annotate entry points. Which functions are user-facing? Which are admin-only? Which are called by other contracts in the system? This helps auditors prioritize their review, since user-facing entry points with value transfer are the highest-risk surface area.
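As a sketch of what "sufficient" looks like, here is a minimal Mermaid flowchart for a hypothetical vault protocol (all contract names and flows invented for illustration):

```mermaid
flowchart LR
    User -- "deposit / withdraw (user-facing)" --> Vault
    Admin -- "setFee / pause (admin-only)" --> Vault
    Vault -- "pushes funds" --> Strategy
    Strategy -- "reports yield" --> Vault
    Vault -- "reads price" --> Oracle["Chainlink price feed"]
```

GitHub renders Mermaid blocks directly in a repo README, so a diagram like this can live next to the code it describes and stay under version control.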

Known Risks and Previous Work

  • Document known risks and design trade-offs. Every protocol makes trade-offs. If you chose a particular pattern knowing it introduces a risk (e.g., flash loan susceptibility in exchange for composability), document it explicitly. Auditors will find these patterns regardless, but if they're already documented, the team can spend time on unknown risks instead of re-deriving known ones.
  • Share previous audit reports. If you've had prior audits, whether on the same codebase or an earlier version, share them. Include the remediation status of each finding. This gives auditors critical context: what's been reviewed before, what was fixed, and what was accepted as a known risk.
  • Document deployment configuration. Constructor arguments, initialization parameters, admin role assignments, proxy implementation addresses. Anything that configures the contracts at deployment time. Misconfigurations at deployment are a real and common vulnerability class. Auditors can't evaluate this if they don't know the intended configuration.
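A deployment configuration doc can be a short, versioned file. The sketch below is hypothetical: the parameter names, roles, and bracketed placeholders are invented for illustration, not a required schema:

```json
{
  "network": "ethereum-mainnet",
  "constructorArgs": {
    "asset": "<USDC token address>",
    "feeBps": 50
  },
  "roles": {
    "DEFAULT_ADMIN_ROLE": "<3-of-5 multisig address>",
    "PAUSER_ROLE": "<ops address, to be revoked post-launch>"
  },
  "proxy": {
    "pattern": "UUPS",
    "upgradeAuthorization": "multisig plus 48h timelock"
  }
}
```

Whatever the format, the point is that auditors can diff the intended configuration against what your deployment scripts actually set.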

Testing

A test suite isn't just a development tool. It's a specification. When an auditor reads your tests, they're learning what behaviors you consider correct. When they find behaviors that aren't tested, they know where to look for bugs.

Unit and Integration Tests

  • All unit tests must pass. Submitting a test suite with failures signals that either the code or the tests are out of date. Both are problems. Fix them before submission.
  • Include integration tests for critical paths. Deposits, withdrawals, liquidations, governance actions, upgrade flows: whatever the high-value operations are in your protocol, they should have end-to-end tests that exercise the full call chain.
  • Generate a test coverage report. Use forge coverage, Hardhat's coverage plugin, or whatever your framework supports. Aim for 80% line coverage or higher on critical contracts. Low coverage doesn't mean the code is buggy, but it does mean auditors have less confidence in what "correct behavior" looks like, and they'll spend more time establishing baselines that your tests should have established.

Coverage numbers aren't the goal. The goal is that your most important invariants are encoded in tests that auditors can reference and extend.

Fuzzing and Advanced Testing

  • Include fuzzing results if you have them. If you've run Foundry fuzz tests, Echidna campaigns, or formal verification, include the results, configuration, and any properties you tested. This is enormously valuable context. It tells auditors which invariants you've already stress-tested and where the remaining uncertainty lives.
  • Document known failing tests. If a test fails due to a known issue (e.g., a test for an unimplemented feature, a flaky test dependent on block timestamps), document it explicitly with an explanation. Don't leave auditors guessing whether a failing test indicates an undiscovered bug or a known limitation.
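The documentation itself can be a few lines in the repo. The file name and entries below are hypothetical, just to show the level of detail that removes the guesswork:

```markdown
<!-- KNOWN_ISSUES.md (sketch) -->
- `test_withdraw_whilePaused`: fails by design; the pause feature ships
  in v1.1 and the test is written against that spec.
- `test_rewardAccrual_fuzz`: flaky because it asserts on
  `block.timestamp`; rerun with a fixed fuzz seed to reproduce
  deterministically.
```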

Teams that invest in property-based testing and fuzzing before an audit consistently get higher-signal audit reports. The auditors aren't spending cycles on shallow bugs your fuzzer would have caught. They're focused on the logic errors, economic exploits, and design flaws that require human reasoning.


Access & Communication

Audits are collaborative. The quality of the final report depends heavily on how efficiently auditors can get answers to questions about intent, design decisions, and expected behavior.

Set Up the Communication Layer

  • Assign a dedicated technical point of contact. This person needs to be available to answer auditor questions within a reasonable timeframe (ideally same-day for blocking questions). They should understand the codebase deeply enough to explain design decisions, not just route questions to other team members.
  • Create a shared communication channel. Telegram group, Slack channel, or Discord server: whatever your team uses. The channel should include your point of contact, your lead developers, and the auditors. Avoid email for day-to-day audit communication. It's too slow for the density of back-and-forth that a good audit generates.
  • Execute the NDA before sharing code. This seems obvious, but we've seen engagements delayed because the NDA wasn't signed and the legal team needed a week to review terms. Get this done during the scoping phase, well before the audit start date.

Align on Process

  • Agree on a timeline with milestones. At minimum: audit start date, mid-point check-in, initial findings delivery, remediation period, and final report date. Everyone should know these dates before the engagement begins.
  • Align on a severity classification framework. Critical, High, Medium, Low, Informational. These labels should mean the same thing to your team and your auditors. Some firms use specific frameworks (e.g., based on likelihood and impact matrices). Discuss this upfront so there's no friction when findings arrive.
  • Have a remediation plan before findings arrive. Decide in advance: who reviews findings? Who implements fixes? What's the approval process for accepting risks vs. fixing them? Teams that figure this out after the report lands always take longer to ship the remediation commit.

Scope Definition

Scope is the single biggest driver of audit cost and timeline. Ambiguity here cascades into everything else: inaccurate quotes, misallocated auditor time, and findings that reference out-of-scope code.

Define the Boundaries

  • List every file and contract in scope. Not "the src/ directory." Specific file paths. If a file is in the repository but out of scope (e.g., test helpers, deployment scripts, mocks), state that explicitly.
  • Provide a lines-of-code count. Use tools like cloc or solidity-metrics to generate an accurate count of Solidity (or Rust, or Move) lines of code. Exclude tests, interfaces that are just imported from external packages, and deployment scripts unless you want those reviewed too. This number directly informs the engagement estimate.
  • Document external dependencies and integrations. If your contracts call into Uniswap V3 pools, read from Chainlink oracles, or integrate with a particular lending protocol, list them. Auditors need to know which external contracts are trusted and which interactions could introduce risk. This is especially important for DeFi protocols with complex composability.
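cloc or solidity-metrics gives the accurate, language-aware count. As a self-contained sketch of the idea, here is a coreutils approximation that counts non-blank lines in an in-scope directory while excluding tests (the temp directory, paths, and file contents are invented for the example):

```shell
# Sketch: count non-blank Solidity lines in scope, excluding tests.
# The temp directory stands in for your repo; in practice run something
# like `cloc src/ --include-lang=Solidity` instead.
set -e
scope="$(mktemp -d)"
mkdir -p "$scope/src" "$scope/test"
printf 'contract Vault {\n    uint256 public total;\n}\n' > "$scope/src/Vault.sol"
printf 'contract VaultTest {}\n' > "$scope/test/Vault.t.sol"
# Only src/ is in scope; test/ is deliberately excluded from the count
sloc="$(find "$scope/src" -name '*.sol' -exec cat {} + | grep -vc '^[[:space:]]*$')"
echo "in-scope SLOC: $sloc"
```

However you produce it, report the number alongside the scope list so the engagement estimate and the file list can't drift apart.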

Deployment Context

  • Specify target chains and networks. EVM differences matter. A contract that works on Ethereum mainnet may behave differently on Arbitrum, Optimism, Base, or zkSync due to opcode differences, gas semantics, or precompile availability. If you're deploying multichain, auditors need to consider chain-specific edge cases.
  • Identify upgradeability patterns. Transparent proxy, UUPS, Diamond (EIP-2535), Beacon: each pattern introduces its own class of risks. If your contracts are upgradeable, document the pattern, the proxy admin, and the upgrade authorization mechanism. If your contracts are not upgradeable, state that explicitly too. It changes the threat model.
  • Flag any novel or unusual patterns. Custom assembly blocks, unconventional storage layouts, novel economic mechanisms, unusual inheritance hierarchies. Anything that deviates from well-trodden Solidity patterns should be called out. These areas tend to concentrate both bugs and audit attention.

Common Mistakes That Delay Audits

We've compiled these from real engagements. Every item on this list has cost a team at least a week of delay.

Submitting Code That Doesn't Compile

This happens more often than you'd expect. The team's local builds work because of implicit environment dependencies: a globally installed library, an environment variable, a specific Node version. The auditor clones the repo, runs the build command, and gets a wall of errors.

Fix: Test your build in a Docker container or a clean VM before submission. If it doesn't build in isolation, it doesn't build.
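A throwaway Dockerfile is enough to prove the build works in isolation. The image tag below is an assumption (Foundry publishes `ghcr.io/foundry-rs/foundry`); pin whatever compiler and toolchain versions your project actually uses:

```dockerfile
# Sketch: clean-room build check. If this image builds, the repo builds
# without any implicit local state.
FROM ghcr.io/foundry-rs/foundry:latest
WORKDIR /audit
COPY . .
RUN forge build
```

Run `docker build .` from a fresh clone and throw the image away afterward. A failure here means the build depends on something in your local environment that the repo doesn't document.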

Changing Scope Mid-Audit

Adding contracts, removing contracts, or refactoring significant portions of the codebase mid-engagement. Each change invalidates previously completed work. If a scope change is unavoidable, it must come with a timeline extension. Expecting the same delivery date with expanded scope is how you get a shallow audit.

Fix: Lock scope before the engagement starts. If requirements change, negotiate timeline and cost adjustments upfront with your auditors.

No Documentation

"The code is self-documenting" means auditors will spend the first 20-30% of the engagement building the mental model that documentation would have provided for free. That's not a cost they absorb. It's a cost your audit quality absorbs, because that's time not spent finding vulnerabilities.

Fix: At minimum, provide an architecture overview, a contract interaction diagram, and a list of trust assumptions. A few hours of writing saves days of audit time.

Expecting Auditors to Write Your Tests

An audit is a review of your system, not a development engagement. If you submit contracts with zero test coverage and expect auditors to build the test infrastructure, you're paying senior security researchers to do junior developer work, and you're getting less security review in exchange.

Fix: Write your tests first. If you're behind on coverage, focus on the critical paths: value transfers, access control, state transitions, and upgrade flows.

No Remediation Plan

The audit report arrives. Findings are classified. And then nothing happens for two weeks while the team figures out who's responsible for fixes, what the review process is, and whether certain risks should be accepted or mitigated.

Fix: Before the engagement starts, designate a remediation owner, define the fix-review-approve workflow, and pre-allocate development time for the remediation period.


Pulling It All Together

A smart contract audit is one of the highest-leverage security investments a protocol can make. But its value is directly proportional to the preparation that goes into it. Teams that prepare well get audits that find deep, meaningful vulnerabilities. Teams that don't prepare well get audits that spend half their time on surface-level friction.

The checklist above isn't aspirational. It's the baseline. Every item exists because its absence has, in practice, degraded an audit engagement. If your team works through this checklist before your auditors start, you'll get more coverage, deeper analysis, and a final report that actually reflects the security posture of your protocol.

If you're preparing for an audit and want to talk through scope, timeline, or what "ready" looks like for your specific protocol, reach out. We'd rather help you prepare properly than start an engagement that isn't set up to succeed.

For more on evaluating firms, see our guide to choosing an audit firm. For what audits actually cost, see our pricing breakdown.

Dmitry Serdyuk
Chief Digital Officer (CDO)

Full-Stack Operator | Building across security, AI, and digital infrastructure.