Argon2 vs Bcrypt vs Scrypt: Which Password Hash Should You Use?

A user signs up. You take their password, hash it, store the hash. Six months later your database leaks. The question that determines whether your users are merely embarrassed or genuinely compromised is: which hash function did you pick?

If the answer is MD5, SHA-256, or any plain cryptographic hash, attackers with a modern GPU rig will recover most passwords within hours. If the answer is bcrypt with sensible cost, scrypt, or Argon2id, they'll spend weeks and still only recover the weakest ones. Same database, same passwords, very different outcomes — entirely because of which slow hash you reached for.

This post walks through the three serious contenders for password hashing in 2026: bcrypt, scrypt, and Argon2. They all solve the same problem (make password cracking expensive) with different tradeoffs around CPU work, memory pressure, and ecosystem support.

Why Plain Hashes Are Wrong For Passwords

Approximate guess rate on a single high-end GPU (RTX 4090 class):

  SHA-256 (no salt)    ~22 billion / sec
  bcrypt (cost 12)     ~110 / sec
  Argon2id (m=64 MB)   ~6 / sec

Memory-hard schemes drop GPU throughput by 8-9 orders of magnitude versus a plain hash. That's the whole point of slow / memory-hard hashing: shrink the attacker's guess rate from billions per second to single digits.

SHA-256 is fast. That's the entire problem. A modern GPU can compute billions of SHA-256 hashes per second. If your users picked any password from a leaked wordlist — and most of them did — an attacker recovers it before their coffee finishes brewing.

Password hashing functions are deliberately slow. The goal is to make each guess expensive enough that brute force becomes impractical. There are two ways to add cost: CPU work (more iterations) and memory (require holding state in RAM). Modern attackers run on GPUs and ASICs that have lots of compute but limited per-thread memory, so memory-hard functions punish them disproportionately.

If you're still computing sha256(password + salt) somewhere in your stack, stop reading this and go fix that. Then come back and pick one of the three below.

Bcrypt: The Reliable Veteran

Bcrypt landed in 1999 and has been the default answer for password hashing for two decades. It's based on the Blowfish cipher and uses a configurable cost factor (rounds) that controls how many iterations the algorithm runs.

import bcrypt from 'bcrypt';

const hash = await bcrypt.hash(password, 12);
// $2b$12$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy

const ok = await bcrypt.compare(password, hash);

The output encodes everything you need: algorithm version ($2b$), cost ($12$), salt, and hash. No separate salt column required. Storage is fixed at 60 bytes.

$2b$12$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy
└┬─┘└┬┘└──────────┬─────────┘└──────────────┬──────────────┘
 │   │            │                         │
 │   │            │                         hash (31 chars, Blowfish-derived, base64-encoded)
 │   │            salt (128-bit, 22 chars, base64-encoded)
 │   cost (2^12 = 4096 Blowfish rounds)
 algorithm version ($2b$ = bcrypt)

Strengths. Battle-tested, available in every language, dead simple to use. The output format is self-describing so rotating cost factors over time is straightforward.

Weaknesses. Bcrypt is CPU-hard but not meaningfully memory-hard — it uses about 4KB of state, which fits comfortably on every GPU shader. It also has a quirky 72-byte password length limit (longer passwords get truncated silently, which has caused real-world auth bypasses). And bcrypt's cost factor is logarithmic — going from cost 12 to cost 13 doubles the time, so tuning is coarse.
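Because the format is self-describing, checking whether a stored hash needs a cost upgrade takes nothing more than a regex over the stored string. A minimal sketch (the helper name is ours, not part of any bcrypt library):

```javascript
// Pull the cost factor out of a stored bcrypt hash. The $2a/$2b/$2y
// prefix variants all put the two-digit cost between the 2nd and 3rd '$'.
function bcryptCost(hash) {
  const m = /^\$2[aby]\$(\d{2})\$/.exec(hash);
  if (!m) throw new Error('not a bcrypt hash');
  return parseInt(m[1], 10);
}

bcryptCost('$2b$12$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy'); // → 12
```

On login, compare this against your current target cost and rehash with the plaintext you just verified if it falls short.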

If you need a quick way to inspect or generate bcrypt hashes during development, the bcrypt generator and verifier is the fastest way to do it without dropping into a Node REPL.

Scrypt: Memory Hardness, First Attempt

Scrypt arrived in 2009 with a sharper insight: if you force the algorithm to use a lot of memory, GPU attackers lose their advantage. A GPU has thousands of cores but only a few MB of fast memory per core. Demand 64 MB per hash and suddenly each GPU can compute fewer hashes in parallel than a single CPU thread.

import { scrypt, randomBytes } from 'node:crypto';

const salt = randomBytes(16); // store alongside the derived key
scrypt(password, salt, 64, { N: 2 ** 17, r: 8, p: 1 }, (err, derivedKey) => {
  if (err) throw err;
  // derivedKey is a 64-byte Buffer
});

The parameters are:

  • N — CPU/memory cost (must be power of 2). Higher = slower and more RAM.
  • r — block size, multiplies memory use.
  • p — parallelization factor.

A typical config of N=2^17, r=8, p=1 uses around 128 MB of memory per hash. Try that on a GPU with 4 GB of VRAM and you can run 32 hashes in parallel — versus billions for SHA-256.
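That VRAM arithmetic falls out of RFC 7914: scrypt's scratch buffer is N blocks of 128 * r bytes each (p adds parallel lanes that can reuse the buffer sequentially, so it needn't raise peak memory). A quick sanity check:

```javascript
// Peak scrypt memory ≈ 128 * N * r bytes: N blocks of 128 * r bytes each.
const scryptMemoryBytes = (N, r) => 128 * N * r;

scryptMemoryBytes(2 ** 17, 8);           // → 134217728 bytes
scryptMemoryBytes(2 ** 17, 8) / 2 ** 20; // → 128 MiB per in-flight hash
```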

Strengths. Genuinely memory-hard. Standardized in RFC 7914. Available in Node's built-in crypto module — no third-party dependency needed.

Weaknesses. Scrypt's memory usage and CPU usage are coupled. You can't say "spend more time but use the same memory" cleanly. Argon2 fixes this with separate parameters. Scrypt also has known time-memory tradeoff attacks that let dedicated hardware reduce memory at the cost of more time, which slightly weakens the memory-hardness guarantee.

Argon2: The PHC Winner

The Password Hashing Competition, launched in 2013, ran a multi-year evaluation of new password hashing designs. Argon2 won in 2015 and was later standardized as RFC 9106 in 2021.

Argon2 ships in three variants:

  • Argon2d — data-dependent memory access, maximally GPU-resistant, but vulnerable to side-channel attacks if you're hashing something an attacker controls timing on.
  • Argon2i — data-independent access, side-channel resistant, slightly weaker against GPU.
  • Argon2id — hybrid. First half is i, second half is d. This is what you want for password hashing.

import argon2 from 'argon2';

const hash = await argon2.hash(password, {
  type: argon2.argon2id,
  memoryCost: 65536,  // 64 MB
  timeCost: 3,        // 3 iterations
  parallelism: 4,
});

const ok = await argon2.verify(hash, password);

The encoded output looks like:

$argon2id$v=19$m=65536,t=3,p=4$<salt>$<hash>
└───┬────┘└─┬─┘└──────┬───────┘└──┬──┘└──┬─┘
    │       │         │           │      │
    │       │         │           │      base64-encoded hash
    │       │         │           base64-encoded salt
    │       │         m = memory (KB), t = passes, p = parallelism
    │       version 19
    algorithm ($argon2id)

Three independent knobs:

  • memoryCost (m) — KB of memory per hash.
  • timeCost (t) — number of passes over memory.
  • parallelism (p) — threads.

You can tune memory and time independently, which makes it much easier to hit a target performance budget on your hardware.

Strengths. State of the art memory hardness. Side-channel resistant variant available. Independent parameters make tuning easy. This is the hash function recommended by the OWASP cheat sheet and most modern security guidance.

Weaknesses. Newer than bcrypt, so library quality varies by language. The Node bindings (argon2 npm package) require native compilation, which can complicate Docker images and serverless deployments. Argon2 also has more parameters to get wrong — pick weak values and you've got a worse bcrypt.

What OWASP Actually Recommends Today

The current OWASP Password Storage Cheat Sheet ranks the options like this:

  1. Argon2id — preferred. Use m=19456 (19 MB), t=2, p=1 as a minimum baseline.
  2. scrypt — acceptable if Argon2 isn't available. Use N=2^17, r=8, p=1.
  3. bcrypt — acceptable if neither of the above works for your stack. Use cost 10 or higher.
  4. PBKDF2 — only if FIPS compliance forces your hand. Use 600,000+ iterations with HMAC-SHA-256.
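If FIPS really does force PBKDF2 on you, Node's built-in crypto covers it with no extra dependency. A sketch at the OWASP floor (the password literal is a placeholder):

```javascript
import { pbkdf2, randomBytes } from 'node:crypto';

// PBKDF2-HMAC-SHA-256 at the OWASP minimum of 600,000 iterations.
// Unlike bcrypt/argon2 output, nothing here is self-describing, so the
// salt and iteration count must be stored alongside the derived key.
const salt = randomBytes(16);
pbkdf2('placeholder-password', salt, 600_000, 32, 'sha256', (err, key) => {
  if (err) throw err;
  // key is a 32-byte Buffer; persist salt + iterations + key
});
```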

The Latacora "Cryptographic Right Answers" guide reaches the same conclusion in fewer words: scrypt if you can't get Argon2, bcrypt if you can't get either, and never PBKDF2 if you have a choice.

How To Pick In Practice

flowchart TD
  Start([Picking a password hash])
  FIPS{FIPS<br/>required?}
  Native{Can load<br/>native libs?}
  Mem{Can spend<br/>~64 MB / hash?}
  Lambda{Cold-start<br/>memory tight?}
  PBKDF2[PBKDF2<br/>600k+ iterations]
  Argon2[Argon2id<br/>m=65536, t=3, p=4]
  Scrypt[scrypt<br/>N=2^17, r=8, p=1]
  Bcrypt[bcrypt<br/>cost 12+]
  Start --> FIPS
  FIPS -- yes --> PBKDF2
  FIPS -- no --> Native
  Native -- no --> Bcrypt
  Native -- yes --> Mem
  Mem -- no --> Bcrypt
  Mem -- yes --> Lambda
  Lambda -- yes --> Scrypt
  Lambda -- no --> Argon2
  classDef good fill:#1f1f1f,stroke:#4ade80,color:#e4e4e4;
  classDef ok fill:#1f1f1f,stroke:#fb923c,color:#e4e4e4;
  classDef avoid fill:#1f1f1f,stroke:#f87171,color:#e4e4e4;
  class Argon2 good
  class Scrypt good
  class Bcrypt ok
  class PBKDF2 avoid

Most decisions come down to ecosystem and operational constraints, not cryptographic merit.

You're starting a new project on Node, Python, Go, or Rust: use Argon2id. Mature libraries exist for all of them. Don't overthink it.

You have an existing bcrypt deployment: don't migrate just to migrate. Bcrypt at cost 12 is still acceptable for the foreseeable future. Migrate gradually by re-hashing on next login if you want to move forward.

You're running on a constrained environment (embedded, serverless cold starts, mobile): scrypt or bcrypt may be safer choices because they don't require native bindings or much memory by default. Argon2 with 64 MB cost can blow up Lambda cold-start budgets.

You're storing high-value secrets (financial accounts, healthcare): use Argon2id at a higher cost (256 MB memory, 3 iterations) and consider adding a server-side pepper — a secret value mixed into the hash that lives outside the database. If the database leaks but the pepper doesn't, attackers have nothing to crack against.
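One way to layer in a pepper, sketched with names of our own choosing (this is not a prescribed library API): HMAC the password with the secret first, then run the slow hash on the result.

```javascript
import { createHmac } from 'node:crypto';

// PEPPER_KEY lives outside the database: env var here for illustration,
// KMS/HSM or app config in production.
const PEPPER_KEY = process.env.PEPPER_KEY ?? 'dev-only-pepper';

// HMAC the password with the pepper before the slow hash ever sees it.
function pepper(password) {
  return createHmac('sha256', PEPPER_KEY).update(password).digest();
}

// Then: await argon2.hash(pepper(password), { type: argon2.argon2id })
```

A leaked hash table without PEPPER_KEY gives an attacker nothing to verify guesses against.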

Whatever you pick, pair it with strong password requirements. The hash slows down attackers by a constant factor; password entropy slows them down exponentially. The password entropy calculator shows how much guesswork a candidate password actually represents, and the password strength checker catches the obvious problems before users commit them.
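For randomly generated secrets, entropy is just the log of the guess space, which is why passphrases win. Note this formula only holds for random selection; human-chosen passwords carry far less entropy than it suggests:

```javascript
// Entropy of a *randomly generated* secret: the guess space is
// alphabetSize ** length, i.e. length * log2(alphabetSize) bits.
const entropyBits = (length, alphabetSize) => length * Math.log2(alphabetSize);

entropyBits(12, 62);  // 12 random alphanumerics: ≈71.5 bits
entropyBits(6, 7776); // 6 diceware words: ≈77.5 bits
```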

Tuning The Cost Parameter

The right cost setting is whatever makes your login endpoint take 250-500 ms on your production hardware. Faster than that and you're cheaping out. Slower and you've made yourself vulnerable to a trivial DoS — an attacker submits a thousand login attempts and your CPU drowns.

Benchmark on your actual production hardware:

import argon2 from 'argon2';

console.time('hash');
await argon2.hash('test-password', {
  type: argon2.argon2id,
  memoryCost: 65536,
  timeCost: 3,
  parallelism: 4,
});
console.timeEnd('hash');
// hash: 287ms — about right

Re-benchmark every couple of years. Hardware gets faster; your cost parameters need to track. The encoded output format makes this safe — old hashes carry their original parameters, and you upgrade to the new cost on next login.

The Salt Question

All three of these algorithms generate random salts automatically and embed them in the output. Don't second-guess this. Don't reuse salts. Don't derive salts from usernames. Don't implement your own salting scheme. The libraries handle it correctly; your custom version probably won't.

The $2b$12$N9qo8uLOickgx2ZMRZoMye... format mixes algorithm, cost, salt, and hash into a single string. Store that string. That's it. If you need to inspect the components for debugging or migration tooling, a hash generator helps you check input fingerprints, and the bcrypt generator and verifier parses bcrypt output structure directly.

What This Looks Like Wrong

The patterns that lead to compromised credentials are predictable:

  • Plain SHA-256 with no salt. Cracked by rainbow tables, instant.
  • Plain SHA-256 with per-user salt. Better, but still GPU-cracked in hours for weak passwords.
  • bcrypt at cost 4, or PBKDF2 with a token iteration count. Technically a slow hash, practically equivalent to fast hashing. Set the cost properly.
  • Truncating long passwords to 72 bytes silently before bcrypt. Caused real auth bypasses where suffixes didn't matter for password equality.
  • Storing the password and hash both, "in case of recovery." If your design needs to recover the original password, your design is wrong. Use password reset tokens.
  • Encrypting passwords instead of hashing them. Encryption is reversible. That's why we don't use it here.

If you want users to pick stronger passwords without forcing them through composition rules nobody can remember, encourage passphrases. The diceware passphrase generator produces memorable, high-entropy passphrases that beat almost any human-chosen password.

The Practical Takeaway

For new code in 2026: use Argon2id with memoryCost: 65536 (64 MB), timeCost: 3, parallelism: 4 as a starting point. Benchmark on production hardware and tune so login takes 250-500 ms. Pair it with reasonable minimum password entropy (12+ characters or a passphrase). Add a server-side pepper if you're handling sensitive data.

For existing bcrypt deployments: keep bcrypt at cost 12 or higher. Migrate opportunistically on next login if you want to move to Argon2, but it's not urgent — bcrypt isn't broken, just superseded.

The one thing none of these algorithms can save you from is users picking "password123". Get the hash right, then push hard on entropy. The combination is what actually protects accounts when your database eventually leaks — and statistically, eventually it will.

FAQ

Which password hash should I use for a new project in 2026?

Argon2id. It's the OWASP-preferred recommendation, the PHC competition winner, and resists GPU and ASIC attacks better than bcrypt's 4KB-state CPU-only design. Start with m=65536 (64 MB), t=3, p=4 and tune from there. Reach for bcrypt only if your runtime can't load native Argon2 bindings.

Is bcrypt obsolete?

Not obsolete, just superseded. Bcrypt at cost 12 is still secure against current attacks and will be for years. The reason to prefer Argon2id for new code is that bcrypt's 4KB internal state fits comfortably on every GPU shader, while Argon2id's tunable memory hardness genuinely punishes GPU attackers. Existing bcrypt deployments are fine to leave alone.

How much memory should Argon2id use?

OWASP minimum is 19 MB (m=19456). 64 MB (m=65536) is a comfortable default for most web apps. Higher values (256 MB+) make sense for high-value systems but can blow up Lambda cold-starts. The right answer is whatever makes login take 250-500ms on your production hardware while not exceeding your container's memory limit.

Why prefer Argon2id over scrypt?

Scrypt couples its memory and CPU costs together — you can't tune memory and time independently the way Argon2 lets you. It also has known time-memory tradeoff attacks where dedicated hardware can reduce memory at the cost of more compute, weakening the memory-hardness guarantee slightly. Argon2id was specifically designed to fix both.

Is PBKDF2 safe in 2026?

Acceptable for FIPS compliance contexts, but a poor choice otherwise. PBKDF2 is purely CPU-bound with no memory hardness, so it's the easiest of the slow hashes to accelerate on GPUs and ASICs. If FIPS forces your hand, use 600,000+ iterations with HMAC-SHA-256 minimum. Otherwise, pick Argon2id.

Should I add a server-side pepper?

For high-value systems (banking, healthcare), yes — a server-side pepper is a secret mixed into the hash that lives outside the database (e.g., in HSM, KMS, or app config). If the database leaks but the pepper doesn't, attackers can't crack any hashes regardless of password strength. For typical SaaS apps, the operational complexity often outweighs the marginal benefit.

How do I migrate from bcrypt to Argon2id?

Rehash on successful login. When a user authenticates correctly, you have their plaintext password in memory briefly — recompute with Argon2id and update the database. Mark the row's hash format so future logins know which library to use for verification. Over a few months, active users migrate naturally; inactive accounts stay on bcrypt or get force-reset.
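The dispatch can be sketched with the verifiers injected, so it slots in next to the argon2 and bcrypt packages shown earlier (all names here are ours, not a library API):

```javascript
// Route verification by the hash's self-describing prefix during a
// gradual bcrypt → Argon2id migration; rehash only on a correct login.
function makeVerifier({ verifyArgon2, verifyBcrypt, rehashArgon2 }) {
  return async (storedHash, password, persist) => {
    if (storedHash.startsWith('$argon2')) {
      return verifyArgon2(storedHash, password); // already migrated
    }
    // Legacy bcrypt row: verify, then upgrade with the plaintext in hand.
    if (!(await verifyBcrypt(password, storedHash))) return false;
    await persist(await rehashArgon2(password));
    return true;
  };
}
```

Wire `verifyArgon2` to `argon2.verify`, `verifyBcrypt` to `bcrypt.compare`, and `persist` to your user-table update.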

What if my bcrypt cost is set too low (like 6 or 8)?

Bump it on next login. When the user signs in successfully, check the cost factor of the stored hash; if it's below your current target (10-12), recompute at the new cost and update. This is exactly the same mechanism as migrating between algorithms. Bulk-rehashing without the plaintext means wrapping each stored hash in another layer (bcrypt-of-bcrypt) and tracking the nested format, which is possible but far more complex than rehash-on-login.