Martin's Blog

Auditing Ghidra Server Authentication: 63 Findings Across the Auth Stack

Ghidra is the NSA’s open-source reverse engineering framework — the kind of tool you reach for when you need to understand what a binary actually does. It has been in the wild since 2019 and sees serious use across the security research and malware analysis community. It also ships with a networked server mode (GhidraServer) that lets teams share analysis work over RMI, and that shared server is an interesting target.

I wanted to see what Claude Code with Sonnet 4.6 would find if I pointed it at the authentication and cryptography code in the latest Ghidra release (12.0.4) and asked it to do a security audit. No elaborate prompt engineering, no custom tooling — just Claude Code in the terminal with simple, direct prompts. And yes, Opus is more capable but Sonnet costs less.

The result: 64 raw findings, 63 unique after deduplication, across 7 audited areas — one critical authentication bypass, fifteen high-severity issues, twenty-seven mediums, and twenty-one lows. None of these require a weaponised exploit chain to care about. Several are one-line fixes.

How It Worked

The entire process was three prompts.

Prompt 1: I asked Claude to list all source files in the repository that handle authentication, authorisation, or cryptography. It walked the source tree and produced a categorised manifest — roughly 60 files across five top-level categories (server-side auth modules, client authenticators, callbacks, SSL/TLS/PKI, and crypto utilities). I had it write that manifest to auth-crypto-files.md so we had a stable reference.

Prompt 2: I pointed it at PKIAuthenticationModule.java — the server-side PKI verification logic — and asked it to review and audit the file for vulnerabilities. It read the file, pulled in the supporting classes it references (TokenGenerator, SignatureCallback, DefaultKeyManagerFactory, etc.), and produced a detailed audit with nine findings. I asked it to write that to a numbered markdown file.

Prompt 3: I told it to follow the same process for the remaining sections in the manifest. It iterated through each category, reading all files in a group, cross-referencing between them, and writing a numbered audit report for each. The whole thing ran as one continuous session.

That is it. No system prompts, no few-shot examples, no chain-of-thought scaffolding. The model decided on its own to read supporting files for context, to cross-reference findings between audits (the getSigAlg() stub shows up in two reports because it affects both server-side verification and client-side protocol), and to structure each report with a summary table, severity ratings, code excerpts, and fix recommendations.

The categories it audited, in order:

  1. PKI authentication module — the server-side certificate verification logic
  2. Client-side authenticators — the five classes that handle connecting a client to the server
  3. Auth callbacks — the challenge-response objects serialized over RMI
  4. Authorization / user management — UserManager, RepositoryManager, Repository
  5. SSL / TLS / PKI transport — keystores, trust managers, socket factories, SSH key loading
  6. Cryptography — file-format AES keys, filesystem password caching, hash utilities
  7. UI password dialogs — the Swing dialogs that collect passwords from users

The Critical: You Can Authenticate Without a Private Key

The most serious finding is in PKIAuthenticationModule.java. PKI authentication is supposed to prove that the client possesses the private key corresponding to a CA-trusted certificate. The proof mechanism is a challenge-response: the server issues a random token, the client signs it, and the server verifies the signature.

Here is the verification code:

byte[] sigBytes = sigCb.getSignature();
if (sigBytes != null) {           // ← entire verification skipped when null
    Signature sig = Signature.getInstance(certChain[0].getSigAlgName());
    sig.initVerify(certChain[0]);
    sig.update(token);
    if (!sig.verify(sigBytes)) {
        throw new FailedLoginException("Incorrect signature");
    }
}

If getSignature() returns null — which it does when the client never calls sign() — the if block is skipped entirely. A client that presents any CA-trusted certificate without ever performing a signing operation authenticates successfully. Possession of the certificate (not the private key) is sufficient.

The fix is one line: invert the condition and throw on null.

if (sigBytes == null) {
    throw new FailedLoginException("Client signature is missing");
}
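Put together, the corrected verification path looks something like this. This is a minimal sketch using plain JCA primitives with illustrative names, not Ghidra's actual code:

```java
import java.security.GeneralSecurityException;
import java.security.PublicKey;
import java.security.Signature;
import javax.security.auth.login.FailedLoginException;

// Sketch of a corrected verification path: a missing signature is an
// authentication failure, never a pass.
public class PkiVerifySketch {

    public static void verify(PublicKey signerKey, byte[] token, byte[] sigBytes)
            throws GeneralSecurityException, FailedLoginException {
        if (sigBytes == null) {
            throw new FailedLoginException("Client signature is missing");
        }
        // Pin the algorithm server-side instead of trusting the certificate's
        // getSigAlgName() (see the algorithm-confusion finding below).
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initVerify(signerKey);
        sig.update(token);
        if (!sig.verify(sigBytes)) {
            throw new FailedLoginException("Incorrect signature");
        }
    }
}
```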

Broken Algorithms: SHA-1 and DSA Are Both Still Here

SSH authentication in Ghidra uses BouncyCastle. When a client authenticates with an RSA key, it signs the server’s challenge with RSADigestSigner(new SHA1Digest()). SHA-1 has been broken for digital signatures since the SHAttered collision attack in 2017; NIST deprecated it for signatures in SP 800-131A.

DSA is even worse:

else if (privateKeyParameters instanceof DSAKeyParameters) {
    DSADigestSigner signer = new DSADigestSigner(new DSASigner(), new SHA1Digest());

DSA with SHA-1 combines two deprecated algorithms. DSA requires a unique random nonce per signature — a repeated or biased nonce leaks the private key immediately. NIST removed DSA from the approved signature algorithms in FIPS 186-5. The fix is to remove DSA support entirely, upgrade RSA signing to SHA-256, and add Ed25519 support, since that is what anyone generating a new key with ssh-keygen gets by default now.
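As a sketch of what the upgraded path could look like (illustrative, not a patch against Ghidra's BouncyCastle code): the JCA already provides everything needed. "SHA256withRSA" works in every provider, and "Ed25519" has shipped in the JDK since Java 15, so neither requires BouncyCastle at all.

```java
import java.security.GeneralSecurityException;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

// Sketch: the same challenge-response flow, minus SHA-1 and DSA. The
// algorithm string would be one of "SHA256withRSA" or "Ed25519".
public class ModernSigSketch {

    public static byte[] sign(PrivateKey key, String algorithm, byte[] challenge)
            throws GeneralSecurityException {
        Signature signer = Signature.getInstance(algorithm);
        signer.initSign(key);
        signer.update(challenge);
        return signer.sign();
    }

    public static boolean verify(PublicKey key, String algorithm, byte[] challenge,
            byte[] signature) throws GeneralSecurityException {
        Signature verifier = Signature.getInstance(algorithm);
        verifier.initVerify(key);
        verifier.update(challenge);
        return verifier.verify(signature);
    }
}
```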

The same theme shows up in user password storage. UserManager.authenticateUser() still accepts unsalted MD5 hashes:

// Support deprecated unsalted hash
if (entry.passwordHash.length == HashUtilities.MD5_UNSALTED_HASH_LENGTH &&
    Arrays.equals(
        HashUtilities.getHash(HashUtilities.MD5_ALGORITHM, password),
        entry.passwordHash)) {
    return;
}

And salted MD5 hashes:

if (entry.passwordHash.length == HashUtilities.MD5_SALTED_HASH_LENGTH) {
    if (!Arrays.equals(
        HashUtilities.getSaltedHash(HashUtilities.MD5_ALGORITHM, salt, password),
        entry.passwordHash)) {
        throw new FailedLoginException("Incorrect password");
    }
}

MD5 produces billions of hashes per second on GPU hardware. A stolen users file is crackable in minutes for any reasonably common password. These code paths exist for migration compatibility but there is no enforcement that users actually migrate — a server running for years may have every account on MD5 indefinitely.

And then there is MD5Utilities.getMD5Hash(char[]) — a utility method in the MD5Utilities class that wraps MD5 for use on char[] inputs, i.e. passwords. It is presented alongside getMD5Hash(File) and getMD5Hash(InputStream) as if all uses are equally fine.
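For comparison, a salted, iterated KDF is a one-class change. This is a hedged sketch using the JDK's built-in PBKDF2; the iteration count and key size are illustrative choices of mine, not Ghidra's:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Sketch: salted, iterated password hashing with the JDK's built-in
// "PBKDF2WithHmacSHA256" (available since Java 8, no extra dependency).
public class PasswordHashSketch {
    private static final int ITERATIONS = 210_000; // illustrative work factor
    private static final int KEY_BITS = 256;

    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    public static byte[] hash(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_BITS);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
    }

    public static boolean matches(char[] password, byte[] salt, byte[] expected)
            throws Exception {
        // MessageDigest.isEqual is constant-time, unlike Arrays.equals
        return MessageDigest.isEqual(hash(password, salt), expected);
    }
}
```

Migration still has to be enforced somewhere — rehash on the next successful login, or expire accounts that never do.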

Token Replay: Five Minutes Is a Long Time

The PKI challenge-response has a second problem. Tokens are valid for five minutes, and there is no server-side record of which tokens have been consumed:

private static final long MAX_TOKEN_TIME = 5 * 60000; // 5-minutes

static boolean isRecentToken(byte[] token, long maxTime) {
    long diff = (new Date()).getTime() - getLong(token, 0);
    return (diff >= 0 && diff < maxTime);
}

An attacker who captures a valid signed SignatureCallback on the wire — an unprotected network segment, a MitM position, a compromised log — can replay it within the validity window and authenticate as the victim. The fix is a consumed-token cache: a short-lived set of token hashes, cleared after MAX_TOKEN_TIME, that rejects any token seen more than once.
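A minimal sketch of such a cache — the names and structure here are my own, not a proposed patch:

```java
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: each token may authenticate at most once inside its validity
// window. Entries older than the window are evicted on each call, so the
// cache stays bounded by the number of logins per window.
public class TokenReplayCache {
    private final Map<String, Long> consumed = new ConcurrentHashMap<>();
    private final long maxTokenTimeMs;

    public TokenReplayCache(long maxTokenTimeMs) {
        this.maxTokenTimeMs = maxTokenTimeMs;
    }

    /** Returns true the first time a token is seen; false on replay. */
    public boolean consume(byte[] token) {
        long now = System.currentTimeMillis();
        consumed.values().removeIf(seenAt -> seenAt < now - maxTokenTimeMs);
        String key = Base64.getEncoder().encodeToString(token);
        return consumed.putIfAbsent(key, now) == null;
    }
}
```

Combined with the existing isRecentToken() freshness check, a replayed SignatureCallback is rejected even inside the five-minute window.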

Algorithm Confusion: Downgrade via Client-Controlled Certificate

Immediately after the null-signature bypass, there is an algorithm confusion issue. The server instantiates the Signature object using the algorithm name from the client’s certificate:

Signature sig = Signature.getInstance(certChain[0].getSigAlgName());

A client can present a certificate naming MD5withRSA, SHA1withRSA, or NONEwithRSA. If the JCA provider accepts it, the server verifies with that algorithm. There is no allowlist anywhere in the verification path.
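An allowlist is a few lines. This sketch is illustrative — the class name, method name, and the specific set of algorithms are mine, not Ghidra's:

```java
import java.util.Set;

// Sketch: pin verification to a server-side allowlist instead of trusting
// whatever algorithm name the client's certificate declares.
public class SigAlgAllowlist {
    private static final Set<String> ALLOWED = Set.of(
        "SHA256withRSA", "SHA384withRSA", "SHA256withECDSA", "Ed25519");

    public static String requireAllowed(String certSigAlg) {
        if (!ALLOWED.contains(certSigAlg)) {
            throw new SecurityException(
                "Disallowed signature algorithm: " + certSigAlg);
        }
        return certSigAlg;
    }
}
```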

SignatureCallback.getSigAlg() was presumably intended to let the server communicate the expected algorithm to the client, but it is an unimplemented stub:

public String getSigAlg() {
    // TODO Auto-generated method stub
    return null;
}

This stub appears in two separate audit reports — it contributes to the algorithm confusion issue on the server and leaves a gap in the client-side protocol.

The Trust Manager Is Open by Default

DefaultTrustManagerFactory sets up the TLS trust anchor for all SSL connections. When ghidra.cacerts is not configured — which is the default, out-of-box state — it installs an OpenTrustManager:

if (cacertsPath == null || cacertsPath.length() == 0) {
    Msg.info(DefaultTrustManagerFactory.class,
        "Trust manager disabled, cacerts have not been set");
    wrappedTrustManager.setTrustManager(new OpenTrustManager());
    return;
}

OpenTrustManager accepts any certificate unconditionally:

private static class OpenTrustManager implements X509TrustManager {
    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        // trust all certs
    }
    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        // trust all certs
    }

    // getAcceptedIssuers() elided
}

Both client and server use this factory. On the client side, any server presenting any certificate — self-signed, expired, adversary-issued — passes TLS validation. On the server side, validateClient() (called from PKIAuthenticationModule) also passes through this manager, making certificate chain validation a no-op unless the operator has explicitly configured a CA bundle.

The log line says "Trust manager disabled" at INFO level. Most deployments will never see this unless they specifically tail the log at startup.
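A fail-closed alternative is to fall back to the JVM's own trust store rather than trusting everything. A hedged sketch using standard JSSE calls (passing a null KeyStore to init() selects the JDK's default cacerts):

```java
import java.security.KeyStore;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;

// Sketch of a fail-closed default: with no ghidra.cacerts configured, fall
// back to the JVM's bundled trust store rather than an OpenTrustManager.
public class FailClosedTrust {

    public static X509TrustManager defaultTrustManager() throws Exception {
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null); // null = JDK default cacerts
        for (TrustManager tm : tmf.getTrustManagers()) {
            if (tm instanceof X509TrustManager) {
                return (X509TrustManager) tm;
            }
        }
        throw new IllegalStateException("No X509TrustManager available");
    }
}
```

Whether the system trust store is the right default for GhidraServer's client-auth path is a design question for the maintainers; the point is that the fallback should fail closed, and the startup message about it should be a warning, not an INFO line.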

Hardcoded Password, Twice

// DefaultKeyManagerFactory.java
public static final String DEFAULT_PASSWORD = "changeme";

// UserManager.java
private static final char[] DEFAULT_PASSWORD = "changeme".toCharArray();

The self-signed keystore used when no user certificate is configured is protected with "changeme". New user accounts also get "changeme" as their initial password. Making it a public static final constant in DefaultKeyManagerFactory means it is part of the public API — visible in any decompiler, any dependency analysis tool, or a simple grep.

Secrets Turning Up in Logs

UserManager.checkValidPasswordHash() validates the format of a stored password hash before writing it. When validation fails, the error message includes the entire hash:

throw new IOException(
    "Password set failed due invalid salt: " + (new String(saltedPasswordHash)) +
        " (" + i + "," + saltedPasswordHash[i] + ")");

The hash includes the salt prefix and the full salted hash value. This exception propagates to the server log. An attacker with read access to the log file — a sysadmin, a log aggregation platform, a misconfigured log shipper — gets the hash without needing access to the users file.

A similar pattern appears in PKIAuthenticationModule, where a broad Throwable catch forwards internal exception messages directly to the unauthenticated client:

catch (Throwable t) {
    String msg = t.getMessage();
    if (msg == null) {
        msg = t.toString();
    }
    throw new FailedLoginException(msg);
}

A NullPointerException from a null certificate field, a ClassNotFoundException from a provider issue, a crypto provider error — all reach the client’s login failure message. This is an oracle for internal server state.
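The standard fix is to keep the diagnostic server-side and hand the client a fixed string. A minimal sketch — the System.err call here is a stand-in for whatever logger the server actually uses:

```java
import javax.security.auth.login.FailedLoginException;

// Sketch: log the real cause where operators can see it; return a generic
// message to the unauthenticated client instead of forwarding t.getMessage().
public class SanitizedFailure {

    public static FailedLoginException loginFailure(Throwable cause) {
        System.err.println("Authentication error: " + cause); // server log only
        return new FailedLoginException("Authentication failed");
    }
}
```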

The Java String Problem (Again)

Java String objects are immutable and potentially interned. Once you have put a password into a String, you cannot zero it. It sits in heap memory until the garbage collector decides to collect it, which for interned strings may be never.

PasswordClientAuthenticator accepts passwords as String:

public PasswordClientAuthenticator(String password) {
    this(null, password);
}
public PasswordClientAuthenticator(String username, String password) {
    this.password = password.toCharArray();  // ← too late, the String already exists
    this.username = username;
}

The conversion to char[] happens immediately, but the original String is already on the heap. A caller passing a string literal gets "secret" interned in the JVM string pool for the lifetime of the process.

ApplicationKeyManagerFactory.getKeyManager() has the same issue — its defaultPasswd parameter is a String. The caller, DefaultKeyManagerFactory.init(), reads the keystore password via System.getProperty(KEYSTORE_PASSWORD_PROPERTY) — which always returns a String — and passes it straight through.

The pattern of char[] for password fields is applied consistently in the data model but breaks down at API boundaries, which is exactly where it matters most.
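A char[]-only boundary with explicit scrubbing is straightforward to sketch (illustrative, not Ghidra's API):

```java
import java.util.Arrays;

// Sketch of a char[]-only password boundary: the secret is copied once on
// the way in and scrubbed on close, and no String is ever created.
public class PasswordHolder implements AutoCloseable {
    private final char[] password;

    public PasswordHolder(char[] password) {
        this.password = password.clone(); // defensive copy; caller zeros its own
    }

    public char[] get() {
        return password;
    }

    @Override
    public void close() {
        Arrays.fill(password, '\0'); // best-effort scrub of this copy
    }
}
```

This only helps if every caller upstream also avoids String — a System.getProperty() read, as in DefaultKeyManagerFactory.init(), defeats it before it starts.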

Sensitive Data Not Cleared After Use

The ServerConnectTask finally block correctly zeros PasswordCallback after authentication:

finally {
    if (callbacks != null) {
        for (Callback callback : callbacks) {
            if (callback instanceof PasswordCallback) {
                ((PasswordCallback) callback).clearPassword();
            }
        }
    }
}

But SignatureCallback (which holds the signed token bytes and the certificate chain) and SSHSignatureCallback are never cleared. They sit in heap memory — and possibly in RMI internal buffers — after the authentication handshake completes.

CachedPasswordProvider has the same issue at a higher level: passwords added to the cache stay there indefinitely, with no TTL and no automatic eviction. The only way to remove them is an explicit clearAll() call.
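A TTL wrapper is one small class. This sketch is illustrative and does not mirror CachedPasswordProvider's actual structure:

```java
import java.util.Arrays;

// Sketch: a cached credential that ages out on its own instead of living
// until an explicit clearAll().
public class ExpiringSecret {
    private final char[] secret;
    private final long expiresAtMs;

    public ExpiringSecret(char[] secret, long ttlMs) {
        this.secret = secret.clone();
        this.expiresAtMs = System.currentTimeMillis() + ttlMs;
    }

    /** Returns the secret, or null (after scrubbing it) once the TTL passes. */
    public synchronized char[] getIfFresh() {
        if (System.currentTimeMillis() >= expiresAtMs) {
            Arrays.fill(secret, '\0');
            return null;
        }
        return secret;
    }
}
```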

Non-Atomic ACL Write

Repository.writeUserList() updates the per-repository access control list in two steps:

userAccessFile.delete();
temp.renameTo(userAccessFile);

Between the delete() and the renameTo(), neither the old nor the new ACL file exists. A concurrent readAccessFile() during this window finds no file. On Linux, POSIX rename(2) atomically replaces the target — if Ghidra just called renameTo() without the preceding delete(), the operation would be atomic on same-filesystem paths. The explicit delete() is what breaks atomicity by creating a window where neither file exists. There is also no fsync before close, so a crash during the write leaves a truncated file that then gets renamed over the live ACL.

Files.move(temp.toPath(), userAccessFile.toPath(), StandardCopyOption.ATOMIC_MOVE) is the correct tool here.
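A sketch of the full crash-safe sequence, including the missing fsync (file and method names are illustrative):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

// Sketch of a crash-safe rewrite: write the temp file, fsync it, then
// atomically rename over the live file. At every instant a reader sees
// either the complete old contents or the complete new contents.
public class AtomicWriteSketch {

    public static void writeAtomically(Path target, String contents) throws IOException {
        Path temp = target.resolveSibling(target.getFileName() + ".tmp");
        try (FileChannel ch = FileChannel.open(temp,
                StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING,
                StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.wrap(contents.getBytes(StandardCharsets.UTF_8)));
            ch.force(true); // flush data to disk before the rename becomes visible
        }
        Files.move(temp, target,
                StandardCopyOption.ATOMIC_MOVE, StandardCopyOption.REPLACE_EXISTING);
    }
}
```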

AES Keys in XML, Without Any Integrity Check

The file format decryption subsystem stores AES keys and IVs as hex strings in XML files under the crypto/ module data directory. The loading code reads these files, parses the XML, and loads keys into a static HashMap — no signature check, no HMAC, no hash of the file against a manifest:

SAXBuilder sax = XmlUtilities.createSecureSAXBuilder(false, false);
Document doc = sax.build(is);
Element root = doc.getRootElement();
...
byte[] key = NumericUtilities.convertStringToBytes(keyString);
byte[] iv  = NumericUtilities.convertStringToBytes(ivString);
CryptoKey cryptoKey = new CryptoKey(key, iv);

An attacker who modifies a key XML file (filesystem write, supply-chain compromise of the Ghidra distribution) causes the analyser to decrypt firmware images with the wrong key. AES-CBC does not authenticate ciphertext, so the decryptor produces garbage without indicating anything is wrong. An analyst reverse-engineering the output sees scrambled data but may not immediately distinguish “wrong key” from “packed/obfuscated binary.”

Once loaded, the key lives in CryptoKey.key — a public final byte[]. Any code with a reference to the CryptoKey object has direct read access to the key bytes. There is no destroy() method. The key persists in heap memory until the object is garbage collected.
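One way to close the integrity gap is an HMAC over the raw key-file bytes, verified before the XML is parsed at all. A sketch — where the MAC key itself lives (it must sit outside the attacker-writable directory) is a deployment decision this deliberately leaves open:

```java
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch: authenticate key-file bytes with HMAC-SHA256 before loading them.
public class KeyFileIntegrity {

    public static byte[] mac(byte[] macKey, byte[] fileBytes) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(macKey, "HmacSHA256"));
        return hmac.doFinal(fileBytes);
    }

    public static boolean verify(byte[] macKey, byte[] fileBytes, byte[] expectedTag)
            throws Exception {
        // constant-time comparison of the computed and stored tags
        return MessageDigest.isEqual(mac(macKey, fileBytes), expectedTag);
    }
}
```

The deeper fix is to stop decrypting payloads with unauthenticated AES-CBC and move to AES-GCM, which makes a wrong or tampered key fail loudly at decrypt time instead of producing silent garbage.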

The Numbers

Across all seven audit areas:

Severity Count
Critical 1
High 15
Medium 27
Low 21
Total 64 raw, 63 unique (one duplicate)

The breakdown by area (raw report counts — one finding, isMySignature() non-constant-time comparison at DefaultKeyManagerFactory.java:509, appears in both 001 and 005 with different severities):

Area                             C  H  M  L
PKI authentication module        1  3  3  2
Client-side authenticators       -  2  7  4
Auth callbacks / challenge-resp  -  2  3  4
Authorization / user management  -  3  3  2
SSL / TLS / PKI transport        -  3  4  3
Cryptography                     -  2  3  3
UI password dialogs              -  -  4  3

The critical finding (null-signature bypass) and most of the high-severity issues are fixable in a few lines each. The systemic issues — broken algorithm support, the MD5 migration paths that never get enforced, the Java string problem at API boundaries — require coordinated changes across multiple classes.

None of this requires Ghidra to be exposed to the public internet to be relevant. Internal deployments on research networks, shared analysis servers, CI/CD pipelines running headless scripts — all of these are real threat models for a collaborative reverse engineering platform.

The full findings are in the audit reports at 001 through 007 in the repository root.

#Ghidra #Java #Audit #PKI #Authentication #Crypto #RMI #NSA #Claude #LLM #AI-Assisted