Donjon | 05/17/2024

Cargo-checkct: our home-made tool guarding against timing attacks is now open-source!

The Ledger Donjon team is thrilled to present cargo-checkct, our in-house open-source tool designed to defend against timing attacks. In this article, we’ll delve into the concept of timing attacks, explore why timing vulnerabilities in cryptography libraries are so hard to detect, and explain the role of cargo-checkct in addressing these challenges.

Timing attacks

Side-channel vulnerabilities, and timing vulnerabilities in particular, may allow an attacker to extract secret data – be it authentication data like PINs or passwords, or secret keys used in cryptographic operations – from an otherwise secure system. The exploitation of such vulnerabilities can seem a bit esoteric or theoretical at first, but it is important to realize that they very much lead to practical, and devastating, real-world attacks¹. This is particularly dire for embedded systems, which an attacker can manipulate and instrument to observe leakage far more easily than would be possible against a remote system.

In fact, the principle underpinning timing attacks is, at its core, extremely simple once you wrap your head around it: if a secret-manipulating program runs faster for some values of the secret than for others, then as an attacker, measuring that program’s execution time directly gives me information about the secret! Not all such leakages are created equal, of course, and some vulnerabilities might only allow an attacker to recover a few bits of the secret – yet just as often, timing leaks can be exploited to recover the secret entirely in only a few rounds of measuring the target program’s execution.

Upon realizing that this is a serious threat, and not merely a theoretical issue, the next step is to look for constructs introducing timing differences in secret-manipulating code, so as to be able to avoid them. Perhaps the most obvious way in which a program’s execution time can vary depending on the value of the manipulated secret is if the program includes conditional branches that depend on the secret, as is for instance very tempting when implementing a PIN comparison function, which could look something like this:

// PIN_LEN is assumed to be defined elsewhere in the crate; four bytes here
// purely for illustration.
const PIN_LEN: usize = 4;

pub fn verify_pin(candidate_pin_to_verify: &[u8; PIN_LEN], actual_valid_secret_pin: &[u8; PIN_LEN]) -> bool {
    for (l, r) in candidate_pin_to_verify
        .iter()
        .zip(actual_valid_secret_pin.iter())
    {
        if l != r {
            // Early return: the loop stops at the first mismatching byte, so
            // execution time reveals how long the matching prefix is.
            return false;
        }
    }

    true
}
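
A common way to remove this leak is to make the comparison branch-free, accumulating the differences of all byte pairs so that every byte is always inspected, regardless of where a mismatch occurs. The following verify_pin_ct is a minimal sketch of ours, not code taken from cargo-checkct or its repository:

pub fn verify_pin_ct(candidate_pin_to_verify: &[u8; PIN_LEN], actual_valid_secret_pin: &[u8; PIN_LEN]) -> bool {
    let mut diff = 0u8;
    for (l, r) in candidate_pin_to_verify
        .iter()
        .zip(actual_valid_secret_pin.iter())
    {
        // XOR is non-zero for any mismatching byte pair; OR-ing the results
        // means `diff` ends up zero only if every byte matched.
        diff |= l ^ r;
    }
    diff == 0
}

As we will see below, however, this source-level discipline is only half of the story: nothing forces the compiler to preserve it in the generated machine code.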

There is another very important class of timing leakages, however, that is way less obvious at first sight, and is linked to the presence of microarchitectural intermediates between a processor core and memory: caches. The purpose of caches is of course to speed up loads and stores from and to memory by, well, caching recently used data. But if attacker-controlled programs are allowed to run concurrently (or in parallel, if at least one layer of caching is shared between several cores) with our otherwise isolated secret-manipulating program, then the attacker code’s timing will become dependent on the memory addresses accessed by our program, and vice versa, thereby creating an observable timing channel leaking information about the manipulated secrets! Thus any secret-manipulating program running on a target with caches should be very careful never to access memory at secret-dependent addresses.
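
To make this concrete, here is a deliberately leaky, hypothetical table lookup of the kind found at the heart of many naive block cipher implementations (the leaky_substitute function is ours, purely for illustration):

// Substitution through a lookup table, indexed directly by a secret byte.
fn leaky_substitute(sbox: &[u8; 256], secret_byte: u8) -> u8 {
    // The load address depends on `secret_byte`, so which cache line this
    // access touches – and therefore the timing of subsequent accesses by
    // attacker code sharing the cache – reveals information about the secret.
    sbox[secret_byte as usize]
}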

Those problems, and the attacks they enable, have been widely known for three decades now, yet they still plague modern-day, widely used implementations. Interestingly enough, the situation is in many ways akin to that of memory corruption, with Phrack’s article on stack smashing being published the exact same year as Kocher’s seminal paper on timing attacks. Both kinds of vulnerabilities have since evolved far beyond the boundaries of their original expositions, of course, with ROP, JOP and heap exploitation on the one hand, and cache-related side-channels on the other (not to mention speculative execution). But, crucially, both kinds of vulnerabilities are still commonly found in computer systems that we very much rely upon for safety and privacy.

A lot of work has of course gone into tackling the menace of memory corruption bugs, from the development of mitigations and the rise of fuzzing as a somewhat common tool, to the ever-wider adoption of memory-safe languages. The same is true for side-channel vulnerabilities in cryptography libraries and embedded systems, with, in particular, the development of many tools to detect, or even soundly rule out, timing vulnerabilities. And yet, none of these tools have really seen wide adoption by cryptography library developers.

The (cryptographic) constant-time policy

If you think about it, there are two conceptually simple ways to tackle the problem of verifying that an implementation does not leak secrets via timing channels. The simplest one would certainly be to run the code many times with different secret data and measure its timing, so that any statistically significant variation can be interpreted as the tell-tale sign of a timing vulnerability; a naive version of such a measurement loop is sketched after the list below. There are at least two problems with this approach, though, that severely limit its applicability:

  • First of all, while such a measured timing is a good proxy for control-flow based leaks, it is not at all clear how to simulate the influence an attacker could have on the state of the caches while the program is running.
  • But even then, execution timing depends both on the machine code that gets generated by the compiler, and on the microarchitectural details of the actual hardware implementation, so that, strictly speaking, measurement campaigns would have to be run for every specific targeted SoC, which would effectively shift the verification burden to the downstream users of cryptography libraries.
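
For illustration, here is what such a naive measurement loop could look like, reusing the verify_pin function and PIN_LEN constant from earlier (a host-side sketch only, and the average_nanos helper is ours: real tools like dudect replace the crude averaging below with a proper statistical test and far more careful measurement hygiene):

use std::time::Instant;

// Average the execution time of verify_pin over many runs for one input
// class; comparing the averages across input classes (e.g. early vs late
// mismatch) hints at a timing leak if they differ significantly.
fn average_nanos(candidate: &[u8; PIN_LEN], secret: &[u8; PIN_LEN], runs: u32) -> f64 {
    let start = Instant::now();
    for _ in 0..runs {
        // black_box keeps the compiler from optimizing the calls away.
        core::hint::black_box(verify_pin(
            core::hint::black_box(candidate),
            core::hint::black_box(secret),
        ));
    }
    start.elapsed().as_nanos() as f64 / runs as f64
}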

The other general approach one could take is to directly verify that the code neither takes control-flow decisions based on secret data, nor accesses memory at secret-dependent addresses, as we have previously identified those two patterns as the sources of timing leaks (this is called the (cryptographic) constant-time policy in the literature). How hard can it be? Well, it turns out that, while this is certainly doable, a very annoying problem is that general-purpose compilers (think GCC, Clang/LLVM) do not preserve those two properties while compiling programs down to machine code! And not only do they fail to preserve them in theory, they also fail to preserve them in practice, regularly wreaking havoc in cryptography libraries². Fortunately, some ways of writing things have been found that most often (and for now) prevent the compiler from introducing timing leaks, as used for instance in the subtle crate – but there is really no guarantee.
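
With subtle, for instance, a branch-free comparison of two byte strings can be written along the following lines (a minimal sketch of ours; the crate computes the result over every byte without secret-dependent branches, and uses optimization barriers to discourage, though not provably prevent, the compiler from reintroducing them):

use subtle::ConstantTimeEq;

// Compare two byte slices without secret-dependent branches: ct_eq inspects
// every byte and returns a subtle::Choice, which is only converted to a
// plain bool at the very end.
pub fn pins_match(a: &[u8], b: &[u8]) -> bool {
    bool::from(a.ct_eq(b))
}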

So, realistically, the only sound and portable (from one implementation of an ISA to another) way to verify the constant-timedness of cryptography libraries is to verify that the constant-time policy is respected at the machine-code level. This is not an easy thing to do, yet amazingly, this is exactly what the checkct plugin of Binsec allows one to do! It does so using (an elaborate version of) symbolic execution, and is perfectly sound under the assumption that instructions themselves execute in time independent from their operands³.

Where does cargo-checkct come into the picture

Binsec is great, yet cryptography library developers and maintainers still do not have a simple, clear way to monitor the timing behavior of their code in CI, although helpful tools do exist, like dudect-bencher. One of the main difficulties is that verifying cryptographic libraries requires writing a verification driver, or harness, that builds inputs to the function under test before calling the latter, much like fuzzing. This is less than ideal, as it adds to the cost of writing and maintaining cryptographic code. But it can be done, and can be made easier with some relatively simple tooling streamlining the process.

We have developed cargo-checkct with this exact goal in mind, aiming to make it easy to verify the constant-timedness of Rust cryptography libraries using Binsec, with the hope of bringing the benefits of the latest research to the greatest number. With cargo-checkct, you can generate all the boilerplate parts of the verification driver, and automatically generate an appropriate Binsec script to run the actual verification, in a matter of a couple of command-line calls.

Like every tool, of course, cargo-checkct comes with a number of limitations, some inherited from Binsec and some its very own; yet as the examples provided in the repository aim to show, it does enable the verification of cryptography libraries from dalek-cryptography or RustCrypto, and many more. We are thus very excited to open-source it, with the hope that it can contribute to the overall improvement of the security of cryptography at large, and make a concrete dent in the problem of ruling out timing leaks in practice.

The repository (https://github.com/Ledger-Donjon/cargo-checkct) is the best place to look if you are interested in more details and examples, but just to give you a taste of what using cargo-checkct looks like, let’s quickly take the x25519-dalek crate as an example. Verifying that the provided Diffie-Hellman implementation is constant-time on both thumb and riscv targets is a mere matter of running cargo-checkct init, implementing a simple harness:

pub fn checkct() {
    use dalek::{EphemeralSecret, PublicKey};

    // Public input: data drawn from PublicRng is treated as attacker-visible
    // by the generated verification driver.
    let mut public_key_bytes = [0u8; 32];
    PublicRng.fill_bytes(&mut public_key_bytes);
    let public = PublicKey::from(public_key_bytes);

    // Secret input: the ephemeral private key comes from PrivateRng, so the
    // verification checks that no control flow or memory access depends on it.
    let ephemeral = EphemeralSecret::random_from_rng(PrivateRng);
    let shared_secret = ephemeral.diffie_hellman(&public);

    // black_box keeps the result alive so the computation is not optimized away.
    core::hint::black_box(shared_secret);
}

and finally running cargo-checkct run, which, in addition to tunneling back some of Binsec’s detailed output, crucially reports (in this instance, and as of this writing) a reassuring SECURE status.

TL;DR

All cryptography libraries should consistently produce constant-time machine code on all architectures they target, at the risk of complete breakage. Alas, this is extremely hard to ensure in CI, allowing severe vulnerabilities to regularly surface in critical code. cargo-checkct is here to solve this issue, focusing in particular on no_std-capable Rust libraries. Try it out!


Charles CHRISTEN
Security Engineer – Ledger Donjon

  1. See for instance CacheBleed for a relatively recent demonstrated attack on OpenSSL. ↩︎
  2. This happened quite recently in secp256k1 for instance, see https://github.com/bitcoin-core/secp256k1/blob/master/CHANGELOG.md#032—2023-05-13. ↩︎
  3. This can happen both at the lower end, for multiplications and divisions on very small microcontrollers, and at the higher end, as exemplified by the necessary introduction of the DIT bit in Armv8.4-A. ↩︎
