Challenge Rules
Goal
There are four test datasets (1 public and 3 private). The goal is to train a neural network that can recover the secret key for all datasets with as few traces as possible. Our focus is on the first byte (i.e. byte 0) processed by the S-box in the initial round of AES. Your score will be evaluated with the ge+ntge metric (see Attack Metric below).
Are you ready? Head over to Getting Started and start training those neural networks!
Attack Metric
The challenge objective is to train a neural network $F$ to recover a secret key byte with as few attack traces as possible under the profiled setting. Formally, suppose the neural network outputs a probability score for each hypothetical sensitive value of an attack trace $t$, i.e. $\mathbf{y} = F(t)$. Then, during the attack phase, the log-likelihood score of each key candidate $k \in K$ is computed as

$$S_k = \sum_{i=1}^{N_a} \log\big(\mathbf{y}_i[z_{i,k}]\big),$$

where $N_a$ is the number of attack traces used, $\mathbf{y}_i = F(t_i)$, and $z_{i,k}$ is the sensitive value derived from the key candidate $k$ and the public variable $pt_i$ (i.e. $z_{i,k} = \mathrm{Sbox}(pt_i \oplus k)$ or $\mathrm{HW}(\mathrm{Sbox}(pt_i \oplus k))$). The log-likelihood scores are then sorted into a guess vector $G = [G_0, G_1, \ldots, G_{|K|-1}]$, where $G_0$ is the most likely key candidate and $G_{|K|-1}$ the least likely. The position of the correct key in $G$ is called its rank. The Guessing Entropy (GE) is defined as the average key rank over multiple experiments. In this challenge, an attack is considered successful when GE = 0 over 100 experiments.
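For concreteness, here is a minimal NumPy sketch of the scoring and ranking above. The function name, argument layout, and the identity leakage model (labels equal to Sbox(pt ⊕ k)) are illustrative assumptions, not the official evaluation code.

```python
import numpy as np

def guessing_entropy(probs, plaintexts, true_key, sbox,
                     n_experiments=100, n_attack=2000, seed=0):
    """Estimate GE for key byte 0 under an assumed identity leakage model.

    probs      : (N, 256) array; probs[i, v] is the model's probability that
                 trace i leaks the sensitive value v (softmax output of F).
    plaintexts : (N,) integer array of plaintext byte 0 for the attack traces.
    true_key   : the correct key byte (known to the evaluator only).
    sbox       : 256-entry AES S-box lookup table as a NumPy integer array.
    """
    rng = np.random.default_rng(seed)
    log_probs = np.log(probs + 1e-36)                # guard against log(0)
    ranks = []
    for _ in range(n_experiments):
        idx = rng.choice(len(plaintexts), size=n_attack, replace=False)
        scores = np.empty(256)
        for k in range(256):                         # every key hypothesis
            z = sbox[plaintexts[idx] ^ k]            # z_{i,k} = Sbox(pt_i XOR k)
            scores[k] = log_probs[idx, z].sum()      # log-likelihood S_k
        guess_vector = np.argsort(scores)[::-1]      # G_0 = most likely candidate
        ranks.append(int(np.where(guess_vector == true_key)[0][0]))
    return float(np.mean(ranks))                     # GE = average rank of the true key
```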
To assess neural network performance, the metric NTGE was proposed in [1]. It is defined as the minimum number of attack traces a profiling model needs before its GE converges to 0 and stays there, i.e. the correct key is consistently ranked first.
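As a sketch, NTGE can be read off a GE curve computed for increasing attack-trace budgets. The helper below assumes the curve is sampled at every trace count; the official evaluation script may sample it differently.

```python
def ntge(ge_curve):
    """Return NTGE given ge_curve, where ge_curve[n] is the GE estimated with
    (n + 1) attack traces. Returns None if GE never stabilizes at 0."""
    result = None
    for n in range(len(ge_curve) - 1, -1, -1):   # scan from the largest budget down
        if ge_curve[n] == 0:
            result = n + 1                       # GE is still 0 here: candidate NTGE
        else:
            break                                # GE left 0; smaller budgets fail
    return result
```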
In this challenge, submissions are ranked using the metric ge+ntge proposed in [2], which combines GE and NTGE into a single score:

$$\mathrm{ge{+}ntge} = \begin{cases} \mathrm{NTGE}, & \text{if } \mathrm{GE} = 0,\\ \mathrm{GE} + c, & \text{otherwise,} \end{cases}$$

where $c$ is a positive constant. In our challenge, we assign c = 100,000 for unsuccessful key recovery. Your primary objective is to achieve the smallest average ge+ntge across the four datasets (1 public and 3 private).
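Assuming the piecewise definition above, the combined score reduces to a few lines; the official scoring script remains authoritative.

```python
def ge_plus_ntge(ge, ntge, c=100_000):
    """Combined challenge score: NTGE when the attack succeeds (GE == 0),
    GE + c otherwise (unsuccessful key recovery)."""
    return ntge if ge == 0 else ge + c
```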
Rules
You can play individually or as a team.
The submitted code must use PyTorch version 2.7.0 (https://pytorch.org/) or TensorFlow version 2.19.0 (https://www.tensorflow.org/install/pip) and only the libraries listed in the provided requirements.txt (see Getting Started).
No additional libraries may be installed. Any submission not adhering to these versions will be eliminated.
Evaluation time of the attack is up to 4 hours (excluding profiling). The evaluation will run on an Intel(R) Xeon(R) W-2123 CPU @ 3.60GHz, an NVIDIA Quadro P6000 GPU, and 64GB of memory. Each dataset will be evaluated with 100,000 attack traces.
Only the first byte of the key (i.e. byte 0) is targeted in this challenge. Submissions targeting other key bytes will not be considered for scoring and ranking.
Each submission is limited to one attack. You are allowed to make multiple submissions. Please set total_nb_attack_traces = 100000 and nb_attack_traces = 100000 to compare with the baseline model (see the snippet after this list).
The team name will be posted on the leaderboard. Code/attacks will not be publicly shown throughout the challenge.
At the end of the challenge, the winners' names, along with their code/attack, will be publicly announced. Please note that to claim their cash prize, winners must consent to the disclosure of their identity.
When we receive a submission, an acknowledgment will be sent within two working days. If the attack is successful, it will be reflected on the leaderboard once the results are validated.
Submissions close on 15 August 2025, 23:59:59 AoE. Submissions received after this deadline will not be considered for the challenge.
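For reference, the two attack-budget variables mentioned in the rules are plain Python assignments in the submission configuration; where exactly they are set depends on the Getting Started template, so treat this as an illustrative sketch.

```python
# Assumed meaning of the two budget variables (their exact semantics are
# defined by the challenge's Getting Started code, not here):
total_nb_attack_traces = 100000   # overall evaluation budget per dataset
nb_attack_traces = 100000         # traces used in a single attack run
```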
Prizes
The teams with the top three scores will earn cash prizes.
The prizes will be given as follows:
First prize: $1000
Second prize: $600
Third prize: $400
The awarded teams will be asked to send a short description of their attacks. Teams cannot win more than one award.
References
Zaid, G., Bossuet, L., Habrard, A., & Venelli, A. (2019). Methodology for Efficient CNN Architectures in Profiling Attacks. IACR Transactions on Cryptographic Hardware and Embedded Systems, 2020(1), 1-36. https://doi.org/10.13154/tches.v2020.i1.1-36
Yap, T., Bhasin, S., Weissbart, L. (2025). Train Wisely: Multifidelity Bayesian Optimization Hyperparameter Tuning in Deep Learning-Based Side-Channel Analysis. In: Eichlseder, M., Gambs, S. (eds) Selected Areas in Cryptography – SAC 2024. SAC 2024. Lecture Notes in Computer Science, vol 15517. Springer, Cham. https://doi.org/10.1007/978-3-031-82841-6_12