FAQ
Q: What datasets are provided?
The dataset includes:
500K profiling/training traces (random plaintext and key);
100K attack traces (random plaintext, fixed key).
All traces are stored in a single HDF5 file, similar in structure to ASCAD, with per-trace metadata (plaintext, key, labels). See Datasets for details.
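Since the file follows an ASCAD-like layout, it can be read with `h5py`. Note that the group and field names below (`Profiling_traces/traces`, `Attack_traces/metadata`, etc.) are an assumption based on ASCAD's conventions; check the Datasets page for the actual structure.

```python
# Sketch of loading the challenge traces from an ASCAD-style HDF5 file.
# Group/field names are assumptions modeled on ASCAD; verify against
# the actual file described on the Datasets page.
import h5py
import numpy as np

def load_traces(path):
    with h5py.File(path, "r") as f:
        X_prof = np.array(f["Profiling_traces/traces"])    # profiling traces (random key)
        y_prof = np.array(f["Profiling_traces/labels"])    # training labels
        X_att = np.array(f["Attack_traces/traces"])        # attack traces (fixed key)
        meta_att = np.array(f["Attack_traces/metadata"])   # plaintext/key per trace
    return X_prof, y_prof, X_att, meta_att
```

Loading everything into memory with `np.array` is fine for datasets of this size; for lower memory use, the `h5py` datasets can also be sliced lazily.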
Q: What is the evaluation metric?
Submissions are ranked by their average score (see Challenge Rule) over four attack trace sets (1 public, 3 private). Lower scores indicate better performance.
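Since submissions report GE and NTGE, here is a minimal single-run sketch of both quantities. The function names and the log-likelihood input format are illustrative assumptions, not the official scoring code; in practice GE is averaged over many shuffled attack runs, and the Challenge Rule page gives the authoritative definitions.

```python
# Sketch: guessing entropy (GE) as the rank of the correct key after each
# accumulated attack trace, and NTGE as the number of traces after which
# GE reaches 0 and stays there. Single run only; real GE averages many runs.
import numpy as np

def guessing_entropy(log_probs, correct_key):
    """log_probs: (n_traces, n_keys) per-trace log-likelihood of each key guess."""
    cum = np.cumsum(log_probs, axis=0)  # accumulate evidence trace by trace
    # rank of the correct key after each trace (0 = best guess)
    return np.array([np.argsort(row)[::-1].tolist().index(correct_key) for row in cum])

def ntge(ge):
    """Smallest trace count from which GE is 0 for all remaining traces."""
    nonzero = np.flatnonzero(ge != 0)
    if len(nonzero) == 0:
        return 1                  # rank 0 from the very first trace
    if nonzero[-1] == len(ge) - 1:
        return None               # never converged within the available traces
    return int(nonzero[-1] + 2)   # 1-based count of traces needed
```

An attack that never drives GE to 0 within the available traces has no finite NTGE, which is why `ntge` returns `None` in that case.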
Q: Are there restrictions on libraries or frameworks?
Yes. Code must use PyTorch 2.7.0 or TensorFlow 2.19.0 and all the libraries given in the requirements.txt file (see Getting Started). Submissions with incompatible versions will be disqualified.
Q: Can I submit multiple times?
Yes! Participants can make multiple submissions during the challenge period (from June 15 to August 15). Each submission is limited to one attack.
Q: How are submissions validated?
Organizers test submissions against:
The public attack trace set;
Three private attack trace sets;
Correctness of key recovery.
Invalid submissions (e.g., rule violations) are discarded.
Q: What should a submission include?
A submission should include:
A README file describing the GE and NTGE results;
The codebase.
See Submission for more detailed explanations.
Q: How is the scoreboard updated?
The scoreboard is updated continuously. Final rankings will be announced around September 1.