Submission Requirements
To keep the contest running smoothly, we need every team to submit their code and trained model checkpoint.
- Code: Package your code in its entirety and do not leave any files out. Please make sure the code is correct and runs without errors.
- Model: Submit the trained model checkpoint.
- ReadMe: Include a ReadMe file in the submission. It should describe, step by step, how we can run your code to evaluate your submission. The ReadMe should be detailed and easy to read.
If we have any questions while testing your submission, we will contact you at the email address you used to register.
Frequently Asked Questions about Submission and Evaluation
Q1: Is pre-processing time (e.g., data preprocessing, model loading) included in the latency evaluation?
A1: No. The latency measurement does not include any pre-processing. It only covers the generation process,
from noise sampling to file saving.
We will measure the time taken to generate a single paired sample 10 times, and use the average latency for the latency score calculation.
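To make the protocol concrete, here is a minimal sketch of such a measurement; `generate_sample` is a hypothetical placeholder for your own generation entry point, and the official harness may differ in detail.

```python
import time

def measure_latency(generate_sample, num_runs=10):
    """Average the wall-clock time of the generation step alone
    (noise sampling through file saving); model loading and data
    preprocessing happen before this function is called."""
    timings = []
    for _ in range(num_runs):
        start = time.perf_counter()
        generate_sample(batch_size=1)  # one paired sample per run
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)
```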
Q2: What shape should the generated data have?
A2: For FID and SSIM evaluation, the required output batch size is 500, matching the original dataset
structure.
For latency measurement, we will use a batch size of 1.
Therefore, please ensure your code can generate both batch sizes: 500 for the evaluation metrics and 1 for the latency measurement, especially if you significantly modify the output generation process.
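For illustration, here is a minimal sketch of a generation entry point that handles both batch sizes; `model`, its `latent_dim` attribute, and the paired output are hypothetical placeholders for your own interface.

```python
import torch

@torch.no_grad()
def generate(model, batch_size, device="cpu"):
    """Generate `batch_size` paired samples: 500 for the FID/SSIM
    evaluation, 1 for the latency measurement."""
    noise = torch.randn(batch_size, model.latent_dim, device=device)
    seismic, velocity = model(noise)  # paired seismic data and velocity maps
    return seismic, velocity
```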
Q3: How to evaluate the SSIM score?
A3: We pre-train an InversionNet using the original dataset. To compute the SSIM score, the generated seismic data is fed into InversionNet, and its output is compared with the corresponding generated velocity maps.
For a fair evaluation, we will not release the weights of the pre-trained InversionNet.
However, teams can assess their solutions with a similar model using the openFWI pre-trained model available here.
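A minimal sketch of this SSIM protocol follows, assuming NumPy arrays and an `inversion_net` callable that stands in for the withheld pre-trained InversionNet (e.g., one built from the openFWI release above).

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def evaluate_ssim(inversion_net, gen_seismic, gen_velocity):
    """Feed generated seismic data through InversionNet and compare its
    predicted velocity maps with the generated velocity maps."""
    pred_velocity = np.asarray(inversion_net(gen_seismic))  # shape (N, H, W)
    gen_velocity = np.asarray(gen_velocity)
    scores = [
        ssim(pred, true, data_range=true.max() - true.min())
        for pred, true in zip(pred_velocity, gen_velocity)
    ]
    return float(np.mean(scores))
```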
Q4: How to evaluate the FID score?
A4: The FID score is calculated using a (modified) InceptionV3 model. To compute the FID score, we randomly select 10,000 paired samples from the original dataset and 10,000 paired samples from the generated data.
To prevent teams from using our pre-trained InceptionV3 to directly optimize their generation model, we decided not to release its weights.
Instead, we provide preliminary submission opportunities so you can evaluate your solutions.
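For reference, the standard FID computation looks like the sketch below; since the modified InceptionV3 weights are withheld, `features_real` and `features_gen` stand for the activations it would produce on the 10,000 real and 10,000 generated samples.

```python
import numpy as np
from scipy import linalg

def compute_fid(features_real, features_gen):
    """FID = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^(1/2))."""
    mu_r, mu_g = features_real.mean(axis=0), features_gen.mean(axis=0)
    sigma_r = np.cov(features_real, rowvar=False)
    sigma_g = np.cov(features_gen, rowvar=False)
    covmean = linalg.sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):  # drop tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))
```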
Q5: Can we use additional Python packages?
A5: Regarding the installation of additional Python packages, as stated previously, it is acceptable to use a requirements.txt file to install necessary libraries.
Please ensure these packages are compatible with the Raspberry Pi environment.
Q6: How to submit the code and model?
A6: We will send an email to the registered email address with a Google Drive link to upload the code and model. Please let us know if you do not receive the email after the submission opening date.
Q7: Is a preliminary submission mandatory?
A7: No. The preliminary submission is an optional opportunity for teams to evaluate their solutions.
However, we strongly recommend making a preliminary submission so that you can learn how your solution performs and adjust it accordingly before the final submission.
At the same time, it also helps us learn how to run your code and how to evaluate your solution, leading to a smoother final evaluation.