We just released an implementation of GemNet-dT (following arxiv.org/abs/2106.08903) in the OCP repository, along with pretrained model weights. This model achieves the best results we are aware of so far across all OCP tasks (see the leaderboards). This was made possible by Johannes Klicpera, who implemented GemNet in the OCP codebase over his summer internship with us. Thank you, Johannes!
Specifically, the improvements (averaged across all splits) compared to the next-best entry on the leaderboard:
— IS2RE energy MAE (via relaxation): 0.4342 → 0.3997 (7.9% relative improvement)
— S2EF force MAE: 0.0297 → 0.0242 (18.5% relative improvement)
— IS2RS AFbT: 21.8% → 27.6% (26.6% relative improvement)
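For clarity, the relative numbers above follow the usual convention: the delta as a fraction of the previous best, with the sign of "better" depending on whether the metric is an error (MAE, lower is better) or a success rate (AFbT, higher is better). A minimal sketch reproducing them (the helper name is ours, not part of the OCP codebase):

```python
def relative_improvement(baseline: float, new: float, higher_is_better: bool = False) -> float:
    """Improvement of `new` over `baseline`, as a fraction of `baseline`."""
    delta = (new - baseline) if higher_is_better else (baseline - new)
    return delta / baseline

# IS2RE energy MAE (lower is better): 0.4342 -> 0.3997
print(f"{relative_improvement(0.4342, 0.3997):.1%}")  # 7.9%
# S2EF force MAE (lower is better): 0.0297 -> 0.0242
print(f"{relative_improvement(0.0297, 0.0242):.1%}")  # 18.5%
# IS2RS AFbT (higher is better): 21.8% -> 27.6%
print(f"{relative_improvement(21.8, 27.6, higher_is_better=True):.1%}")  # 26.6%
```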
GemNet-dT is also quite efficient. For S2EF, we can fit batch sizes of up to 16 (or 32 with AMP) on 32GB NVIDIA V100 GPUs, compared to 8 for DimeNet++ and 3 for SpinConv. Training these models for 24 hours on 16 V100s reaches a force MAE of 0.025 on the ID validation split for GemNet, compared to 0.035 for DimeNet++ and 0.047 for SpinConv.
Also included in this code release are an implementation of SpinConv (following arxiv.org/abs/2106.09575) and several other improvements. Complete details are in the release notes: Release v0.0.3 (GemNet-dT, SpinConv; new data: MD, Rattled, per-adsorbate trajectories; etc.) on the Open-Catalyst-Project/ocp GitHub repository.
Note that our team will not be entering GemNet, SpinConv, or any other model in the challenge we’re hosting at NeurIPS. We encourage everyone to refer to and/or build on any of this code for the challenge (or otherwise).