On baseline models (SchNet, DimeNet, CGCNN)

Hi OCP team, thank you for the dataset and codebase! I had a couple of queries regarding the choice of baseline models across OC20 tasks:

  1. Did you consider any simpler baselines besides graph neural networks? E.g., linear models specialized for such tasks, such as Moment Tensor Potentials, or basic machine-learning baselines such as SVMs on the input features?

  2. Within GNNs, I noticed that you chose architectures specialized for OC20-like tasks. Could you comment on, or did you also benchmark, the performance of generic GNNs such as basic Graph Convolutional, Graph Attention, and Graph Isomorphism Networks? (I recall that the ocp-models repository used to include classes for these architectures at some point.)

Best,
Chaitanya

Hey, thanks for these questions, @chaitjo!

1 — I believe we haven’t tried the reference you specifically pointed to (or other descriptor-based models, for that matter). We tried a few simpler ML baselines with initial versions of the dataset, but graph neural networks intuitively made more sense and performed significantly better. Our current best-performing model (DimeNet++) only uses atomic numbers and positions as input features; CGCNN uses a richer set of atomic features.
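To make "only atomic numbers and positions" concrete for other readers: models like this typically build a radius-cutoff graph from the positions alone. A minimal NumPy sketch (not the actual ocp-models data pipeline; function name and cutoff are illustrative):

```python
import numpy as np

def radius_graph(pos, cutoff=6.0):
    """Edge list for all atom pairs within a distance cutoff (sketch)."""
    diff = pos[:, None, :] - pos[None, :, :]          # (N, N, 3) pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)              # (N, N) pairwise distances
    src, dst = np.nonzero((dist < cutoff) & (dist > 0))  # exclude self-loops
    return np.stack([src, dst]), dist[src, dst]

pos = np.array([[0.0, 0.0, 0.0],
                [1.5, 0.0, 0.0],
                [10.0, 0.0, 0.0]])
edges, lengths = radius_graph(pos)
# Atoms 0 and 1 are within 6 Å of each other; atom 2 is isolated at this cutoff.
```

The atomic numbers then only enter as per-node embeddings; everything geometric comes from the edge distances (and, for DimeNet++, angles).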

2 — We’ve tried variants of graph transformers / graph attention networks in the past, but didn’t see better results than the CGCNN baseline (which in turn is worse than SchNet and DimeNet++). The continuous edge filters in SchNet / DimeNet++ seem to help a lot.
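For anyone unfamiliar with the continuous edge filters mentioned above: instead of binning distances, SchNet-style models expand each interatomic distance in a smooth radial basis, so filter weights vary continuously with distance. A minimal NumPy sketch (parameter values are illustrative, not the actual ocp-models implementation):

```python
import numpy as np

def gaussian_rbf(distances, cutoff=6.0, n_basis=50, gamma=10.0):
    """Expand distances in Gaussian radial basis functions,
    as used by continuous-filter convolutions (sketch)."""
    centers = np.linspace(0.0, cutoff, n_basis)      # (n_basis,) RBF centers
    d = np.asarray(distances)[..., None]             # (..., 1)
    return np.exp(-gamma * (d - centers) ** 2)       # (..., n_basis)

# Each scalar distance becomes a smooth feature vector that a small MLP
# can map to per-edge filter weights.
features = gaussian_rbf([1.2, 2.5, 4.8])
```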

Hi @abhshkdz, thanks for the prompt response.

Does DimeNet++ perform best across all 3 OC20 tasks? Does it significantly outperform ForceNet? (I found it very interesting that ForceNet, unlike SchNet and DimeNet, has no physical/geometric constraints baked in, yet outperformed them significantly on S2EF.)

Thank you for the pointer to the graph transformer and development branch! The model zoo looks more diverse than the main branch with the addition of GPs and 3D convolutions – it may be useful to highlight this to future users.

P.S. Looking forward to analyzing the full leaderboard once it's out!

Yes, our current best-performing model across all 3 tasks is DimeNet++; we’ll update the dataset paper with DimeNet++ results shortly.

ForceNet and DimeNet++ have similar performance on S2EF, but ForceNet is significantly faster since it doesn’t predict forces via gradients.
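To illustrate the speed difference for other readers: energy-conserving models obtain forces as the negative gradient of the predicted energy, which requires an extra backward pass, while ForceNet-style models predict forces with a direct output head. A toy PyTorch sketch (the energy function and force head are stand-ins, not the actual OCP models):

```python
import torch

# Toy differentiable "energy model" over atomic positions (N, 3).
def energy(pos):
    return (pos ** 2).sum()  # stand-in for a learned energy network

pos = torch.randn(4, 3, requires_grad=True)

# SchNet/DimeNet++-style: forces as the negative gradient of the energy.
# Energy-conserving by construction, but costs a backward pass.
E = energy(pos)
forces_grad = -torch.autograd.grad(E, pos)[0]

# ForceNet-style: a separate head predicts forces directly from features.
# Faster, but forces are no longer tied to an energy gradient.
force_head = torch.nn.Linear(3, 3)
forces_direct = force_head(pos.detach())
```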

Good idea; we’ll add a mention somewhere. The dev and many other branches are mainly for one-off experiments, and not all of that code is actively supported on the master branch. But yeah, we're hoping to broaden the supported functionality over time.

The leaderboard should be up soon! We’re at the final testing / debugging stage before we publish it.

> ForceNet and DimeNet++ have similar performance on S2EF, but ForceNet is significantly faster since it doesn’t predict forces via gradients.

I see. And do you have any comments on training ForceNet for energy prediction (even though that’s not what it was designed for)?

That’s doable; we haven’t tried it yet. As when we train for forces, we’d probably need rotation augmentation to make the energy predictions robust to rotations, given that ForceNet isn’t rotation-invariant by design.
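The rotation augmentation mentioned above can be sketched as follows: sample a uniform random rotation and apply it to both positions and force targets. A minimal NumPy version (illustrative only, not the ocp-models augmentation code):

```python
import numpy as np

def random_rotation(rng):
    """Sample a uniformly random 3x3 rotation matrix via QR decomposition."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))          # fix column signs for a unique Q
    if np.linalg.det(q) < 0:          # ensure a proper rotation (det = +1)
        q[:, 0] *= -1
    return q

rng = np.random.default_rng(0)
pos = rng.normal(size=(5, 3))         # atomic positions
forces = rng.normal(size=(5, 3))      # target forces

R = random_rotation(rng)
# Rotate inputs and targets together so the supervised pair stays consistent.
pos_aug, forces_aug = pos @ R.T, forces @ R.T
```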