I attended the American Physical Society’s March Meeting in Los Angeles and observed significant growth in machine learning sessions. One session drew such a large crowd that, with the room held to its fire-code capacity, a 30-minute line formed to get in.

Key Observations

The talks fell into several distinct categories:

  1. Automation tasks - Using AI to analyze microscopy images and detect grain structures, automating work traditionally assigned to junior researchers.

  2. Sampling optimization - Exploring whether researchers can extract equivalent information from fewer samples, reducing beamtime and material consumption. This is a modern take on design of experiments (DOE); a minimal sketch of the idea follows this list.

  3. Theory-focused approaches - Training neural networks on computational results (such as DFT calculations) to create fast surrogate models whose reduced accuracy is still good enough for screening; see the second sketch after the list.

  4. Inverse problems - Comparing experimental data against theoretical models to classify which theory best fits the observations; see the third sketch after the list.

  5. Materials prediction - Some work showed clear overfitting, though other efforts showed tentative promise.

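A minimal sketch of the sampling-optimization idea in item 2, framed as active learning: a surrogate model (here a Gaussian process from scikit-learn, my choice rather than any speaker's) is fit to the measurements collected so far, and the next measurement is placed where the model is most uncertain. The toy experiment() function and all settings are placeholders.

```python
# Sampling optimization framed as active learning: fit a Gaussian process to
# the measurements taken so far, then place the next measurement where the
# model's predictive uncertainty is highest. experiment() is a toy stand-in
# for an expensive measurement (e.g. one beamtime shot per call).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def experiment(x):
    """Toy stand-in for an expensive measurement at instrument setting x."""
    return np.sin(3.0 * x) + 0.5 * x

grid = np.linspace(0.0, 3.0, 300).reshape(-1, 1)  # candidate settings to measure at
measured_x = [0.0, 1.5, 3.0]                      # a few seed measurements
measured_y = [experiment(x) for x in measured_x]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)

# Take five more measurements, each where the model is least certain,
# instead of sweeping the entire grid.
for _ in range(5):
    gp.fit(np.array(measured_x).reshape(-1, 1), measured_y)
    _, std = gp.predict(grid, return_std=True)
    next_x = float(grid[np.argmax(std), 0])
    measured_x.append(next_x)
    measured_y.append(experiment(next_x))

print("settings measured:", [round(x, 2) for x in measured_x])
```
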
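A second sketch, for the theory-focused surrogate idea in item 3: fit a small neural network to a table of precomputed results and use it only for coarse screening, sending the shortlist back to the full calculation. The data here is synthetic and the descriptor and target names are placeholders; this illustrates the pattern, not any presented method.

```python
# Theory-focused surrogate: train a small neural network on precomputed
# results (standing in for DFT outputs), then use it as a cheap filter to
# decide which new candidates deserve the full calculation.
# All data below is synthetic; descriptors and targets are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in for a table of completed calculations: one descriptor vector and
# one computed quantity (e.g. formation energy) per material.
X = rng.uniform(size=(500, 8))
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)
print(f"held-out R^2: {surrogate.score(X_test, y_test):.3f}")  # screening-grade accuracy

# Screen a large candidate pool cheaply; only the shortlist goes back to DFT.
candidates = rng.uniform(size=(10_000, 8))
shortlist = np.argsort(surrogate.predict(candidates))[:10]  # e.g. lowest predicted energy
print("candidates to verify with the full calculation:", shortlist)
```
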
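A third sketch, for the inverse-problem framing in item 4: score one observed curve against a few candidate models and report which fits best. The candidate "theories" are toy functional forms and the noise level is assumed known; a real analysis would use proper model comparison, but the structure is the same.

```python
# Inverse-problem framing: given one observed curve, score a handful of
# candidate models and report which fits best. The "theories" are toy
# functional forms; the noise level sigma is assumed known here.
import numpy as np

x = np.linspace(0.0, 10.0, 200)

theories = {
    "exponential decay":  lambda x: np.exp(-0.5 * x),
    "power law":          lambda x: (1.0 + x) ** -1.2,
    "damped oscillation": lambda x: np.exp(-0.3 * x) * np.cos(2.0 * x) ** 2,
}

# Synthetic "experimental" data: one of the models plus measurement noise.
rng = np.random.default_rng(1)
sigma = 0.02
observed = theories["power law"](x) + sigma * rng.normal(size=x.size)

# Classify by goodness of fit (reduced chi-squared); a Bayesian model
# comparison would slot into the same structure.
scores = {name: np.mean(((observed - model(x)) / sigma) ** 2)
          for name, model in theories.items()}
best = min(scores, key=scores.get)
print({name: round(s, 1) for name, s in scores.items()})
print("best-fitting theory:", best)
```
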
Critical Insight

A crucial takeaway emerged: physics researchers typically lack big data. As one speaker noted, there is rarely enough data for algorithms to learn representations on their own; instead, researchers must construct representations from established physics models, as sketched below.
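
Here is a minimal sketch of what "constructing the representation" can look like in practice, assuming a materials-style problem: instead of hoping a model learns features from raw inputs, the researcher hand-builds a small descriptor from known elemental properties. The property table below holds rough illustrative values only.

```python
# With small datasets, the representation is built from known physics rather
# than learned from raw inputs. Here a composition is mapped to a fixed-length
# descriptor using elemental properties. The property values are rough
# illustrative numbers, not a vetted reference table.
import numpy as np

# (Pauling electronegativity, approximate atomic radius in pm)
ELEMENTS = {
    "Fe": (1.83, 126),
    "Ni": (1.91, 124),
    "Al": (1.61, 143),
    "O":  (3.44, 66),
}

def physics_features(composition):
    """Turn a composition like {"Fe": 2, "O": 3} into four numbers:
    the composition-weighted mean and spread of each elemental property."""
    symbols, counts = zip(*composition.items())
    w = np.asarray(counts, dtype=float)
    w /= w.sum()
    props = np.array([ELEMENTS[s] for s in symbols])  # shape (n_elements, 2)
    mean = w @ props
    spread = np.sqrt(w @ (props - mean) ** 2)
    return np.concatenate([mean, spread])

print(physics_features({"Fe": 2, "O": 3}))  # a descriptor a small model can digest
```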

Machine learning in physics is still in its early stages, and it remains an open question whether these tools genuinely advance understanding.