Machine Learning at the APS March Meeting

This year, the March Meeting of the American Physical Society was in Los Angeles. There were far more machine learning sessions than in the past. In fact, one drew such interest that, due to the fire code, there was a 30-minute line to get into the room! So, what's the status of things?

There were a number of talks that were essentially using AI to automate tasks. For example, looking at microscopy images and automating the detection of grains. Basically, doing tasks that we would now give to an undergrad, a graduate student (or a poor postdoc :-p). There were also some talks on better sampling methods: can I learn the same amount of information from fewer samples? Use less beamtime, less material? I would classify these as essentially a modern take on DOE (Design of Experiments).

There was another track that was more theory focused. These talks asked the question: "Can I look at, say, a collection of force fields or functionals and get a more 'efficient' version?" For example, if I run a number of DFT calculations, can I train a neural net (NN) on those results and then use the NN to do calculations faster? The answer seems to be... maybe, with lower accuracy, but good enough for screening? There were also some variations on the inverse problem: if I'm a theorist, can I simulate a few models, train the NN on those, and then feed it real data and see if it classifies things as fitting theory 1 or theory 2? There were also some talks that looked at explorations of a model space: given this model, can I find that there is a vortex state? There were some geared more at using AI to predict materials with desired properties. In some cases, it was clear overfitting. But in others, maybe some promise...
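To make the surrogate-model idea concrete, here is a minimal sketch (my own illustration, not code from any of the talks) of the workflow: compute energies for a set of structures with an expensive method, train a small neural net on simple descriptors of those structures, and then use the net to screen new candidates cheaply. The descriptors, "DFT" energies, and network size below are all made up for the example.

```python
# Sketch: train a small neural-net surrogate on (descriptor, energy) pairs
# that would normally come from expensive DFT runs, then use it for fast
# screening. All data here is synthetic, purely for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each structure is summarized by 5 hand-built descriptors
# (e.g., composition fractions, average bond length, cell volume).
X = rng.uniform(0.0, 1.0, size=(500, 5))

# Stand-in for the expensive calculation: energy as a smooth function
# of the descriptors plus a little noise.
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2] + 0.05 * rng.normal(size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)

# The surrogate is less accurate than the original method, but it is cheap
# enough to sweep many candidate structures and keep the promising ones.
print("test R^2:", surrogate.score(X_test, y_test))
candidates = rng.uniform(0.0, 1.0, size=(10000, 5))
predicted_energy = surrogate.predict(candidates)
shortlist = candidates[np.argsort(predicted_energy)[:10]]
```

The same pattern, swapping the regressor for a classifier trained on simulated outputs of theory 1 and theory 2, is essentially the inverse-problem variant mentioned above.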

One key takeaway is that for the most part, we are not dealing with big data. As one speaker pointed out, we don't have enough data to let our machine learning algorithms learn representations—we need to construct them based upon the rich models that we have developed to describe the world. There were a number of good talks on choosing representations for crystal structures.
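As one concrete example of a hand-built representation (my illustration, not necessarily one shown at the meeting), here is a sketch of the Coulomb-matrix descriptor, which is commonly used for molecules and atomic configurations; the sorted eigenvalues give a fixed-length, permutation-invariant feature vector.

```python
# Sketch: a Coulomb-matrix style descriptor, one common hand-built
# representation for atomic configurations. The water-like geometry
# below is just a toy example.
import numpy as np

def coulomb_matrix_eigenvalues(charges, positions):
    """Sorted eigenvalues of the Coulomb matrix: a permutation-invariant,
    fixed-length descriptor of an atomic configuration."""
    charges = np.asarray(charges, dtype=float)
    positions = np.asarray(positions, dtype=float)
    n = len(charges)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                M[i, j] = 0.5 * charges[i] ** 2.4
            else:
                M[i, j] = charges[i] * charges[j] / np.linalg.norm(positions[i] - positions[j])
    return np.sort(np.linalg.eigvalsh(M))[::-1]

# Toy water-like geometry (angstroms): O at the origin, two H atoms.
Z = [8, 1, 1]
R = [[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]]
print(coulomb_matrix_eigenvalues(Z, R))
```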

In the hallway, I stumbled into a conversation where a theorist was complaining that he was disappointed: that we do physics to understand something about how the world works, with the implication that machine learning was not helping there. I would say that it's still early days.
