Workshop on ML for materials science

Some notes to accompany the ML workshop.

Motivation

To design new materials, we need to know their properties. There are two main routes to get the properties of a material:

  1. Perform an experiment to measure them
  2. Perform a simulation to “measure” them in silico

In many cases, performing an experiment is time-consuming and, hence, expensive. High-fidelity simulations can also be very costly.

Fidelity expresses the exactness with which a surrogate represents the truth. In the context of ML you might also see the term multi-fidelity, which means that the approach uses multiple approximations with different levels of fidelity, e.g., density-functional theory and coupled-cluster theory.

Therefore, there is a need for methods that can help us to predict the properties of materials with high fidelity and low cost. In this lecture, we will see that supervised machine learning (ML) is a powerful tool to achieve this goal.

Interestingly, this tool can be used in many different ways.

Where does ML fit in the design process?

Machine learning can be used in multiple ways to make high-fidelity predictions of materials less expensive.

Note that reducing the cost has been a challenge for chemists and material scientists for a long time. Dirac famously said “The fundamental laws necessary for the mathematical treatment of a large part of physics and the whole of chemistry are thus completely known, and the difficulty lies only in the fact that application of these laws leads to equations that are too complex to be solved. […] approximate practical methods of applying quantum mechanics should be developed, which can lead to an explanation of the main features of complex atomic systems without too much computation”

Machine learning (green boxes) can be used at multiple places in the material design process.

  1. Replace expensive evaluations of the potential energy surface \(U(\mathbf{X}, \{\mathbf{Z}\})\): Quantum chemistry as a field is concerned with the prediction of the potential energy surface \(U(\mathbf{X}, \{\mathbf{Z}\})\) of a system of atoms of types \(\mathbf{Z}\) at positions \(\mathbf{X}\). Quantum chemists have developed different approximations to this problem. However, since these approximations are ultimately functions that map atomic positions (and atom types, and in some cases electron densities/coordinates) to energies, we can learn such functions with ML.

    Note that once we have done that, we generally still need to perform simulations to extract the properties of interest (e.g. as ensemble averages).

    There are many good review articles about this. For example, see this one by Unke et al. as well as the ones by Deringer et al. and Behler in the same issue of Chemical Reviews.

  2. Directly predict the properties of interest: Instead of computing the properties of interest using molecular simulations, we can build models that learn the \(f(\mathrm{structure}) \to \mathrm{property}\) mapping directly. The basis for this mapping might be experimental data or high-fidelity computational data.

    There are also many review articles about this approach; I wrote one myself, focusing on porous materials.

Note that in the context of using ML for molecular simulations, it can also be used to address sampling problems. We will not cover this in detail in this lecture. For a good introduction, see the seminal paper by Noe and a piece about it by Tuckerman.

Supervised ML workflow

The supervised ML workflow.

For the main part of this lecture, we will assume that we use models that consume so-called tabular data, i.e., data that is stored in a table (feature matrix \(\mathbf{X}\) and target/label vector/matrix \(\mathbf{Y}\)), where each row corresponds to a material and each of the \(p\) columns corresponds to a so-called feature. We will later see that this is not the only way to use ML for materials science, but it is the most common one. We will also explore in more detail how we obtain the features.

We will use some data \(\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^N\) to train a model \(f(\mathbf{x}) \to y\) that can predict the target \(y\) for a new structure described with the feature vector \(\mathbf{x}^*\).
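
As a minimal sketch of this abstract workflow (using random, hypothetical data in place of real features and labels; the scikit-learn API mirrors the notation above):

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# hypothetical tabular data: N = 100 materials, p = 5 features, one target each
X = np.random.rand(100, 5)  # feature matrix (N x p)
y = np.random.rand(100)     # target vector

# hold out part of the data to judge performance on unseen materials (more on this later)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor().fit(X_train, y_train)  # learn f(x) -> y
y_pred = model.predict(X_test)                         # predictions for new feature vectors x*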

Feeding structures into models

Incorporating symmetries/invariances/equivariances

Learning a very simple force field

To understand what it takes to feed structures into ML models, let us try to build a very simple force field. To make things simple and fast, we will just attempt to predict the energies of different conformers of the same molecule.

We will create some data using RDKit and then use scikit-learn to train a model.

Generating data
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pymatviz.parity import density_scatter_with_hist
from rdkit import Chem
from rdkit.Chem import AllChem, PyMol
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
import matplotx
plt.style.use(['science', 'nature', matplotx.styles.dufte])

def gen_conformers(mol, numConfs=10_000, maxAttempts=1000,
    pruneRmsThresh=0.2, useExpTorsionAnglePrefs=True,
    useBasicKnowledge=True, enforceChirality=True):
    """Use RDKit to generate conformers for a molecule."""
    ids = AllChem.EmbedMultipleConfs(
        mol, numConfs=numConfs, maxAttempts=maxAttempts, pruneRmsThresh=pruneRmsThresh,
        useExpTorsionAnglePrefs=useExpTorsionAnglePrefs, useBasicKnowledge=useBasicKnowledge,
        enforceChirality=enforceChirality, numThreads=0,
    )
    return list(ids)

def calc_energy(mol, conformer_id, iterations=0):
    """Calculate the energy of a conformer using the Merck Molecular Force Field (MMFF94)."""
    ff = AllChem.MMFFGetMoleculeForceField(
        mol, AllChem.MMFFGetMoleculeProperties(mol), confId=conformer_id
    )
    ff.Initialize()
    results = {}
    if iterations > 0:
        # Minimize returns 0 if the optimization converged
        results["converged"] = ff.Minimize(maxIts=iterations)
    results["energy_abs"] = ff.CalcEnergy()
    return results

# create a molecule
mol = Chem.AddHs(Chem.MolFromSmiles('CC(CCC)CC(C)(CCCC)O'))

# visualize some conformers using PyMol
conformer_ids = gen_conformers(mol)
v = PyMol.MolViewer()
v.DeleteAll()
for cid in conformer_ids[:50]:
    v.ShowMol(mol, confId=cid, name=f'Conf-{cid}', showOnly=False)
v.server.do('set grid_mode, on')
v.server.do('ray')
v.GetPNG()

For those conformers, we can now retrieve the positions and energies and save them in a pandas dataframe.

# make column names
coordinate_names = sum([[f'x_{n}',f'y_{n}', f'z_{n}'] for n in range(mol.GetNumAtoms())], []) 

# make a dataframe
data = []
for conformer_id in conformer_ids:
    energy = calc_energy(mol, conformer_id)['energy_abs']
    positions = mol.GetConformer(conformer_id).GetPositions().flatten()
    position_dict = dict(zip(coordinate_names, positions))
    position_dict['energy'] = energy
    data.append(position_dict)
data = pd.DataFrame(data).sample(len(data))
data
(Output truncated: a dataframe with 3172 rows × 118 columns, i.e., the flattened x/y/z coordinates of the 39 atoms (columns x_0, y_0, z_0, …, z_38) plus the MMFF energy of each conformer.)

Given this data, we can build a model. We will use a gradient boosting regressor from scikit-learn. We will also split the data into a training and a test set. In later sections, we will see why this is important. But for now, let us just appreciate that a test set—conformers we did not train on—will give us a measure of how well our model will perform on new, unseen, conformers.

positions = data[coordinate_names] # X
energies = data['energy'] # y

# split into training and test set
train_points = 3000
train_positions = positions[:train_points]
test_positions = positions[train_points:]
train_energies = energies[:train_points]
test_energies = energies[train_points:]

# train a model
from sklearn.ensemble import HistGradientBoostingRegressor
model = HistGradientBoostingRegressor()
model.fit(train_positions, train_energies)

Once we have trained a model, we can use it to predict the energies of new conformers. Let’s first see how well it does on the data it was trained on.

train_predictions = model.predict(train_positions)


density_scatter_with_hist(train_energies.values, train_predictions, xlabel='True energy', ylabel='Predicted energy')

This looks pretty good. But how well does it do on new conformers? Let’s see.

test_predictions = model.predict(test_positions)

density_scatter_with_hist(test_energies.values, test_predictions, xlabel='True energy', ylabel='Predicted energy')

From physics we know that (without an external field) the energy of a molecule does not depend on where in space it is. That is, if we translate a molecule along \([1, 1, 1]\), the energy should not change.

# translate the molecule along [1, 1, 1]
translated_positions = train_positions + 1
translated_predictions = model.predict(translated_positions)
density_scatter_with_hist(train_energies.values, translated_predictions)

This is not what we expect. Our model shows completely unphysical behavior and predicts different energies for the same conformer at different positions in space.

To fix this, and related problems, we need to use a more elaborate approach to building a model.

Making predictions invariant/equivariant to transformations

Invariance and equivariance are terms that have become very relevant in ML. It is always important to mention with respect to which operation something is invariant or equivariant; if people don’t specify this, they usually refer to the symmetry operations of the Euclidean group, which comprises all translations, rotations, and reflections. Invariant means that the property of interest does not change under those operations; equivariant means that it transforms in the same way as the input. Formally, a property \(f\) is invariant under a transformation \(T\) if \(f(T\mathbf{x}) = f(\mathbf{x})\), and equivariant if \(f(T\mathbf{x}) = T f(\mathbf{x})\). The energy, for example, is invariant, whereas the forces are equivariant (if we rotate the molecule, the force vectors rotate with it).

What are symmetries we would like to respect?

Before we can talk about how to build a model that respects symmetries, we need to know what symmetries we would like to respect.

In the case of molecules, we would like to respect the following symmetries:

  • translation: that is, if we move a molecule along a vector, the energy should not change (see above)
  • rotation: that is, if we rotate a molecule, the energy should not change
  • permutation of atoms: that is, the order in which we feed the atoms into the model does not matter

For crystals, we additionally need to respect periodicity. That is, for intensive properties, there should be no difference between using a unit cell or a super cell of that unit cell as input for a model.
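
As a minimal illustration of what this means (numpy only, with a toy composition-based descriptor; the CsCl-like example is hypothetical): a descriptor obtained by averaging a per-atom quantity is identical for a unit cell and a 2×2×2 supercell, whereas a plain sum over atoms is not.

import numpy as np

# toy unit cell: atomic numbers of a CsCl-like structure (hypothetical example)
atomic_numbers = np.array([55, 17])

# a 2x2x2 supercell contains 8 copies of every atom in the unit cell
supercell_numbers = np.tile(atomic_numbers, 8)

print(atomic_numbers.mean(), supercell_numbers.mean())  # 36.0 36.0 -> intensive average is unchanged
print(atomic_numbers.sum(), supercell_numbers.sum())    # 72 576   -> extensive sum changes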

Broadly speaking, there are three different ways to build models that respect symmetries.

  1. Data augmentation: This is the most straightforward approach. We generate additional data points by applying the symmetry operations to the existing ones, e.g., by adding rotated copies of the existing conformers (each with the unchanged energy label) to the training set. This approach is very simple to implement, but it can be expensive, since the model only learns the symmetry approximately and needs to see many transformed copies to do so. It is often used in computer vision pipelines, in which you might want to detect a cat in an image independent of its orientation; there, the training set is augmented with rotated versions of the existing images.
  2. Features that are invariant/equivariant: This approach is more sophisticated. We build features that are invariant/equivariant to the symmetries we want to respect, for example, features that are invariant to rotation. In the case of force fields, such features are bond lengths and angles. This approach is widely used in ML for chemistry and materials science (see the sketch after this list).
  3. Models that are invariant/equivariant: Alternatively, one can build special models that consume point clouds as inputs and are equivariant to the symmetries we want to respect. We will not discuss this in detail, but you can find starting points in this perspective by Tess Smidt.
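
To make the second strategy concrete with the conformer example from above, here is a minimal sketch (assuming train_positions, train_energies, and HistGradientBoostingRegressor from the earlier cells are still in scope; sorted pairwise distances are used as a simple stand-in for more elaborate descriptors): because the features are invariant to translation, rotation, reflection, and atom permutation, the predictions of a model trained on them no longer change when we translate the molecule.

import numpy as np
from scipy.spatial.distance import pdist

def invariant_features(flat_positions):
    """Sorted pairwise distances: invariant to translation, rotation, reflection, and permutation."""
    coords = np.asarray(flat_positions).reshape(-1, 3)
    return np.sort(pdist(coords))

train_features = np.stack([invariant_features(row) for row in train_positions.values])
translated_features = np.stack([invariant_features(row) for row in (train_positions + 1).values])

# the features are identical for the original and the translated conformers ...
print(np.allclose(train_features, translated_features))  # True

# ... and so are the predictions of a model trained on them
model_inv = HistGradientBoostingRegressor().fit(train_features, train_energies)
print(np.allclose(model_inv.predict(train_features), model_inv.predict(translated_features)))  # True
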
Invariant/equivariant features
Symmetry functions
Fingerprints
Correlation functions
Cheaper computations

Training a model

How to know if a model is good?

Before we can proceed to building models, we need to establish a way to measure how good a model is.

Interestingly, this is not as trivial as it may sound. To realize this, it is useful to formally write down what we mean by a good model.

Empirical risk minimization

Let’s assume we have some input space \(\mathcal{X}\) and some output space \(\mathcal{Y}\). We can think of \(\mathcal{X}\) as the space of all possible inputs and \(\mathcal{Y}\) as the space of all possible outputs. For example, \(\mathcal{X}\) could be the space of all possible molecules and \(\mathcal{Y}\) could be the space of all possible energies. We want to learn a function \(f: \mathcal{X} \rightarrow \mathcal{Y}\) that maps inputs to outputs. We can think of \(f\) as a model that we want to train.

To build such a model, we have samples from the joint distribution \(p(x, y)\), where \(x\) is an input and \(y\) is the corresponding output. We can think of this as a set of data points \(\{(x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)\}\).

If we now define a loss function \(L\) we can compute the risk, which is the expected value of the loss function:

\[ R(f) = \mathbf{E}[L(f(x), y)] = \int L(f(x), y)\, \mathrm{d}p(x, y). \]

Our goal is to find the model \(f^*\) in the hypothesis space \(\mathcal{F}\) that minimizes the risk:

\[ f^{*} = \underset{f \in \mathcal{F}}{\operatorname{arg\,min}}\, R(f). \]

In practice, we cannot compute this risk: we do not have access to the joint distribution \(p(x, y)\), but only to a finite set of samples \(\{(x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)\}\). We therefore minimize the empirical risk, i.e., the average loss over the training samples,

\[ R_\mathrm{emp}(f) = \frac{1}{n} \sum_{i=1}^{n} L(f(x_i), y_i), \]

which is what is meant by empirical risk minimization.

Linear regression

import jax
import jax.numpy as jnp
import numpy as np

# x and y are assumed to hold the feature matrix and the targets
# (e.g., the conformer coordinates and energies from above)
def linear_regression(x, w, b):
    return jnp.dot(x, w) + b

def loss(w, b):
    prediction = linear_regression(x, w, b)
    return jnp.mean((prediction - y) ** 2)  # empirical risk with a squared-error loss

def init_params(num_feat):
    return np.random.normal(size=(num_feat,)), 0.0

loss_grad = jax.grad(loss, argnums=(0, 1))  # gradients of the loss w.r.t. w and b
learning_rate = 1e-6
num_epochs = 1000
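
A minimal sketch of how these pieces could be combined into a gradient-descent training loop (assuming x and y have been defined as above; the learning rate will likely need tuning for real data):

w, b = init_params(x.shape[1])
for epoch in range(num_epochs):
    grad_w, grad_b = loss_grad(w, b)
    # plain gradient descent on the mean squared error
    w = w - learning_rate * grad_w
    b = b - learning_rate * grad_b
    if epoch % 100 == 0:
        print(epoch, loss(w, b))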

Bias-variance trade-off

Hyperparameters

Kernel trick

Kernel-based machine learning can be thought of as expressing the property of interest via an expansion in a basis spanned by the structures in the training set. Figure taken from M. A. Caro, Arkhimedes 2018, 3, 21.
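
A minimal sketch of this idea as a bare-bones kernel ridge regression (the Gaussian kernel and the values of gamma and lam are illustrative assumptions): the prediction for a new structure is a weighted sum of kernel functions centered on the training structures, with coefficients alpha determined from the training data.

import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    """Gaussian kernel matrix between the rows of A and B."""
    sq_dists = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq_dists)

def fit_kernel_ridge(X_train, y_train, lam=1e-3, gamma=0.1):
    """Solve (K + lam * I) alpha = y for the expansion coefficients alpha."""
    K = rbf_kernel(X_train, X_train, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)

def predict_kernel_ridge(X_new, X_train, alpha, gamma=0.1):
    """Predict as an expansion in kernels centered on the training structures."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# usage with random placeholder data
X_train, y_train = np.random.rand(50, 5), np.random.rand(50)
alpha = fit_kernel_ridge(X_train, y_train)
print(predict_kernel_ridge(np.random.rand(3, 5), X_train, alpha))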

Feature importance

Permutation feature importance

Feature selection

Curse of dimensionality

For understanding the curse of dimensionality, it is useful to consider a very simple ML model, the \(k\)-nearest neighbors model. In this model, we have a set of training points \(\{(x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)\}\), where \(x_i\) is a vector of features and \(y_i\) is the corresponding label. To make a prediction, we compute the distance between the input and all training points and return the mode (or, for regression, the mean) of the labels of the \(k\) closest training points.
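
A minimal sketch of such a \(k\)-nearest-neighbor classifier (a brute-force, illustrative implementation; the function and variable names are hypothetical):

from collections import Counter
import numpy as np

def knn_predict(x_new, X_train, y_train, k=5):
    """Return the most common label among the k training points closest (Euclidean distance) to x_new."""
    distances = np.linalg.norm(X_train - x_new, axis=1)
    nearest = np.argsort(distances)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

# usage with random placeholder data
X_train = np.random.rand(100, 3)
y_train = np.random.randint(0, 2, size=100)
print(knn_predict(np.random.rand(3), X_train, y_train, k=5))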

Clearly, in this algorithm it is important to be able to find meaningful nearest neighbors. In general, this is important in many algorithms, for instance also in kernel-based learning.

Let’s now ask ourselves how large a region of the space we need to search to find the nearest neighbors.

For this, let’s start by considering a unit cube \([0,1]^d\) and \(n\) data points \(x_i\) sampled uniformly from this cube.

On average, a hypercube of edge length \(l\) contains \(n l^d\) of these points. Hence, the smallest hypercube that we expect to contain \(k\) out of the \(n\) points has edge length

\[ l^d = \frac{k}{n} \quad \Rightarrow \quad l = \left(\frac{k}{n}\right)^{1/d} \]

If we plot this for different values of \(d\) we get the following plot:

import matplotlib.pyplot as plt
import numpy as np 

def length(d, k=5, n=10_000):
    return (k/n)**(1/d)

d = np.arange(1, 1000)

plt.plot(d, length(d))
plt.xlabel('number of dimensions')
plt.ylabel('length of hypercube that contains k neighbors')

Clearly, for large \(d\) the edge length approaches 1: to capture even a handful of neighbors, the hypercube has to span almost the entire space. In other words, all points become almost equally far apart, and comparing distances no longer carries much information.

We can also check this by performing a simulation: Generating random \(d\) dimensional points and computing the distance between them. We can then plot the distribution of distances.

from scipy.spatial import distance_matrix
dimensions = [2, 5, 10, 100, 10_000]
num_points = 1000

fig, axes = plt.subplots(1, len(dimensions), sharey='all')

def get_distances(d, num_points):
    points = np.random.uniform(size=(num_points, d))
    distances = distance_matrix(points, points)
    return np.array(distances).flatten()

for d, ax in zip(dimensions, axes):
    distances = get_distances(d, num_points)
    ax.hist(distances, bins=20)
    ax.set_title(f'd={d} \n cv={distances.std()/distances.mean():.2f}')

Clearly, for large \(d\) the distances are almost the same (the histograms are much more peaked). We can also see this in terms of the coefficient of variation (cv), which is the standard deviation divided by the mean. For large \(d\) the cv is very small, which means that the distances are very similar.

Feature selection approaches

Feature projection

Principal component analysis

t-distributed stochastic neighbor embedding

Feature learning