Physics is unique among the scientific disciplines in that it spans all possible length scales of study.  Physical phenomena are studied everywhere from subatomic particles to entire galaxies.  In my particular field of interest, condensed matter physics, we are concerned with phenomena that arise from the interactions of atoms and electrons.  On this length scale, we more-or-less know the laws of physics that are obeyed: the theory of quantum mechanics.

The theory of quantum mechanics on the atomic level is not particularly novel; the formalism was more-or-less finalized before the Second World War.  What has changed recently is that advances in computing have made a way of doing quantum mechanics known as “Kohn-Sham density-functional theory” (KS-DFT) computationally feasible.  KS-DFT is a qualitatively accurate method for modeling quantum mechanics whose founders received the Nobel Prize in Chemistry in 1998 for their work (website).  According to a Nature survey in 2014 (website), 8 of the top 100 most cited papers in science involve technical or formal details of KS-DFT.

It is safe to say that KS-DFT (and closely related theories such as generalized KS-DFT and many-body perturbation theory) has been wildly successful, and it is relatively cheap compared to other methods for simulating quantum mechanics.  However, it is still a computationally expensive theory.  While calculations involving a few hundred atoms are routine in the literature, the computational expense grows quickly as the number of atoms increases, and calculations involving thousands of atoms require the use of a supercomputer.  Yet many of the more interesting problems in materials science and condensed matter physics require thousands of atoms to model properly:  defects in materials, metallic glasses, adsorption of molecules onto surfaces, and so on.

Electronic Structure Theory on Massively-Parallel Supercomputers

The main focus of my research is improving the computational efficiency of KS-DFT so that larger calculations involving tens of thousands of atoms may become routine.  One of the main bottlenecks limiting large-scale KS-DFT calculations is the Kohn-Sham eigenvalue problem, a major impediment for most KS-DFT packages because its computational expense grows rapidly as the number of atoms in a calculation increases.  Accelerating or circumventing this computationally expensive step is the main goal of the ELectronic Structure Infrastructure (ELSI) (website, GitLab), a community effort to establish a common open-source interface for solutions to the Kohn-Sham eigenvalue problem.  The ELSI team consists of developers from academia and national labs in the US and Europe, representing multiple KS-DFT codes (BigDFT, DGDFT, FHI-aims, NWChem, Quantum ESPRESSO, SIESTA) and solver libraries (CheSS, ELPA, libOMM, PEXSI, SIPs).

ELSI unifies the community effort in overcoming this computational hurdle of KS-DFT by bridging the divide between developers of electronic structure solvers and KS-DFT codes.  ELSI gives KS-DFT developers access to multiple solvers via a unified interface, with matrix format conversion handled automatically by ELSI.  Solvers are treated on an equal footing within ELSI, giving solver developers a unified platform for implementation and testing across codes and physical systems.  Solvers can work cooperatively with one another within ELSI, allowing for acceleration greater than either solver can achieve individually.  Most importantly, ELSI exists as a community for KS-DFT and solver developers to interact and work together to improve the performance of solvers, with monthly web meetings to discuss progress on code development, yearly on-site connectors meetings, and planned webinars and workshops.

We are currently working on a “decision layer” for ELSI which will automatically determine the fastest solver for a particular calculation at run-time based on the attributes of the problem, e.g. matrix size, sparsity, and dimensionality.  As part of this work, we are constructing a rigorous benchmark set of materials sampling the broad class of possible KS-DFT calculations.  To our knowledge, this is the first time that such a benchmark set has been constructed, and we anticipate that its utility will extend far beyond ELSI, as it fulfills a critical need in the community for validation of code performance on a broad class of materials.  This benchmark set will be used by ELSI to profile the performance of solvers on specific supercomputing architectures for real systems, which may then be used by the decision layer to make an educated guess about which solver to use.
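The flavor of such a decision layer can be conveyed with a simple rule-based sketch.  The solver names below are real ELSI solver libraries, but the thresholds and the function itself are purely illustrative assumptions on my part, not ELSI's actual decision logic:

```python
# Illustrative sketch of a run-time solver-selection heuristic.
# The solver names (ELPA, PEXSI, SIPs) are real libraries, but the
# thresholds and this function are hypothetical placeholders -- they
# do not reproduce ELSI's actual decision layer.

def choose_solver(n_basis: int, sparsity: float, dimensionality: int) -> str:
    """Guess a reasonable eigensolver from coarse problem attributes.

    n_basis        -- dimension of the Kohn-Sham matrices
    sparsity       -- fraction of near-zero matrix elements (0.0 to 1.0)
    dimensionality -- 1, 2, or 3 (physical dimensionality of the system)
    """
    if n_basis < 20_000:
        # Small dense problems: a direct dense eigensolver is hard to beat.
        return "ELPA"
    if sparsity > 0.95 and dimensionality <= 2:
        # Low-dimensional sparse systems: a pole-expansion method scales well.
        return "PEXSI"
    if sparsity > 0.95:
        # Very sparse 3D problems: try an iterative sparse solver.
        return "SIPs"
    # No clear winner: fall back to the dense solver.
    return "ELPA"

print(choose_solver(5_000, 0.5, 3))     # small, dense problem
print(choose_solver(100_000, 0.99, 2))  # large, sparse, 2D problem
```

In practice, of course, the decision layer would replace these hand-picked thresholds with timings measured on the benchmark set for each supercomputing architecture.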

I was one of the main developers of the FHI-aims electronic structure code, where I primarily focused on optimizations for supercomputing resources and on relativistic effects (see next section).  FHI-aims is an all-electron code targeting both highly accurate and large-scale calculations.  Independent community validation efforts have placed FHI-aims among the most accurate of all KS-DFT codes (website), demonstrating its utility as a benchmark-quality code.  It is an internationally developed software project with approximately 440k lines of code1 and centers of development located in Berlin, Durham (USA), Munich, Helsinki, Hefei, and Graz; last year alone, 45 unique developers contributed to its git repository.

FHI-aims has been designed to scale efficiently on massively parallel supercomputing resources by utilizing a localized basis set combined with a decomposition scheme for key algorithms.  Thanks to these optimizations, calculations involving thousands of atoms and/or CPU cores are routinely performed for production calculations, and calculations involving tens of thousands of atoms and/or CPU cores have been successfully performed on leading supercomputing resources.  I maintained and optimized FHI-aims for use on Titan, Mira, Theta, and summitdev and supervised graduate students working on Cori, MareNostrum, Edison, Archer, and Sisu.  I was also active in maintaining the regression test suite used to verify compiled executables.

My favorite project while working on FHI-aims was the implementation of GPU acceleration to fully utilize current and next-generation supercomputers such as Titan, Piz Daint, Summit, and Sierra.  The GPU acceleration code I co-developed in FHI-aims (with Björn Lange) demonstrates speedups of 3x–4x for calculations involving structural relaxation at the KS-DFT level; more information may be found on my poster for the 255th ACS National Meeting here.  It is our hope that, by bringing down the computational cost of KS-DFT, calculations involving tens of thousands of atoms may become routine in materials science and condensed matter physics, opening up the opportunity for new classes of scientific discoveries.

1 Number of lines of code reported for git commit d9126eba0b63cdf01d84ca48dc16d1817571d979 using the cloc utility (version 1.74).  The number reported here excludes external libraries redistributed with FHI-aims; their inclusion would bring the total number of lines of code to 730k.

Prediction of Novel Photovoltaic Materials

Photovoltaic electricity generation, a.k.a. solar cell technology, is an attractive potential solution to the energy problems facing the 21st century.  Its main strength over other clean energy sources is its ubiquity: as its main source of energy is sunlight, it is portable and deployable in almost any environment, whereas other clean energy sources are tied to fixed region-specific resources, e.g. strong regional winds, nearby rivers, and geothermal vents.  Advances in photovoltaic electricity generation have the potential to promote energy independence on an international scale, as they will lessen (though not eliminate) the reliance on region-specific natural resources for energy generation.

At present, there are barriers impeding the widespread adoption of photovoltaic energy generation.  Its efficiency (i.e. how much of the energy from sunlight a solar cell can extract) is not yet sufficient for industrial-scale energy production outside of select regions.  Many of the commercially successful photovoltaic technologies rely on rare, expensive elements, e.g. selenium, tellurium, and indium, which drive up the price of photovoltaic cells and further impede industrial-scale energy production.  Most interestingly, photovoltaic technologies may not be as “clean” as they appear at first glance.  Promising photovoltaic technologies often incorporate toxic elements such as mercury and lead.  These toxic elements pose a risk for workers constructing solar cells, and it is increasingly being recognized that they may leak out of solar cells over time and poison the local environment.  Overcoming these barriers requires the creation of new photovoltaic technologies.  However, it is expensive to create new photovoltaic technologies in a laboratory setting.  Creating a trial sample of one material can take months of preparation time, and the process requires extensive training of graduate students.

In contrast, it is comparatively easy to model photovoltaic materials on a computer.  Computational modeling of photovoltaic materials is pushing forward the state of photovoltaic energy generation due to its ability to “pre-screen” potential technologies.  Computational scientists are able to predict the properties of a large number of photovoltaic materials in a systematic fashion using electronic structure codes such as FHI-aims.  The results of an initial broad study are then sifted through1, and a set of promising materials is selected based on their predicted physical properties.  These promising materials are handed off to a research group specializing in laboratory studies, which can create the material in a laboratory setting and further assess its suitability for future photovoltaic applications.  Computational modeling of materials is now viewed as a “third pillar” of science, and collaboration between experiment and computation is increasingly the standard approach for materials discovery.
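A pre-screening step of this kind amounts to filtering a pool of candidates by predicted properties.  The sketch below illustrates the idea; the material names, property values, and screening thresholds (a band gap window loosely motivated by photovoltaic absorber requirements, plus a toxicity flag) are hypothetical placeholders, not results from an actual KS-DFT study:

```python
# Minimal sketch of computational pre-screening: filter candidate
# materials by predicted properties.  All names, property values, and
# thresholds here are hypothetical placeholders for illustration only.

candidates = [
    {"name": "material_A", "band_gap_eV": 1.4, "contains_toxic": False},
    {"name": "material_B", "band_gap_eV": 0.3, "contains_toxic": False},
    {"name": "material_C", "band_gap_eV": 1.2, "contains_toxic": True},
]

def promising(material, gap_min=1.0, gap_max=1.7):
    """Keep materials with a band gap inside a photovoltaic-friendly
    window and without toxic constituents (illustrative criteria)."""
    in_window = gap_min <= material["band_gap_eV"] <= gap_max
    return in_window and not material["contains_toxic"]

# Only candidates passing every criterion move on to experimental study.
shortlist = [m["name"] for m in candidates if promising(m)]
print(shortlist)
```

Real screening pipelines apply many more criteria (stability, carrier mobility, element abundance, and so on), but the structure, a broad sweep followed by successive filters, is the same.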

When I was at Duke, our group worked in close collaboration with the experimental group of Prof. David Mitzi (website) to design efficient, cheap, non-toxic photovoltaic materials.  This project provided a unifying force for our group’s research: the photovoltaic materials that we modeled contain enough atoms that they require supercomputing resources to model accurately, and the electronic properties that we predicted to determine whether a material is suitable for subsequent experimental study require a careful understanding of a relativistic effect known as “spin-orbit coupling” (see next section).

My previous joint research with Prof. Mitzi focused on two potential families of technologies:  CZTS-derived structures and 2D organic-inorganic perovskites.  CZTS (Cu2ZnSnS4, to be precise) was previously identified as a promising candidate for photovoltaic applications; however, it is difficult to create reliably in practice due to “anti-site disordering defects”.  The goal of that project is to design materials similar to CZTS but free of anti-site disordering defects.  2D organic-inorganic perovskites (2D HOIPs) are an interesting class of materials which incorporate both 2D inorganic layers and organic molecules.  These two components of a 2D HOIP can be varied independently of one another, potentially enabling “tunable” electronics that can be tailor-made for a particular task.  Additionally, 2D HOIPs provide an opportunity for collaboration between two fields of study which do not ordinarily interact closely:  materials scientists and condensed matter physicists experienced in 2D inorganic materials, and organic chemists experienced in organic molecules.  Our work on 2D HOIPs was featured on the National Science Foundation homepage on October 8th, 2018.

1 It should be stressed that computational pre-screening should not be viewed as a way to fully automate materials discovery.  The results obtained are only approximations, which may be “basically correct” or completely wrong.  It is the task of trained computational/theoretical scientists to interpret these results and assess their plausibility, which requires extensive analysis grounded in years of training.

Relativistic Effects in Materials

One of the most important effects affecting accuracy in the computational modeling of materials is relativity.  At first glance, this may seem strange, as electrons and atoms are extremely light, and relativity is commonly associated with gravity from black holes and galaxies (decidedly heavy objects).  Relativity enters electronic structure theory not through gravity, but through another physical force: the electromagnetic attraction of atoms and electrons due to their opposite charges.  As electrons and atoms approach one another, the attractive force between the two grows, similar to how the gravitational force between objects grows as they approach one another.  Eventually, the momentum of the electron becomes large enough that it becomes a relativistic particle.  At this point, the theory of relativity is needed to describe its properties accurately.
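A standard back-of-the-envelope estimate makes this concrete: for a hydrogen-like 1s electron, the mean speed is roughly $Z\alpha c$, where $Z$ is the nuclear charge and $\alpha \approx 1/137$ is the fine-structure constant.  For a heavy element such as gold ($Z = 79$),

\[
\frac{\langle v \rangle}{c} \approx Z\alpha = \frac{79}{137} \approx 0.58,
\]

so the innermost electrons of heavy atoms move at an appreciable fraction of the speed of light, and relativistic corrections cannot be neglected.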

In theory, we already have the quantum mechanical equation necessary to model the behavior of a relativistic electron.  It is the Dirac equation, essentially the relativistic analogue of the well-known Schrödinger equation.  The Dirac equation is expensive to model computationally and has a number of technical details that complicate calculations.  For this reason, an alternative equation, known as the “scalar-relativistic Schrödinger equation with spin-orbit coupling”, has been used to capture relativistic effects while keeping computational costs down.
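For concreteness, the two equations can be written side by side in their time-independent forms for a single electron in a potential $V(\mathbf{r})$.  The Schrödinger equation reads

\[
\left( \frac{\hat{\mathbf{p}}^2}{2m} + V(\mathbf{r}) \right) \psi = E \psi,
\]

while the Dirac equation reads

\[
\left( c\,\boldsymbol{\alpha}\cdot\hat{\mathbf{p}} + \beta m c^2 + V(\mathbf{r}) \right) \psi = E \psi,
\]

where $\boldsymbol{\alpha}$ and $\beta$ are $4\times 4$ matrices and $\psi$ is now a four-component spinor rather than a scalar wavefunction.  This four-component structure is one of the technical details that makes the Dirac equation costly in practice.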

As its name implies, there are two important relativistic effects that are included in this equation, known as scalar relativity and spin-orbit coupling.  Scalar relativity has been included in FHI-aims for years and has been partially validated by previous community efforts, but no systematic validation had been made for the electronic properties critical for modeling the behavior of electrons in materials.  As part of this work, I validated the accuracy of the approximation for scalar relativity used in FHI-aims for electronic properties, which are of particular importance for photovoltaic materials (see previous section).  Based on our results, we published a curated set of benchmark data that other KS-DFT codes can use to assess the accuracy of their scalar-relativistic implementations.  This data has been made freely available via the NOMAD repository (see the Datasets section for more information).

While spin-orbit coupling has always been appreciated as an important effect in electronic structure theory, many KS-DFT codes do not include it by default in calculations, as many of the methods used to incorporate spin-orbit coupling in materials are computationally expensive, and its effects were (incorrectly) considered to be important only for materials containing heavy atoms such as Pb and Bi.  However, the advent of 2D materials and topological insulators has created a renewed interest in spin-orbit coupling across a broad class of materials, and spin-orbit coupling is increasingly recognized as an essential ingredient for predicting quantitatively and qualitatively correct electronic properties of materials.
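In the standard Pauli reduction of the Dirac equation, the spin-orbit term takes the form

\[
\hat{H}_{\mathrm{SO}} = \frac{\hbar}{4 m^2 c^2} \left( \nabla V \times \hat{\mathbf{p}} \right) \cdot \boldsymbol{\sigma},
\]

where $\boldsymbol{\sigma}$ is the vector of Pauli matrices.  Because $\nabla V$ is largest near the nuclei, this term grows with nuclear charge, which is why it was long treated as a heavy-element correction.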

It was my original project at Duke to implement spin-orbit coupling throughout the FHI-aims code using a computationally efficient method known as “non-self-consistent second-variational spin-orbit coupling”.  This project required a considerable amount of code rewriting throughout FHI-aims to handle the new data structures that this approximation entailed, but as a result of this work, virtually all predictions of electronic properties in FHI-aims can be calculated including the effects of spin-orbit coupling.  While our subsequent benchmarking effort indicated that the approximation used for spin-orbit coupling yields quantitative deviations for heavy atoms, we have found that it accurately captures the qualitative effect of spin-orbit coupling on physical properties of materials at a reduced computational overhead compared to other methods.
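Schematically, the second-variational idea is to evaluate the spin-orbit operator not in the full basis set but in the much smaller basis of the $n$ lowest scalar-relativistic eigenstates.  The sketch below follows the generic second-variational formulation found in the literature and omits FHI-aims-specific implementation details: one builds the $2n \times 2n$ matrix

\[
H^{(2v)}_{i\sigma,\, j\sigma'} = \epsilon_i\, \delta_{ij}\, \delta_{\sigma\sigma'}
  + \langle \psi_i\, \sigma |\, \hat{H}_{\mathrm{SO}}\, | \psi_j\, \sigma' \rangle,
\]

where $\epsilon_i$ and $\psi_i$ are the scalar-relativistic eigenvalues and eigenstates and $\sigma, \sigma'$ are spin indices, and diagonalizes it once, non-self-consistently.  Since $n$ is far smaller than the full basis dimension, this step is cheap compared to a fully self-consistent spinor treatment.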