
More Efficient Methods: Conjugate Gradients and Preconditioning

Following the original idea of Car and Parrinello of treating the plane wave coefficients as dynamical variables, several methods have been suggested to find the ground state. For example, numerical integration of the equations of motion by an algorithm such as the Verlet algorithm is several times slower than evaluation after analytic integration. Williams and Soler[47] changed to first order equations of motion for the electronic degrees of freedom which, although no more efficient than the analytic method per timestep, require only half the memory and input/output operations, thereby increasing the speed.

Each of these methods involves an indirect search for the self-consistent Kohn-Sham Hamiltonian. They also suffer further complications: smaller timesteps are required for larger systems, since too large a timestep leads to unstable evolution of the wavefunctions. Such instabilities are not encountered if the Kohn-Sham energy functional is minimised directly. The energy functional usually has only one well defined minimum for a fixed set of ions, so a direct search for this minimum does not lead to instabilities.

In the absence of any information about a function, the optimum direction in which to move from a point in order to find the minimum is the steepest descent direction. The function is evaluated at steps along that direction until the minimum along that line is found. The steepest descent direction at that minimum is then found and the procedure repeated until convergence to the desired accuracy is achieved. The rate of convergence of the steepest descents method is limited by the fact that, after minimisation is performed along a given direction, the information found in that calculation is not used in subsequent iterations. A better method combines the information from previous iterations so that each subsequent search direction is independent of all the previous ones; that is, the set of search directions forms a linearly independent (conjugate) set. This method is known as the conjugate gradients method[48], and it is a variation of this method (described below) which is used in all subsequent total energy calculations. By using the information about previous search directions, conjugate gradients is guaranteed to find the minimum of an n-dimensional quadratic function in at most n iterations (assuming a perfect line search at each iteration).
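To make the scheme concrete, here is a minimal sketch of the conjugate gradients update in Python, written for the quadratic form $f({\bf x}) = \frac{1}{2}{\bf x}^{T}{\bf G}{\bf x} - {\bf b}\cdot{\bf x}$ introduced below. It is an illustration only, not the implementation used in the total energy calculations described here.

\begin{verbatim}
import numpy as np

def conjugate_gradients(G, b, x0, tol=1e-10):
    # Minimise f(x) = (1/2) x.G.x - b.x, whose gradient is G.x - b,
    # so the minimum satisfies G.x = b.  Each new search direction is
    # conjugate (G-orthogonal) to the previous ones, so exact line
    # searches minimise an n-dimensional quadratic in at most n steps.
    x = x0.copy()
    r = b - G @ x                 # negative gradient: steepest descent
    d = r.copy()                  # first search direction
    for _ in range(len(b)):
        alpha = (r @ r) / (d @ G @ d)     # exact line minimisation
        x = x + alpha * d
        r_new = r - alpha * (G @ d)
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves coefficient
        d = r_new + beta * d              # next conjugate direction
        r = r_new
    return x
\end{verbatim}

A steepest descents version would simply take d = r at every step, discarding the information carried in beta; this is exactly the inefficiency described above.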

Unfortunately, n is usually of the order of $10^{4}$ to $10^{5}$. Each function evaluation is also computationally very expensive, so each iteration could take a long time. What is required is a method that gets close to the minimum of the function very quickly. This is achieved by preconditioning the function, which is best described by assuming that the problem can be written as an n-dimensional quadratic, for example

$$ f({\bf x}) \; = \; \frac{1}{2}\,{\bf x}^{T}\,{\bf G}\,{\bf x} \; - \; {\bf b}\cdot{\bf x} $$

where ${\bf G}$ is known as the Hessian. In the application to the energy minimisation problem described here, the $x_{i}$ would be the coefficients of the wavefunction, and the Hessian is equivalent to the Kohn-Sham Hamiltonian.

If the function f is well conditioned (${\bf G} \approx {\bf I}$) then the conjugate gradients algorithm will converge very quickly. (In fact, if ${\bf G} = {\bf I}$ it takes only one conjugate gradient iteration to minimise f.) In solving for the Kohn-Sham Hamiltonian the problem is ill-conditioned because ${\bf G}$ has a wide range of eigenvalues. This is mainly due to the kinetic energy term, which causes the diagonal terms in ${\bf G}$ to become large as the energy of the plane wave increases (given by Equation 3.26). A change of variables ${\bf x} = {\bf C}{\bf y}$ is required such that ${\bf y}$ minimises $f({\bf C}{\bf y})$ and the transformed Hessian is well conditioned.
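The effect of the condition number can be seen with the sketch above. The kinetic-energy-like diagonal used here is an assumption made purely for illustration, not data from an actual calculation.

\begin{verbatim}
n = 100
b = np.random.rand(n)

# Well conditioned case, G = I: one iteration reaches the minimum.
x = conjugate_gradients(np.eye(n), b, np.zeros(n))

# Ill conditioned case: diagonal terms that grow like the kinetic
# energy of successive plane waves spread the eigenvalues of G over
# several orders of magnitude and slow the convergence.
G = np.diag(np.linspace(0.5, 1.0e4, n))
x = conjugate_gradients(G, b, np.zeros(n))
\end{verbatim}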

Substituting this change of variable gives the Hessian

$$ \tilde{\bf G} \; = \; {\bf C}^{T}\,{\bf G}\,{\bf C} $$

So a ${\bf C}$ is required such that ${\bf C}^{T}{\bf G}{\bf C} \approx {\bf I}$. At first sight, it looks as if a full matrix inversion is required in order to obtain ${\bf C}$. However, in solving the Kohn-Sham Hamiltonian, the diagonal terms $G_{ii}$ become increasingly large, as explained above. They eventually dominate the rows in which they lie, so that the $G_{ii}$ approximate the large eigenvalues of ${\bf G}$.

Now consider preconditioning with the matrix ${\bf C} = {\bf K}^{-\frac{1}{2}}$, where

$$ K_{ij} \; = \; \delta_{ij} \, \max \left( G_{ii}, \, E_{0} \right) $$

and $E_{0}$ is a suitably chosen cutoff energy. If $\lambda$ is an eigenvalue of the Kohn-Sham Hamiltonian ${\bf G}$, then $\lambda / \max(\lambda, E_{0})$ is approximately an eigenvalue of ${\bf K}^{-\frac{1}{2}} {\bf G} \, {\bf K}^{-\frac{1}{2}}$. The matrix ${\bf K}^{-\frac{1}{2}} {\bf G} \, {\bf K}^{-\frac{1}{2}}$ has a much narrower spectrum of eigenvalues than ${\bf G}$, and therefore the energy minimisation problem converges much more quickly using conjugate gradients than in the unpreconditioned case.
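Continuing the illustrative example above, the following sketch shows this compression of the spectrum; the cutoff $E_{0}$ and the form of ${\bf G}$ are assumptions chosen for demonstration only.

\begin{verbatim}
# Diagonal preconditioner K_ij = delta_ij * max(G_ii, E0).
E0 = 10.0                                   # illustrative cutoff energy
G = np.diag(np.linspace(0.5, 1.0e4, 100))   # kinetic-energy-dominated Hessian
K_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(np.diag(G), E0)))
G_pre = K_inv_sqrt @ G @ K_inv_sqrt

# Eigenvalues above E0 are mapped to approximately 1, so:
print(np.linalg.cond(G))      # about 2e4
print(np.linalg.cond(G_pre))  # about E0 / 0.5 = 20
\end{verbatim}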

In practice the energy convergence achieved by (preconditioned) conjugate gradients is excellent, and in general it takes only a few tens of iterations to converge total energies adequately, even though the electronic wavefunctions are expanded in up to $10^{4}$ to $10^{5}$ plane waves.


