Amber is a molecular dynamics software package that simulates biomolecules using the Amber family of force fields. No really! https://en.wikipedia.org/wiki/AMBER

Installation requires compiling the binaries from source. There are many different ways to compile and many different ways it can go wrong. Thankfully, there is good documentation available, and when something fails the compiler typically gives a descriptive error that will put you on the right track. This is how I install Amber 18 on CentOS 7. Almost identical steps will work for Amber 16 and will likely work for Amber 20 and future versions with little modification.

Update! Compiling Amber 20 is different from previous versions. https://hull1.com//linux/2020/08/21/complie-amber20.html

The steps below detail three different ways to compile: for serial CPU, parallel CPU, and GPU.

We are running as root! (I thought you GNU!)

sudo su

Patch everything before starting. Not required, but good practice.

yum -y update

Install some tools. Some are necessities and some are conveniences.

yum -y install nano
yum -y install wget
yum -y install pciutils

Next, install the software dependencies. If you are only compiling for CPU and not GPU, you can probably live without a few of these, but it won't hurt to get them all.

yum -y install patch
yum -y install csh
yum -y install libXt-devel
yum -y group install "Development Tools"
yum -y install openssl-devel 
yum -y install epel-release
yum -y install dkms
yum -y install libvdpau.x86_64
yum install kernel-devel-$(uname -r) kernel-headers-$(uname -r)
yum install openmpi-devel

In theory, you should be able to install CUDA with the yum package manager. Like this:

yum install nvidia-driver-latest-dkms
yum install cuda
yum install cuda-drivers

I usually have better luck using the rhel6 run file. You can download and install CUDA from the run file with the following commands. Skip ahead if installing with yum works in your environment.

Download CUDA.

cd /usr/local
wget http://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda_10.2.89_440.33.01_rhel6.run

Install CUDA.

sh cuda_10.2.89_440.33.01_rhel6.run
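If you are scripting the install, the 10.x runfiles also accept non-interactive flags. This is a sketch assuming the --silent, --driver, and --toolkit options; run the file with --help to confirm what your version supports:

sh cuda_10.2.89_440.33.01_rhel6.run --silent --driver --toolkit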

Occasionally, the CUDA install will fail because the machine is using the stock CentOS display driver, "nouveau." Disable the nouveau driver if needed.
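You can first confirm whether nouveau is actually loaded; if this prints nothing, nouveau is not the problem:

lsmod | grep nouveau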

cp /etc/default/grub /etc/default/grub.orig
nano /etc/default/grub

Replace:

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos_coeciv-flora-09/swap rhgb quiet"

with

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos_coeciv-flora-09/swap rhgb quiet nouveau.modeset=0"
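Your existing GRUB_CMDLINE_LINUX line will differ; the key is appending nouveau.modeset=0 inside the quotes. The edit does not take effect until the GRUB configuration is regenerated. A minimal sketch, assuming a BIOS install (on UEFI systems the output path is /boot/efi/EFI/centos/grub.cfg instead):

grub2-mkconfig -o /boot/grub2/grub.cfg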

Reboot after regenerating the GRUB configuration to disable nouveau, then try installing CUDA again.

For other problems with CUDA, verify the hardware, OS, kernel version, and gcc compiler version.

lspci | grep -i nvidia
uname -m && cat /etc/*release 
uname -r
gcc --version

Check NVIDIA documentation to be sure your hardware and software are compatible: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html

Verify the install. This command shows the driver version, the installed GPUs, and their status:

nvidia-smi

Check where the CUDA compiler is installed.

which nvcc
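If which finds nothing, the runfile install is probably just not on your PATH yet. A minimal sketch, assuming the runfile created the default /usr/local/cuda symlink; nvcc --version then prints the toolkit release:

export PATH=/usr/local/cuda/bin:$PATH
nvcc --version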

Copy the Amber and AmberTools archives to /usr/local and extract them.

cd /usr/local/
tar xvfj /usr/local/Amber18.tar.bz2 
tar xvfj /usr/local/AmberTools19.tar.bz2

Even if you are going to compile for use on GPU, compile first for serial CPU.

export AMBERHOME=/usr/local/amber18
cd $AMBERHOME
./update_amber --upgrade
./configure -noX11 gnu
source /usr/local/amber18/amber.sh

Start compile.

cd $AMBERHOME
make install

It will take some time to compile. After the compile completes, test your install. The tests will take a while too.

test -f /usr/local/amber18/amber.sh  && source /usr/local/amber18/amber.sh
cd $AMBERHOME
make test

Set permissions so users have access to the Amber files. If you tested with the root account, the tests may succeed while regular users still cannot use the install.

chmod -R 777 $AMBERHOME
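If 777 is broader than you want, a group-based setup is a less permissive alternative. A sketch, assuming a hypothetical group named amber and a placeholder account someuser; substitute your own users:

groupadd amber
chgrp -R amber $AMBERHOME
chmod -R g+rwX $AMBERHOME
usermod -aG amber someuser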

Alternatively, you can compile for use on parallel CPU. Add openmpi to $PATH.

export PATH=/usr/lib64/openmpi/bin:$PATH

Build for parallel CPU.

cd /usr/local/amber18
export AMBERHOME=/usr/local/amber18
$AMBERHOME/configure -noX11 -mpi gnu

Start compile.

cd $AMBERHOME
make install	

Set permissions again. The compile for parallel CPU added some new files.

chmod -R 777 /usr/lib64/openmpi/bin
chmod -R 777 $AMBERHOME

OpenMPI does not like to run as root, so it is better to test as a standard user. Remember to set the variables again in the standard user's shell after you exit the root account.

export PATH=/usr/lib64/openmpi/bin:$PATH
export AMBERHOME=/usr/local/amber18
export DO_PARALLEL='mpirun -np 2'
test -f /usr/local/amber18/amber.sh  && source /usr/local/amber18/amber.sh

The DO_PARALLEL variable sets the command prefix the test suite uses to run parallel jobs; -np 2 runs the tests on two MPI processes.
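To avoid re-exporting these variables in every new shell, you can put them in a profile script. A minimal sketch, assuming a system-wide snippet in /etc/profile.d (create it as root; the filename is my own choice):

cat > /etc/profile.d/amber.sh <<'EOF'
# Amber environment for all login shells
export PATH=/usr/lib64/openmpi/bin:$PATH
export AMBERHOME=/usr/local/amber18
test -f $AMBERHOME/amber.sh && source $AMBERHOME/amber.sh
EOF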

Test for parallel CPU.

cd $AMBERHOME
make test.parallel 

Build for GPU. Do this as root again. A fresh root shell will not have AMBERHOME set, so export it again.

export AMBERHOME=/usr/local/amber18
export CUDA_HOME=/usr/local/cuda
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
ldconfig
$AMBERHOME/configure -cuda -noX11 gnu

Start compile.

cd $AMBERHOME
make install
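Before testing, confirm the GPU binary actually landed in place; pmemd.cuda is the executable used for the GPU job later in this post:

ls $AMBERHOME/bin/pmemd.cuda*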

Test after compile. The test can take as long as the compile.

cd $AMBERHOME/AmberTools/test
make test.cuda

You may need to set permissions again for regular users.

chmod -R 777 $AMBERHOME

Test a GPU job.

mkdir /usr/local/amberTest

Download the test files and copy them to /usr/local/amberTest.

inpcrd
mdin.GPU
prmtop

Run a test GPU job.

cd /usr/local/amberTest
$AMBERHOME/bin/pmemd.cuda -O -i mdin.GPU -o mdout -p prmtop -c inpcrd &

Check job status.

more mdinfo
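You can also follow the main output file as the job runs; mdout is the name passed to -o above:

tail -f mdout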

Check GPU usage.

nvidia-smi -q -a | grep GPU

If using more than one GPU, you can set the CUDA_VISIBLE_DEVICES variable to run on the other GPU. 0 = the first GPU, 1 = the second, etc.

echo $CUDA_VISIBLE_DEVICES 
export CUDA_VISIBLE_DEVICES="0"
export CUDA_VISIBLE_DEVICES="1"
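This makes it easy to run one job per GPU. A sketch, reusing the same test files; the -o, -x, -r, and -inf flags give each job its own output, trajectory, restart, and mdinfo files so the two runs do not clobber each other:

cd /usr/local/amberTest
CUDA_VISIBLE_DEVICES=0 $AMBERHOME/bin/pmemd.cuda -O -i mdin.GPU -o mdout.0 -x mdcrd.0 -r restrt.0 -inf mdinfo.0 -p prmtop -c inpcrd &
CUDA_VISIBLE_DEVICES=1 $AMBERHOME/bin/pmemd.cuda -O -i mdin.GPU -o mdout.1 -x mdcrd.1 -r restrt.1 -inf mdinfo.1 -p prmtop -c inpcrd &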

You can use the nvidia-smi command to verify the load on specific GPUs.

Further Reading:
https://ambermd.org/AmberMD.php
https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
https://linuxconfig.org/how-to-install-the-nvidia-drivers-on-centos-7-linux
https://www.tecmint.com/install-nvidia-drivers-in-linux/