Thank you, vincentm!
Sure, I'd love to use RESCU+ 1.1 and test on that, looking forward to your new release!
But in case you still want to test, my entire input file is:
from ase.optimize import BFGS
from ase.constraints import StrainFilter
from ase.build import bulk
from ase.calculators.rescuplus import Rescuplus
import numpy as np

GaN = bulk('GaN', 'wurtzite', a=3.186, c=5.186)
inp = {"system": {"cell": {"resolution": 0.10},
                  "kpoint": {"grid": [7, 7, 3]},
                  "xc": {"functional_names": ["XC_GGA_X_PBE", "XC_GGA_C_PBE"]}}}
inp["energy"] = {"forces_return": True, "stress_return": True}
inp["solver"] = {"mix": {"alpha": 0.5}, "restart": {"DMRPath": "nano_scf_out.h5"}}
inp["solver"]["mpidist"] = {"kptprc": 5}
cmd = "mpiexec -n 40 rescuplus_scf -i PREFIX.rsi > resculog.out && cp nano_scf_out.json PREFIX.rso"
GaN.calc = Rescuplus(command=cmd, input_data=inp)
sf = StrainFilter(GaN)
opt = BFGS(sf, trajectory="nano_rlx.traj", logfile="nano_rlx_log.out")
opt.run(0.005)
The pseudopotentials are Ga_PBE_TZP.mat and N_PBE_TZP.mat. On Beluga I used
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40
seff is just a tool that summarizes CPU and memory efficiency for completed jobs on Compute Canada; its output looks something like this:
$ seff 12345678
Job ID: 12345678
Cluster: cedar
User/Group: jsmith/jsmith
State: COMPLETED (exit code 0)
Cores: 1
CPU Utilized: 02:48:58
CPU Efficiency: 99.72% of 02:49:26 core-walltime
Job Wall-clock time: 02:49:26
Memory Utilized: 213.85 MB
Memory Efficiency: 0.17% of 125.00 GB
However, it clearly didn't show the correct percentage for these calculations, so I didn't know how much memory they used. I am sure the CPU efficiency for RESCU+ is around 100%! Yes, a function to output the highest memory used would be great if possible. ^^ I will try the tool you suggested for now!
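In the meantime, a quick (if partial) way to see peak memory from the Python side is the standard-library resource module. This is just a sketch: it only covers the ASE driver process, not the rescuplus_scf MPI ranks launched through mpiexec, and on Linux ru_maxrss is reported in KiB:

import resource

# Peak resident set size of this Python process (the ASE driver only,
# not the rescuplus_scf ranks started via mpiexec).
peak_kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"Driver peak memory: {peak_kib / 1024:.1f} MiB")

For the MPI ranks themselves, Slurm's sacct (e.g. sacct -j <jobid> --format=JobID,MaxRSS,Elapsed) should still report a per-step MaxRSS even when the seff percentage looks wrong. Thanks again!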