A. Description

CM1 is a three-dimensional, time-dependent, non-hydrostatic numerical model developed primarily by George Bryan at The Pennsylvania State University (PSU) (circa 2000-2002) and at the National Center for Atmospheric Research (NCAR) (2003-present). CM1 is designed primarily for idealized research, particularly of deep precipitating convection (i.e., thunderstorms). CM1 is freely available to anyone who wants to use it, though restrictions on distribution apply. See the CM1 home page for additional information.

B. How to download CM1

Please follow the instructions on the CM1 website, where you have to agree to the CM1 license.

Once you have downloaded the source tarball, do:

tar xf cm1r18.tar.gz
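If you want to verify the archive layout first, a quick sketch (assuming release 18 unpacks into a top-level cm1r18/ directory, which matches the src/ and run/ paths used below):

```shell
# Peek at the top-level directory inside the tarball before extracting
tar tzf cm1r18.tar.gz | cut -d/ -f1 | sort -u

# Extract, then confirm the source and run directories are present
tar xf cm1r18.tar.gz
ls -d cm1r18/src cm1r18/run
```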

C. How to build CM1

The CM1 website describes the build process in detail and should be taken as authoritative; this page for the most part summarizes what is described there.

On the Cray XE6 platform, CM1 can be built under the Cray environment as described in the Makefile. You can enable either NetCDF or HDF5 output by uncommenting the OUTPUTOPT option in the respective section of src/Makefile. Uncomment the compiler options in the Blue Waters section (search for bluewaters) and remove -lfast_mv from LINKOPTS, since the libfast module is no longer supported. You will also have to fix param.F in subroutine param, where the write statement is used incorrectly.

cd cm1r18/src
patch -p2 <<"EOF"
diff -urw cm1r18/src/Makefile cm1r18-roland//src/Makefile
--- cm1r18/src/Makefile    2015-10-07 10:43:02.000000000 -0500
+++ cm1r18-roland//src/Makefile    2016-11-03 13:26:42.000000000 -0500
@@ -10,7 +10,7 @@
 #OUTPUTINC = -I$(NETCDF)/include
 #LINKOPTS  = -lnetcdf -lnetcdff
 #                         HDF SECTION
@@ -92,11 +92,11 @@
 #      (eg, NCSA's bluewaters)
 #  Remember to enter this on command line first:  module load libfast
 #      (or, comment-out the "-lfast_mv" line below)
-#FC   = ftn
-#OPTS = -I../include -O3 -Ovector3 -Oscalar3 -Othread3 -h noomp
-#LINKOPTS = -lfast_mv
-#CPP  = cpp -C -P -traditional
-#DM   = -DMPI
+FC   = ftn
+OPTS = -I../include -O3 -Ovector3 -Oscalar3 -Othread3 -h noomp
+CPP  = cpp -C -P -traditional
+DM   = -DMPI
 #  multiple processors, shared memory (OpenMP), Cray fortran compiler
 #      (eg, NCSA's bluewaters)
diff -urw cm1r18/src/param.F cm1r18-roland//src/param.F
--- cm1r18/src/param.F    2015-10-07 10:42:30.000000000 -0500
+++ cm1r18-roland//src/param.F    2016-10-18 12:30:19.000000000 -0500
@@ -4299,13 +4299,13 @@
       if(dowr) write(outfile,132) 'ked               =',ked
       if( dowr )then
-        if( td_diss   .gt.0 ) write(outfile,*),'  td_diss   = ',td_diss
-        if( td_mptend .gt.0 ) write(outfile,*),'  td_mptend = ',td_mptend
-        if( qd_vtc    .gt.0 ) write(outfile,*),'  qd_vtc    = ',qd_vtc
-        if( qd_vtr    .gt.0 ) write(outfile,*),'  qd_vtr    = ',qd_vtr
-        if( qd_vts    .gt.0 ) write(outfile,*),'  qd_vts    = ',qd_vts
-        if( qd_vtg    .gt.0 ) write(outfile,*),'  qd_vtg    = ',qd_vtg
-        if( qd_vti    .gt.0 ) write(outfile,*),'  qd_vti    = ',qd_vti
+        if( td_diss   .gt.0 ) write(outfile,*) '  td_diss   = ',td_diss
+        if( td_mptend .gt.0 ) write(outfile,*) '  td_mptend = ',td_mptend
+        if( qd_vtc    .gt.0 ) write(outfile,*) '  qd_vtc    = ',qd_vtc
+        if( qd_vtr    .gt.0 ) write(outfile,*) '  qd_vtr    = ',qd_vtr
+        if( qd_vts    .gt.0 ) write(outfile,*) '  qd_vts    = ',qd_vts
+        if( qd_vtg    .gt.0 ) write(outfile,*) '  qd_vtg    = ',qd_vtg
+        if( qd_vti    .gt.0 ) write(outfile,*) '  qd_vti    = ',qd_vti
       if(dowr) write(outfile,*)
EOF

module load PrgEnv-cray
module load cray-netcdf
make -j4

Successful compilation will create a cm1.exe binary file in the run directory, one level up from src.
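Once make finishes, a quick sanity check from src/ (simply verifying that the binary mentioned above exists and is executable):

```shell
# The build drops the executable in ../run relative to src/
ls -l ../run/cm1.exe
test -x ../run/cm1.exe && echo "build looks good"
```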

The build was last verified on 2016-11-03 with the PrgEnv-cray and cray-netcdf modules loaded.

D. Sample test

CM1 provides sample namelist.input files on its website. To run the supercell case, do:

cd .. # now in cm1r18
cp -r run ~/scratch/supercell
cd ~/scratch/supercell
rm namelist.input
wget -O namelist.input \

To reduce the runtime to about 5 minutes, reduce the timax setting in namelist.input to 480, which stops the model after integrating 8 minutes of simulated time:

sed -i 's/timax[^,]*,/timax = 480.,/' namelist.input
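The sed substitution above can be checked on a sample line before touching the real file; a small sketch (the 86400.0 value is just an illustrative default, not taken from the actual namelist):

```shell
# The pattern matches everything between "timax" and the next comma,
# so it works regardless of spacing or the original value.
echo ' timax      =  86400.0,' | sed 's/timax[^,]*,/timax = 480.,/'
# prints " timax = 480.,"
```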

Then create a run.pbs file with the following content (the cd line ensures the job runs from the directory it was submitted from, and -n 32 matches the 32 cores requested):

#!/bin/bash
#PBS -l nodes=1:ppn=32:xe
#PBS -l walltime=00:05:00
#PBS -q debug
#PBS -N supercell
cd $PBS_O_WORKDIR
aprun -n 32 ./cm1.exe >cm1.out 2>cm1.err

Submit the job

qsub run.pbs

E. Known issues

For some high-resolution inputs, the model can hang without giving an error message until its time in the queue runs out. To our knowledge, this occurs only when the Morrison microphysics routine has been selected.

Here’s what the output file looked like when that was happening:

   nwrite =  1
     2d vars
     s vars

It turns out that the Morrison scheme in CM1 performs a numerical-convergence check in its saturation adjustment routine; if convergence is not reached, it prints the following error message and stops the model:

  print *
  print *,' Convergence cannot be reached in satadj2 subroutine.'
  print *
  print *,' This may be a problem with the algorithm in satadj2.'
  print *,' However, the model may have became unstable somewhere'
  print *,' else and the symptoms first appeared here.'
  print *
  print *,' Try decreasing the timestep (dtl and/or nsound).'
  print *
  print *,'  ... stopping cm1 ... '
  print *

For some reason, at some resolutions this bit of code is never reached, and the model simply hangs, "spinning" down its time in the queue. To work around the problem (without changing the code), increase the value of the variable 'nsound' in the namelist.input file. If 'nsound' is already at its maximum (a value up to 12 is allowed, we believe), decrease the main timestep ('dtl') in namelist.input.
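These namelist changes can be scripted the same way as the timax change above; a sketch (assuming the usual `name = value,` namelist layout; the dtl value of 1.0 is purely illustrative):

```shell
# Raise nsound to the (believed) maximum of 12 first ...
sed -i 's/nsound[^,]*,/nsound = 12,/' namelist.input
# ... and if the hang persists, also reduce the main timestep.
# 1.0 is an illustrative value; pick one suited to your grid spacing.
sed -i 's/ dtl[^,]*,/ dtl = 1.0,/' namelist.input
```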