MAG code summary
Deyong Xu / SGT
last update: 10/8/2015
COMIN usage
1) Set COMIN :
a) Default COMIN: /ecfmag/ecfnets/scripts/envir.h :
export envir=test
export NWROOT=/nco/sib/magdev/nwtest # Root dir for applications (source code location)
export COMROOT=/nco/sib/magdev/com # com root dir for input / output data
# on CURRENT SYSTEM (in/out data)
export COMIN=/com/nawips/prod # com dir for CURRENT MODEL's input data
# (model input data)
export DATAROOT=/nco/sib/magdev/tmpnwprd1 # working dir (location where code runs)
export SENDECF=YES
b) Re-defined COMIN in ecf scripts (.ecf files on the ecFlow server), e.g.:
File 31 is : ./ecf/scripts/mag/mag_processor/uair/mag_uair_processor.ecf
export COMIN=/com/mag/prod
File 32 is : ./ecf/scripts/mag/mag_processor/hwrf/mag_hwrf_nested_processor.ecf
export COMIN=/com2/nawips/prod
File 33 is : ./ecf/scripts/mag/mag_processor/hwrf/mag_hwrf_full_processor.ecf
export COMIN=/com2/nawips/prod
2) Use COMIN:
a) MAG.xml uses COMIN
b) GEMPAK scripts use COMIN
For the majority of models (all except uair, skewt, hwrf*, ghm*, hrrr*, and polar):
export COMIN=/com/nawips/prod
For these exceptional models, COMIN is set individually.
Eg 1: Changes in ecf files
export COMIN=/com2/nawips/prod
export HOMEmag=$NWROOT/mag.$mag_ver
$HOMEmag/jobs/JMAG
Eg 2: Changes in fix/MAG.xml
<tns:input-pat>COMIN/hrrr.YYYYMMDD/hrrr_YYYYMMDDCCfFF</tns:input-pat>
<tns:scripts>
<tns:all>hrrr.sh</tns:all>
</tns:scripts>
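As a sanity check of the pattern above, the placeholders (YYYYMMDD = date, CC = cycle, FF = forecast hour) expand to the file names shown in the input-dir listings later in these notes. A minimal shell sketch (values are illustrative, not from the MAG source; real hrrr files also carry a minutes suffix, e.g. f00100):
COMIN=/com2/nawips/prod
date=20150219; cycle=16; fhr=001
# YYYYMMDD -> date, CC -> cycle, FF -> forecast hour
echo "${COMIN}/hrrr.${date}/hrrr_${date}${cycle}f${fhr}"
# -> /com2/nawips/prod/hrrr.20150219/hrrr_2015021916f001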
MAG quick log check:
MAG product wcoss file locations:
source code : /nwprod/mag.v3.8.1
input : /com/nawips/prod/nam.20151221
GIFs : /com/mag/prod/gifs/nam/06
processing log : /com/output/prod/20151221
rsync / transfer log : /com/output/transfer/20151221
status : /com/mag/prod/status
transfer : /com/mag/prod/status/transfer
Log in to wcoss as mag.dev
logs :
$ lr mag_gfs* # processing log
$ lr ecmag_sync_gfs* # rsync log
$ lr mag_ckquota* # check disk space usage
$ lr mag_cleanup.* # delete old files
$ lr ecmag_maintain* # update today link.
# Check for success and failures
$ grep -e "Successfully completed" mag_gfs*
$ grep -e "TERM_RUNLIMIT" mag_gfs* # timeout
$ grep -e "Successfully completed" ecmag_sync_gfs*
$ grep -e "TERM_RUNLIMIT" ecmag_sync_gfs* # timeout
status:
$ lr gfs_* # being processed.
$ lr -a .gfs* # pre-processing.
gifs :
$ lr gfs*
jlog:
$ lr *
Details of exmag_processor.pl
1) ENV variables specified in .bashrc for testing purpose
# User specific aliases and functions
# specify env status
export envir=test
# source code location
export NWROOT=/nco/sib/magdev/nwtest
export MAG_VERSION="v3.7.0"
export BASE_DIR="/nco/sib/magdev"
export HOMEmag=${BASE_DIR}/nw${envir}/mag.${MAG_VERSION}
export EXECmag=$HOMEmag/exec
export USHmag=$HOMEmag/ush
export SORCmag=$HOMEmag/sorc
export FIXmag=$HOMEmag/fix
export PARMmag=$HOMEmag/parm
export MAG_DIR=$HOMEmag
# input dir
export COMIN=/com/nawips/prod
# output dir
export COMROOT=/nco/sib/magdev/com
export COMOUT=$COMROOT/mag/${envir}/gifs
# working directory (temporary)
export DATAROOT=/nco/sib/magdev/tmpnwprd1
export DATA=$DATAROOT
# web sync
export ncorzdm_username=mag
2) Example of input dir
[mag.dev@g10a1 gfs.20150422]$ pwd
/com/nawips/prod/gfs.20150422
# Four cycles for gfs 0.5-degree grid
[mag.dev@g10a1 gfs.20150422]$ lr gfs_0p50_20150422*f000
-rw-rw-r-- 1 nwprod prod 132081152 Apr 22 04:08 gfs_0p50_2015042200f000
-rw-rw-r-- 1 nwprod prod 131431424 Apr 22 09:23 gfs_0p50_2015042206f000
-rw-rw-r-- 1 nwprod prod 131399168 Apr 22 15:23 gfs_0p50_2015042212f000
-rw-rw-r-- 1 nwprod prod 131236352 Apr 22 21:24 gfs_0p50_2015042218f000
# Four cycles for gfs 1-degree grid
[mag.dev@g10a1 gfs.20150422]$ lr gfs_20150422*f000
-rw-rw-r-- 1 nwprod prod 33142784 Apr 22 04:08 gfs_2015042200f000
-rw-rw-r-- 1 nwprod prod 32988160 Apr 22 09:23 gfs_2015042206f000
-rw-rw-r-- 1 nwprod prod 32988160 Apr 22 15:23 gfs_2015042212f000
-rw-rw-r-- 1 nwprod prod 32930816 Apr 22 21:23 gfs_2015042218f000
Configuration file : mag_processor_config (Perl: defines the Config package)
require "${MAG_dir}/scripts/mag_processor_config" # param setting : version, dir, debug, etc..
require "${MAG_dir}/scripts/magv3-xml-library.pl" # for parsing XML only
XML-related :
my $parser = XML::LibXML->new();
# parse_file() parses an XML file
$models_doc = $parser->parse_file("${MAG_tables}/MAG2.xml");
# getDocumentElement returns the root element of the Document
$models_root = $models_doc->getDocumentElement ;
$models_doc holds the parsed XML document tree; all of the configuration lives in there.
magv3-xml-library.pl contains helper functions that read $models_doc to retrieve the configuration parameters.
Command :
$ grep models_doc abc.pl
This should return all the functions defined in magv3-xml-library.pl.
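MAG itself parses with XML::LibXML in Perl, but the same lookup can be spot-checked from the shell with xmllint. A sketch, assuming the tns-prefixed layout excerpted in these notes (local-name() sidesteps the namespace binding):
xmllint --xpath '//*[local-name()="range"]/text()' ${MAG_tables}/MAG2.xml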
our @model_info=[]; # forecast hour range: first hr, last hr, timestep
Eg:
<tns:range>000-036, 03</tns:range>
<tns:range>042-084, 06</tns:range>
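Each range spec reads "first_hr-last_hr, timestep". A shell sketch of the expansion (illustration only; the real parsing is Perl in magv3-xml-library.pl):
range="000-036, 03"
first=${range%%-*}; rest=${range#*-}
last=${rest%%,*}; step=${rest##*, }
for (( h=10#$first; h<=10#$last; h+=10#$step )); do
  printf "%03d " "$h"      # -> 000 003 006 ... 036
done; echo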
Status directory
/nco/sib/magdev/com/mag/test/status
[mag.dev@g10a2 status]$ lr rap_20150427*
-rw-r--r-- 1 mag.dev g02 11 Apr 27 01:44 rap_2015042700.go # trick for ecflow to kick off gempak (go file)
-rw-r--r-- 1 mag.dev g02 3 Apr 27 01:44 rap_2015042700 # contain last forecast hr processed for that cycle
If file rap_2015042700 does NOT exist, then last forecast hr is set to -1, which means no forecast file has been processed yet.
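In shell terms, the convention amounts to this (illustrative sketch; the real code is Perl inside exmag_processor.pl):
status_file=/nco/sib/magdev/com/mag/test/status/rap_2015042700
if [ -f "$status_file" ]; then
  last_fhr=$(cat "$status_file")   # last forecast hr processed for the cycle
else
  last_fhr=-1                      # nothing processed yet for this cycle
fi
echo "resume after forecast hour $last_fhr"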
Log message:
our $logname="";
our $logfile="";
our $log=new FileHandle;
Eg1: For model SREF, all the fcsts are in one file.
[mag.dev@g10a1 sref.20150501]$ lr *_mean*
-rw-rw-r-- 1 nwprod prod 60226560 May 1 06:57 sref212_2015050103_mean
-rw-rw-r-- 1 nwprod prod 1005511680 May 1 07:35 sref132_2015050103_mean
Eg2: hrrr has only one file per fcst HOUR.
[mag.dev@g10a1 hrrr.20150219]$ lr hrrr_2015021916f001*
-rw-rw-r-- 1 nwprod prod 273639936 Feb 19 16:52 hrrr_2015021916f00100
Eg3: hrrrsubh has one file per sub-hourly time (every 15 minutes within the forecast hour).
[mag.dev@g10a1 hrrr.20150219]$ lr hrrrsubh_2015021916f001*
-rw-r--r-- 1 nwprod prod 72165376 Feb 19 16:51 hrrrsubh_2015021916f00100
-rw-r--r-- 1 nwprod prod 72642048 Feb 19 16:52 hrrrsubh_2015021916f00115
-rw-r--r-- 1 nwprod prod 72403968 Feb 19 16:52 hrrrsubh_2015021916f00130
-rw-r--r-- 1 nwprod prod 72403968 Feb 19 16:53 hrrrsubh_2015021916f00145
First_fhr : XML config (static)
Last_fhr : XML config (static)
highest_fhr_processed ( -1 ) : record of the highest fhr processed so far. (dynamic)
highest_fhr ( -1 ) : highest fhr found in the input dir for the current cycle. (dynamic)
Should be renamed to "highest_fhr_rcvd".
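A sketch of that "highest fhr received" scan in shell (the actual logic is Perl in exmag_processor.pl; the paths here are assumed for illustration):
COMIN=/com/nawips/prod
highest_fhr=-1
for f in ${COMIN}/gfs.20150422/gfs_2015042200f*; do
  [ -e "$f" ] || continue
  fhr=${f##*f}             # text after the final 'f', e.g. 000
  fhr=$((10#$fhr))         # force base 10 so 008/009 don't parse as octal
  [ "$fhr" -gt "$highest_fhr" ] && highest_fhr=$fhr
done
echo "highest_fhr_rcvd=$highest_fhr"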
Sub make_command_list:
Creates a list of commands.
Sub Process_Command_List:
Uses @main::command_list and $main::max_threads to slice the jobs.
Generates POE (Parallel Operating Environment) jobs, which are either submitted to the job scheduler or run from the command line in development/test mode.
Eg 1: job submitted to job scheduler
File “run_poe_script”
export MP_PGMMODEL=mpmd
export MP_PULSE=0
export MP_CMDFILE=poe_script
export MP_LABELIO=NO
set -x
cd /nco/sib/magdev/tmpnwprd1/MAG_processor_hrrr_24370/job_1
unset MP_DEBUG_NOTIMEOUT
mpirun.lsf -cmdfile poe_script
Eg 2: run job from command line if in test mode.
File “poe_script”
/nco/sib/magdev/nwtest/mag.v3.7.0/ush/run_and_log_script.sh /nco/sib/magdev/nwtest/mag.v3.7.0/ush/hrrr.sh hrrr 16 20150219 001 east-us precip_p01 /nco/sib/magdev/tmpnwprd1/MAG_processor_hrrr_24370
/nco/sib/magdev/nwtest/mag.v3.7.0/ush/run_and_log_script.sh /nco/sib/magdev/nwtest/mag.v3.7.0/ush/hrrr.sh hrrr 16 20150219 002 east-us precip_p01 /nco/sib/magdev/tmpnwprd1/MAG_processor_hrrr_24370
run_and_log_script.sh ==> hrrr.sh ==> ${USHmag}/setup.sh ==> . $USHmag/set_up_gempak
# separate log file : dedicated to image generation using GEMPAK.
echo '==============================================================' >> $8/${model}_${cycle}_${date}_${hr}_${area}_${param}
echo output for command: $script $model $cycle $date $hr $area $param >> $8/${model}_${cycle}_${date}_${hr}_${area}_${param}
echo '==============================================================' >> $8/${model}_${cycle}_${date}_${hr}_${area}_${param}
# command
$script $model $cycle $date $hr $area $param >> $8/${model}_${cycle}_${date}_${hr}_${area}_${param} 2>&1
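From the poe_script lines above, run_and_log_script.sh receives the model script, its six arguments, and the work directory as $1..$8. A hedged reconstruction of the whole wrapper (variable names assumed from that call pattern, not copied from the MAG source):
#!/bin/sh
# $1=script $2=model $3=cycle $4=date $5=hr $6=area $7=param $8=workdir
script=$1; model=$2; cycle=$3; date=$4; hr=$5; area=$6; param=$7
log=$8/${model}_${cycle}_${date}_${hr}_${area}_${param}
{
  echo '=============================================================='
  echo "output for command: $script $model $cycle $date $hr $area $param"
  echo '=============================================================='
} >> "$log"
$script "$model" "$cycle" "$date" "$hr" "$area" "$param" >> "$log" 2>&1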
GEMPAK log :
1. It goes to a separate log file in the mag_work directory:
/nco/sib/magdev/tmpnwprd1/MAG_processor_hrrr_24370/hrrr_17_20150219_001_east-us_precip_p01
/nco/sib/magdev/tmpnwprd1/MAG_processor_hrrr_24370/hrrr_17_20150219_001_west-us_precip_p01
/nco/sib/magdev/tmpnwprd1/MAG_processor_hrrr_24370/hrrr_17_20150219_001_cent-us_precip_p01
/nco/sib/magdev/tmpnwprd1/MAG_processor_hrrr_24370/hrrr_17_20150219_001_east-us_precip_type
/nco/sib/magdev/tmpnwprd1/MAG_processor_hrrr_24370/hrrr_17_20150219_001_west-us_precip_type
/nco/sib/magdev/tmpnwprd1/MAG_processor_hrrr_24370/hrrr_17_20150219_001_cent-us_precip_type
2. It also goes into the regular log file, which has everything and is big.
3. The model script hrrr.sh calls ${USHmag}/setup.sh to set up GEMPAK, load it, and create a tmp working directory.
hrrr.sh ==> ${USHmag}/setup.sh ==> . $USHmag/set_up_gempak
4. gdplot2_gif inside hrrr.sh is the command that generates the GIF file.
if [ $make_gif == "yes" ]; then
gdplot2_gif << EOF
GDFILE = $MODEL_HRRR/${file_type}_${date}${cycle}f${hr}${min}
GDATTIM = F${hr}${min}
GLEVEL = $level
GVCORD = $coord
ush/functions.sh : pre-defined functions used by hrrr.sh
cleanup() {}
rm_tmpdir_trap() {}
make_cycle_link() {}
noaa_logo() {}
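The real bodies live in ush/functions.sh and are not reproduced in these notes; purely hypothetical sketches of what helpers like these typically look like:
cleanup() { rm -f "$DATA"/*.nts; }                  # assumed: drop GEMPAK .nts litter
rm_tmpdir_trap() { trap 'rm -rf "$tmpdir"' EXIT; }  # assumed: remove tmp dir on exit
make_cycle_link() {                                 # assumed: refresh a cycle symlink
  ln -sfn "$GIF_ROOT/$MODEL/$cycle" "$GIF_ROOT/$MODEL/latest"
}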
JMAG
if [ "$hurr_model" == "yes" ]; then
mag_script=exmag_processor_hurr.pl # : hurricane model
else
mag_script=exmag_processor.pl # : non-hurricane model
fi
${SCRIPTSmag}/${mag_script} ${MAGPROCPL_FLAGS}
rm -rf $DATA
Log files
1. ECFLOW script
./ecf/scripts/mag/mag_processor/hrrr/mag_hrrr_processor.ecf
#BSUB -o /nco/sib/magdev/com/output/prod/today/mag_hrrr_processor.o%J
#BSUB -e /nco/sib/magdev/com/output/prod/today/mag_hrrr_processor.o%J
export BASE_DIR=/nco/sib/magdev
export GIF_ROOT=$BASE_DIR/com/mag/$envir/gifs
export VERSION_FILE=$BASE_DIR/nw${envir}/versions/mag.ver
Where is COMROOT defined?
mag_hrrr_processor.ecf
%include "/ecfmag/ecfnets/scripts/head.h" ### only available on ecflow1 system.
### where COMROOT is defined
envir.h
export envir=test
export COMROOT=/nco/sib/magdev/com
export NWROOT=/nco/sib/magdev/nwtest
export DATAROOT=/nco/sib/magdev/tmpnwprd1
export ncorzdm_username=mag
export SENDECF=YES
%include "/ecfmag/ecfnets/scripts/tail.h" ### only avaiable on ecflow1 system.
Note:
1) These included files are located on ecflow1, where ecflow runs, NOT on wcoss.
2) The ecflow1 system does not have access to SVN; therefore, we need to copy these files from ecflow1 to wcoss in order to commit them to SVN.
3) Even when they are on wcoss, they are NOT used by ecflow. The files on ecflow1 are what ecflow uses.
Fatal error tracking
Main body:
logmsg( $Config::fatal, "No areas defined for model $model");
$proc_error=1;
logmsg( $Config::fatal, "No parameters defined for model $model");
$proc_error=1;
logmsg( $Config::fatal, "No input filepath patterns defined for model $model");
$proc_error=1;
logmsg($Config::fatal, "Cannot change to home dir $ENV{HOME} so I can remove the temp dir");
logmsg($Config::fatal, "Exiting on fatal error\n\n");
exit $proc_error;
Signal handling
set [+abefhkmnptuvxBCEHPT] [+o option] [arg ...]
Without options, the name and value of each shell variable are displayed in a format that can be reused as input for setting or resetting the currently-set variables. Read-only variables cannot be reset. In posix mode, only shell variables are listed. The output is sorted according to the current locale. When options are specified, they set or unset shell attributes. Any arguments remaining after option processing are treated as values for the positional parameters and are assigned, in order, to $1, $2, ... $n. Options, if specified, have the following meanings:
-a Automatically mark variables and functions which are modified or created for export to the environment of subsequent commands.
-b Report the status of terminated background jobs immediately, rather than before the next primary prompt. This is effective only when job control is enabled.
-e Exit immediately if a pipeline (which may consist of a single simple command), a subshell command enclosed in parentheses, or one of the commands executed as part of a command list enclosed by braces (see SHELL GRAMMAR above) exits with a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test following the if or elif reserved words, part of any command executed in a && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command's return value is being inverted with !. A trap on ERR, if set, is executed before the shell exits. This option applies to the shell environment and each subshell environment separately (see COMMAND EXECUTION ENVIRONMENT above), and may cause subshells to exit before executing all the commands in the subshell.
SIGHUP 1 Hang up detected on controlling terminal or death of controlling process
SIGINT 2 Issued if the user sends an interrupt signal (Ctrl + C).
SIGQUIT 3 Issued if the user sends a quit signal (Ctrl + \).
SIGFPE 8 Issued if an illegal mathematical operation is attempted
SIGKILL 9 If a process gets this signal it must quit immediately and will not perform any clean-up operations
SIGALRM 14 Alarm Clock signal (used for timers)
SIGTERM 15 Software termination signal (sent by kill by default).
$ kill -l
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL
5) SIGTRAP 6) SIGABRT 7) SIGBUS 8) SIGFPE
9) SIGKILL 10) SIGUSR1 11) SIGSEGV 12) SIGUSR2
13) SIGPIPE 14) SIGALRM 15) SIGTERM 16) SIGSTKFLT
17) SIGCHLD 18) SIGCONT 19) SIGSTOP 20) SIGTSTP
21) SIGTTIN 22) SIGTTOU 23) SIGURG 24) SIGXCPU
25) SIGXFSZ 26) SIGVTALRM 27) SIGPROF 28) SIGWINCH
29) SIGIO 30) SIGPWR 31) SIGSYS 34) SIGRTMIN
35) SIGRTMIN+1 36) SIGRTMIN+2 37) SIGRTMIN+3 38) SIGRTMIN+4
39) SIGRTMIN+5 40) SIGRTMIN+6 41) SIGRTMIN+7 42) SIGRTMIN+8
43) SIGRTMIN+9 44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12
47) SIGRTMIN+13 48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14
51) SIGRTMAX-13 52) SIGRTMAX-12 53) SIGRTMAX-11 54) SIGRTMAX-10
55) SIGRTMAX-9 56) SIGRTMAX-8 57) SIGRTMAX-7 58) SIGRTMAX-6
59) SIGRTMAX-5 60) SIGRTMAX-4 61) SIGRTMAX-3 62) SIGRTMAX-2
63) SIGRTMAX-1 64) SIGRTMAX
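For MAG the interesting one is SIGTERM: the suite's ECF_KILL_CMD above sends kill -15. An example (not MAG code) of trapping it so a job can clean up its work directory before dying:
workdir=$(mktemp -d)
trap 'echo "caught SIGTERM, cleaning up"; rm -rf "$workdir"; exit 143' TERM  # 143 = 128+15
trap 'rm -rf "$workdir"' EXIT
sleep 300   # stand-in for the real processing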
$ vi /nwprod/util/ush/err_chk.sh
#!/bin/sh
#
# Script: err_check Author: Bill Facey 2 Apr 96
#
# ABSTRACT: This script checks a return code in $err. If $err=0
# a message that the program completed is put in the job outputfile and
# in a log file. If $err != 0 then fail messages are put in job
# outputfile and in the jlogfile, a failjob is sent to the front end,
# and processing is halted and the screen is turned purple.
#
# USAGE: To use this script one must export the following variables
# to the script: err, pgm, pgmout, jobid, logfile, SENDMSG, failjob,
# and ENVIR.
#
#
set +x
export SENDECF=${SENDECF:-YES}
[ -z "$utilscript" ] && utilscript=/nwprod/util/ush
if test "$err" -ne '0'
then
cat < errfile
Main process flow
As I'm sure you have noticed, many products are not made for all forecast hours. The reason for this is that the input grids are not available for all the forecast hours. So when you see that a product is missing, you need to know if that is normal.
For example, the precip_<num> products: these are for accumulated precipitation up to that hour, so they are not made for any hour less than <num>.
It is assumed that a product will be made for all forecast hours, and if it is not, it is an "exception". (Not like a Java exception, this isn't an error!) The exception rule is controlled by the exceptions tag for that parameter in the MAG.xml file:
For precip_p06:
<tns:models>
<tns:model>
<tns:name>GFS</tns:name>
<...>
<tns:name>precip_p06</tns:name>
<...>
<tns:exceptions>
<tns:all>006,240</tns:all>
<tns:polar>999,-999</tns:polar>
</tns:exceptions>
The exceptions tag can contain tags for one or more areas, or one tag for all the areas. This one has one of each.
<tns:all>006,240</tns:all> means that for all areas it will only be created for hours 006 through 240.
<tns:polar>999,-999</tns:polar> means that for the polar area, it will not be created for any hour.
The only exception that isn't entirely controlled by an exception tag is GFS dom_precip_type. It is restricted to 006 - 240 like the precip_p06 above, but it is also only available every 6 hours instead of the usual every 3 hours. That is handled with some model-specific code in the MAG_processor.
You'll get to know the exceptions as time goes on...
-Paula
Status sync
JSNDMAG2WEB
|| call
\/
$HOMEmag/scripts/exmag_status_sync.sh.ecf
Purpose:
>> When there is a switch in operational machines ( gyre <--> tide ), we need to copy
the status file for a model from one machine to the other, so we don't have to
reprocess that model's data on the machine to which operations switched.
>> However, keep in mind that each machine (gyre / tide) is made up of a few hosts. When
such a switch occurs, one of those hosts is picked to serve as the operational host
for gyre / tide.
>> We want to rotate hosts so each host of gyre / tide gets an equal share of usage.
That is what the script "exmag_status_sync.sh.ecf" does: it finds the next host in
the host list for gyre / tide when there is a switch in operations.
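A round-robin sketch of that host rotation (illustrative only; the real selection logic is in exmag_status_sync.sh.ecf, and the host names and state file here are made up):
hosts=(g10a1 g10a2 g14a1 g14a2)     # hypothetical gyre hosts
state=/tmp/last_host_index          # hypothetical state file
i=$(cat "$state" 2>/dev/null || echo -1)
i=$(( (i + 1) % ${#hosts[@]} ))     # rotate to the next host in the list
echo "$i" > "$state"
echo "next sync host: ${hosts[$i]}"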
ECFLOW on cpecflow1.ncep.noaa.gov
$ ssh -X Deyong.Xu@cpecflow1.ncep.noaa.gov
1) Start ecflow server
$ ecflow_start.sh # use port# assigned to username
$ ecflow_start.sh -p 22364 # specify port#
Log message:
[Deyong.Xu@cpecflow1 bin]$ ecflow_start.sh
cat: /u/Deyong.Xu/.ecflowrc/cpecf.Deyong.Xu.22364: No such file or directory
Request( --ping :Deyong.Xu ), Failed to connect to localhost:22364. After 2 attempts. Is the server running ?
Mon Jul 20 19:49:30 UTC 2015
User "20864" attempting to start ecf server on "cpecflow1.ncep.noaa.gov" using ECF_PORT "22364" and with:
ECF_HOME : "/u/Deyong.Xu/ecflow_server"
ECF_LOG : "cpecflow1.ncep.noaa.gov.22364.ecf.log"
ECF_CHECK : "cpecflow1.ncep.noaa.gov.22364.check"
ECF_CHECKOLD : "cpecflow1.ncep.noaa.gov.22364.check.b"
ECF_OUT : "/dev/null"
client version is Ecflow version(4.0.2) boost(1.53.0) compiler(gcc 4.4.7) protocol(TEXT_ARCHIVE) Compiled on Apr 25 2014 16:03:29
Checking if the server is already running on cpecflow1.ncep.noaa.gov and port 22364
Request( --ping :Deyong.Xu ), Failed to connect to cpecflow1.ncep.noaa.gov:22364. After 2 attempts. Is the server running ?
Backing up check point and log files
OK starting ecFlow server...
Placing server into RESTART mode...
To view server on ecflowview - goto Edit/Preferences/Servers and enter
Name : <unique ecFlow server name>
Host : cpecflow1.ncep.noaa.gov
Port Number : 22364
$ ssh -X Deyong.Xu@cpecflow1.ncep.noaa.gov
$ alias ecview='/ecf/ecfdir/ecflow/bin/ecflowview '
$ ecview
How to set up MAG server in ecflow?
Menu : Edit --> Preferences... --> Servers
Name: MAG
Host : cpecflow1.ncep.noaa.gov
Port: 27182
Note:
1) Individual developers can monitor the status of MAG ecflow jobs.
2) We don't have the password for the mag.dev account; Paula probably has it.
On cpecflow1.ncep.noaa.gov
# Define ecflow job structure and global ENV vars.
/ecfmag/ecfnets/defs
admin.def
mag.def
test.def
# ecflow task scripts and shared include files (head.h, envir.h, tail.h)
/ecfmag/ecfnets/scripts
admin
mag
test
head.h
envir.h
tail.h
mag.def
# 4.0.3
suite mag
repeat day 1
edit ECF_FILES '/ecfmag/ecfnets/scripts'
edit ECF_INCLUDE '/ecfmag/ecfnets/scripts'
edit ECF_OUT '/ecfmag/ecfnets/output'
edit ECF_TRIES '1'
edit ECF_KILL_CMD '/ecfmag/ecfutils/unixkill %ECF_NAME% %ECF_JOBOUT%'
edit ECF_PASS 'FREE'
edit ECF_JOB_CMD '/ecfmag/ecfutils/unixsubmit %ECF_JOB% %ECF_JOBOUT% ibmsp'
edit MAG_TRANSFER 'OFF'
task mag_transfer_on_off
defstatus complete
edit ECF_JOB_CMD '%ECF_JOB% 1> %ECF_JOBOUT% 2>&1'
edit ECF_KILL_CMD 'kill -15 %ECF_RID%'
label info ""
family mag_processor
family gfs
task mag_gfs_processor # task name
event 1 processing
cron 00:04 23:54 00:10
endfamily
family rap
task mag_rap_processor
event 1 processing
cron 00:04 23:54 00:10
endfamily
....
endfamily
family mag_send2web
....
endfamily
endsuite
mag_gfs_processor.ecf ( /ecfmag/ecfnets/scripts/mag/mag_processor/gfs )
#BSUB -J jmag_gfs
#BSUB -o /nco/sib/magdev/com/output/prod/today/mag_gfs_processor.o%J
#BSUB -e /nco/sib/magdev/com/output/prod/today/mag_gfs_processor.o%J
#BSUB -q devhigh
#BSUB -a poe
#BSUB -x
#BSUB -L /bin/sh
#BSUB -W 04:00
#BSUB -R rusage[mem=500]
#BSUB -n 80
#BSUB -R span[ptile=16]
#BSUB -P MAG-MTN
%include "/ecfmag/ecfnets/scripts/head.h"
set -x
export MODEL=gfs
export MP_PGMMODEL=mpmd
export MP_LABELIO=YES
export MP_CMDFILE=poe_script
%include "/ecfmag/ecfnets/scripts/envir.h"
export VERSION_FILE=$NWROOT/versions/mag.ver
if [ -f $VERSION_FILE ]; then
. $VERSION_FILE
else
ecflow_client --msg="***JOB ${ECF_NAME} ERROR: Version File $VERSION_FILE does not exist ***"
ecflow_client --abort
exit
fi
export HOMEmag=$NWROOT/mag.$mag_ver
$HOMEmag/jobs/JMAG
%include "/ecfmag/ecfnets/scripts/tail.h"
%manual
[Deyong.Xu@cpecflow1 ~]$ ecflowview
[Deyong.Xu@cpecflow1 .ecflowrc]$ cat /u/Deyong.Xu/.ecflowrc/servers
MAG cpecflow1.ncep.noaa.gov 27182 # port no. Working for mag.dev
dxu_server localhost 22364 # Port no. working for Deyong.Xu
lh_27182 localhost 27182
local localhost 3141
local2 localhost 3141
Check quota
Calling sequence
File 1 is : ./ecf/scripts/admin/mag/mag_ckquota.ecf
$HOMEmag/jobs/JMAG_CKQUOTA
File 2 is : ./jobs/JMAG_CKQUOTA
$SCRIPTSmag/exmag_checkquota.sh.ecf
File 3 is : ./scripts/exmag_checkquota.sh.ecf
ABORTLIMIT=80
if [ "$localmach" = "g" ] ; then
filespace="gpfs-gd2"
else
filespace="gpfs-td2"
fi
usage=`/usr/lpp/mmfs/bin/mmlsquota -j nco-sib $filespace | tail -1 | awk '{print $3}'`
quota=`/usr/lpp/mmfs/bin/mmlsquota -j nco-sib $filespace | tail -1 | awk '{print $4}'`
percent=$(awk -v dividend="${usage}" -v divisor="${quota}" 'BEGIN {printf "%.2i", dividend/divisor * 100; exit(0)}')
echo Usage is ${percent}% of quota
if (( $percent > $ABORTLIMIT )); then
echo "Usage is over ${ABORTLIMIT}%. Abort!"
exit -1
fi
Rsync files to web farm
Calling sequence to sync files to web server
./ecf/scripts/mag/mag_send2web/polar/ecmag_sync_polar.ecf
( one of the models syncing its files )
#BSUB -J jecmag_sync_polar
#BSUB -o /nco/sib/magdev/com/output/prod/today/ecmag_sync_polar.o%J
#BSUB -e /nco/sib/magdev/com/output/prod/today/ecmag_sync_polar.o%J
#BSUB -q transfer
#BSUB -R affinity[core]
#BSUB -L /bin/sh
#BSUB -W 03:00
#BSUB -R rusage[mem=500]
#BSUB -n 1
#BSUB -P MAG-MTN
export ncorzdm_username=mag
export HOMEmag=$NWROOT/mag.$mag_ver
$HOMEmag/jobs/JSNDMAG2WEB
|| calling
\/
$HOMEmag/jobs/JSNDMAG2WEB
|| calling
\/
$SCRIPTSmag/exsendmag2web.sh.ecf $MODEL $TABLEDIR/$TABLE $transfer_file $yyyymmdd $cycle
How to get to RZDM (web farm) : way 1
# 1) login wcoss as individual
$ ssh -X Deyong.Xu@devbastion.ncep.noaa.gov
# 2) sudo to mag.dev
$ sudo su - mag.dev
# 3) connect to RZDM, auto-key has been set up already.
$ ssh mag@ncorzdm
How to get to RZDM (web farm) : way 2
You can all log into the mag account on ncorzdm via the mag.dev account on WCOSS (have you got a WCOSS account yet, Komi?).
If you set up passwordless ssh from your workstation, you can log onto mag@ncorzdm directly from your workstation, without having to go through the mag.dev account on WCOSS. It's the sudo privilege to mag.dev that allows this, so if you have a WCOSS account with sudo privilege to mag.dev, you can set this up.
You can use this procedure to create a key on your workstation, then log onto ncorzdm via the mag.dev account on WCOSS, and insert your workstation key into the .ssh/authorized_keys file on ncorzdm:
http://www2.nco.ncep.noaa.gov/sisb/webmasters/ssh/pwdless_ssh2.shtml
Please make a copy of .ssh/authorized_keys before you edit it!
-Paula
How to get to RZDM (web farm) : way 2 (cont...)
# Login to RZDM
$ ssh mag@ncorzdm
# go to data dir where operational MAG sends data to
$ prod
$ cd data
# go to data dir where parallel MAG sends data to
$ para
$ cd data
# go to data dir where test MAG sends data to
$ test
$ cd data
GEMPAK setup on wcoss
GEMPAK Resources
Local Google site - Wiki : https://sites.google.com/a/noaa.gov/nws-ncep-nco-gempak/
TRAC - Old wiki, but current tickets
http://vm-lnx-sibcf4.ncep.noaa.gov:8000/nawips/report
http://vm-lnx-sibcf4.ncep.noaa.gov:8000/nawips
Unidata : http://www.unidata.ucar.edu/software/gempak/
Setup GEMPAK
1) on wcoss:
$ vi .bashrc
export NAWIPS=/nwprod/gempak/nawips
. $NAWIPS/environ/gemenv.sh
[mag.dev@t14a2 gempak]$ which gdinfo
/nwprod/gempak/nawips/os/linux2.6.32_x86_64/bin/gdinfo
2) on local station
$ vi .bash_profile
export NAWIPS=/export-1/cdbsrv/nawdev/nawips
. $NAWIPS/environ/gemenv.sh ## ignore warning messages
PATH=$PATH:$NAWIPS/os/linux2.6.18_x86_64/bin
3) use csh ( .cshrc copied from A. Su )
$ csh # it reads .cshrc and automatically sets up all the ENVs needed to run GEMPAK.
hrrr.sh ( under ush )
gdplot2_gif << EOF
GDFILE = $MODEL_HRRR/${file_type}_${date}${cycle}f${hr}${min} # 1) explicitly specify input location
GDATTIM = F${hr}${min}
GLEVEL = $level
GVCORD = $coord
PANEL = 0
# 2) Alternatively, datatype.tbl can be used to specify the input location.
last.nts
1. Parameter values saved from the last gdinfo session.
2. Automatically loaded by gdinfo when it starts.
gemglb.nts : created when gdinfo exits.
MAG production trunk
I've never received one of these notices of a production svn update before.
I can't log in; the username/password I have on record for this server isn't working.
I'll put in a helpdesk ticket.
-Paula
---------- Forwarded message ----------
From: NCO WCOSS Subversion <ncep.list.spa-helpdesk@noaa.gov>
Date: Tue, Jun 9, 2015 at 7:36 AM
Subject: [Ncep.list.mag-ic] mag trunk updated to version 3.6.2
To: ncep.list.mag-ic@noaa.gov
Cc: ncep.list.spa-helpdesk@noaa.gov
The mag trunk (https://svnwcoss.ncep.noaa.gov/mag/trunk) has been updated on the NCO WCOSS Subversion server. This revision has been tagged as https://svnwcoss.ncep.noaa.gov/mag/tags/IT-mag.v3.6.2
You are receiving this message because you have been identified as a code manager on this project. Please merge any updates in this revision with your code. To update the recipient list for these notifications, contact ncep.list.spa-helpdesk@noaa.gov.
SVN Log:
------------------------------------------------------------------------
r61 | floyd.fayton@noaa.gov | 2015-06-09 11:36:44 +0000 (Tue, 09 Jun 2015) | 1 line
Changed paths:
M /trunk/fix/MAG.xml
M /trunk/jobs/JMAG_HURR
Upgrade of MAG for HWRF/GFDL implementation at 15Z.
------------------------------------------------------------------------
_______________________________________________
Ncep.list.mag-ic mailing list
Ncep.list.mag-ic@lstsrv.ncep.noaa.gov
https://www.lstsrv.ncep.noaa.gov/mailman/listinfo/ncep.list.mag-ic
MAG setup /configuration files
1. Setup / configuration files used in the MAG process:
ush/setup.sh === call ==> ush/set_up_gempak
2. Locally used configuration file when running MAG under a personal account:
ush/set_comin.sh
MAG Web Scan
No new scan result for MAG has been posted. I have attached the last scan from April. Nothing should be different this time unless they used different parameters for the scan.
Following is what is posted as far as notifications go. That website must be onsite, as I can't open it without being on VPN. Last I looked it had Ricardo listed as POC for MAG.
- Notify NCO Web POC's listed at http://www2.nco.ncep.noaa.gov/sisb/webmasters/servers/
- Notify mag.helpdesk@noaa.gov
--
MAG machines
WCOSS:
[dxu@nco-lw-dxu Desktop]$ host devbastion
devbastion.ncep.noaa.gov is an alias for devbastion.wcoss.ncep.noaa.gov.
devbastion.wcoss.ncep.noaa.gov is an alias for tbastion.wcoss.ncep.noaa.gov.
tbastion.wcoss.ncep.noaa.gov has address 192.58.1.81
tbastion.wcoss.ncep.noaa.gov has address 192.58.1.82
Web server
[dxu@nco-lw-dxu Desktop]$ host ncorzdm
ncorzdm.ncep.noaa.gov is an alias for rzdm.ncep.noaa.gov.
rzdm.ncep.noaa.gov has address 140.90.100.205
Ecflow server
[dxu@nco-lw-dxu Desktop]$ host cpecflow1
cpecflow1.ncep.noaa.gov has address 10.90.5.70
Local Linux
nco-lw-dxu.ncep.noaa.gov
MAG models
###################
#1 MODEL GUIDANCE
###################
GEFS-MNSPRD
GEFS-SPAG
GFS
HRRR
HRRR-SUBH <<< treated as hrrr
HRW-ARW
HRW-ARW-AK
HRW-ARW-PR
HRW-NMM
HRW-NMM-AK
HRW-NMM-PR
NAEFS
NAM
NAM-HIRES
NAM-SIM-RADAR <<< treated as NAM
POLAR
RAP
SREF
WW3
WW3-ENP
WW3-WNA
#############################
#2 OBSERVATIONS AND ANALYSES
#############################
UAIR
SKEWT
RTMA
RTMA-GUAM
######################
#3 TROPICAL GUIDANCE
######################
GHM-FULL
GHM-NESTED
HWRF-FULL
HWRF-NESTED
MAG product description
It's in MAG.xml on the web trunk.
Eg:
<strong><u>(1000_500_thick):</u></strong><br />
This product contains three fields:</p>
<ol>
<li>The thickness of the 1000 mb to 500 mb layer expressed in decameters (dm). Increments are 6 dm apart. For thicknesses of 546 dm and greater, these contours will be red dashed lines. For values below 546 dm the dashed lines will be in blue. The 546 dm line provides a very rough indication of possible frozen precipitation depending on surface temperature and other conditions. </li>
<li>Accumulated precipitation in inches for the 6 hours preceding the forecast hour (for forecast hours 003-177) or 12 hours preceding the forecast hour (for forecast hours 180-384), expressed as color fill. A reference bar of values for each color is located on the left side of the image.</li>
<li>Mean Sea Level Pressure expressed in millibars. Increments are 4 mb apart. These appear as solid black lines.</li></ol>
<br>