RIKEN CCJ (PHENIX Computing Center in Japan)
RICC for CCJ users
Since October 1, 2009, all CCJ users have been able to use a new computing facility, RICC (RIKEN Integrated Cluster of Clusters), which
is maintained by the RIKEN IT division.
You can find general documentation about RICC
( English / Japanese ).
Twenty computing nodes with the PHENIX analysis environment are reserved for the exclusive use of CCJ users.
- Current configuration (reserved for CCJ users)
|CPU
||Intel(R) Xeon(R) CPU X5570, quad-core 2.93GHz (Nehalem) x 2 CPUs per node
|Memory
||12GB per node
|Number of nodes for exclusive use
||20 nodes (1 for interactive + 19 for batch jobs)
|Number of CPU cores for batch jobs
||152 cores (8 cores x 19 nodes)
|OS
||Scientific Linux 5.3 (x86_64)
|Batch job system
||Condor
- Get Account
Please fill in the blue fields of the following application form
( English / Japanese ),
and send it to Yasushi Watanabe (watanaby -at- riken.jp)
as an e-mail attachment.
- How to use
- Login: Since your home directory is the same as at CCJ, once your account is granted you can log in to the
interactive node at RICC via ssh from ccjgw or ccjsun.
|[user:ccjgw]$ ssh mpc2001
- PHENIX environment: the "-a" option is required to take over the default environment variables. Source the setup script for the default build, or specify a particular version:
|[user:mpc2001]$ source /opt/phenix/bin/phenix_setup.csh -a
|[user:mpc2001]$ source /opt/phenix/bin/phenix_setup.csh -a pro.??
- Batch job: The Condor scheduler, which is familiar to PHENIX analyzers, is available
at RICC. If you do not know it, please refer to the general Condor manual.
First, prepare a job submission file such as:
Executable = your-script.csh
Universe = vanilla
getenv = true
Output = your-job-$(Process).out
Error = your-job-$(Process).err
Log = your-job-$(Process).log
Arguments = $(Process)
Queue 1
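For reference, here is a minimal sketch of what the executable named in the submit file might look like. The script name (your-script.csh) comes from the submit file above, but its contents here are purely illustrative, not a CCJ-provided template; it is written in portable sh for simplicity. Condor passes the process number ($(Process)) as the first command-line argument:

```shell
#!/bin/sh
# Illustrative job script: Condor passes $(Process) as the first
# argument, which can be used to select a job-specific task.
process=${1:-0}
echo "running job segment ${process}"
```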
and then just type the following to submit it. The line "getenv = true" is necessary
to take over the environment variables of the current interactive node.
|[user:mpc2001]$ condor_submit sample.cmd
You can check the status of your jobs and of the cluster with the standard Condor commands, for example:
|[user:mpc2001]$ condor_q
|[user:mpc2001]$ condor_status
- Working area: Each computing node has enough disk space under "/job_tmp" for the input/output
of your process; the exact location is given by the variable "$CCJ_JOBTMP". Please do not
access the NFS servers directly from your jobs; instead, copy large files from/to "/job_tmp"
with the rcpx command.
This reduces the load on the NFS servers (ccjnfs11-15 and ccjnfs20).
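The staging pattern described above can be sketched as follows. This is an illustrative example only: the file names are hypothetical, and plain cp/redirection is used as a stand-in because rcpx is a CCJ-specific command; on CCJ, large files should be staged with rcpx instead.

```shell
#!/bin/sh
# Illustrative staging pattern: work in the local scratch area given
# by $CCJ_JOBTMP instead of reading/writing the NFS servers directly.
workdir=${CCJ_JOBTMP:-/tmp}/ccj_demo.$$   # per-job scratch directory
mkdir -p "${workdir}"
# 1) stage the input into local scratch (use rcpx on CCJ)
echo "dummy input" > "${workdir}/input.dat"
# 2) run the analysis against the local copy
tr 'a-z' 'A-Z' < "${workdir}/input.dat" > "${workdir}/output.dat"
# 3) stage the output back (use rcpx on CCJ) and clean up
result=$(cat "${workdir}/output.dat")
echo "${result}"
rm -rf "${workdir}"
```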
|Last Modified: May 27, 2010
||Back to the CCJ Home page