PHENIX run6pp Production at CCJ

General Information for Analyzer

First of all, there's actually a webpage which describes how to write and run a PHENIX analysis module on a PHENIX nDST (or DST, pDST, etc). It's in the PHENIX offline software tutorials area at

http://www.phenix.bnl.gov/WWW/offline/tutorials.html

Look for the "Striped Analysis Module Example." I'm not totally sure that it is up to date, but if you send me any problems that you encounter I will fix and update things. Also send me any questions that you have because the goal is to have a complete set of documentation that will allow a new PHENIX member to come in and get themselves up to speed with a minimum of fuss.

I think the model we'll follow for the analysis will be a lot like what the Heavy Ion people do. Since there will be 16 TB of nDSTs, the most I/O-efficient way to do this will be to have people write their analysis modules to be added to the "AnaTrain," and there will be regularly scheduled AnaTrain passes over the full 16 TB dataset. People should develop and debug their analysis modules using ~10% of the full dataset. The "Striped Analysis Module Example" I mentioned above should tell you how to develop the analysis module. The debugging I leave to you....

To do your code development and get access to the PHENIX nDSTs, you should log on to either CCJ or RCF. To log on to CCJ, use the ccjgw.riken.jp gateway, and then log on to linux1 or linux2. To log on to RCF, go to rssh.rhic.bnl.gov and then log on to any interactive machine (rcas2061 to rcas2079). Just try not to use rcas2078 or
rcas2079 too often since that is what I typically use :)

For Analyzer at RCF

If you are on RCF, the nDSTs are kept in the "dCache" file system. Be sure to add the following to your .login on RCF if you want to access dCache:

setenv DCACHE_RAHEAD
setenv DCACHE_RA_BUFFER 2097152

This should speed up the dCache access. The physical filenames of the nDSTs are registered in the FROG database (as of 2005/Dec/20, all nDSTs were registered). You can use the FROG file catalog in your ROOT session:

> setenv GSEARCHPATH .:PG

> root

root> gSystem->Load("libFROG.so");

root> FROG fr;

root> char* filename = fr.location("CNT_Photon_run6pp_v01CCJ_pro68-<runnumber>-<segment number>.root")

To open the file from your ROOT macro, open it the same way you would any normal file:

root> TFile* f = TFile::Open(filename);

To get a listing of the "tags" in there, you can download the GOLDEN TAG list. (IMPORTANT!!! Please be careful with runs < 168705 since most of them are transverse runs. 2005/Dec/20.) The "golden" selection requires that 1) the Oncal status is good, 2) the date is after Apr/17, and 3) the polarization is longitudinal. The files in the lists above WILL NOT be moved or deleted unless further announced.

Golden TAG at RCF

CAUTION!!! Because we ran the aggregation again in December, 4% of the files transferred to RCF must be re-transferred. You should not use the files in the following list: BAD file list. (2006/Feb/13) CAUTION!!!!

Also, for the batch system RCF uses Condor instead of LSF, so the submission command is

condor_submit condor.job

where condor.job is a file containing something like

Universe = vanilla
Notification = Error
Initialdir = $ENV(PWD)
Executable = $ENV(PWD)/myjob.csh
Arguments = $(Process)
Log = $ENV(PWD)/myjob.log
Output = $ENV(PWD)/myjob.out
Error = $ENV(PWD)/myjob.err
Notify_user = $ENV(USER)@rcf.rhic.bnl.gov
GetEnv = True
+Experiment = "phenix"
+Job_Type = "cas"

where myjob.csh is your job script. myjob.csh should dccp the input file before running the analysis module, and delete the file afterwards. Note that I've only given a simple Condor example here; you can do more sophisticated things, but this might be enough to get started.
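The copy/run/delete pattern described above can be sketched as a small helper function. This is a POSIX-sh sketch of the logic only; the real myjob.csh would be csh, and stage_and_run, the example paths, and the analysis command are hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: stage a file in with a copy command (dccp on RCF),
# run the analysis on the local copy, then always clean up the scratch copy.
stage_and_run() {
  copy_cmd=$1; infile=$2; scratch=$3; shift 3
  mkdir -p "$scratch" || return 1
  $copy_cmd "$infile" "$scratch/" || return 1      # stage in (dccp on RCF)
  localfile=$scratch/$(basename "$infile")
  "$@" "$localfile"                                # run the analysis module
  status=$?
  rm -f "$localfile"                               # free the scratch disk
  return $status
}
# e.g. (paths and analysis command assumed):
#   stage_and_run /afs/rhic.bnl.gov/@sys/opt/d-cache/dcap/bin/dccp \
#     /pnfs/rcf.bnl.gov/phenix/phnxreco/run6/<file>.root /home/$USER \
#     your_analysis
```

The point of the wrapper is that the scratch copy is removed whether or not the analysis succeeds, so a failed job does not slowly fill the disk.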

If you want to copy the file to your local scratch disk (now called /home) you should use

/afs/rhic.bnl.gov/@sys/opt/d-cache/dcap/bin/dccp

The base directory of the dCache is the following:

/pnfs/rcf.bnl.gov/phenix/phnxreco/run6


For Analyzer at CCJ

One thing I did notice on CCJ was that the phenix_setup.csh script isn't in the .login automatically, so you will want to add the following lines to your CCJ .login:

if (-e /opt/phenix/bin/phenix_setup.csh) then
    source /opt/phenix/bin/phenix_setup.csh
endif

On CCJ, you can find some nDSTs to play with at

ccjbox2:/ccj/w/data02a/run6pp-ndst/AllLinks

ccjnfs11:/ccj/w/data41/run6pp-ndst/AllLinks

ccjnfs12:/ccj/w/data45/run6pp-ndst/AllLinks

The directory contains a subdirectory for each block of 1000 runs. For example, the directory 'run_0000170000_0000171000' holds nDSTs with run numbers between 170000 and 171000. Under each such directory there is a subdirectory for each type of nDST; for example, the PWG nDSTs for the photon trigger are in a subdirectory named 'PWG_Photon'. There isn't anything like FROG set up yet. To get a listing of the files in there, you can download the GOLDEN nDST lists (IMPORTANT!!! Please be careful with runs < 168705 since most of them are transverse runs. 2005/Dec/20) from here:

SPIN_All nDSTs

PWG_Photon nDSTs

PWG_MinBias nDSTs

CNT_Photon nDSTs

The "golden" selection requires that 1) the Oncal status is good, 2) the date is after Apr/17, and 3) the polarization is longitudinal. The files in the lists above WILL NOT be moved or deleted unless further announced. (IMPORTANT!!! Runs < 168705 were deleted from disk on 2005/Dec/20 since most of them are transverse runs.)
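The run-range directory naming described above can be computed mechanically from a run number. Here is a minimal sketch; ndst_dir is a hypothetical helper, not a CCJ tool:

```shell
#!/bin/sh
# Hypothetical helper: map a run number and nDST type to the relative
# directory path under .../run6pp-ndst/AllLinks, following the
# run_<lo>_<hi> scheme (1000-run blocks, zero-padded to 10 digits).
ndst_dir() {
  run=$1; type=$2
  lo=$(( run / 1000 * 1000 ))        # round down to the 1000-run block
  hi=$(( lo + 1000 ))
  printf 'run_%010d_%010d/%s\n' "$lo" "$hi" "$type"
}
# e.g. ndst_dir 170532 PWG_Photon  →  run_0000170000_0000171000/PWG_Photon
```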

Now, at CCJ the nDSTs sit on the ccjnfs12 NFS server, and the way to access them is to copy them with rcpx to a local scratch directory, /job_tmp. Here's an example:

rcpx ccjnfs12:/ccj/w/data45/run6pp-ndst/AllLinks/run_00001xx000_00001xx000/XXX.root /job_tmp/your_name/

You would run this rcpx from linux1, linux2, or a batch script on a batch node. The copy avoids accessing the file over NFS, which is quite inefficient. After you are done, delete the file from the scratch directory or else you will fill up the disk.

When you're developing your code and just want to see whether it runs properly, you will most likely want to keep the files in /job_tmp on the interactive machines linux1 and linux2 and run over them locally. Once you are beyond that and want to run on more of the data to see whether your code does the right thing, send your jobs to the batch queue. CCJ still uses LSF, so to submit a job you will run something like

bsub -o job.log -q queue_name "job.csh"

where job.log is the output log file, queue_name is either short or long (short is for jobs that take less than 3 hours), and job.csh is your analysis script. In your script you will need to rcpx the nDST from ccjnfs12 to the local /job_tmp directory, and don't forget to delete the file after your job is done. You can monitor your batch jobs with "bjobs".
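When you have many run segments, a submission loop can drive bsub, one job per nDST. This is a sketch; submit_ndst_jobs and the list file are hypothetical, and job.csh is your own script that does the rcpx, runs the analysis, and cleans up /job_tmp as described above:

```shell
#!/bin/sh
# Hypothetical helper: submit one LSF job per nDST path listed in a file,
# with a separate log file per segment. Queue names (short/long) are as
# described in the text; the third argument exists so the submit command
# can be overridden (e.g. for a dry run).
submit_ndst_jobs() {
  list=$1; queue=$2; bsub_cmd=${3:-bsub}
  while read ndst; do
    [ -n "$ndst" ] || continue                     # skip blank lines
    log="job_$(basename "$ndst" .root).log"        # one log per segment
    $bsub_cmd -o "$log" -q "$queue" "./job.csh $ndst"
  done < "$list"
}
# e.g. submit_ndst_jobs golden_CNT_Photon.list short
```

Replacing bsub with echo (the third argument) lets you inspect the generated bsub commands before actually submitting anything.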

Useful Links

PHENIX Offline
http://www.phenix.bnl.gov/WWW/offline/tutorials.html
http://www.phenix.bnl.gov/WWW/offline/tutorials/StripedAnalysisModules.html

Run05 Production
http://www.phenix.bnl.gov/phenix/WWW/run/05/dataprod/Projects/projects.html

Train
https://www.phenix.bnl.gov/WWW/p/draft/anatrain/

Recalibrators
http://www.phenix.bnl.gov/WWW/offline/tutorials/recalibration.html
http://www.phenix.bnl.gov/WWW/run/04/dataprod/Recalibrators/recalibrators.html
https://www.phenix.bnl.gov/WWW/p/draft/sickles/masterrecalibrator/recalibrators.html

CCJ info
http://ccjsun.riken.go.jp/ccj/ (CCJ Main Page)
http://ccjsun.riken.go.jp/ccj/doc/usersguide/ccjusersguide.html (User's Guide)
Interactive nodes: linux1, linux2 (avoid linux3/4 --> RedHat 8)

RCF
http://www.phenix.bnl.gov/WWW/offline/home.html
Interactive nodes: rcas2061-rcas2079,
see http://www.phenix.bnl.gov/WWW/offline/casNodes.html

Created by Mickey.

Modified by Hisa
