Navy DSRC Archival Storage Guide
This guide describes how to access the Remote Mass Storage Server (RMSS) system (a single Oracle T4-4 server named Newton) interactively or from batch jobs. The underlying archival file system on Newton uses Sun Microsystems SAM/QFS software to provide file archival and retrieval support. Users of the RMSS system need to understand four basic tasks:
1. Accessing the Archive Server
All of our HPC systems have access to an online archival mass storage system that provides long-term storage for users' files on a petascale robotic tape library. A 70-TByte disk cache fronts the tape file system and temporarily holds files while they are being transferred to or from tape.
Tape file systems have very slow access times. The tapes must be robotically pulled from the tape library, mounted in one of the limited number of tape drives and wound into position for file archival or retrieval. For this reason, users should always tar up their small files in a large tarball when archiving a significant number of files. A good maximum target size for tarballs is about 200 GBytes or less. At that size, the time required for file transfer and tape I/O is reasonable. Files larger than 1 TByte may span more than one tape, which will greatly increase the time required for both archival and retrieval.
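As a rough pre-check before archiving, a short script can compare a directory's size against the ~200-GByte single-tarball target. This is only a sketch: the directory and file names below are hypothetical demo paths, not DSRC locations.

```shell
# Sketch: compare a directory's size against the ~200-GByte tarball target.
# The demo directory and its contents are hypothetical, created here for
# illustration only.
dir=/tmp/project1_demo
mkdir -p "$dir"
echo "sample data" > "$dir/data.txt"

limit_kb=$((200 * 1024 * 1024))      # 200 GBytes expressed in KBytes
size_kb=$(du -sk "$dir" | cut -f1)   # total size of the tree in KBytes

if [ "$size_kb" -gt "$limit_kb" ]; then
    msg="split $dir into multiple tarballs before archiving"
else
    msg="$dir fits within the suggested single-tarball size"
fi
echo "$msg"
```

The same comparison could be run against a real $WORKDIR subdirectory before deciding how to partition an archive job.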
The environment variables $ARCHIVE_HOST and $ARCHIVE_HOME are automatically set. $ARCHIVE_HOST can be used to reference the archive server, and $ARCHIVE_HOME can be used to reference your archive directory on the server. These can be used when transferring files to/from archive.
Navy DSRC HPC systems are configured with a transfer queue that has access to the Navy DSRC archive server, as well as to remote machines. It is recommended that users submit transfer queue jobs to move data within and external to the center. Transfer queue jobs are only charged for a single processor at a time, so transferring data via a transfer queue job instead of from within a large computational job will save allocations. Examples of using the transfer queue can be found in each of the HPC User Guides and the PBS Guides, accessible from the Documentation page. An example can also be found in the Sample Code Repository ($SAMPLES_HOME) on each HPC system.
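As a minimal sketch, a transfer-queue job might look like the following. The queue name, project ID, and walltime shown here are placeholders; consult the PBS Guides and $SAMPLES_HOME on your system for the exact directives your center expects.

```shell
#!/bin/bash
#PBS -q transfer            # transfer queue name (placeholder; see PBS Guide)
#PBS -A Project_ID          # project/allocation ID (placeholder)
#PBS -l select=1:ncpus=1    # transfer jobs are charged for a single processor
#PBS -l walltime=02:00:00   # illustrative walltime
#PBS -j oe

# Copy a tarball to the archive server using the predefined variables.
# The tarball name is illustrative.
cd $WORKDIR
rcp my_files.tar.gz $ARCHIVE_HOST:$ARCHIVE_HOME
```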
Non-Kerberized rsh and rcp remote access to Newton is allowed only from other Navy DSRC systems inside the Navy DSRC local network; Kerberos access is required in all other cases. Users do not need a .rhosts file on Newton to use the non-Kerberized commands.
Remote access to Newton from all Navy DSRC systems and other remote systems is also allowed via the standard set of Kerberos-aware commands such as krcp, kftp, krsh, ktelnet, krlogin, and ssh.
2. Copying Files to the Archive Server
Storing files on the archive server is very simple. Large amounts of data should be tarred and gzipped before copying them to the archive server. Use the accepted protocols (krcp/rcp/scp or kftp/sftp) to copy the files to the server. The $ARCHIVE_HOST and $ARCHIVE_HOME environment variables found on each of the HPC platforms may also be used to further simplify archiving files. Below are examples of tarring, gzipping and copying a file to the archive server.
haise > tar -cvf my_files.tar $WORKDIR/project1/
haise > gzip my_files.tar
haise > rcp my_files.tar.gz $ARCHIVE_HOST:$ARCHIVE_HOME
The above example shows directory $WORKDIR/project1 being tarred. The tar command is used to create an archive file that combines multiple directories and files into a single file. The resulting tar file, my_files.tar, is then compressed via the gzip utility. Tarring and compressing the file will save space on the archive server and make retrieving entire datasets easier.
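Before removing the originals, it is prudent to confirm that the compressed tarball lists cleanly end-to-end. The following self-contained sketch builds a small demo tarball (the file names are hypothetical) and verifies it with "tar -t":

```shell
# Sketch: build a small demo tarball, then verify it reads end-to-end.
# All names here are illustrative demo files, not real archive data.
tmp=$(mktemp -d)
echo "run results" > "$tmp/run1.out"
tar -czf "$tmp/my_files.tar.gz" -C "$tmp" run1.out

# tar -t decompresses and reads the entire archive, so corruption
# anywhere in the file makes the listing fail
if tar -tzf "$tmp/my_files.tar.gz" > /dev/null 2>&1; then
    status=ok
else
    status=corrupt
fi
echo "tarball check: $status"
```

Running the same "tar -tzf" check against a tarball copied back from the archive server is a quick way to confirm a transfer completed intact.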
The newly created gzipped tar file, my_files.tar.gz, is then copied via the rcp command to $ARCHIVE_HOST, which is currently Newton, and placed in the directory referenced by $ARCHIVE_HOME, which points to a user's home directory on $ARCHIVE_HOST.
The non-Kerberized rcp command is available on HPC login nodes and transfer queues to facilitate ease of archiving data. When transferring data to the archive server from external sources, Kerberized commands such as krcp/scp or kftp/sftp should be used.
3. Determining the Status of Archived Files
The man page for the SAM-QFS command sls is quite long and offers many options, but only the "-D" and "-2" options are needed to determine whether a file is online on disk or archived to tape.
newton > sls -2 /u/home/user1/r.sh
-rwx------  1 user1 NAV101 451 Dec 5 2003 /u/home/user1/r.sh
O-a------ ----- -- -- sf dk
Using sls with the "-D" option gives more detailed information on files.
newton > sls -D /u/home/user1/r.sh
/u/home/user1/r.sh:
  mode: -rwx------  links: 1  owner: user1  group: NAV101
  length: 451  admin id: 0  inode: 1122645.1
  offline;
  copy 1: ---- May 4 17:59  168888.192b6 sf SR3127
  copy 2: ---- Feb 2 2008  65766.e dk disk01 d6/d87/f102
  access:   Dec 8 2003   modification: Dec 5 2003
  changed:  Aug 22 18:09  attributes:  Dec 8 2003
  creation: Dec 8 2003   residence:   May 4 17:59
This file is offline. There are two archive copies. The first copy is on tape, and the second copy is on the disk archive server. The residence field indicates when the file's online/offline state was last changed.
NOTE: Files not recently accessed on RMSS will probably be offline. Recall times for such files will vary with the load on the RMSS server and the size of the file.
When remotely checking file status, remember to run sls and other SAM commands on Newton to get correct status on files:
haise > rsh newton sls -2 /u/home/user1/romulus.tar
-rw-r--r--  1 user1 NAVOSLMA 1781760 Sep 26 17:10 romulus.tar
O-------- ----- -- -- sf dk
The following examples illustrate a typical user session:
haise > krsh newton "ls -lt habu.tar"
-rw-r--r--  1 user1 NA0101 10240 Nov 15 10:41 habu.tar
haise > ssh newton "mv habu.tar habu.tar.old"
haise > krsh newton "ls -lt habu.tar*"
-rw-r--r--  1 user1 NA0101 10240 Nov 15 10:41 habu.tar.old
haise > krcp habu.tar newton:/u/a/user1/habu.tar
haise > krsh newton "ls -lt habu.tar*"
-rw-r--r--  1 user1 NA0101 10240 Jan 5 12:16 habu.tar
-rw-r--r--  1 user1 NA0101 10240 Nov 15 10:41 habu.tar.old
4. Copying Files From the Archive Server
Use the sls or sfind commands to decide if a file is online or archived to tape. Any file access such as cat, more, file, tail, rcp, etc., will start an automatic stage request. You can also use the stage command to retrieve one or more files from tape.
Example of background command:
# run tar command in the background while doing other work interactively.
# tarfile will automatically be unarchived from tape onto local disk
#
newton > sls -2 vlsc.tar
-rw-r-----  1 user1 NA0101 84213760 Sep 29 1999 vlsc.tar
O--------- --- sg
newton > tar -tvf vlsc.tar >& tarfile.list &
[1] 21161
newton > jobs
[1] + Done       tar -tvf vlsc.tar >& tarfile.list
newton > head -3 tarfile.list
-rw-r--r-- 211/101 161 Feb 1 14:49 1999 user1/pclint.c
-rwxr-xr-x 211/101 137 Jul 3 12:08 1997 user1/.cshrc
-rwxr-xr-x 211/101 961 Jan 27 08:04 1999 user1/.cshrc.cray

Example using the stage command:

newton > sls -2 MicroEMACS.help
-rwxr--r--  1 user1 NA0101 8755 Aug 19 1993 MicroEMACS.help
O--------- --- sg
newton > stage MicroEMACS.help
newton > sls -2 MicroEMACS.help
-rwxr--r--  1 user1 NA0101 8755 Aug 19 1993 MicroEMACS.help
---------- --- sg
newton > head -1 MicroEMACS.help
Since the /bin/stage command will complete before the file is actually staged onto disk, "stage -w" can be used in the background during interactive logins. Once "stage -w" completes, the file is completely online and accessible on disk:
newton > sls -2 3gb.file
-rw-r--r--  1 user1 NA0101 3221225472 Aug 21 23:07 3gb.file
O--------- --- sg
newton > stage -w 3gb.file &
newton > jobs
[1] + Done       stage -w 3gb.file
newton > sls -2 3gb.file
-rw-r--r--  1 user1 NA0101 3221225472 Aug 21 23:07 3gb.file
---------- --- sg
newton > file 3gb.file
3gb.file: English text
The next example uses "stage -w" and the wait command in a shell script to ensure a file is online before attempting to use the file:
#!/bin/bash
#
export fs=`sls -2 3gb.file | tail -1 | cut -c1`
#
if [ "$fs" == "O" ]; then
  echo "offline file"
  stage -w 3gb.file &
  wait
  echo "past wait...."
else
  echo "online file"
fi
# file should now be accessible and on disk
sls -D 3gb.file
# continue script processing...
Here is standard output from the example script listed above. Note that this took several minutes to complete but guaranteed that processing of the file did not occur until it was completely unarchived and on disk:
newton > stage.csh
offline file
[1] 1541
[1] + Done       stage -w 3gb.file
past wait....
3gb.file:
  mode: -rw-r--r--  links: 1  owner: user1  group: NA0101
  length: 3221225472  inode: 1548016
  archdone;
  copy 1: ---- Aug 22 09:58  b0d.1 sg 200072
  access:   Oct 13 10:04  modification: Aug 21 23:07
  changed:  Aug 21 23:07  attributes:  Oct 13 10:49
  creation: Aug 21 23:03  residence:   Oct 13 10:55
The previous script example can be modified for use under ksh or sh shells and can also be used to help prevent rcp timeouts in batch scripts that remotely access Newton from DSRC compute servers such as Haise and Kilrain. Here is a slightly modified version that runs on Haise with the resulting output:
#!/bin/bash
#
export RMSS=newton
export rfs=/u/home/user1
export file=3gb.file
export fs=`rsh $RMSS sls -2 $rfs/$file | tail -1 | cut -c1`
#
if [ "$fs" == "O" ]; then
  echo "offline file"
  rsh $RMSS "/bin/stage -w $rfs/$file; wait"
  wait
  echo "past wait...."
else
  echo "online file, continue processing..."
fi
#
# file should now be accessible and on disk
#
rcp $RMSS:$rfs/$file .
wait
ls -lt $file
# continue script processing....
Here is standard output from the previous example script. Note that it took ~10 minutes to unarchive and rcp a 3-GByte file from Newton to /scr on Haise:
haise > ~/stage.csh
offline file
past wait...
-rw-r--r--  1 user1 NA0101 3221225472 Oct 13 12:56 3gb.file
haise >