This document describes the details of ACE data processing and archiving
performed at the ACE Science Center.
Detailed information on ACE data formats may be found via the following links:
A general overview of the ACE Science Center can be found in
this paper (PDF doc).
The following applies to all stages of ACE data processing at the ASC:
All ACE data processing is done using the asc UNIX account (i.e. /users/asc, or ~asc)
on one of the ASC computers (mussel, otter or starfish).
Most ACE data processing is controlled by perl or shell scripts residing in the
~asc/scripts directory. All scripts referred to below reside in this directory, unless
otherwise noted.
Most ACE data processing is scheduled via cron jobs on mussel. Cron is a UNIX utility
for scheduling jobs to run at regular intervals. The cron jobs are managed
via the crontab file ~asc/cron_jobs_mussel.asc. To change the scheduling of ACE data
processing jobs, edit ~asc/cron_jobs_mussel.asc, and then submit the edited file to cron
using the crontab program. See the cron and crontab man pages for
more info.
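For example, updating the schedule might look like this (a sketch; the exact crontab options can vary between UNIX flavors):

    # after editing ~asc/cron_jobs_mussel.asc, install it as the asc crontab
    crontab ~asc/cron_jobs_mussel.asc
    # list the installed crontab to confirm the change
    crontab -l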
All messages generated by the data processing scripts running under cron are
automatically emailed to ASC staff who are members of the "asc" email alias.
The asc email alias is maintained by the SRL systems administrator.
Important parameters, such as the locations of important data directories, are
stored as environment variables in ~asc/procenv. The first thing ASC perl and shell
scripts do is to read in ~asc/procenv.
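For illustration, a shell script might pick up these parameters as follows (a minimal sketch, assuming a Bourne-shell-style procenv; the variable name LVL0_ARCHIVE is hypothetical, the real names are defined in ~asc/procenv):

    #!/bin/sh
    # read the common ASC processing environment (data directory locations, etc.)
    . ~asc/procenv
    # use an environment variable rather than a hard-coded path
    cd "$LVL0_ARCHIVE"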
In the procedure descriptions below, the naming of specific data archive
directories is avoided where possible. The locations of these directories can
change with time, making it difficult to maintain the many processing scripts
(and the documentation). In practice, permanent paths to these directories are
maintained using links from the asc home directory. These links, along with
the information in ~asc/procenv, make it unnecessary to edit all the
processing scripts if one changes the structure of the filesystem. ACE data processing
scripts should use ~asc/procenv and the links, and should not refer to directory paths
that may change with time.
Under normal circumstances Level 0 and Quicklook data processing
proceeds automatically, and no human intervention is required.
There are four stages in Level 0 and Quicklook data processing, as follows:
Ingest of IDR data files from the JPL Deep Space Network (DSN)
Production of Level 0 and Quicklook data files from the IDR files
Addition of Level 0 and Quicklook data file information into the ASC Level 0 database
Distribution of Quicklook data to ACE instrument teams
Note: Stages 1 and 2 were formerly the responsibility of the FOT at GSFC.
Stage 1. Ingest of IDR data files from DSN
IDR files are (almost) raw telemetry data files. They are generated at the end of
each spacecraft contact by DSN, and ftp'd to the ASC. There are three types of
IDR files: realtime (VC1), playback(VC2) and aggregate(VC3) (VC stands for Virtual Channel,
see DSN Telemetry Interface with ACE, above).
The file-naming convention used by DSN for IDR files is a little confusing.
As an example, for day 2002-100, the playback IDR filename is JPLIDR2002099.69238.
Basically, the day index in the IDR filenames starts at zero, whereas the day
index for Level 0, browse, and Level 1 filenames starts at 1. The "69238" filename extension
in the example above is not important for data processing at the ASC.
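In other words, the day number embedded in the IDR filename is one less than the Level 0 day-of-year. A one-line check of the correspondence (sketch):

    # Level 0 day-of-year 100 of 2002 corresponds to IDR files named JPLIDR2002099.*
    doy=100; printf "JPLIDR2002%03d\n" $((doy - 1))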
The ingest process is controlled by the update_IDR script,
which is scheduled to run regularly via cron.
update_IDR performs the following operations if it finds that a new IDR
file (or files) has been delivered by DSN:
Determine file type - realtime (VC1), playback (VC2) or aggregate (VC3).
The file type information is contained in the file header, not in the filename.
A small C program (~asc/lvl0_gen/sortIDR2) reads the header and outputs the file type.
Move file to the appropriate IDR data directory.
Compress aggregate files; they are not normally used in ACE science data processing
Scan realtime and playback files for corrupted Download VCTF Headers.
If more than 5 (configurable) corrupt headers are found, the file is moved
to a subdirectory called "corrupt", and a message is emailed to ASC staff.
Call Level 0 and Quicklook generation scripts
Write a flag file to let other scripts know new IDR files have arrived
Monitoring of IDR file data coverage
The coverage of playback IDR files received by the ASC from DSN is monitored.
The IDRplot script performs this monitoring, and is scheduled by cron. If
a significant data gap is found (gap > ~3 minutes), email is sent to ASC staff.
Also, a graphical display of IDR file coverage is displayed on ASC workstations,
and is updated whenever new IDR files arrive from DSN.
Things that can go wrong, and What to do
Normally, the automatic delivery of IDR files from DSN to the ASC goes smoothly.
Occasionally, network or other problems cause significant data dropouts or corruption
that necessitate human intervention.
A corrupt IDR file is indicated if the update_IDR script generates a message
like this:
update_IDR: !!! ALERT !!! Found 100 bad dnld vctfs in [some_filename].
Moving file to [corrupt directory], it will not be further processed
unless a human moves it out of the corrupt area.
A corrupt IDR file will not be used in subsequent data processing unless it
is moved out of the corrupt area. But there are several things that should be tried
before resorting to using the corrupt file:
Wait several hours and then manually retrieve the file again from the DSN.
The corruption may have occurred during the original file transfer from
DSN to the ASC, so retrieving the file again may fix the problem.
Manually retrieve the "Data Type 2" data file from the DSN, if one is available.
Manual retrieval of IDR files from the DSN is documented in the next sub-section
of this document.
If there are still significant data dropouts (> ~5 minutes) after all IDR files
have been
successfully retrieved from the DSN, then it is possible that they were caused
by some anomaly during the spacecraft contact. In this case, the ASC should contact
a Flight Ops Team staff member at GSFC to request that the appropriate slices be replayed
from the onboard solid-state recorders (SSRs) during the next spacecraft contact.
It is important that the request be made before the next spacecraft contact; otherwise the
data on the SSRs will be overwritten.
The current contact address for SSR replay requests at the FOT is acefot@listserv.gsfc.nasa.gov
Manual retrieval of IDR files from the DSN
Monitoring of IDR data coverage may indicate that expected IDR data have not been
received in the usual automatic manner from the DSN.
The DSN operates a website from which one can download ACE IDR files manually:
The website requires a username and password. The correct username is "ASC". The
password is the regular password for the asc UNIX account at SRL.
After logging in, navigate to the "IDR Inventory" page. The "Spacecraft
Identifier Code" should already be set to 92 (ACE). From this page you can
choose which day's IDR files to download. You can also choose the "Data Type"
(1 or 2). The DSN generally receives two datastreams from its stations.
Normally they are identical, but occasionally one stream has higher quality
data than the other. Only Data Type 1 files are ftp'd automatically to the ASC, so
a manual download of Data Type 2 files can sometimes help to fill in data dropouts.
Note: Some DSN stations do not regularly send the Data Type 2 datastream,
and, occasionally, they mess things up and send only Data Type 2.
In the "IDR Inventory Table" displayed on the web page, the "Data Destination" column
is important to understand. Entries in this column with suffix ".2" are playback
IDR files. Entries with suffix ".1" are realtime IDR files.
Entries with suffix ".0" are aggregate IDR files. Aggregate files are not used
in ACE data processing at the ASC.
The current contact person for IDR file issues at JPL is Norm Baker, (626)305-6216.
Stage 2. Production of Level 0 and Quicklook data files from the IDR files
Level 0 and Quicklook data files contain a file header, forward
time-ordered data packets with duplicate data removed, and optionally data quality
and accounting summaries appended to the end. Each data packet consists of a 10-byte
packet header, a 2-byte minor frame header, and 842 bytes of telemetry data
(one minor frame of data).
The spacecraft generates one packet per second. Detailed data format
documentation can be found in the references listed above.
Each Level 0 data file covers one 24-hour period, starting at 00:00:00 UT. So,
in general, each Level 0 data file should contain ~86400 data packets. Each
Quicklook data file contains all data for day N that are available at the
end of the spacecraft contact on day N. Otherwise, Quicklook files are in
the same format as Level 0 files.
The production of Level 0 and Quicklook data files is controlled by the lvl0_gen.pl and
quick-look_gen.pl scripts. The update_IDR script calls these scripts whenever it finishes
ingesting a new IDR file (see Stage 1 above).
Both lvl0_gen.pl and quick-look_gen.pl call the gen0 program,
located in ~asc/lvl0_gen. gen0 is a C program that requires two command-line parameters:
year and day-of-year. It scans IDR files for data packets that fall within the
day and produces an output file in Level 0 format. gen0 performs extensive
checking to ensure that the data packets are forward time-ordered, duplicates
have been removed, and that no truncated or corrupted data are included in the
output.
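For example, regenerating the Level 0 formatted output for 2002 day 100 by hand would look roughly like this (a sketch; normally lvl0_gen.pl and quick-look_gen.pl set up the directories and call gen0 themselves):

    cd ~asc/lvl0_gen
    # scan the IDR files for packets belonging to 2002-100 and write a
    # Level 0 formatted output file
    ./gen0 2002 100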
lvl0_gen.pl attempts to create one new Level 0 data file each time it is called. The script
performs the following operations:
Change directory to ~asc/lvl0_gen
Link to the Level 0 temporary data output directory
Look in the main Level 0 archive directory and determine the year and day-of-year
for the next Level 0 file to be created
Call the gen0 program to create a Level 0 formatted output file.
Scan the gen0 logfile to determine how many packets were included in the output.
Determine if the gen0 output file is worthy of being named the official Level 0
data file for that day. The criteria are:
The output file contains at least 86100 packets (out of a possible 86400).
All data acquired during the spacecraft contact on day+1 have been received.
If the gen0 output is "worthy", then rename the file to the official Level 0 file
naming convention, as detailed in the ICD between the ACE Mission Team and the
ACE Science Center, e.g. ACE_G001_LZP_2002-098T00-00-00Z_V01.DAT1, then move the file
to a temporary holding directory so that it can be picked up by the next stage in processing.
If the gen0 output is not "worthy" and the ASC has received 3 days of data beyond the
day in question, then go ahead and make an official Level 0 file using whatever data
are available. Warn ASC staff of the anomaly.
quick-look_gen.pl works somewhat differently from lvl0_gen.pl:
Change directory to ~asc/lvl0_gen
Link to the quick-look temporary data output directory
Determine the current year and day-of-year
If IDR files for the current year and day-of-year exist, call the
gen0 program to create a Level 0 formatted output file.
If the gen0 output contains more packets than any previous quicklook file
generated for the current day, then rename the file to the official Quicklook file
naming convention, as detailed in the ICD between the ACE Mission Team and the
ACE Science Center, e.g. ACE_G001_QLP_2002-098T00-00-00Z_V01.DAT1, then move the file
to a temporary holding directory so that it can be picked up by the next stage in processing.
Note: the data for a given day may reside in several IDR files, and these files are
not all delivered to the ASC at the same time. Therefore, several versions of the quicklook
file might be generated on any given day. Later versions will contain more data.
Things that can go wrong, and What to do
If a data gap spans an entire day, then no Level 0 file will be made for that day.
Currently, this will halt Level 0 data production, because the lvl0_gen.pl script
will not give up and move on to the next day. The solution is for an asc staff person
to manually initiate the Level 0 file generation for the first day for which
there are data available after the data gap. The procedure is:
login to the asc account.
Change directory to ~asc/scripts.
Run lvl0_gen.pl with explicit command-line parameters, e.g. to create a Level 0
file for 2002-100, run "lvl0_gen.pl 2002 100".
Once this is done, automated processing will take over for subsequent days.
Stage 3. Ingest of Level 0 and Quicklook data files into the ASC Level 0 database
Note: This is where ASC data processing began when production of
Level 0 and quicklook data was performed by the FOT.
In Stage 2 above, Level 0 and quicklook data files are produced and deposited into temporary
holding directories. In Stage 3, the Level 0 and quicklook data files are transferred to
permanent directories and basic information about the files is entered into a database.
The ingest process is controlled by the update_LZP and
update_QLP scripts, which are scheduled to run regularly via cron.
update_LZP (update_QLP) performs the following operations if it finds that a new
Level 0 (quicklook) file, or files, has been deposited in the holding directory
(a condensed sketch of this flow follows the list):
Make sure the file is at least several minutes old, i.e. it is not currently
being modified by another process.
Copy the file from the holding directory to an archive directory.
Copy the file from the holding directory to a backup archive directory
on the thrym computer at SRL, and compress the backup file using gzip
(Level 0 files only, not quicklook).
Set appropriate permissions on the files.
If the copy operations above succeeded, remove the file from the
holding directory.
Add information about the contents of the Level 0 (quicklook) file to the Level Zero Product (LZP)
database. The LZP database is named ~asc/LZPDB_file.
This is an ASCII text file with one record per line. The first line of the file contains
a number indicating the number of records following. The program used to update the
database is named ~asc/database/LZPDB_addlzp.
Notify asc staff via email that a new file has been ingested
(Level 0 files only, not quicklook).
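Putting these steps together, the flow for a single new Level 0 file looks roughly like the following sketch (the directory variables and the LZPDB_addlzp invocation are illustrative, not the script's actual code):

    f=ACE_G001_LZP_2002-098T00-00-00Z_V01.DAT1
    # skip files younger than a few minutes; they may still be arriving
    # copy to the main archive and to the backup archive (on thrym), then
    # gzip the backup copy and set read-only permissions
    cp $HOLDING/$f $ARCHIVE/ && cp $HOLDING/$f $BACKUP/ && gzip $BACKUP/$f
    chmod 444 $ARCHIVE/$f
    # remove the file from the holding area only if the copies succeeded
    rm $HOLDING/$f
    # record the new file in the Level Zero Product database
    ~asc/database/LZPDB_addlzp $ARCHIVE/$f
    # email asc staff that a new Level 0 file has been ingested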
Things that can go wrong, and What to do
Use the asc account to perform manual Level 0/quicklook data
processing. Only the asc account has the correct file permissions, which helps to protect
the data from accidental deletions. Possible reasons for human intervention in Level 0 data
processing include the following:
Network problems may cause the script to fail partway through the
process. A human might have to complete the process manually.
Occasionally, problems with the contents of a Level 0 or quicklook file will require
the database ~asc/LZPDB_file to be edited manually to allow Level 1 data processing to
proceed (see below). After editing LZPDB_file, make sure the number of records in the file matches
the number in the first line of the file.
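A quick consistency check (a sketch) compares the count on the first line with the number of record lines that follow:

    # record count claimed by the database
    head -1 ~asc/LZPDB_file
    # actual number of record lines (total lines minus the first, count line)
    echo $(( $(wc -l < ~asc/LZPDB_file) - 1 ))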
Stage 4. Distribution of Quicklook data to ACE instrument teams
Quicklook data are used by the instrument teams when they need to assess the health and
status of an instrument as soon as possible. Therefore, the quicklook data are made
available via anonymous ftp immediately after they are created.
The instrument teams are
generally only equipped to handle data files in Level 1 format, so the quicklook data
are converted to Level 1 format before they are posted on the ftp site.
The process is controlled by the update_QLP script
(same script as in Stage 3 above). For conversion of data to Level 1 format, update_QLP
calls the ~asc/aceprog/libgen program, described below in the
Level 1 Data Processing section.
Under normal circumstances, Ancillary data processing
proceeds automatically, and no human intervention is required.
Ancillary data files include Clock Calibration Report (CCR) files and
Attitude and Orbit State Report (AOSR) files. CCR files contain information
regarding calibration of the ACE onboard clock. AOSR files contain information
regarding the attitude, spin-rate, position and velocity of the ACE
spacecraft. The data are essential for Level 1, browse and Level 2 data
processing by the ASC. They are also used by the instrument teams in their own
science data processing. More details about the ancillary data may be found
here (AOSRs)
and
here (CCRs).
CCRs and AOSRs are sent to the ASC via ftp from the Flight Operations Team (FOT) at GSFC.
Normally, one CCR and one AOSR file is sent each day.
The data are deposited in a directory in the ASC ftp account on mussel. The ASC ingests
the data contained in these files and appends them to the ACE ancillary database, which is
an HDF-format data file.
The process of ingesting CCR and AOSR files is controlled by the
update_ancil script,
which is scheduled to run regularly via cron.
update_ancil performs the following operations if it finds that a new ancillary file
(or files) has been delivered by the FOT (a condensed sketch of this flow follows the list):
Make sure the new file is not a duplicate or a new version of a previously-received
file (only one AOSR and one CCR should be sent by the FOT for each day).
Copy the file from the incoming ftp directory to the ancillary data directory
(~asc/ancillary)
Copy the file from the incoming directory to a backup archive directory.
If the copy operations above succeeded, remove the file from the
incoming ftp directory.
Set appropriate permissions on the archived files.
Change directory to the ancillary data directory.
Append the information in the file to the ACE ancillary HDF database.
The database is named ACE_ANCIL.HDF, in the ancillary data directory.
The programs used to append the information are
~asc/aceprog/ancillary/readaosr and
~asc/aceprog/ancillary/readccr.
Copy the file to ancillary data dir on the asc ftp site.
Move the file to the ancillary data archive directory
(~asc/ancillary/reports)
Copy ACE_ANCIL.HDF to the web and ftp sites.
Copy ACE_ANCIL.HDF to the backup archive directory.
Call ~asc/scripts/update_start_clocks to update an
auxiliary database of daily spacecraft clock start times (this is used by the gen0
program during production of Level 0 data files).
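In condensed form, the core of the flow looks something like this (an illustrative sketch only; the actual arguments expected by readaosr and readccr are not shown here):

    cd ~asc/ancillary
    # append the new report's contents to the ancillary HDF database
    ~asc/aceprog/ancillary/readaosr  <new AOSR file>    # or readccr for a CCR
    # refresh the public and backup copies of the database
    cp ACE_ANCIL.HDF <web/ftp ancillary directory>/
    cp ACE_ANCIL.HDF <backup archive directory>/
    # update the auxiliary spacecraft clock start-time database
    ~asc/scripts/update_start_clocks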
Things that can go wrong, and What to do
The update_ancil script runs silently under normal circumstances.
If a problem occurs, an email is sent to members of the asc
email alias (just like other ASC scripts running under cron).
Use the asc account to perform ancillary
data processing manually. Reasons for human intervention in ancillary data
processing include the following:
Network problems may cause
the script to fail partway through the process. A human might have to complete
the process manually.
The FOT might deliver several versions of an Ancillary file for the same day.
If this happens, the update_ancil script moves later versions to a holding
directory and emails a message to ASC staff. ASC staff will need to inspect the
different versions to decide which version should be included in the database.
Occasionally, problems with the contents of an Ancillary file will require
the ACE_ANCIL.HDF database to be deleted and recreated from scratch. It is easier
to recreate the database from scratch than to delete records from it.
The script named
~asc/ancillary/mknewancil performs this procedure. It
uses as input all the Ancillary data files in
~asc/ancillary/reports. Therefore, make sure that all
the correct input files are in this directory before running mknewancil.
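Under the asc account, the rebuild amounts to something like this (sketch):

    cd ~asc/ancillary
    # verify that reports/ contains exactly the files that should go into
    # the rebuilt database (correct versions, no extras)
    ls reports | more
    # recreate ACE_ANCIL.HDF from scratch using everything in reports/
    ./mknewancil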
Occasionally, the FOT has changed the spacecraft epoch in the CCR files.
The spacecraft epoch is the official time of launch of the spacecraft and should
never change, but for whatever reason this has happened on occasion.
When this happens, the ~asc/aceprog/ancillary/readccr.c program needs
to be edited and recompiled.
Quite frequently, the FOT delivers a CCR file containing parameters that
are outside expected bounds. When this happens, the suspect data are not
appended to the ACE_ANCIL.HDF database, and a warning message is emailed to
ASC staff. This is not a serious problem, since the spacecraft clock drift
changes slowly with time.
ACE Level 1 data are distributed by the ACE Science Center to each of the ACE
instrument teams, and each instrument team uses the Level 1 data as the starting
point for their science data analysis efforts.
ACE Level 1 data are derived from Level 0 data and are organized to meet the
individual requirements of the nine ACE instrument teams. Several (in some
cases many) data structures are defined for each instrument, and several more
are defined for the spacecraft housekeeping/engineering data. The data
structures were defined by the Science Center in consultation with each
instrument team. Therefore, the degree to which the raw data are massaged
during Level 1 processing varies somewhat from instrument to instrument. All
ACE Level 1 data are formatted in HDF (Version 4.1r2).
Detailed documentation of ACE Level 1 data can be found
here.
Level 1 data processing MUST be performed
using the asc account.
Level 1 data processing does NOT occur
automatically, although all activities associated with
Level 1 data processing have been consolidated into two scripts,
LZP_check.pl and ~asc/aceprog/runlzp-curr.
LZP_check.pl should be run daily by an ASC staff member, and runlzp-curr should be run whenever
the output of LZP_check.pl indicates that data are ready for Level 1 processing.
Preparatory Status Checking
LZP_check.pl performs an analysis of Level 0 and ancillary data that have been ingested
by the ASC. It produces a status report and indicates which days are ready for Level 1
processing, if any.
LZP_check.pl performs the following operations and reports status:
Determine which Level 0 data files have been created but not yet
processed into Level 1 format.
Verify that the new Level 0 data files have expected file size (between
70 and 80 Mbytes).
Automatic Level 0 processing should have added
an entry for each new Level 0 file to the ~asc/LZPDB_file database.
Verify that the new entries in the LZPDB_file are consistent
(each one is in sync with the entry for the previous day).
Verify that the appropriate AOSR and CCR ancillary files have arrived and have been
appended to the ACE_ANCIL.HDF database.
For Level 1 processing of day N, AOSR files for days N and N+1 are generally required.
If an AOSR is missing, Level 1 processing can proceed as long as there was no spacecraft
maneuver
on the day for which the file is missing. Extrapolation of AOSR data is not allowed,
so if an AOSR is missing, Level 1 processing for day N must be delayed until an AOSR for
day N+X has been ingested, where X >= 1.
Maneuver days can be identified by inspecting the ACE DSN schedules received
via email from the FOT.
The drift of the spacecraft clock changes slowly with time,
so a day or two skip in the CCR coverage is not a problem. As long as a
CCR for day N or a day in the future has been ingested, Level 1 processing for day N
can proceed.
The script performs these ancillary data consistency checks and lets the user know if it is
OK to proceed.
Things that can go wrong, and What to do
LZP_check.pl occasionally detects a problem related to the consistency
of the scclock fields in ~asc/LZPDB_file. If a data gap between daily Level 0
files is less than 20 seconds, then go ahead with Level 1 processing. Otherwise
~asc/LZPDB_file needs to be edited to allow Level 1 processing to proceed.
Any issues related to editing of LZPDB_file should
be brought to the attention of Andrew Davis before proceeding.
If a day's ancillary data are missing, and the files can reasonably be expected to be delivered
by the FOT within a day or two, then hold off on Level 1 processing for a day or two.
If AOSRs are not arriving from GSFC then contact a Flight Dynamics
person at GSFC:
Craig Roberts - croberts@csc.com, (301) 286-8865
If CCRs are not arriving from GSFC then contact an FOT person at GSFC:
Jacqueline Maldonado - jmaldona@pop500.gsfc.nasa.gov
Initiation of Level 1 Processing
Having run LZP_check.pl to determine which days are ripe for Level 1 processing,
the user initiates Level 1 processing as follows:
cd to ~asc/aceprog and run the following script
under the asc account to perform routine Level 1 data processing
runlzp-curr DOY1 DOY2 ...
where DOY1 DOY2 ... are the day-of-year's for the Level 0 data file(s) to be processed
into Level 1 HDF format.
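For example, to process days 98 through 100 of the current year under the asc account:

    cd ~asc/aceprog
    runlzp-curr 98 99 100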
The runlzp-curr script performs a large number of operations for each DOY processed,
beyond the production of the basic Level 1 data file. These operations are summarized here:
Check for existence of relevant files and directories
Prepare the proper calibration file for MAG browse data processing
Scan the Level 0 file for scrambled/corrupt minor frames.
The ~asc/aceprog/scramblefilter/update_epam_uleis_crisis
script is called to do this.
Run the ~asc/aceprog/libgen program to create a
Level 1 format HDF file.
Note: this program also produces raw browse data in parallel (see Browse data
processing section below).
Check the libgen logfile for warnings/errors that should cause an abort at this stage.
Create a MAG subset Level 1 file (a Level 1 file containing only MAG data and housekeeping data).
Strip temperature data from the Level 1 housekeeping data and
update online temperature plots
Produce MAG 16-second-average browse data ASCII files
ZIP compress the Level 1 file and the MAG subset Level 1 file and copy them both to the
asc ftp site on mussel. Leave the raw browse data file uncompressed.
call the update_browse script to do routine browse data processing (see below)
Level 1 processing notes, Things that can go wrong, and What to do
Reprocessing the whole mission:
Occasionally, it may be necessary to reprocess Level 1 data for the whole mission.
Scripts named ~asc/aceprog/runalllzpYYYY, where YYYY is the year, are maintained
to facilitate this. By default, these scripts have most functions except production
of the Level 1 and raw browse files turned off.
Script Updates:
The runlzp-curr and scramblefilter/update_epam_uleis_crisis scripts must
be updated at the beginning of each year - The YEAR variable must be
updated in each script. For runlzp-curr, any special cases applying to days in the
previous year must be deleted from the script.
Also at the beginning of each year, a new script to reprocess data for the previous
year should be prepared. This script should incorporate any special cases that were
necessary in the runlzp-curr script. The new script should be named ~asc/aceprog/runalllzpYYYY,
where YYYY is the previous year.
Output files:
The libgen program outputs two HDF data files for each day processed, in a
subdirectory of the working directory called "output". One file contains
the Level 1 data, the other contains raw browse data (see below). For example, for day
2002-099, the output files produced in the output directory would be ACE_LV1_2002-099T00-00-02Z_V3-2.HDF
and ACE_BRW_2002-099T00-00-02Z_V3-2.HDF.
Note: ~asc/aceprog/output is actually a link to a large multi-disk partition with
enough capacity to handle all Level 1 data for the whole mission (compressed).
Scrambled Frames:
Any issues related to scrambled minor frames should
be brought to the attention of Andrew Davis before proceeding.
If scrambled (corrupt) minor frames are detected in the Level 0 file
then the runlzp-curr script will print some information about the
number of scrambled frames found, and the script will then exit. At this point
the operator must make a decision about how to proceed.
Scrambled data are found only in SIS data, and there was a solar energetic
particle event on the day in question:
--> edit ~asc/aceprog/scramblefilter/update_epam_uleis_crisis to suppress
scanning of SIS data for scrambled frames for this day. The SIS instrument
occasionally produces data that appears to be scrambled during SEP events.
Then rerun runlzp-curr.
Scrambled data are found only in ULEIS data, and the ULEIS instrument was reset
or commanded into a different mode on the day in question:
--> edit ~asc/aceprog/scramblefilter/update_epam_uleis_crisis to suppress
scanning of ULEIS data for scrambled frames for this day. The ULEIS instrument
occasionally produces data that appears to be scrambled when it is reset or
commanded into a different mode.
Then rerun runlzp-curr.
Scrambled data are found in more than one instrument:
--> the scrambled frames are probably real. If the operator believes the Level 0
data processing was problem-free for the day in question, then
the operator should set the ALLOWSCRAMBLE
environment variable to 1. ("setenv ALLOWSCRAMBLE 1").
This will signal the runlzp-curr and update_epam_uleis_crisis scripts to handle
the scrambled frames properly.
Then rerun runlzp-curr.
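The sequence the operator types looks like this (a sketch; setenv assumes the asc account's csh-style shell, as implied by the command quoted above):

    cd ~asc/aceprog
    # declare that the scrambled frames are genuine and should be accepted
    setenv ALLOWSCRAMBLE 1
    # rerun Level 1 processing for the affected day (day 100 as an example)
    runlzp-curr 100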
Level 1 Data delivery to Instrument Teams
The primary delivery method for ACE Level 1 Data to the instrument teams is ftp.
Posting of the Level 1 data on the ASC ftp site is performed by the runlzp-curr
script (see above). The instrument teams login to the ASC ftp site regularly to
download the latest Level 1 data.
The Level 1 data are also distributed to the instrument teams on CDs via
the US mail. Generally, about 19 or 20 days' worth of Level 1 data fit on a
CD, along with supporting documentation and software. The procedure for
preparing these CDs is described in Appendix 1, below.
The purpose of the ACE browse data is to quickly provide first-order ACE
mission results to the science community. The data include solar wind
parameters, interplanetary magnetic field data, and solar and cosmic ray particle
fluxes. More details can be found here.
The browse data are all contained in a single HDF data file - ACE_BROWSE.HDF.
For convenience and faster network access, the browse data are also subsetted into
yearly data files. For the current year, the data cover from the beginning of the year
up to the most recent data available.
Input files for browse data processing are the raw browse data files produced by the
libgen program during Level 1 processing (see above). libgen incorporates software
from the instrument teams that produces browse data in scientific units from
the Level 1 data. The raw browse files contain spin-averaged browse data at
time resolutions that depend on the natural cycle-time of each instrument.
So, the raw MAG data are 16-second averages, while the raw SIS data are 256-second
averages, etc. Browse processing consists of converting the raw browse data to scientific
units, computing 5-minute, hourly, and daily averages, producing standard plots,
and presenting the data and plots on the Web.
Routine browse data processing is controlled by the update_browse script. This script
is now executed routinely by the ~asc/aceprog/runlzp-curr script (see Level 1
processing above).
update_browse performs the following operations:
Change directory to the browse processing directory (~asc/aceprog/browseproc).
Save a copy of the previously-made full-mission browse data file (ACE_BROWSE.HDF).
Construct a list of all daily raw browse data files available for processing
Process the browse subset file for the current year and copy it to the web/ftp site.
Append new browse records to the full-mission browse data file (ACE_BROWSE.HDF)
and copy it to the web/ftp site.
Make a set of standard browse data plots, and copy them to the web/ftp site.
The browse processing software and the software for making the plots live in
~asc/aceprog/browseproc. ace_br is the main browse
processing program, written in C. rw_yr_doy is a program
that extracts the record number for the first record of each day in a browse data file.
browseplot, brMAG_5_rd, mag_plot, brBRWS_5min_rd, brSIS_1hr_rd,
mag_swe_epam_sis_plot are all programs involved in making standard browse plots.
Browse processing notes, Things that can go wrong, and What to do
Reprocessing the whole mission:
Occasionally, it may be necessary to reprocess the browse data for the whole mission
(recreate ACE_BROWSE.HDF from scratch). The procedure is as follows (use the asc account):
where YYYY is the current year and rbflist.all is a file containing a listing of the full
pathnames of all available raw browse data files. The update_browse script illustrates
how to construct the rbflist.all file.
Yearly Script Updates:
At the beginning of each year, before any browse data for the new year are processed, but
AFTER all data for the previous year are processed, the
YEAR variable in the ~asc/scripts/update_browse script
must be updated. Then, around day 40 of the new year,
the start-time for the current browse subset file must be reset, i.e. the YRA and EPOCHA
variables in the above script must be set to the start of the new year.
Note: this is done around day 40 so that the browse subset file for the current year
will always have at least 40 days of data in it.
The browse data files are subsetted into yearly data files. The script that produces these
subset files is ~asc/aceprog/browseproc/run_browse_subsets. A
new section must be added to this script around day 40 of each year (at the same time as
the YRA and EPOCHA variables are updated in the update_browse script). Then the script
must be executed. Usually, this script is run just once each year.
The production of Level 2 data is the responsibility of each instrument team.
Using the Level 1 data as input, the instrument teams apply calibration data,
detector response maps, etc.; they organize the data into appropriate energy
and time bins; and they transform vector data into appropriate coordinate
systems.
The instrument teams make their Level 2 data available to the ASC, and the ASC is
responsible for making the data available to the scientific community, and to
the NSSDC for long-term archiving. Level 2 data processing at the ASC consists of the
following operations:
Bridging of data gaps with fill-data records.
The computation of longer-term time-averages of the Level 2 data from each instrument
team. For instance, the
SIS team delivers their Level 2 data as 256-second averages; the ASC
computes hourly, daily and 27-day averages from the 256-second averages.
Transformation of vector data into additional useful coordinate systems, e.g.
the GSM coordinate system for magnetic field and solar wind vector data.
Formatting all Level 2 data into HDF (Version 4.1r2) data files.
By agreement with the instrument teams, Level 2 HDF data are separated
by instrument, and are archived in 27-day data files (each 27-day period is a
Bartels Rotation).
For the longer time-averages (hourly or longer), the data are also available
in mission-length data files.
Documentation: Addition of release notes, data description, and contact information
to the HDF data files.
Level 2 Data Ingest
Ingest of Level 2 data is controlled and performed by scripts
located in ~asc/level2/data:
getall_L2_data, get_cris_L2_testdata, and get_sis_L2_testdata.
These scripts automatically poll ftp sites maintained by the instrument teams, and if new
Level 2 data are available, the scripts download the new data to subdirectories
of ~asc/level2/data. The scripts run once per day (scheduled by cron), and an email
message is sent to ASC staff whenever new data are downloaded.
Generally, the instrument teams post new Level 2 data on their ftp sites every
1 to 4 months.
The get_cris_L2_testdata and get_sis_L2_testdata scripts handle test Level 2
data generated by the SIS and CRIS teams. These two instrument teams prefer to
have a "test phase" before their Level 2 data are released to the community.
Level 2 Data Processing
Processing of Level 2 data is controlled and performed by scripts
located in ~asc/level2. The appropriate script
must be edited and run manually by an ASC staff member, whenever new Level 2 data
are ingested. The scripts are:
These scripts are short and relatively self-explanatory. For instance, if new EPAM
Level 2 data are ingested, edit the run_epam script so that the appropriate Bartels
Rotations are processed, and then run the script.
The scripts take care of posting the new data on the website. A script running
under cron on the web server takes care of updating the web pages to present the
new data.
During each daily DSN pass, a copy of the real time (VC1) data stream from the spacecraft is
sent from the Mission Operations Center at GSFC to the ASC via an internet socket connection.
The ASC receives the data stream and fans it out to the ACE instrument GSEs via internet
socket connections. In this way, the instrument teams can monitor the health and status
of their instruments each day, from their own institutions.
The fanout application runs on an ASC Sun workstation (otter). This application accepts
a socket connection from the data server at GSFC, and also accepts socket connections from
any number of clients. It serves a copy of the data stream to each client.
Procedures for maintaining the fanout application will be described here soon...
Appendix 1. Preparing Level 1 (and Level 0) data CDs

Throughout this procedure, the commands and values shown in parentheses apply to Level 0 CDs; all others apply to Level 1 CDs.

Open the script by typing: xe ~asc/scripts/moveL1_CDdata (xe ~asc/scripts/lvl0_archive)
Edit the moveL1_CDdata file so that it reflects the CD#, beginning DOY, YEAR and Edition you want to start at.
To find out this information, type: cd CD_readme (cd CD_readme/LVL_0)
Do an "ll" and find the latest CD created.
Do a "more" on that file (more readmeXX_YRDOY).
Use the next DOY after the one in the file.
Run: ~asc/scripts/moveL1_CDdata (~asc/scripts/lvl0_archive)
cd /home/frey3/CDinfo to view the files as they are copied over.

################ Next Step #### Writing Master CD #######
Make sure you have a blank CD in the drive before you begin.
IMPORTANT!! Write an "M" (for Master) and the CD# on the CD before inserting it.
On frey: cd /home/frey1/buildcd
Make the window LARGE before starting "gear".
Type in: gear
CD> newvol VOLNAME   (VOLNAME is of the form 075ACE01241_01260; for Lvl 0 use the form 059LZP01163_01188)
74 (default)
ISO (default)
CD> cp -r /home/frey3/CDinfo/ /
All files should say "Successfully loaded as:".
CD> writecd
Change Current Settings: Y
  Physical image.........N (default)
  Verify.................Y (default)
  Enable Record..........Y (use N for testing)
  Write Method...........1 disc at once (default)
  Rec Speed..............2 (default)
  Reading Speed..........16 (default)
  Enable Fixation........Y (default)
  Enable Multi...........N
Check to see if the CD was written successfully.
CD> exit (to exit gear after you are done)

################ Next Step ######## CD Duplicator #######
Make 15 copies in the duplicator (make ONLY 2 copies for Lvl 0 CDs).
If the power is off, switch on both power buttons on the back of the duplicator (order doesn't matter).
You should get the message "SELECT COPY OR COMPARE".
Under "OPTIONS", press 8.
Press 2 (for 2x speed).
Place the master on top of the blank CDs and stack the CDs in the left holder.
Press 'START' or 'COPY'.
Note: 15 CDs take about 8 hours to burn at 2x writing speed.

################ Next Step ######## Check CDs #######
Check 1 of the copies for each master CD written:
Put the CD in the mussel CD-ROM drive.
Bring up a new xterm.
rlogin mussel
cd scripts/
Run: CD_tester.pl
If the CD isn't automatically ejected, there was an error: type "eject" to open the CD-ROM drive, rotate the CD, and try again.

################ Next Step ######## Print on CDs #######
Turn ON the CD print duplicator.
Turn on the PC. Click "OK" when prompted for the "Network Password" (no password is required).
The CD printer should come on by itself.
Put 1 of the CDs to be printed into the holding bin.
On the CD printer, press the tray eject button. The CD in the tray should be replaced by the blank CD.
Put one of the created CDs in the PC and get the CD# and YR/DOY info by clicking the "My Computer" icon. The info should be in the title for the CD.
On the PC: from the Desktop, click open "My Documents".
In "My Documents": click open "aceL1CD.psd" (Lvl0CD.psd).
Make changes to the image: the CD# and the YR/DOYs on the CD (both from the "My Computer" step above), today's date, the Software Version (rarely changes), and the Edition (hardly ever changes).
Make sure to retrieve the CD from the PC and put it back in the stack.
Save a COPY of the image: click "File" and then "Save a Copy...".
Select JPEG and change the filename to reflect the present CD#, for example aceL1CD81.jpg (Lvl0CD59.jpg).
Click Save, use Maximum Quality (default), and click "OK".
Load the next 5 blank CDs into the holding bin (make sure they are pushed up against the back of the bin) and place the rest of the blank CDs so that they are leaning on the back of the bin (lowest point of the CDs towards the front).
Add a dummy CD on top of the stack; this is done because the printer wants 1 more CD than you want to print.
Open the "Imaging" software package by double-clicking the icon that says "CD Image Printing".
Open the JPEG file you created in the "Save a Copy" step above.
Open up the print window.
Click "Options": select "Fit to page" and uncheck "Print displayed annotations". Click OK.
Change the number of copies, usually to "16" (3).
Before printing, make sure you took the CD out of the PC and replaced it in the print stack.
Print: takes about 30 minutes.
Shut down the computer and then turn off the printer and printer duplicator.