ACE Data Processing and Archiving






Introduction

This document describes the ACE data processing and archiving performed at the ACE Science Center.

Full details of data archiving plans and procedures are described in the ACE Mission Archive Plan, which is updated regularly.

Detailed information on ACE data formats may be found via the following links:

A general overview of the ACE Science Center can be found in this paper (PDF doc).

The following applies to all stages of ACE data processing at the ASC:

In the procedure descriptions below, the naming of specific data archive directories is avoided where possible. The locations of these directories can change with time, making it difficult to maintain the many processing scripts (and the documentation). In practice, permanent paths to these directories are maintained using links from the asc home directory. These links, along with the information in ~asc/procenv, make it unnecessary to edit all the processing scripts if one changes the structure of the filesystem. ACE data processing scripts should use ~asc/procenv and the links, and should not refer to directory paths that may change with time.

top

Level 0 and Quicklook Data Processing

Under normal circumstances Level 0 and Quicklook data processing proceeds automatically, and no human intervention is required.

There are four stages in Level 0 and Quicklook data processing, as follows:

  1. Ingest of IDR data files from the JPL Deep Space Network (DSN)
  2. Production of Level 0 and Quicklook data files from the IDR files
  3. Addition of Level 0 and Quicklook data file information into the ASC Level 0 database
  4. Distribution of Quicklook data to ACE instrument teams
Note: Stages 1 and 2 were formerly the responsibility of the FOT at GSFC.

Stage 1. Ingest of IDR data files from DSN

IDR files are (almost) raw telemetry data files. They are generated at the end of each spacecraft contact by DSN, and ftp'd to the ASC. There are three types of IDR files: realtime (VC1), playback (VC2), and aggregate (VC3) (VC stands for Virtual Channel; see DSN Telemetry Interface with ACE, above).

The file-naming convention used by DSN for IDR files is a little confusing. As an example, for day 2002-100, the playback IDR filename is JPLIDR2002099.69238. The day index in IDR filenames starts at zero, whereas the day index in Level 0, browse, and Level 1 filenames starts at 1. The "69238" filename extension in the example above is not important for data processing at the ASC.
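The off-by-one convention is easy to get wrong. As a minimal sketch (not part of the ASC tooling; the function name is illustrative), the mapping from a Level 0 year/day-of-year to the corresponding IDR filename prefix is:

        # Illustrative sketch only (not an ASC script): map a Level 0 year/day-of-year
        # to the zero-based day index used in JPL IDR filenames.
        def idr_filename_prefix(year: int, doy: int) -> str:
            # Level 0, browse, and Level 1 filenames count days from 1; IDR filenames count from 0.
            return f"JPLIDR{year}{doy - 1:03d}"

        assert idr_filename_prefix(2002, 100) == "JPLIDR2002099"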

The ingest process is controlled by the update_IDR script, which is scheduled to run regularly via cron.

update_IDR performs the following operations if it finds that a new IDR file (or files) has been delivered by DSN:

  1. Determine file type - realtime (VC1), playback (VC2), or aggregate (VC3). The file type information is contained in the file header, not in the filename. A small C program (~asc/lvl0_gen/sortIDR2) reads the header and outputs the file type.
  2. Move file to the appropriate IDR data directory.
  3. Compress aggregate files; they are not normally used in ACE science data processing
  4. Scan realtime and playback files for corrupted Download VCTF Headers. If more than 5 (configurable) corrupt headers are found, the file is moved to a subdirectory called "corrupt", and a message is emailed to ASC staff.
  5. Call Level 0 and Quicklook generation scripts
  6. Write a flag file to let other scripts know new IDR files have arrived

Monitoring of IDR file data coverage

The coverage of playback IDR files received by the ASC from DSN is monitored. The IDRplot script performs this monitoring, and is scheduled by cron. If a significant data gap is found (gap > ~3 minutes), email is sent to ASC staff. A graphical display of IDR file coverage is also shown on ASC workstations, and is updated whenever new IDR files arrive from DSN.
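For illustration, the gap check amounts to scanning forward time-ordered packet times (one packet per second) for jumps larger than a threshold. A minimal sketch, assuming a simple list of packet times in seconds (the real IDRplot script and its inputs differ):

        def find_gaps(packet_times, threshold_seconds=180):
            # Return (gap_start_time, gap_length_seconds) for each gap longer than ~3 minutes.
            gaps = []
            for earlier, later in zip(packet_times, packet_times[1:]):
                if later - earlier > threshold_seconds:
                    gaps.append((earlier, later - earlier))
            return gaps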

Things that can go wrong, and What to do

Normally, the automatic delivery of IDR files from DSN to the ASC goes smoothly. Occasionally, network or other problems cause significant data dropouts or corruption that necessitate human intervention. A corrupt IDR file is indicated if the update_IDR script generates a message like this:
update_IDR: !!! ALERT !!! Found 100 bad dnld vctfs in [some_filename]. Moving file to [corrupt directory], it will not be further processed unless a human moves it out of the corrupt area.
A corrupt IDR file will not be used in subsequent data processing unless it is moved out of the corrupt area, but there are several things that should be tried before resorting to using the corrupt file. Manual retrieval of IDR files from the DSN is documented in the next sub-section of this document.

If there are still significant data dropouts (> ~5 minutes) after all IDR files have been successfully retrieved from the DSN, then it is possible that they were caused by some anomaly during the spacecraft contact. In this case, the ASC should contact a Flight Ops Team staff member at GSFC to request that the appropriate slices be replayed from the onboard solid-state recorders (SSRs) during the next spacecraft contact. It is important that the request be made before the next spacecraft contact, otherwise the data on the SSRs will be overwritten.

The current contact address for SSR replay requests at the FOT is acefot@listserv.gsfc.nasa.gov

Manual retrieval of IDR files from the DSN

Monitoring of IDR data coverage may indicate that expected IDR data have not been received in the usual automatic manner from the DSN. The DSN operates a website from which one can download ACE IDR files manually:

JPL Deep Space Network (DSN) Central Data Recording (CDR) Assembly

The website requires a username and password. The correct username is "ASC". The password is the regular password for the asc UNIX account at SRL.

After logging in, navigate to the "IDR Inventory" page. The "Spacecraft Identifier Code" should already be set to 92 (ACE). From this page you can choose which day's IDR files to download. You can also choose the "Data Type" (1 or 2). The DSN generally receives two datastreams from its stations. Normally they are identical, but occasionally one stream has higher quality data than the other. Only Data Type 1 files are ftp'd automatically to the ASC, so a manual download of Data Type 2 files can sometimes help to fill in data dropouts.
Note: Some DSN stations do not regularly send the Data Type 2 datastream, and occasionally a station sends only Data Type 2.

In the "IDR Inventory Table" displayed on the web page, the "Data Destination" column is important to understand. Entries in this column with suffix ".2" are playback IDR files. Entries with suffix ".1" are realtime IDR files. Entries with suffix ".0" are aggregate IDR files. Aggregate files are not used in ACE data processing at the ASC.

The current contact person for IDR file issues at JPL is Norm Baker, (626)305-6216.

Stage 2. Production of Level 0 and Quicklook data files from the IDR files

Level 0 and Quicklook data files contain a file header, forward time-ordered data packets with duplicate data removed, and optionally data quality and accounting summaries appended to the end. Each data packet consists of a 10-byte packet header, a 2-byte minor frame header, and 842 bytes of telemetry data (one minor frame of data). The spacecraft generates one packet per second. Detailed data format documentation can be found in the references listed above.

Each Level 0 data file covers one 24-hour period, starting at 00:00:00UT. So, in general, each Level 0 data file should contain ~86400 data packets. Each Quicklook data file contains all data for day N that are available at the end of the spacecraft contact on day N. Otherwise, Quicklook files are in the same format as Level 0 files.
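A quick size check falls out of the figures above: at 10 + 2 + 842 = 854 bytes per packet and one packet per second, a full day of data is roughly 74 Mbytes, which is consistent with the 70-80 Mbyte file-size check applied later by LZP_check.pl. A sketch of the arithmetic:

        PACKET_BYTES = 10 + 2 + 842        # packet header + minor frame header + telemetry
        PACKETS_PER_DAY = 24 * 60 * 60     # one packet per second, ~86400 per day

        print(PACKET_BYTES * PACKETS_PER_DAY)   # 73785600 bytes, i.e. ~74 Mbytes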

The production of Level 0 and Quicklook data files is controlled by the lvl0_gen.pl and quick-look_gen.pl scripts. The update_IDR script calls these scripts whenever it finishes ingesting a new IDR file (see Stage 1 above).

Both lvl0_gen.pl and quick-look_gen.pl call the gen0 program, located in ~asc/lvl0_gen. gen0 is a C program that requires two command-line parameters: year and day-of-year. It scans IDR files for data packets that fall within the day and produces an output file in Level 0 format. gen0 performs extensive checking to ensure that the data packets are forward time-ordered, duplicates have been removed, and that no truncated or corrupted data are included in the output.

lvl0_gen.pl attempts to create one new Level 0 data file each time it is called. The script performs the following operations:

  1. Change directory to ~asc/lvl0_gen
  2. Link to the Level 0 temporary data output directory
  3. Look in the main Level 0 archive directory and determine the year and day-of-year for the next Level 0 file to be created
  4. Call the gen0 program to create a Level 0 formatted output file.
  5. Scan the gen0 logfile to determine how many packets were included in the output.
  6. Determine if the gen0 output file is worthy of being named the official Level 0 data file for that day. The criteria are:
    1. The output file contains at least 86100 packets (out of a possible 86400).
    2. All data acquired during the spacecraft contact on day+1 have been received.
  7. If the gen0 output is "worthy", then rename the file to the official Level 0 file naming convention, as detailed in the ICD between the ACE Mission Team and the ACE Science Center, e.g. ACE_G001_LZP_2002-098T00-00-00Z_V01.DAT1, then move the file to a temporary holding directory so that it can be picked up by the next stage in processing.
  8. If the gen0 output is not "worthy" and the ASC has received 3 days of data beyond the day in question, then go ahead and make an official Level 0 file using whatever data are available, and warn ASC staff of the anomaly (a sketch of this decision logic follows this list).
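A minimal sketch of the decision applied in steps 6-8 above (the names below are illustrative, not the script's own variables):

        def level0_action(packets, contact_day_plus_1_complete, days_of_data_beyond):
            # "Worthy": nearly a full day of packets, and the contact for day+1 has been received.
            if packets >= 86100 and contact_day_plus_1_complete:
                return "rename to official Level 0 name and hand off to Stage 3"
            # Otherwise wait, unless 3 more days of data have already arrived.
            if days_of_data_beyond >= 3:
                return "make an official Level 0 file anyway and warn ASC staff"
            return "wait for more IDR data"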

quick-look_gen.pl works somewhat differently from lvl0_gen.pl:

  1. Change directory to ~asc/lvl0_gen
  2. Link to the quick-look temporary data output directory
  3. Determine the current year and day-of-year
  4. If IDR files for the current year and day-of-year exist, call the gen0 program to create a Level 0 formatted output file.
  5. If the gen0 output contains more packets than any previous quicklook file generated for the current day, then rename the file to the official Quicklook file naming convention, as detailed in the ICD between the ACE Mission Team and the ACE Science Center, e.g. ACE_G001_QLP_2002-098T00-00-00Z_V01.DAT1, then move the file to a temporary holding directory so that it can be picked up by the next stage in processing.
Note: the data for a given day may reside in several IDR files, and these files are not all delivered to the ASC at the same time. Therefore, several versions of the quicklook file might be generated on any given day. Later versions will contain more data.

Things that can go wrong, and What to do

If a data gap spans an entire day, then no Level 0 file will be made for that day. Currently, this will halt Level 0 data production, because the lvl0_gen.pl script will not give up and move on to the next day. The solution is for an ASC staff person to manually initiate the Level 0 file generation for the first day for which there are data available after the data gap. The procedure is:
  1. login to the asc account.
  2. Change directory to ~asc/scripts.
  3. Run lvl0_gen.pl with explicit command-line parameters, e.g. to create a Level 0 file for 2002-100, run "lvl0_gen.pl 2002 100".
Once this is done, automated processing will take over for subsequent days.

top

Stage 3. Ingest of Level 0 and Quicklook data files into the ASC Level 0 database

Note: This is where ASC data processing began when production of Level 0 and quicklook data was performed by the FOT.
In Stage 2 above, Level 0 and quicklook data files are produced and deposited into temporary holding directories. In Stage 3, the Level 0 and quicklook data files are transferred to permanent directories and basic information about the files is entered into a database.

The ingest process is controlled by the update_LZP and update_QLP scripts, which are scheduled to run regularly via cron. update_LZP (update_QLP) performs the following operations if it finds that a new Level 0 (quicklook) file, or files, has been deposited in the holding directory:
  1. Make sure the file is at least several minutes old, i.e. it is not currently being modified by another process.
  2. Copy the file from the holding directory to an archive directory.
  3. Copy the file from the holding directory to a backup archive directory on the thrym computer at SRL, and compress the backup file using gzip (Level 0 files only, not quicklook).
  4. Set appropriate permissions on the files.
  5. If the copy operations above succeeded, remove the file from the incoming ftp directory.
  6. Add information about the contents of the Level 0 (quicklook) file to the Level Zero Product (LZP) database. The LZP database is named ~asc/LZPDB_file. This is an ASCII text file with one record per line. The first line of the file contains a number indicating the number of records following. The program used to update the database is named ~asc/database/LZPDB_addlzp (a sketch of this file layout follows this list).
  7. Notify asc staff via email that a new file has been ingested (Level 0 files only, not quicklook).
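For illustration, the LZPDB_file layout described in step 6 can be summarized with a short sketch (the real updates are performed by the LZPDB_addlzp C program; this is not that program):

        def append_record(db_path, record):
            # The first line holds the record count; each following line is one record.
            with open(db_path) as f:
                lines = f.read().splitlines()
            lines[0] = str(int(lines[0]) + 1)   # bump the record count
            lines.append(record)                # append the new record
            with open(db_path, "w") as f:
                f.write("\n".join(lines) + "\n")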

Things that can go wrong, and What to do

Use the asc account to perform manual Level0/quicklook data processing. Only the asc account has the correct file permissions, which helps to protect the data from accidental deletions. Possible reasons for human intervention in Level 0 data processing include the following:

top

Stage 4. Distribution of Quicklook data to ACE instrument teams

Quicklook data are used by the instrument teams when they need to assess the health and status of an instrument as soon as possible. Therefore, the quicklook data are made available via anonymous ftp immediately after they are created.

The instrument teams are generally only equipped to handle data files in Level 1 format, so the quicklook data are converted to Level 1 format before they are posted on the ftp site. The process is controlled by the update_QLP script (same script as in Stage 3 above). For conversion of data to Level 1 format, update_QLP calls the ~asc/aceprog/libgen program, described below in the Level 1 Data Processing section.

top


Ancillary Data Processing

Under normal circumstances, Ancillary data processing proceeds automatically, and no human intervention is required.

Ancillary data files include Clock Calibration Report (CCR) files and Attitude and Orbit State Report (AOSR) files. CCR files contain information regarding calibration of the ACE onboard clock. AOSR files contain information regarding the attitude, spin-rate, position and velocity of the ACE spacecraft. The data are essential for Level 1, browse and Level 2 data processing by the ASC. They are also used by the instrument teams in their own science data processing. More details about the ancillary data may be found here (AOSRs) and here (CCRs).

CCRs and AOSRs are sent to the ASC via ftp from the Flight Operations Team (FOT) at GSFC. Normally, one CCR and one AOSR file are sent each day. The data are deposited in a directory in the ASC ftp account on mussel. The ASC ingests the data contained in these files and appends them to the ACE ancillary database, which is an HDF-format data file.

The process of ingesting CCR and AOSR files is controlled by the update_ancil script, which is scheduled to run regularly via cron.

update_ancil performs the following operations if it finds that a new ancillary file (or files) has been delivered by the FOT:

  1. Only one AOSR and CCR should be sent by the FOT for each day. Make sure the new file is not a duplicate or a new version of a previously-received file.
  2. Copy the file from the incoming ftp directory to the ancillary data directory (~asc/ancillary)
  3. Copy the file from the incoming directory to a backup archive directory.
  4. If the copy operations above succeeded, remove the file from the incoming ftp directory.
  5. Set appropriate permissions on the archived files.
  6. Change directory to the ancillary data directory.
  7. Append the information in the file to the ACE ancillary HDF database. The database is named ACE_ANCIL.HDF, in the ancillary data directory. The programs used to append the information are ~asc/aceprog/ancillary/readaosr and ~asc/aceprog/ancillary/readccr.
  8. Copy the file to the ancillary data directory on the ASC ftp site.
  9. Move the file to the ancillary data archive directory (~asc/ancillary/reports)
  10. Copy ACE_ANCIL.HDF to the web and ftp sites.
  11. Copy ACE_ANCIL.HDF to the backup archive directory.
  12. Call ~asc/scripts/update_start_clocks to update an auxiliary database of daily spacecraft clock start times (this is used by the gen0 program during production of Level 0 data files).

Things that can go wrong, and What to do

The update_ancil script runs silently under normal circumstances. If a problem occurs, an email is sent to members of the asc email alias (just like other ASC scripts running under cron).

Use the asc account to perform ancillary data processing manually. Reasons for human intervention in ancillary data processing include the following:



top

Level 1 Data Processing

ACE Level 1 data are distributed by the ACE Science Center to each of the ACE instrument teams, and each instrument team uses the Level 1 data as the starting point for their science data analysis efforts. ACE Level 1 data are derived from Level 0 data and are organized to meet the individual requirements of the nine ACE instrument teams. Several (in some cases many) data structures are defined for each instrument, and several more are defined for the spacecraft housekeeping/engineering data. The data structures were defined by the Science Center in consultation with each instrument team. Therefore, the degree to which the raw data are massaged during Level 1 processing varies somewhat from instrument to instrument. All ACE Level 1 data are formatted in HDF (Version 4.1r2). Detailed documentation of ACE Level 1 data can be found here.

Level 1 data processing MUST be performed using the asc account.

Level 1 data processing does NOT occur automatically, although all activities associated with Level 1 data processing have been consolidated into two scripts: LZP_check.pl and ~asc/aceprog/runlzp-curr. LZP_check.pl should be run daily by an ASC staff member, and runlzp-curr should be run whenever the output of LZP_check.pl indicates that data are ready for Level 1 processing.

Preparatory Status Checking

LZP_check.pl performs an analysis of Level 0 and ancillary data that have been ingested by the ASC. It produces a status report and indicates which days are ready for Level 1 processing, if any.

LZP_check.pl performs the following operations and reports status:

  1. Determine which Level 0 data files have been created but not yet processed into Level 1 format.
  2. Verify that the new Level 0 data files have the expected file size (between 70 and 80 Mbytes).
  3. Automatic Level 0 processing should have added an entry for the new Level 0 files in ~asc/LZPDB_file database. Verify that the new entries in the LZPDB_file are consistent (each one is in sync with the entry for the previous day).
  4. Verify that the appropriate AOSR and CCR ancillary files have arrived and have been appended to the ACE_ANCIL.HDF database.
    1. For Level 1 processing of day N, AOSR files for days N and N+1 are generally required. If an AOSR is missing, Level 1 processing can proceed as long as there was no spacecraft maneuver on the day for which the file is missing. Extrapolation of AOSR data is not allowed, so if an AOSR is missing, Level 1 processing for day N must be delayed until an AOSR for day N+X has been ingested, where X >= 1.
      Maneuver days can be identified by inspecting the ACE DSN schedules received via email from the FOT.
    2. The drift of the spacecraft clock changes slowly with time, so a day or two skip in the CCR coverage is not a problem. As long as a CCR for day N or a day in the future has been ingested, Level 1 processing for day N can proceed.
    The script performs these ancillary data consistency checks and lets the user know if it is OK to proceed (a simplified sketch of these rules follows this list).
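As a simplified sketch (illustrative names only, not the script's own), the ancillary readiness rules above amount to:

        def ancillary_ready(day_n, aosr_days, ccr_days, maneuver_days):
            # CCR: a CCR for day N or any later day is sufficient (clock drift changes slowly).
            ccr_ok = any(d >= day_n for d in ccr_days)
            # AOSR: days N and N+1 are normally required; a missing day is acceptable only if
            # there was no maneuver that day and a later AOSR has been ingested (no extrapolation).
            aosr_ok = all(
                d in aosr_days
                or (d not in maneuver_days and any(a > day_n for a in aosr_days))
                for d in (day_n, day_n + 1)
            )
            return aosr_ok and ccr_ok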

Things that can go wrong, and What to do

LZP_check.pl occasionally detects a problem related to the consistency of the scclock fields in ~asc/LZPDB_file. If a data gap between daily Level 0 files is less than 20 seconds, then go ahead with Level 1 processing. Otherwise ~asc/LZPDB_file needs to be edited to allow Level 1 processing to proceed. Any issues related to editing of LZPDB_file should be brought to the attention of Andrew Davis before proceeding.

If a day's ancillary data are missing, and the files can reasonably be expected to be delivered by the FOT within a day or two, then hold off on Level 1 processing for a day or two.
If AOSRs are not arriving from GSFC then contact a Flight Dynamics person at GSFC:
Craig Roberts - croberts@csc.com, (301) 286-8865
If CCRs are not arriving from GSFC then contact an FOT person at GSFC:
Jacqueline Maldonado - jmaldona@pop500.gsfc.nasa.gov

Initiation of Level 1 Processing

Having run LZP_check.pl to determine which days are ripe for Level 1 processing, the user initiates Level 1 processing as follows:

Level 1 processing notes, Things that can go wrong, and What to do

Reprocessing the whole mission: Occasionally, it may be necessary to reprocess Level 1 data for the whole mission. Scripts named ~asc/aceprog/runalllzpYYYY, where YYYY is the year, are maintained to facilitate this. By default, these scripts have most functions except production of the Level 1 and raw browse files turned off.

Script Updates: The runlzp-curr and scramblefilter/update_epam_uleis_crisis scripts must be updated at the beginning of each year - The YEAR variable must be updated in each script. For runlzp-curr, any special cases applying to days in the previous year must be deleted from the script.

Also at the beginning of each year, a new script to reprocess data for the previous year should be prepared. This script should incorporate any special cases that were necessary in the runlzp-curr script. The new script should be named ~asc/aceprog/runalllzpYYYY, where YYYY is the previous year.

Output files: The libgen program outputs two HDF data files for each day processed, in a subdirectory of the working directory called "output". One file contains the Level 1 data, the other contains raw browse data (see below). For example, for day 2002-099, the output files produced in the output directory would be ACE_LV1_2002-099T00-00-02Z_V3-2.HDF and ACE_BRW_2002-099T00-00-02Z_V3-2.HDF.
Note: ~asc/aceprog/output is actually a link to a large multi-disk partition with enough capacity to handle all Level 1 data for the whole mission (compressed).

Scrambled Frames: Any issues related to scrambled minor frames should be brought to the attention of Andrew Davis before proceeding.

If scrambled (corrupt) minor frames are detected in the Level 0 file then the runlzp-curr script will print some information about the number of scrambled frames found, and the script will then exit. At this point the operator must make a decision about how to proceed, as outlined below (a sketch of the decision logic follows the list).

  1. Scrambled data are found only in SIS data, and there was a solar energetic particle event on the day in question:
    --> edit ~asc/aceprog/scramblefilter/update_epam_uleis_crisis to suppress scanning of SIS data for scrambled frames for this day. The SIS instrument occasionally produces data that appear to be scrambled during SEP events. Then rerun runlzp-curr.
  2. Scrambled data are found only in ULEIS data, and the ULEIS instrument was reset or commanded into a different mode on the day in question:
    --> edit ~asc/aceprog/scramblefilter/update_epam_uleis_crisis to suppress scanning of ULEIS data for scrambled frames for this day. The ULEIS instrument occasionally produces data that appear to be scrambled when it is reset or commanded into a different mode. Then rerun runlzp-curr.
  3. Scrambled data are found in more than one instrument:
    --> the scrambled frames are probably real. If the operator believes the Level 0 data processing was problem-free for the day in question, then the operator should set the ALLOWSCRAMBLE environment variable to 1. ("setenv ALLOWSCRAMBLE 1"). This will signal the runlzp-curr and update_epam_uleis_crisis scripts to handle the scrambled frames properly. Then rerun runlzp-curr.
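A sketch of the decision logic above (illustrative only; the real handling lives in runlzp-curr and update_epam_uleis_crisis):

        def scrambled_frame_action(instruments, sep_event, uleis_reset_or_mode_change):
            # "instruments" is the set of instruments in which scrambled frames were found.
            if instruments == {"SIS"} and sep_event:
                return "suppress SIS scramble scan for this day, then rerun runlzp-curr"
            if instruments == {"ULEIS"} and uleis_reset_or_mode_change:
                return "suppress ULEIS scramble scan for this day, then rerun runlzp-curr"
            if len(instruments) > 1:
                return "scramble is probably real: setenv ALLOWSCRAMBLE 1, then rerun runlzp-curr"
            return "investigate further before proceeding"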

Level 1 Data delivery to Instrument Teams

The primary delivery method for ACE Level 1 Data to the instrument teams is ftp. Posting of the Level 1 data on the ASC ftp site is performed by the runlzp-curr script (see above). The instrument teams login to the ASC ftp site regularly to download the latest Level 1 data.

top

Browse Data Processing

The purpose of the ACE browse data is to quickly provide first-order ACE mission results to the science community. The data include solar wind parameters, interplanetary magnetic field data, and solar and cosmic ray particle fluxes. More details can be found here.

Daily and hourly browse data are all contained in a single HDF data file - ACE_BROWSE.HDF. For convenience and faster network access, the browse data are also subsetted into yearly data files. The yearly data files also contain 5-minute averages. For the current year, the data cover the period from the beginning of the year up to the most recent data available.

Input files for browse data processing are the raw browse data files produced by the libgen program during Level 1 processing (see above). libgen incorporates software from the instrument teams that produces browse data in scientific units from the Level 1 data. The raw browse files contain spin-averaged browse data, averaged over time intervals that depend on the natural cycle-time of each instrument. So, the raw MAG data are 16-second averages, while the raw SIS data are 256-second averages, etc. Browse processing consists of converting the raw browse data to scientific units, computing 5-minute, hourly, and daily averages, producing standard plots, and presenting the data and plots on the Web.
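As an illustration of the averaging step (the real work is done by the ace_br programs; this is only a sketch), binning a raw 16-second series into 5-minute averages looks like this:

        def bin_average(times_sec, values, bin_seconds=300):
            # times_sec: seconds of day for each raw sample; values: the matching data values.
            sums, counts = {}, {}
            for t, v in zip(times_sec, values):
                b = int(t // bin_seconds)          # index of the 5-minute bin within the day
                sums[b] = sums.get(b, 0.0) + v
                counts[b] = counts.get(b, 0) + 1
            return {b: sums[b] / counts[b] for b in sums}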

Routine browse data processing is controlled by the ~asc/scripts/update_browse script. This script is now executed routinely by the ~asc/aceprog/runlzp-curr script (see Level 1 processing above).

update_browse performs the following operations:

  1. Change directory to the browse processing directory (~asc/aceprog/browse_proc).
  2. Save a copy of the previously-made full-mission browse data file (ACE_BROWSE.HDF).
  3. Construct a list of all daily raw browse data files available for processing
  4. Process the browse subset file for the current year and copy it to the web/ftp site.
  5. Append new browse records to the full-mission browse data file (ACE_BROWSE.HDF) and copy it to the web/ftp site.
  6. Make a set of standard browse data plots, and copy them to the web/ftp site.
The browse processing software and the software for making the plots live in ~asc/aceprog/browse_proc. ace_br and ace_br_no5min are the main browse processing programs, written in C. rw_yr_doy is a program that extracts the record number for the first record of each day in a browse data file. browseplot, brMAG_5_rd, mag_plot, brBRWS_5min_rd, brSIS_1hr_rd, mag_swe_epam_sis_plot are all programs involved in making standard browse plots.

Browse processing notes, Things that can go wrong, and What to do

Reprocessing the whole mission: Occasionally, it may be necessary to reprocess the browse data for the whole mission (recreate ACE_BROWSE.HDF from scratch). The procedure is as follows (use the asc account):
  1. change directory to ~asc/aceprog/browse_proc
  2. run the following:
           cp ACE_BROWSE.HDF /home/asc/browse/ACE_BROWSE.HDF.OLD   # keep a copy of the old full-mission file
           rm -f ACE_BROWSE.HDF                                    # remove it so it is rebuilt from scratch
           /bin/ls /users/asc/level1/*BRW*.HDF > rbflist.all       # list all available raw browse data files
           ace_br_no5min YYYY rbflist.all ACE_BROWSE.HDF           # rebuild the full-mission browse file
           rw_yr_doy ACE_BROWSE.HDF                                # index the first record of each day
           ~asc/scripts/update_browse                              # regenerate subsets, plots, and web/ftp copies
        
    where YYYY is the current year and rbflist.all is a file containing a listing of the full pathnames of all available raw browse data files.
Yearly Script Updates: At the beginning of each year, before any browse data for the new year are processed, but AFTER all data for the previous year are processed, the YEAR variable in the ~asc/scripts/update_browse script must be updated. Then, around day 40 of the new year, the start-time for the current browse subset file must be reset, i.e. the YRA and EPOCHA variables in the above script must be set to the start of the new year.
Note: proper values for EPOCHA may be found from /users/asc/aceprog/daily_start_clocks.db
Note: this is done around day 40 so that the browse subset file for the current year will always have at least 40 days of data in it.

The browse data files are subsetted into yearly data files. The script that produces a subset file is ~asc/aceprog/browse_proc/run_browse_subsets.last. Usually, this script is run just once each year, again around day 40. The script must first be edited to update the YRA, YRB, END, DIR, EPOCHA and EPOCHB parameters. Also, a new DIR (browseXX) directory must first be created in ${WEBHOME}/browse-data.

top


Level 2 Data Processing

The production of Level 2 data is the responsibility of each instrument team. Using the Level 1 data as input, the instrument teams apply calibration data, detector response maps, etc.; they organize the data into appropriate energy and time bins; and they transform vector data into appropriate coordinate systems.

Level 2 data are publication-quality data.

The instrument teams make their Level 2 data available to the ASC, and the ASC is responsible for making the data available to the scientific community, and to the NASA Space Physics Data Facility (SPDF) for long-term archiving. Level 2 data processing at the ASC consists of the following operations:

Level 2 Data Ingest

Ingest of Level 2 data is controlled and performed by scripts located in ~asc/level2/data: getall_L2_data, get_cris_L2_testdata, and get_sis_L2_testdata. These scripts automatically poll ftp sites maintained by the instrument teams, and if new Level 2 data are available, the scripts download the new data to subdirectories of ~asc/level2/data. The scripts run once per day (scheduled by cron), and an email message is sent to ASC staff whenever new data are downloaded.

Generally, the instrument teams post new Level 2 data on their ftp sites every 1 to 4 months.

The get_cris_L2_testdata and get_sis_L2_testdata scripts handle test Level 2 data generated by the SIS and CRIS teams. These two instrument teams prefer to have a "test phase" before their Level 2 data are released to the community.

Level 2 Data Processing

Processing of Level 2 data is controlled and performed by scripts located in ~asc/level2. The appropriate script must be edited and run manually by an ASC staff member whenever new Level 2 data are ingested. These scripts are short and relatively self-explanatory. For instance, if new EPAM Level 2 data are ingested, edit the run_epam script so that the appropriate Bartels Rotations are processed, and then run the script.

The scripts take care of posting the new data on the website. A script running under cron on the web server takes care of updating the web pages to present the new data.

top


Level 3 Data

Level 3 data include a wide variety of data, plots, and lists, provided by ACE team members and others. The ASC provides these data to the community via the web, and to the SPDF for long-term archiving. A full description of all these products is beyond the scope of this document; see The ACE Level 3 web pages for details.

Additions to and enhancements of the Level 3 data occur from time to time.

top


Real Time Data Processing

During each daily DSN pass, a copy of the real time (VC1) data stream from the spacecraft is sent from the Mission Operations Center at GSFC to the ASC via an internet socket connection. The ASC receives the data stream and fans it out to the ACE instrument GSEs via internet socket connections. In this way, the instrument teams can monitor the health and status of their instruments each day, from their own institutions.

The fanout application runs on an ASC Sun Solaris server (lupin). This application accepts a socket connection from the data server at GSFC, and also accepts socket connections from any number of clients. It serves a copy of the data stream to each client.
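The fanout pattern is simple to sketch. The following is a minimal illustration only (host name and port numbers are placeholders; the actual ASC application is a separate program on the Solaris server): one upstream connection is read, and every byte received is copied to each connected client.

        import socket, threading

        clients, lock = [], threading.Lock()

        def accept_clients(listen_port=9000):            # placeholder port
            srv = socket.socket()
            srv.bind(("", listen_port))
            srv.listen()
            while True:
                conn, _ = srv.accept()
                with lock:
                    clients.append(conn)

        def fan_out(upstream_host="gsfc.example", upstream_port=9001):   # placeholders
            up = socket.create_connection((upstream_host, upstream_port))
            while data := up.recv(4096):
                with lock:
                    for c in clients[:]:
                        try:
                            c.sendall(data)              # copy the stream to each client
                        except OSError:
                            clients.remove(c)            # drop clients that have gone away

        threading.Thread(target=accept_clients, daemon=True).start()
        fan_out()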


top

Data Archiving and delivery to SPDF/CDAWeb

ACE Browse and Level 2 data are available from CDAWeb, which is run by the NASA Space Physics Data Facility (SPDF).

The ASC converts ACE Browse and Level 2 data to the ISTP CDF data format and delivers new data to CDAWeb on a regular basis. This fulfills the long-term data archiving responsibilities of the ACE mission.

Details of the data archiving plans and procedures are described in the ACE Mission Archive Plan, which is updated regularly.

top


Presentation of data on the Web

ACE Browse, Level 2, and Level 3 data are available from the ACE Science Center web site, and from CDAWeb.

top



Last Updated: 22 July, 2015
Return to ASC Home Page