latest update: 9 February 2018
Until mid-September 2009, data were received periodically, every few weeks, by email. Since then, an internet connection has been in place which permits daily access to the data. This works with a separate computer, used as a web server, which receives a compressed file from the data acquisition computer via a serial link. This serial connection is normally established automatically each morning.
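To make the transfer concrete, here is a minimal sketch of the receiving side, assuming a simple length-prefixed framing over the serial line; the port name, baud rate, framing, and file name are all illustrative assumptions, not a description of the actual protocol.

```python
# Minimal sketch: receive one compressed data batch over the serial link.
# The port name, baud rate, and 4-byte length-prefix framing are assumed
# for illustration; the actual transfer protocol may differ.
import struct
import serial  # pyserial

def receive_batch(port="/dev/ttyS0", baud=115200, out_path="batch.gz"):
    with serial.Serial(port, baud, timeout=60) as ser:
        header = ser.read(4)                 # assumed framing: payload length
        if len(header) < 4:
            raise TimeoutError("no transfer started")
        (length,) = struct.unpack(">I", header)
        payload = ser.read(length)           # the compressed archive itself
        if len(payload) < length:
            raise IOError("truncated: got %d of %d bytes" % (len(payload), length))
    with open(out_path, "wb") as f:
        f.write(payload)
    return out_path
```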
The main advantage of this scheme is that data acquisition remains completely independent of data download, and is therefore not impaired when internet access is lost or the server has not booted successfully. In such a case, once normal conditions are reestablished, all the missing data are transferred on the next occasion via the serial connection and so become accessible via the internet.
The compressed file contains the raw data, processed quick-look data, instrument housekeeping information, and log files.
1. Upon receipt of a new data batch, the quick-look data are first visually inspected for quality and completeness.
2. Then, log files are inspected for ambient and data acquisition conditions during each day and night. This reveals any operational anomalies as well as environmental conditions:
Power outages (at the end of which the computers recover automatically) are indicative of strong winds, mostly occurring during the day or in the evening. Ambient temperature variations signal cloudiness. We can even judge daytime cloud conditions from photon count variations due to background light leaking through the closed shutter (the photomultiplier power is switched on several hours before the start of data acquisition).
3. Dark counts are determined each morning and used as input for computing the corrected data version (a correction sketch follows this list). We also inspect the dark count at the beginning of the night for additional diagnostics, but presently do not use it for data correction, because it may still contain some twilight scattered by clouds that could leak through the closed shutter.
4. Spectral positioning statistics are eye-checked for outliers before automatic conversion into drift correction information. We expect at most a small accumulation of mechanical drift in the very long term, i.e. over several years, due not to filter aging but to mechanical wear. At any rate, our standard procedure should take care of this.
5. Information from the daylight sensor (which detects the dusk and dawn transitions) was previously processed to derive a correction for the real-time clock (RTC) reading. Since 10 September 2009, this has been replaced by fitting the RTC drift with the much more accurate time information available through the internet connection. Using this linear drift model, each data time is corrected, so that the timing of the final data is now practically free from systematic clock errors; we can therefore normally expect sub-second timing accuracy (a fitting sketch follows this list). This would not be possible by using internet time (NTP) automatically, because it is not available to the data acquisition machine in real time.
6. Raw data are then reprocessed into the corrected data version, including intensities extrapolated to integrated band brightness.
7. Using an interactive data editor, the corrected data for each night are edited to remove outliers caused by electrical spikes, contamination by moonlight and cloud, excessive cloudless moonlight, or excessive cloud absorption (an automated stand-in for this screening is sketched after this list). Only the spectral background channel is left untouched, which is useful for data quality assessment: it serves to assess sky conditions, such as the presence of stars, the winter Milky Way, or moon-lit clouds, and is therefore included in the nocturnal variation plots.
8. From the edited data, nocturnal means and data coverage statistics are then computed (see the sketch after this list).
9. The final data are converted into 5-parameter nocturnal variation plots for display on our web site (a plotting sketch follows this list).
10. Statistical information (number of data points per night, number of nights per month and per year) is also used to update our web site.
11. Updated tables and plots are transferred to IAFE's web server.
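The dark-count correction mentioned in step 3 can be illustrated with a minimal sketch, assuming a straightforward subtraction of the morning dark rate from each sample; the function names, units, and the plain subtraction itself are illustrative assumptions, since the actual correction procedure is not spelled out here.

```python
# Sketch of a dark-count correction (step 3): subtract the morning dark
# rate from each channel's raw counts. A straight subtraction is an
# assumption; the station's actual correction procedure may differ.
def correct_counts(raw_counts, dark_rate, integration_s):
    """raw_counts: photon counts per sample; dark_rate: counts/s from the
    morning dark measurement; integration_s: sample integration time."""
    dark_per_sample = dark_rate * integration_s
    return [max(c - dark_per_sample, 0.0) for c in raw_counts]

# Example: 10 s integrations with a 2.5 counts/s dark rate.
print(correct_counts([120.0, 95.0, 30.0], dark_rate=2.5, integration_s=10.0))
# -> [95.0, 70.0, 5.0]
```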
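The clock correction of step 5 amounts to a linear fit of the RTC offset against an accurate reference, followed by shifting every timestamp by the modeled offset. A minimal sketch, assuming ordinary least squares over (RTC reading, reference time) pairs collected whenever internet time is available:

```python
# Sketch of the linear RTC drift correction (step 5). Offsets between the
# acquisition clock and an accurate reference (obtained via the internet
# connection) are fitted with a straight line; every data timestamp is
# then shifted by the modeled offset. Details are assumed.
def fit_drift(rtc_times, ref_times):
    """Least-squares fit of offset = a + b * rtc, where offset = ref - rtc."""
    n = len(rtc_times)
    offsets = [r - t for t, r in zip(rtc_times, ref_times)]
    mean_t = sum(rtc_times) / n
    mean_o = sum(offsets) / n
    var = sum((t - mean_t) ** 2 for t in rtc_times)
    b = sum((t - mean_t) * (o - mean_o)
            for t, o in zip(rtc_times, offsets)) / var
    a = mean_o - b * mean_t
    return a, b

def correct_time(rtc, a, b):
    return rtc + a + b * rtc   # add the modeled clock offset

# Example: an RTC running 1 s slow per day against reference epochs (seconds).
a, b = fit_drift([0.0, 86400.0], [5.0, 86406.0])
print(correct_time(43200.0, a, b))   # midpoint corrected by ~5.5 s
```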
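Step 7 is performed with an interactive editor; purely as an automated stand-in, the following sketch flags outliers by comparing each point with a running median, using the median absolute deviation as a robust scatter estimate. The window length and threshold are arbitrary choices, and this is a generic technique, not the editor actually used.

```python
# Illustrative stand-in for the interactive outlier editing of step 7:
# flag points deviating from a running median by more than k times the
# robust scatter. The real editing is done interactively, not like this.
import statistics

def flag_outliers(values, window=11, k=4.0):
    half = window // 2
    flags = []
    for i, v in enumerate(values):
        chunk = values[max(0, i - half):i + half + 1]
        med = statistics.median(chunk)
        # median absolute deviation as a robust scatter estimate
        mad = statistics.median(abs(x - med) for x in chunk)
        scale = 1.4826 * mad or 1.0          # guard against zero MAD
        flags.append(abs(v - med) > k * scale)
    return flags

data = [10, 11, 10, 95, 11, 10, 12, 10, 11, 10, 11]
print(flag_outliers(data))   # only the 95 is flagged
```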
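The nightly statistics of step 8 reduce to a mean over the retained points and a coverage figure. A minimal sketch, with coverage assumed to be the fraction of expected samples that survive editing:

```python
# Sketch of nightly statistics (step 8): mean of the edited values and a
# simple coverage fraction. The coverage definition (retained samples over
# expected samples) is an assumption for illustration.
def nightly_stats(values, expected_samples):
    kept = [v for v in values if v is not None]   # None marks edited-out points
    mean = sum(kept) / len(kept) if kept else float("nan")
    coverage = len(kept) / expected_samples
    return mean, coverage

mean, cov = nightly_stats([5.0, None, 6.0, 7.0], expected_samples=4)
print(mean, cov)   # 6.0 0.75
```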
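For the 5-parameter plots of step 9, one panel per quantity against time is the natural layout. A minimal matplotlib sketch follows; which five quantities are shown, and their labels, are assumptions here.

```python
# Sketch of a 5-parameter nocturnal variation plot (step 9): one stacked
# panel per quantity versus time. Panel contents and labels are assumed.
import matplotlib.pyplot as plt

def plot_night(times, series, labels, out_png):
    """times: hours; series: list of five value lists; labels: five names."""
    fig, axes = plt.subplots(len(series), 1, sharex=True, figsize=(8, 10))
    for ax, values, label in zip(axes, series, labels):
        ax.plot(times, values, ".", markersize=3)
        ax.set_ylabel(label)
    axes[-1].set_xlabel("local time (h)")
    fig.savefig(out_png, dpi=100)
    plt.close(fig)
```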
Under normal conditions, all this processing takes less than one day, even for data batches of several weeks, so that we could often update our web site shortly after receipt of a new data batch. But even with daily access, it is still more efficient to update the web site only after data for a certain number of nights have been gathered; we now usually update our web site on a weekly schedule.