GSFC/NGIMS-FSW-26
Explaining MET (Mission Elapsed Time) for Contour/NGIMS
Mike Paulkovich
October 12, 2001

BACKGROUND

The spacecraft provides a 32-bit MET to NGIMS once per second with a precision (LSB) of 1 second. This was deemed insufficient for various reasons, and the requirement was put on the FSW to tag subscans (collections of 15 IPs' worth of data) in telemetry with an MET of higher precision. A prime reason for requiring more precision was that subscans are generated faster than once per second, so two consecutive subscans could carry the same MET. We were to strive for plus/minus one IP (33 ms) of accuracy in the time tag.

The solution implemented in TM is the 32-bit MET plus an additional 8 bits of "fractional seconds," which was computed based on the Tartan/Ada clock. That clock software ("package Calendar") proved to have bugs, so a replacement ("package Kalender") was written that uses 1750 Timer A, which seems much more reliable.

So, subscan data in TM is tagged with a 40-bit "Extended MET" -- 32-bit seconds, plus 8-bit fractional seconds (precision of 1/256 s). This is computed by extrapolating from the time MET was received to the time the first IP of the subscan was sent to the microsequencer. Since the microsequencer FIFO is always loaded with 4 to 5 IPs, the actual time the data gets written to the DACs is 4 to 5 IPs later than the time indicated in the subscan data.

Now, the MET message from the spacecraft is only read by the FSW on each interrupt from the microsequencer -- once per IP, or every 33 ms. This results in a fundamental (minimum) jitter in the reported time of plus/minus one IP (33 ms), not taking into account processing time and latencies.

HISTORY

The time processing algorithm had some hidden flaws:

a) When MET was greater than FF_FFFF hex, a fatal exception would occur.
This exception was fixed, but the fix caused flaw (b):

b) When the computed fractional MET was greater than 127, it would overflow, yielding zero for the fractional part and MET+1 for the 32-bit part. This has the effect of more than one IP of jitter.

FSW version 3.6 has taken care of both of these flaws. FSW version 3.5 retains flaw (b) above.

MET INTERPRETATION

The MET reported in TM for each subscan should thus be adjusted for the 5-IP FIFO depth, and then has an additional inaccuracy of plus/minus 33 ms. There is some additional jitter due to processing time and latency, which should be on the order of several microseconds.

The upshot is this: to compute the actual time the data was written to the DACs for the first IP of a subscan, take the 40-bit subscan time stamp, subtract 4 IPs (0.132 s), then assume ±0.033 s accuracy.

Note that each subscan in TM has only one 40-bit "Extended MET," which tags the first of the 15 IPs in the subscan. The GSE display extrapolates the times for the other 14 IPs, with the artifact that the time difference between IP #14 and IP #0 (of the next subscan) will not (typically) appear as 33 ms. In other words, when you look at science data on the GSE, the only MET time the FSW generated is the one associated with IP #0 -- the rest were computed by the GSE, and the jitter will always appear at IP #0.
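The Extended MET construction described above, and the carry behavior behind flaw (b), can be sketched as follows. This is a minimal illustration in C, not the flight code (which was Ada); the function and parameter names and the timer-tick rate are assumptions. The point is that whole seconds from the extrapolated fraction must carry into the 32-bit part, rather than letting the 8-bit fraction wrap when it exceeds 127.

```c
#include <stdint.h>

/* Hypothetical sketch -- names and tick rate are illustrative only.
 * met_seconds: the 32-bit MET last received from the spacecraft.
 * elapsed_ticks / ticks_per_second: elapsed time (from 1750 Timer A in
 * the real FSW) between MET receipt and the first IP of the subscan. */
typedef struct {
    uint32_t seconds;   /* 32-bit MET seconds */
    uint8_t  fraction;  /* fractional seconds, LSB = 1/256 s */
} extended_met;

extended_met make_extended_met(uint32_t met_seconds,
                               uint32_t elapsed_ticks,
                               uint32_t ticks_per_second)
{
    /* Convert the elapsed time to 1/256-second units. */
    uint32_t frac_256 = (elapsed_ticks * 256u) / ticks_per_second;

    /* Carry whole seconds into the 32-bit part instead of letting the
     * 8-bit fraction overflow -- flaw (b) amounted to an overflow when
     * the computed fraction exceeded 127. */
    extended_met t;
    t.seconds  = met_seconds + (frac_256 >> 8);
    t.fraction = (uint8_t)(frac_256 & 0xFFu);
    return t;
}
```

With this carry handling, an extrapolated fraction of, say, 1.5 s past an MET of 100 yields seconds = 101 and fraction = 128 (0.5 s), rather than a wrapped fraction and stuck seconds.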
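The interpretation rules above (convert the 40-bit stamp to seconds, back out the 4-IP FIFO delay, extrapolate the other 14 IP times as the GSE display does) can be sketched on the ground side as follows. Again a hypothetical C sketch, not GSE code; the names are assumptions.

```c
#include <stdint.h>

#define IP_PERIOD_S 0.033   /* one IP = 33 ms */

/* 40-bit Extended MET (32-bit seconds + 8-bit fraction, LSB 1/256 s)
 * expressed as floating-point seconds. */
double extended_met_to_seconds(uint32_t seconds, uint8_t fraction)
{
    return (double)seconds + (double)fraction / 256.0;
}

/* Actual time the first IP of the subscan was written to the DACs:
 * the subscan stamp minus 4 IPs (0.132 s), good to about +/-0.033 s. */
double dac_write_time(uint32_t seconds, uint8_t fraction)
{
    return extended_met_to_seconds(seconds, fraction) - 4.0 * IP_PERIOD_S;
}

/* GSE-style extrapolation: only IP #0 carries an FSW time tag; IPs
 * #1..#14 are assumed to follow at one IP period each, which is why any
 * jitter shows up only at IP #0 of each subscan. */
void extrapolate_ip_times(uint32_t seconds, uint8_t fraction, double out[15])
{
    double t0 = extended_met_to_seconds(seconds, fraction);
    for (int ip = 0; ip < 15; ip++)
        out[ip] = t0 + ip * IP_PERIOD_S;
}
```

For example, a stamp of 100 s with fraction 128 decodes to 100.5 s, and the first IP of that subscan actually reached the DACs near 100.368 s (±0.033 s).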