2.6.1.1.5.1 Instrument Pixel Geolocation

2.6.1.1.5.1.1 Physical Justification

The objective of geolocation is to determine the co-ordinates on the Earth's surface corresponding to the centre of each scan pixel. The required co-ordinates are the latitude and longitude of the pixel on the reference ellipsoid. (The radial co-ordinate is then automatically known from the definition of the ellipsoid.)

The co-ordinates of the scan pixel correspond to the intersection of the line of sight (the direction of the instantaneous optic axis as it leaves the scan mirror) with the reference ellipsoid. The problem, then, is to determine this point of intersection, given the satellite position and attitude and the orientation of the scan mirror.

2.6.1.1.5.1.1.1 Co-ordinates of the scan pixel

To determine the co-ordinates of the scan pixel, we work in an Earth-fixed reference frame. This is a right-handed Cartesian frame of reference having its origin at the centre of the Earth. The Z axis is directed along the rotation axis towards the North pole, and the X and Y axes lie in the plane of the equator; the X axis lies in the plane of the Greenwich meridian, and the Y axis completes the right-handed set.

Suppose that at some instant of time t, the co-ordinates of the satellite in the Earth-fixed reference frame are Xs, Ys and Zs, and that the instantaneous line of sight of the AATSR optical system is defined relative to the same frame by the direction cosines l, m and n. The line of sight is then described by the equations

X = Xs + αl    eq 2.23

Y = Ys + αm    eq 2.24

Z = Zs + αn    eq 2.25

where the parameter α represents the distance between the satellite and the point (X, Y, Z).

The equation of the reference ellipsoid is given by

(X² + Y²)/a² + Z²/b² = 1    eq 2.26

Here a is the semi-major axis of the ellipsoid (the equatorial radius of the Earth) and b is the semi-minor axis (the polar radius of the Earth).

The point of intersection is easily found by solving the simultaneous equations as follows. Substitution of the parametric equations of the line (eqs 2.23 to 2.25) into the equation of the ellipsoid (eq 2.26) gives

(Xs + αl)²/a² + (Ys + αm)²/a² + (Zs + αn)²/b² = 1    eq 2.27

This is a simple quadratic equation in the parameter α; multiplying out gives

Aα² + 2Bα + C = 0    eq 2.28

where

A = (l² + m²)/a² + n²/b²    eq 2.29

B = (l Xs + m Ys)/a² + n Zs/b²    eq 2.30

C = (Xs² + Ys²)/a² + Zs²/b² - 1    eq 2.31

The equation has two solutions

α = (-B ± √(B² - AC)) / A    eq 2.32

Provided the argument of the square root is positive, which will always be the case in practice, both solutions are real and positive, and the one that we require is the smaller of the two, which we denote by α_min; this will be the one corresponding to the negative sign. The other solution then defines the point of emergence of the line of sight at the far side of the Earth. (If the quantity under the square root is negative, the solutions of the equation are complex. This case would arise if the line of sight did not intersect the ellipsoid, and will never occur in the normal course of geolocation of AATSR data with the satellite in yaw steering mode.)

The pixel co-ordinates are then given by

X = Xs + α_min l    eq 2.33

Y = Ys + α_min m    eq 2.34

Z = Zs + α_min n    eq 2.35

From the Cartesian co-ordinates of the pixel we can derive its longitude:

tan(longitude) = Y / X    eq 2.36

and its geodetic latitude

tan(latitude) = Z / ((1 - e²) √(X² + Y²))    eq 2.37

where

e² = (a² - b²) / a²    eq 2.38

This procedure solves for the intersection point exactly.
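The intersection and co-ordinate conversion above can be sketched in Python as follows. This is an illustrative sketch, not the operational processor code: the semi-axes are nominal WGS-84 values, and the function name is ours.

```python
import math

# Illustrative ellipsoid semi-axes (the processor takes a and b from the
# reference ellipsoid definition); nominal WGS-84 values in metres.
A_EQ = 6378137.0        # semi-major axis a (equatorial radius)
B_POL = 6356752.3142    # semi-minor axis b (polar radius)

def geolocate_pixel(sat, los):
    """Intersect a line of sight with the reference ellipsoid (eqs 2.23-2.38).

    sat -- satellite position (Xs, Ys, Zs) in the Earth-fixed frame (m)
    los -- direction cosines (l, m, n) of the line of sight
    Returns (geodetic latitude, longitude) of the pixel in degrees.
    """
    xs, ys, zs = sat
    l, m, n = los
    a2, b2 = A_EQ**2, B_POL**2
    # Coefficients of the quadratic A*alpha^2 + 2*B*alpha + C = 0 (eqs 2.29-2.31)
    qa = (l*l + m*m)/a2 + n*n/b2
    qb = (xs*l + ys*m)/a2 + zs*n/b2
    qc = (xs*xs + ys*ys)/a2 + zs*zs/b2 - 1.0
    disc = qb*qb - qa*qc
    if disc < 0.0:
        raise ValueError("line of sight does not intersect the ellipsoid")
    # Smaller (near-side) root of eq 2.32: take the negative sign
    alpha = (-qb - math.sqrt(disc)) / qa
    x = xs + alpha*l                               # eqs 2.33-2.35
    y = ys + alpha*m
    z = zs + alpha*n
    lon = math.degrees(math.atan2(y, x))           # eq 2.36
    # Geodetic latitude on the ellipsoid surface (eqs 2.37-2.38):
    # tan(lat) = z / ((1 - e^2) p) with (1 - e^2) = b^2/a^2
    p = math.hypot(x, y)
    lat = math.degrees(math.atan2(a2*z, b2*p))
    return lat, lon
```

For a satellite directly above the equator looking straight down, the routine returns latitude and longitude zero, as expected.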

2.6.1.1.5.1.1.2 Line of sight in the satellite reference frame

In order to calculate the pixel co-ordinates as above, we must determine the direction cosines of the line of sight relative to the Earth-fixed frame of reference X, Y, Z. This calculation must be repeated for each pixel for which geolocation is required. (Note that strictly, the X, Y, Z co-ordinates of the origin of the line of sight should coincide with the centre of the scan mirror. In practice the satellite centre of mass is used. The error, in terms of displacement on the surface, is negligible.)

To determine the direction cosines of the line of sight, we proceed in two main stages. First, from knowledge of the angle through which the scan mirror has rotated, we determine the direction of the line of sight relative to a frame of reference fixed in the satellite; then we use the attitude steering law of the satellite and knowledge of its position in its orbit to relate the direction to the Earth-fixed frame of reference. These two aspects are discussed in this and the following sections.

2.6.1.1.5.1.1.2.1 AATSR Scan geometry

Imagine a Cartesian frame of reference fixed in the AATSR instrument, and orientated so that in the nominal flight attitude the Z axis points towards nadir, and the -Y axis is directed parallel to the satellite velocity vector, in the direction of satellite motion. We shall denote this frame of reference by the subscript b. Note that relative to the flight direction, the Yb axis points backwards. The essential features of the AATSR scan geometry are expressed in terms of this frame.

In this reference frame, the rotation axis of the scan mirror, which points forward in flight, lies in the (-Y, +Z) quadrant of the Y, Z plane of this reference frame, and is inclined at an angle κ to the Zb axis.

Define a second reference frame, the scan reference frame, as the frame of reference derived from the first by a rotation about the common X axes through the angle κ necessary to bring the Z axis parallel to the instrument scan axis. It will be denoted by the subscript a.

The viewing direction may be defined with respect to this frame as follows. The viewing direction rotates in a positive sense about the Za axis, to which it is inclined at angle κ. This means that the scan on the surface is traced in a clockwise direction, as seen from above. Let the scan rotation angle be φ, defined to be zero when the scan direction is in the Xa - Za plane. With respect to the scan reference frame, the direction cosines of the line of sight are then

eq 2.39

λa = sin κ cos φ

µa = sin κ sin φ

νa = cos κ

The components of any vector defined in Xa, Ya, Za are related to those of the same vector defined with respect to the instrument axes Xb, Yb, Zb by a linear transformation Mab(-κ). The equations of this transformation are as follows.

Suppose that xa, ya, za are the components of a vector x relative to the scan reference frame, and that xb, yb, zb are the components of the same vector relative to the instrument reference frame. Mab(-κ) is a rotation of -κ about the common Xa, Xb axes (figure 2.5), and so the components are related by

(xb, yb, zb)ᵀ = Mab(-κ) (xa, ya, za)ᵀ    eq 2.40

Therefore

xb = xa
yb = ya cos κ - za sin κ    eq 2.41
zb = ya sin κ + za cos κ

Figure 2.5

Thus relative to the instrument reference frame, the direction cosines of the line of sight are

λb = sin κ cos φ
µb = sin κ cos κ (sin φ - 1)    eq 2.42
νb = sin²κ sin φ + cos²κ
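Equations 2.39 to 2.42 can be illustrated with a short sketch. The cone half-angle value below is illustrative only (the operational value comes from the characterisation file), and the sketch assumes the tilt transformation is a rotation of -κ about the common X axes as stated above; with that convention the line of sight passes through nadir at φ = 90° and through the forward view at φ = 270°.

```python
import math

KAPPA = math.radians(23.45)   # cone half-angle kappa; illustrative value only

def los_instrument_frame(phi):
    """Line-of-sight direction cosines in the instrument frame (eq 2.42).

    phi is the scan rotation angle, zero when the scan direction lies
    in the Xa-Za plane.
    """
    sk, ck = math.sin(KAPPA), math.cos(KAPPA)
    # eq 2.39: direction cosines in the scan reference frame
    la, mu_a, nu_a = sk * math.cos(phi), sk * math.sin(phi), ck
    # eqs 2.40-2.41: rotate through -kappa about X into the instrument frame
    lb = la
    mb = mu_a * ck - nu_a * sk
    nb = mu_a * sk + nu_a * ck
    return lb, mb, nb
```

At φ = 90° the result is (0, 0, 1), i.e. the nadir direction; at φ = 270° the line of sight makes an angle 2κ with nadir and points in the -Yb (flight) direction.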

2.6.1.1.5.1.1.2.2 Instrument misalignment

Imagine a set of Cartesian axes Xp, Yp, Zp fixed with respect to the satellite. We assume that the directions of these axes are defined with respect to the structure of the satellite, but in such a way that in flight, when the satellite is being yaw-steered, the axis Zp is nominally directed towards nadir, -Yp is parallel to the satellite ground trace, and Xp completes the right-handed set. This defines the platform reference frame (p).

The reference frame Xb, Yb, Zb defined above is defined with respect to the instrument. The nominal orientation of the AATSR instrument should be such that, when ENVISAT is flying in its nominal attitude in yaw steering mode, the Zb axis points towards the true nadir and the -Yb axis is parallel to the satellite ground trace. In other words, the instrument reference frame should be parallel to the platform frame after integration of the instrument into the satellite. In practice, however, the two frames may differ by small misalignment angles. The misalignments are defined by the transformation between the instrument reference frame and the platform frame.

Quite generally, the relationship between two different sets of Cartesian axes can be expressed in terms of three consecutive rotations about different axes. Define linear transformations Mz(ζ), My(η), Mx(ξ) as follows:

Mz(ζ) = [  cos ζ   sin ζ    0 ]
        [ -sin ζ   cos ζ    0 ]    eq 2.43
        [    0       0      1 ]

My(η) = [ cos η    0   -sin η ]
        [   0      1      0   ]    eq 2.44
        [ sin η    0    cos η ]

Mx(ξ) = [ 1      0       0   ]
        [ 0    cos ξ   sin ξ ]    eq 2.45
        [ 0   -sin ξ   cos ξ ]

Mz(ζ), My(η), Mx(ξ) represent elementary rotations of ζ, η and ξ about the z, y and x axes respectively. The transformation between instrument and platform frames is represented by the product of these elementary transformations. Thus the components of the line of sight vector expressed with reference to the platform frame are

x_p = Mz(Δz) My(Δy) Mx(Δx) x_b    eq 2.46

This equation can be regarded as defining the misalignment angles Δx, Δy, Δz. (We adopt the convention that the rotations are to be applied in that order to give the total transformation to the platform frame. Strictly speaking the matrices representing elementary rotations about different axes do not commute, and so we should specify the order in which they are to be applied. In practice the angles are sufficiently small that any errors from this source are small in relation to the overall attitude error budget and the matrices can be regarded as commuting to a sufficient accuracy.)
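The elementary rotations and the misalignment product of eq 2.46 can be sketched as follows, assuming the passive rotation convention of eqs 2.43 to 2.45; the function names are ours.

```python
import math

def rot_x(xi):
    """Elementary rotation about the x axis (eq 2.45 convention)."""
    c, s = math.cos(xi), math.sin(xi)
    return [[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]]

def rot_y(eta):
    """Elementary rotation about the y axis (eq 2.44 convention)."""
    c, s = math.cos(eta), math.sin(eta)
    return [[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]]

def rot_z(zeta):
    """Elementary rotation about the z axis (eq 2.43 convention)."""
    c, s = math.cos(zeta), math.sin(zeta)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def instrument_to_platform(dx, dy, dz):
    """Misalignment transformation of eq 2.46: elementary rotations through
    the misalignment angles, applied in the order x, y, z."""
    return matmul(rot_z(dz), matmul(rot_y(dy), rot_x(dx)))
```

For the small misalignment angles encountered in practice the product is close to the identity, and, as noted in the text, the order of the factors matters only at second order in the angles.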

In the above discussion we have used frames of reference in which the Z axes point downwards. This is convenient for a nadir viewing instrument; however, in order to discuss the attitude transformations and to relate our frames of reference to those defined in the Mission Conventions Document, we need to transform to a frame in which the Z axis points upwards.

Finally, the matrix Mps transforms the vector to the satellite frame of reference. It represents a simple rotation of 180° about the common Ys , Yp axes to bring the Z axis parallel to the outward vertical. Relative to the satellite frame of reference the components of x are

x_s = Mps x_p    eq 2.47

so that

xs = -xp,  ys = yp,  zs = -zp    eq 2.48

The direction cosines with respect to the satellite frame are therefore obtained by multiplying the starting vector by the matrix product

Mps Mz(Δz) My(Δy) Mx(Δx)    eq 2.49

(The satellite frame that we have defined here is equivalent to the Satellite Relative Actual Reference Frame defined in the Mission Conventions Document, except that the frame here is explicitly imagined as fixed in the satellite. Note that we ignore mispointing throughout.)

2.6.1.1.5.1.1.3 Attitude Transformations

The Local Orbital Reference Frame is the reference frame with respect to which the attitude of the satellite is described. It is defined in the ENVISAT Mission Conventions Document Ref. [1.6 ] (PO-IS-ESA-GS-0561); its origin is the centre of mass of the satellite and its basis vectors are the three unit vectors L, R, and T as follows.

  • The unit vector L is directed along the outward radius from the centre of the earth to the satellite centre of mass. It is the yaw axis.
  • The unit vector R is perpendicular to L, in the plane containing L and the instantaneous inertial velocity vector of the satellite, and is directed forwards, approximately in the direction of motion of the satellite. It represents the roll axis.
  • Unit vector T completes the right-handed set, so that T = R x L. T is in the cross-track direction, and represents the pitch axis.

This frame is defined with respect to the orbit, not the structure of the satellite. It is an instantaneous frame; that is, it is defined at a particular instant of time.

The attitude of the satellite is specified relative to the Local Orbital Reference Frame by means of three angles ρ, τ, λ. These angles define the rotations about the roll, pitch and yaw axes respectively which, if applied in sequence to the TRL frame, would bring its axes parallel to the satellite frame. The sign of each rotation is to be interpreted so that a positive angle means that a positive rotation about the relevant axis, in the conventional right-handed sense, is required to bring the initial axes into coincidence with the derived set. Rotations about different axes do not commute, and so it is strictly necessary to define the order in which the rotations are to be applied. We adopt the convention that the rotations are to be applied in the order roll, pitch, yaw.

Suppose that (t, r, l) are the components of a vector relative to the TRL axes, and that (t´, r´, l´) are the components of the same vector in the transformed system, which we may denote by T´R´L´. (The frame T´R´L´ is essentially the 'Local Relative Yaw Steering Orbital Reference Frame' defined in the Mission Conventions Document Ref. [1.6].) The rotation matrices in pitch, roll and yaw are identical to those defined above for rotations about the X, Y and Z axes respectively. Thus the overall transformation can be expressed as

(t´, r´, l´)ᵀ = Mz(λ) Mx(τ) My(ρ) (t, r, l)ᵀ    eq 2.50

From our definition of the satellite attitude, the transformed attitude frame will be coincident with the satellite fixed frame, apart from a fixed rotation. The latter appears because we have defined the local orbital reference frame so that the roll axis R points forward, but the satellite frame is defined so that the corresponding axis points backwards. (We have simply followed the definitions adopted by ESA for these reference frames.) Comparison of the definitions of the two frames shows that the transformed frame T´R´L´ is related to the satellite frame by a rotation through 180 degrees about the z (L) axis. Specifically,

xs = -t´,  ys = -r´,  zs = l´    eq 2.51

We can introduce the matrix M SA to represent this transformation:

MSA = [ -1   0   0 ]
      [  0  -1   0 ]    eq 2.52
      [  0   0   1 ]

so that

x_s = MSA Mz(λ) Mx(τ) My(ρ) x    eq 2.53

We now have all the components of the transformation from the satellite reference frame to the TRL frame. Equation 2.50 defines the transformation from TRL to T´R´L´. The reverse transformation is easily written

(t, r, l)ᵀ = My(-ρ) Mx(-τ) Mz(-λ) (t´, r´, l´)ᵀ    eq 2.54

This follows because each of the matrices My, Mx, Mz represents a pure rotation, and the operation inverse to any rotation is a rotation equal in magnitude but of opposite sign about the same axis. Therefore from equation 2.53 we have

x = My(-ρ) Mx(-τ) Mz(-λ) MSA x_s    eq 2.55

where MSA is the matrix given by equation 2.52. The matrix MSA represents a rotation about the vertical (L´) axis. It therefore reverses the direction of the orthogonal (R´ and T´) axes, while leaving the L´ axis unchanged. Moreover it must commute with the matrix Mz(λ), since this also represents a rotation about the L´ axis, and rotations about a common axis commute. It is easy to verify this directly.

However, MSA does not commute with the other two rotation matrices. Evidently a rotation of τ about the T´ axis is equivalent to a rotation of -τ about the -T´ axis, and similarly for rotations about R. (It is perhaps easier to visualise this if an active interpretation of the rotations is adopted, rather than the passive interpretation that is strictly applicable to the present discussion.) Thus one can verify by direct multiplication that

eq 2.56

MSA Mx(τ) = Mx(-τ) MSA

and similarly that

eq 2.57

MSA My(ρ) = My(-ρ) MSA

Hence

x = MSA My(ρ) Mx(τ) Mz(-λ) x_s    eq 2.58

In equation 2.58 the sign of λ is negative, while the signs of the other two attitude angles are positive. This is a consequence of the co-ordinate rotation represented by MSA.
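The attitude transformation of eq 2.58, and its equivalence to the form of eq 2.55, can be checked numerically with a short sketch, assuming passive elementary rotation matrices of the usual form; the function names are ours.

```python
import numpy as np

def mx(a):
    """Elementary rotation about x (pitch)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def my(a):
    """Elementary rotation about y (roll)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def mz(a):
    """Elementary rotation about z (yaw)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

# MSA: 180 degree rotation about the L (z) axis, eq 2.52
M_SA = np.diag([-1.0, -1.0, 1.0])

def satellite_to_trl(rho, tau, lam):
    """Matrix of eq 2.58 transforming a vector from the satellite frame to TRL:
    x = MSA My(rho) Mx(tau) Mz(-lambda) x_s."""
    return M_SA @ my(rho) @ mx(tau) @ mz(-lam)
```

The test below verifies by direct multiplication that this equals the form of eq 2.55, My(-ρ) Mx(-τ) Mz(-λ) MSA, which is exactly the commutation argument of eqs 2.56 and 2.57.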

Finally, we must relate the line of sight vector to the inertial frame. Suppose that the x, y, and z components of T are tx , ty , tz respectively, and that the components of R and L are (rx , ry , rz ) and (lx , ly , lz ) respectively. If the components of the vector in TRL are t, r and l then the vector is

t T + r R + l L eq 2.59

In the inertial frame this becomes, in component form

[ x ]   [ tx  rx  lx ] [ t ]
[ y ] = [ ty  ry  ly ] [ r ]    eq 2.60
[ z ]   [ tz  rz  lz ] [ l ]

The matrix is orthogonal because the vectors T, R, L are mutually perpendicular and normalized to unit length. Note that the explicit form of this matrix depends on the position and velocity of the satellite at the time it is evaluated. If we are given the direction cosines of the line of sight with respect to TRL, then we can evaluate the matrix equation to derive the direction cosines of the line of sight with respect to the inertial frame.
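Constructing the T, R, L basis vectors from the satellite position and inertial velocity, and hence the matrix of eq 2.60, might be sketched as follows (illustrative only; in the processor the orbital state comes from the orbit propagator):

```python
import numpy as np

def trl_to_inertial(pos, vel):
    """Matrix of eq 2.60 whose columns are T, R, L.

    pos -- satellite position vector in the inertial frame
    vel -- instantaneous inertial velocity vector
    """
    pos = np.asarray(pos, dtype=float)
    vel = np.asarray(vel, dtype=float)
    # L: unit vector along the outward radius (yaw axis)
    L = pos / np.linalg.norm(pos)
    # R: in the plane of L and the velocity, perpendicular to L,
    # directed forwards (roll axis)
    r = vel - np.dot(vel, L) * L
    R = r / np.linalg.norm(r)
    # T completes the right-handed set: T = R x L (pitch axis)
    T = np.cross(R, L)
    return np.column_stack([T, R, L])
```

By construction the columns are orthonormal, so the matrix is orthogonal and its inverse is its transpose, as the text notes.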

Nothing prevents us from choosing the inertial frame as that whose axes instantaneously coincide with the Earth-fixed reference frame at the time in question, in which case we can equate the inertial co-ordinates that we have derived to the Earth-fixed co-ordinates.

2.6.1.1.5.1.2 Algorithm Description

2.6.1.1.5.1.2.1 Summary

The geolocation algorithm calculates the latitude and longitude of each instrument pixel. In principle this would be done by applying the transformations of section 2.6.1.1.5.1.1.1. to each pixel. In practice, to reduce the processing overhead, they are carried out for a subset of tie point pixels, and the coordinates of intermediate pixels are determined by linear interpolation in scan number and scan angle. That is, the pixel latitude and longitude are regarded as functions of scan number and pixel number, and are interpolated accordingly.

As discussed above, the transformation between the scan frame and the Earth-fixed frame can be expressed in terms of a series of consecutive matrix transformations applied to the line of sight vector. However, the implementation of this algorithm can take advantage of the fact that some of these are catered for by the ESA TARGET software.

For each tie point pixel p in each scan s the direction of the line of sight is determined in the scan reference frame. The corresponding direction cosines are determined, transformed to the satellite reference frame, and converted back to define an azimuth and elevation. The TARGET subroutine is used to derive the pixel co-ordinates on the ellipsoid.

Given the pixel co-ordinates of the tie point pixels, linear interpolation with respect to pixel number is used to define the co-ordinates of the intermediate pixels on the scan. The process is repeated for both forward and nadir view scans.

2.6.1.1.5.1.2.2 Algorithm Definition

The following steps are carried out for each tie point pixel on each scan s_t ∈ {0, INT_S, 2*INT_S, ...}. In the general case these points are

P_t^n ∈ {0, INT_P, 2*INT_P, ..., MAX_NADIR_PIXELS - 1}    eq 2.61

on the nadir scan, and

P_t^f ∈ {0, INT_P, 2*INT_P, ..., MAX_FRWRD_PIXELS - 1}    eq 2.62

on the forward scan. The parameters MAX_NADIR_PIXELS and MAX_FRWRD_PIXELS are found in the Level 1B Processor Configuration File 6.6.40. The interpolation intervals INT_P and INT_S are defined in the Level 1B Characterisation Data File 6.6.15. The adopted value of INT_P is 10, and so in practice the tie points are

P_t^n ∈ {0, 10, 20, 30, ..., MAX_NADIR_PIXELS - 1}    eq 2.63

on the nadir scan, and

P_t^f ∈ {0, 10, 20, 30, ..., MAX_FRWRD_PIXELS - 1}    eq 2.64

on the forward scan.
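The tie point sets of eqs 2.61 to 2.64 can be generated as follows. This is a sketch: the MAX_* values come from the configuration file (the value used below is illustrative), and forcing the final pixel to be a tie point when it is not a multiple of INT_P is our reading of the sets quoted above.

```python
INT_P = 10   # tie-point interval in pixel number (eqs 2.63-2.64)

def tie_point_pixels(max_pixels, int_p=INT_P):
    """Tie point pixel numbers {0, INT_P, 2*INT_P, ..., max_pixels - 1}.

    max_pixels -- MAX_NADIR_PIXELS or MAX_FRWRD_PIXELS (illustrative here).
    """
    points = list(range(0, max_pixels, int_p))
    if points[-1] != max_pixels - 1:
        # Ensure the final pixel of the scan is itself a tie point,
        # so interpolation never extrapolates beyond the last tie point.
        points.append(max_pixels - 1)
    return points
```

For example, with an assumed scan of 575 pixels the tie points run 0, 10, 20, ... with the final pixel 574 appended.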

For each scan s_t and for each tie point pixel P_t from the above sets, the following steps are executed:

1. Determine line of sight and its direction cosines in the scan reference frame.

The absolute pixel number p is calculated from

p = P_t^n + FIRST_NADIR_PIXEL_NUMBER    eq 2.65

or

p = P_t^f + FIRST_FORWARD_PIXEL_NUMBER    eq 2.66

as appropriate, where the values FIRST_NADIR_PIXEL_NUMBER and FIRST_FORWARD_PIXEL_NUMBER are also found in the Level 1B Characterisation Data File 6.6.15. The scan angle is determined from

Scan Angle Determination    eq 2.67

and this value is substituted in equation 2.39 to determine the unit vector along the line of sight.

2. Transform to satellite frame.

The direction cosines of the line of sight are transformed to the platform frame according to

(λp, µp, νp)ᵀ = Mz(Δz) My(Δy) Mx(Δx) Mab(-κ) (λa, µa, νa)ᵀ    eq 2.68

where the half-angle of the scan cone κ and the misalignment correction angles Δx, Δy, Δz are taken from the Level 1B Characterisation Data File 6.6.15. The interface to the TARGET subroutine requires us to use a slightly different set of satellite axes to that defined in Section 2.6.1.1.5.1.1. Instead of equation 2.49 we therefore define

Transformation to satellite frame    eq 2.69

and

Transformation to satellite frame    eq 2.70

The azimuth and elevation of the line of sight are calculated with respect to this frame by inverting the equations

λs = cos(elevation) cos(azimuth)

µs = cos(elevation) sin(azimuth)

νs = sin(elevation)

Thus

azimuth = atan2(µs, λs)

elevation = atan2(νs, √(λs² + µs²))

If azimuth < 0.0 then azimuth = azimuth + 360.0.

Here atan2 represents the arc tangent function of two arguments defined in the conventional way, atan2(y, x) being the angle whose tangent is (y/x) and whose quadrant is defined by the signs of x and y.
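The inversion in step 2 giving azimuth and elevation can be sketched as follows (the function name is ours):

```python
import math

def azimuth_elevation(ls, ms, ns):
    """Invert the direction-cosine relations of step 2, returning azimuth
    and elevation in degrees, with azimuth normalised to [0, 360)."""
    azimuth = math.degrees(math.atan2(ms, ls))
    if azimuth < 0.0:
        azimuth += 360.0
    elevation = math.degrees(math.atan2(ns, math.hypot(ls, ms)))
    return azimuth, elevation
```

The two-argument arc tangent resolves the quadrant from the signs of its arguments, exactly as described in the text, so no separate quadrant tests are needed.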

3. Use the subroutine target to determine the geolocation parameters.

The subroutine target is entered, with the line of sight direction determined by the azimuth and elevation parameters defined above, and with the orbital parameters evaluated for the time of the scan, to determine the latitude and longitude of the tie point pixels. The required outputs are the geodetic pixel co-ordinates taken from the results vector.

The longitude is converted to lie in the range -180 to 180 degrees by the subtraction or addition of 360 degrees if necessary.

2.6.1.1.5.1.2.3 Interpolation

The steps above have determined the co-ordinates of the tie point pixels. Linear interpolation is used to determine the co-ordinates of the intermediate pixels. The process is repeated for both forward and nadir views.

Linear interpolation with respect to scan number s is used to define the latitude and longitude of the intermediate pixels

s ∉ {s_t}.

For each scan s, linear interpolation with respect to pixel number is used to define the latitude and longitude of the intermediate pixels

lat(p) = lat(P₁) + ((p - P₁)/(P₂ - P₁)) (lat(P₂) - lat(P₁)) ; lon(p) = lon(P₁) + ((p - P₁)/(P₂ - P₁)) (lon(P₂) - lon(P₁)), where P₁ and P₂ are the adjacent tie point pixels with P₁ ≤ p ≤ P₂.

In the case of longitude, the interpolation must take account of the fact that the 180 degree meridian may intersect the interpolation interval.
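A sketch of longitude interpolation that allows for the 180 degree meridian follows; the wrap-around handling shown is our assumption of one reasonable implementation, not necessarily that of the operational processor.

```python
def interp_longitude(lon0, lon1, frac):
    """Linearly interpolate between two tie-point longitudes (degrees,
    in [-180, 180]), allowing for an interval that crosses the 180
    degree meridian. frac in [0, 1] is the fractional position of the
    pixel between the two tie points."""
    dlon = lon1 - lon0
    # If the apparent difference exceeds half a revolution, the short way
    # round crosses the 180 degree meridian: take the shorter arc.
    if dlon > 180.0:
        dlon -= 360.0
    elif dlon < -180.0:
        dlon += 360.0
    lon = lon0 + frac * dlon
    # Renormalise the result to the range -180 to 180
    if lon > 180.0:
        lon -= 360.0
    elif lon < -180.0:
        lon += 360.0
    return lon
```

For example, interpolating halfway between tie points at 179° and -179° correctly gives 180°, not the spurious value 0° that naive linear interpolation would produce.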

2.6.1.1.5.1.3 Accuracies

(To be added:

  • Discussion of effect of attitude mispointing; constant offset indistinguishable from misalignment.
  • Interpolation error; main contribution to absolute geolocation error is interpolation between tie points on same scan.)