FAQ

General considerations

Will my data be reduced, and if so, when?

All VISTA data are reduced at CASU using the same processing, regardless of whether they come from survey observations or PI programs. Normally, fully reduced data are available two months after the observations, although in some exceptional cases (for example, programs where time-critical follow-up might be needed) intermediate products (with their associated caveats) can be made available on shorter timescales.

You can check the status of your observations of interest here:
http://casu.ast.cam.ac.uk/surveys-projects/vista/data-processing

What is a pawprint, a stack and a tile?

As is common in the near infrared, VISTA observations of a given target are normally broken down into several short exposures. Each of these exposures is shifted away from the nominal target position so that, among other desirable effects, a) astronomical sources do not always fall on the same pixels and b) the gaps between detectors are mosaiced out and complete coverage of the sky can be recovered. This leads to three types of reduced images:

  • Pawprints: A single pointing, comprising the 16 detectors of the VISTA array. Pawprints have been corrected for some cosmetic defects (pick-up noise stripes), sky and dark current subtracted, and flatfielded. Their headers also include an astrometric solution.
  • Stacks: Two or more pawprints, shifted from each other by a few arcseconds, can be combined, which partially corrects the cosmetic defects of the detectors. These are normal jittered observations, and the resulting stack still has 16 extensions associated with the original detectors, but each extension no longer corresponds to the footprint of a physical detector: it is the combination of several shifted exposures (i.e. its outline depends on the chosen jitter pattern). Prior to combination, the sky levels of each pawprint are adjusted, and both astrometric and photometric calibrations are present in the header.
  • Tiles: If so desired, several stacks (themselves made up of several pawprints) can be combined into a single image, normally offering continuous coverage of the sky. The tile is the final result of this mosaicing process and normally corresponds to a field 1.6 degrees across. Prior to combination, the photometric zero points of the individual stacks are brought to a common value and the radial distortion correction is applied. In order to produce the associated photometry, large-scale variations in the sky background are removed, but this step is not applied to the tiles themselves; it is only present in the catalogue files. 

Stacks and tiles have associated photometric catalogues and confidence maps. Catalogues are FITS tables with aperture photometry for all the detected sources (see a description here), while confidence maps are 2D arrays (one per detector, in the case of stacks) normalized to a median of 100, in which every pixel conveys the confidence level of the corresponding pixel in the tile or stack. The naming convention used at CASU for all these files is explained here.
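If you want to check the structure of these files programmatically, the following minimal sketch (in Python, assuming astropy is installed; the filenames are placeholders, real names follow the CASU convention linked above) lists the extensions of a stacked pawprint and the columns of its catalogue:

 from astropy.io import fits

 with fits.open('pawprint_stack.fit') as img:    # placeholder filename
     img.info()                                  # primary HDU plus 16 detector image extensions
     print(img[1].header['MAGZPT'])              # photometric zero point recorded for detector 1

 with fits.open('pawprint_stack_cat.fits') as cat:
     sources = cat[1].data                       # detections on detector 1
     print(sources.columns.names)                # Aper_flux_3, X_coordinate, Y_coordinate, ...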

Which files should I use?

The default products of choice for science use are tiles and their associated catalogues: they offer the best coverage and the deepest photometry. But this comes at a price, and there are some caveats that need to be taken into account:

  • Before merging the stacks into a tile, the radial distortion has to be corrected (see here). This introduces some astrometric and photometric complications. Users wanting very precise photometry or the best possible astrometry might want to stick with stacks.
  • The mosaic pattern required to fill all the gaps in the VISTA detector array is complicated, with several regions of overlap. As a result, even under perfect observing conditions, the sensitivity varies spatially within a tile. The associated confidence map gives an idea of this variation (see here, and the short example after this list).
  • Although the stacks that form a tile are observed sequentially, there is a noticeable time delay between the first and last exposure. This means that the effective MJD of observation varies spatially. This effect is described in detail here.
  • There is a slight PSF variation from detector to detector, and changing atmospheric conditions from exposure to exposure further modify the PSF. When these images are combined, the final PSF is therefore not spatially coherent across a single tile, complicating analysis techniques based on PSF fitting. These variations are calibrated out in the CASU-generated catalogues, but those are produced using aperture photometry (see here).
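As an illustration of how the confidence map traces the spatially variable sensitivity mentioned above, the following sketch (in Python with astropy and numpy; the filenames, the extension number and the 50% threshold are arbitrary choices for this example, not CASU recommendations) flags low-confidence regions of a tile:

 import numpy as np
 from astropy.io import fits

 conf = fits.getdata('tile_conf.fit', ext=1)   # confidence map, normalized to a median of 100
 tile = fits.getdata('tile.fit', ext=1)        # corresponding tile image

 low_conf = conf < 50                          # pixels below half the median confidence (arbitrary cut)
 masked_tile = np.where(low_conf, np.nan, tile)
 print('fraction of low-confidence pixels:', low_conf.mean())
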
Are images compressed?

Yes, all FITS images produced at CASU (both raw and processed) are compressed using the Rice algorithm. They can be unpacked with funpack. This compression is lossless, and we modify the BSCALE parameter to preserve the original quantisation of the data, which in the case of processed images takes into account the number of raw frames that were combined.
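As an alternative to funpack, astropy can read Rice-compressed images directly; a minimal sketch (with a placeholder filename) is:

 from astropy.io import fits

 with fits.open('image.fit.fz') as hdul:       # placeholder filename
     data = hdul[1].data                       # first image extension, decompressed on the fly
     print(data.shape, data.dtype)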

Pawprint catalogues and images

Are pawprints calibrated?
Yes, pawprints are calibrated both photometrically and astrometrically, but these calibrations are only recorded in the header and not applied to the image or catalogue. Therefore, to derive physically meaningful measurements, the user needs to apply these calibrations to the fluxes and coordinates recorded in the files.
How do I convert fluxes to magnitudes in a pawprint catalogue/image?
Detailed information about this transformation can be found here. These are the steps necessary to derive magnitudes from fluxes (a consolidated code sketch is given after the list):
  1. Several flux measurements are available in the catalogue, so users can choose the one that best fits their needs. The default, general-purpose flux is recorded under Aper_flux_3.
  2. Fluxes need to be divided by the exposure time, found in the header as EXPTIME.
  3. A distortion effect makes pixels at the edge of the focal plane sample a relatively larger area of sky than those close to the centre. This needs to be corrected, as the differences can reach a few percent. The pseudo-code for this transformation is:
     # Pixel coordinates of the sources (catalogue columns)
     x = data['X_coordinate']
     y = data['Y_coordinate']
     # Projected coordinates relative to the tangent point, in radians
     xi = head['TC3_3']*(x-head['TCRPX3'])+head['TC3_5']*(y-head['TCRPX5'])
     xn = head['TC5_3']*(x-head['TCRPX3'])+head['TC5_5']*(y-head['TCRPX5'])
     xi = xi*pi/180.0
     xn = xn*pi/180.0
     # Radial distance from the optical axis and pixel-area (Jacobian) correction
     r = sqrt(xi**2+xn**2)
     distort_corr = 1.0+3.0*(head['TV5_3']*r**2)/head['TV5_1']+5.0*(head['TV5_5']*r**4)/head['TV5_1']
     distort_corr = distort_corr*(1.0+head['TV5_3']*r**2/head['TV5_1']+(head['TV5_5']*r**4)/head['TV5_1'])
     corrected_flux = data['Aper_flux_3']/distort_corr
  4. Every aperture flux has an associated aperture correction; in the case of Aper_flux_3 the keyword is APCOR3.
  5. Finally, the zero point for the chip, MAGZPT, has to be included, leading to the expression:
     mag=head['MAGZPT']-2.5*log10(corrected_flux/head['EXPTIME'])-head['APCOR3']
It is worth noting that these corrections should be performed per chip.
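Putting the five steps together, a minimal sketch for one detector (in Python with astropy and numpy; the filename is a placeholder, and the location of EXPTIME, primary versus extension header, should be checked against your own files) is:

 import numpy as np
 from astropy.io import fits

 ext = 1                                       # chip number: remember the corrections are per chip
 with fits.open('pawprint_cat.fits') as hdul:  # placeholder filename
     head = hdul[ext].header
     data = hdul[ext].data

     # Step 3: pixel-area (radial distortion) correction
     x = data['X_coordinate'].astype(np.float64)
     y = data['Y_coordinate'].astype(np.float64)
     xi = head['TC3_3']*(x - head['TCRPX3']) + head['TC3_5']*(y - head['TCRPX5'])
     xn = head['TC5_3']*(x - head['TCRPX3']) + head['TC5_5']*(y - head['TCRPX5'])
     xi = xi*np.pi/180.0
     xn = xn*np.pi/180.0
     r = np.sqrt(xi**2 + xn**2)
     k3 = head['TV5_3']/head['TV5_1']
     k5 = head['TV5_5']/head['TV5_1']
     distort_corr = (1.0 + 3.0*k3*r**2 + 5.0*k5*r**4)*(1.0 + k3*r**2 + k5*r**4)
     corrected_flux = data['Aper_flux_3']/distort_corr

     # Steps 2, 4 and 5: exposure time, aperture correction and zero point
     exptime = head.get('EXPTIME', hdul[0].header.get('EXPTIME'))
     mag = head['MAGZPT'] - 2.5*np.log10(corrected_flux/exptime) - head['APCOR3']
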
How do I convert pixels to equatorial coordinates in the pawprint catalogues/images?
The astrometric calibration of pawprints is based on the ZPN projection, so to derive (RA, Dec) from the on-chip coordinates the pseudo-code is:
 #
 # 1.- From px to angular coordinates
 #
 x = data['X_coordinate']
 y = data['Y_coordinate']
 xi = head['TC3_3']*(x-head['TCRPX3'])+head['TC3_5']*(y-head['TCRPX5'])
 xn = head['TC5_3']*(x-head['TCRPX3'])+head['TC5_5']*(y-head['TCRPX5'])
 xi = xi*pi/180.0
 xn = xn*pi/180.0
 #
 # 2.- Tangent point, in radians
 #
 tpa = head['TCRVL3']*pi/180.0
 tpd = head['TCRVL5']*pi/180.0
 #
 # 3.- Radial distortion correction (iterative inversion of the ZPN polynomial)
 #
 r = sqrt(xi**2+xn**2)
 d = r / (head['TV5_1'] + head['TV5_3']*(r**2) + head['TV5_5']*(r**4))
 d = r / (head['TV5_1'] + head['TV5_3']*(d**2) + head['TV5_5']*(d**4))
 d = r / (head['TV5_1'] + head['TV5_3']*(d**2) + head['TV5_5']*(d**4))
 xi = xi * tan(d)/r
 xn = xn * tan(d)/r
 #
 # 4.- Angular into sky coordinates
 #
 tand = tan(tpd)
 secd = 1.0/cos(tpd)
 aa = arctan(xi*secd/(1.0-xn*tand))
 alpha = aa + tpa
 # To avoid NaN where xi==0, use delta=xn+tpd there
 delta = where(xi==0, xn+tpd, arctan((xn+tand)*sin(aa)/(xi*secd)))
 x_t = alpha
 y_t = delta
 # Wrapping around angles
 x_t = where(abs(y_t-tpd)>pi/2, x_t+pi, x_t)
 y_t = where(abs(y_t-tpd)>pi/2, -delta, y_t)
 x_t = where(x_t>2*pi, x_t-2*pi, x_t)
 x_t = where(x_t<0, x_t+2*pi, x_t)
 #
 ra = x_t*180.0/pi
 dec = y_t*180.0/pi
It should be noted that the catalogues contain RA and DEC columns, but these are present only for backwards-compatibility reasons (so the files can be read by old FITS table display programs) and should not be used for scientific purposes, as they have low numerical precision.
Particular care must be taken with precision: in the catalogues, pixel coordinates are stored as 32-bit floats. While this is more than enough for pixel units, it is not for angular coordinates. If the variable type of the angular coordinates is inherited from the (X,Y) values read from the catalogues, this loss of precision will lead to appreciable errors in the final (RA,Dec) values.
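In practice this means casting the pixel coordinates to 64-bit floats before computing the angular coordinates, for example:

 import numpy as np

 # Promote the 32-bit catalogue columns to float64 so (RA, Dec) keep full precision
 x = data['X_coordinate'].astype(np.float64)
 y = data['Y_coordinate'].astype(np.float64)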

Tile catalogues and images

Are tiles calibrated?
Yes, tiles are calibrated both photometrically and astrometrically. Before generating a tile, the stacked pawprints are brought to a common zero point (using their own calibrations) and projected onto a common coordinate system, taking care of the radial distortion. After this, new catalogues and images are generated, which in turn are calibrated again with respect to 2MASS. These calibrations are only recorded in the header and not applied to the image or catalogue. Therefore, to derive physically meaningful measurements, the user needs to apply these calibrations to the fluxes and coordinates recorded in the files.
How do I convert fluxes to magnitudes in a tile catalogue/image?
Detailed information about this transformation can be found here. These are the steps necessary to derive magnitudes from fluxes (a short code sketch follows the list):
  1. Several flux measurements are available in the catalogue, so users can choose the one that best fits their needs. The default, general-purpose flux is recorded under Aper_flux_3.
  2. Fluxes need to be divided by the exposure time, found in the header as EXPTIME.
  3. Every aperture flux has an associated aperture correction; in the case of Aper_flux_3 the keyword is APCOR3.
  4. Finally, the zero point, MAGZPT, has to be included, leading to the expression:
    mag=head['MAGZPT']-2.5*log10(flux/head['EXPTIME'])-head['APCOR3']
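Putting these steps together, a minimal sketch (in Python with astropy and numpy; the filename is a placeholder, the catalogue is assumed to have a single table extension, and the location of EXPTIME should be checked against your own files) is:

 import numpy as np
 from astropy.io import fits

 with fits.open('tile_cat.fits') as hdul:      # placeholder filename
     head = hdul[1].header                     # assumed single table extension
     data = hdul[1].data
     exptime = head.get('EXPTIME', hdul[0].header.get('EXPTIME'))
     mag = head['MAGZPT'] - 2.5*np.log10(data['Aper_flux_3']/exptime) - head['APCOR3']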
    
How do I convert pixels to equatorial coordinates in the tile catalogues/images?
The astrometric calibration of tiles is also based on the ZPN projection, so to derive (RA, Dec) from the pixel coordinates the pseudo-code is:
 #
 # 1.- From px to angular coordinates
 #
 x = data['X_coordinate']
 y = data['Y_coordinate']
 xi = head['TC3_3']*(x-head['TCRPX3'])+head['TC3_5']*(y-head['TCRPX5'])
 xn = head['TC5_3']*(x-head['TCRPX3'])+head['TC5_5']*(y-head['TCRPX5'])
 xi = xi*pi/180.0
 xn = xn*pi/180.0 
 #
 #2.- Tangent point, in radians
 #
 tpa = head['TCRVL3']*pi/180.0
 tpd = head['TCRVL5']*pi/180.0
 #
 #3.- Angular into sky coordinates
 #
 tand=tan(tpd)
 secd=1.0/cos(tpd)
 aa = arctan(xi*secd/(1.0-xn*tand))
 alpha = aa + tpa
 # To avoid NaN where xi==0, use delta=xn+tpd there
 delta = where(xi==0, xn+tpd, arctan((xn+tand)*sin(aa)/(xi*secd)))
 x_t = alpha
 y_t = delta
 # Wrapping around angles
 x_t = where(abs(y_t-tpd)>pi/2, x_t+pi, x_t)
 y_t = where(abs(y_t-tpd)>pi/2, -delta, y_t)
 x_t=where(x_t>2*pi, x_t-2*pi, x_t)
 x_t=where(x_t<0, x_t+2*pi, x_t)
 #
 ra=x_t*180.0/pi
 dec=y_t*180.0/pi
It should be noted that the catalogues contain RA and DEC columns, but these are present only for backwards-compatibility reasons (so the files can be read by old FITS table display programs) and should not be used for scientific purposes, as they have low numerical precision.
Particular care must be taken with precision: in the catalogues, pixel coordinates are stored as 32-bit floats. While this is more than enough for pixel units, it is not for angular coordinates. If the variable type of the angular coordinates is inherited from the (X,Y) values read from the catalogues, this loss of precision will lead to appreciable errors in the final (RA,Dec) values.