


Latitude and longitude vertical disparities

Jenny C. A. Read
Institute of Neuroscience, Newcastle University, UK

Graeme P. Phillipson
Neuroinformatics Doctoral Training Centre, Institute of Adaptive and Neural Computation, School of Informatics, Edinburgh University, UK

Andrew Glennerster
School of Psychology and Clinical Language Sciences, Reading University, UK
The literature on vertical disparity is complicated by the fact that several different definitions of the term “vertical disparity” are in common use, often without a clear statement about which is intended or a widespread appreciation of the properties of the different definitions. Here, we examine two definitions of retinal vertical disparity: elevation-latitude and elevation-longitude disparities. Near the fixation point, these definitions become equivalent, but in general, they have quite different dependences on object distance and binocular eye posture, which have not previously been spelt out. We present analytical approximations for each type of vertical disparity, valid for more general conditions than previous derivations in the literature: we do not restrict ourselves to objects near the fixation point or near the plane of regard, and we allow for non-zero torsion, cyclovergence, and vertical misalignments of the eyes. We use these expressions to derive estimates of the latitude and longitude vertical disparities expected at each point in the visual field, averaged over all natural viewing. Finally, we present analytical expressions showing how binocular eye position (gaze direction, convergence, torsion, cyclovergence, and vertical misalignment) can be derived from the vertical disparity field and its derivatives at the fovea.

Keywords: binocular vision, stereopsis, depth perception, induced effect, vertical disparity, computational vision

Citation: Read, J. C. A., Phillipson, G. P., & Glennerster, A. (2009). Latitude and longitude vertical disparities. Journal of Vision, 0(0):1, 1–37, http://journalofvision.org/0/0/1/, doi:10.1167/0.0.1.


Introduction

Because the two eyes are set apart in the head, the images of an object fall at different positions in the two eyes. The resulting binocular disparity is, in general, a two-dimensional vector quantity. Since Helmholtz (1925), psychophysicists have divided this vector into two components: horizontal and vertical disparities. There is now an extensive literature discussing how vertical disparities influence perception and how this may be achieved within the brain, for example Backus, Banks, van Ee, and Crowell (1999), Banks, Backus, and Banks (2002), Berends, van Ee, and Erkelens (2002), Brenner, Smeets, and Landy (2001), Cumming (2002), Duke and Howard (2005), Durand, Zhu, Celebrini, and Trotter (2002), Friedman, Kaye, and Richards (1978), Frisby et al. (1999), Garding, Porrill, Mayhew, and Frisby (1995), Gillam and Lawergren (1983), Kaneko and Howard (1997b), Longuet-Higgins (1982), Matthews, Meng, Xu, and Qian (2003), Ogle (1952), Porrill, Mayhew, and Frisby (1990), Read and Cumming (2006), Rogers and Bradshaw (1993, 1995), Westheimer (1978), and Williams (1970). Yet the literature is complicated by the fact that the term “vertical disparity” is used in several different ways by different authors. The first and most fundamental distinction is whether disparity is defined in a head-centric or retino-centric coordinate system. The second issue concerns how disparity, as a two-dimensional vector quantity, is divided up into “vertical” and “horizontal” components.

In a head-centric system, vertical and horizontal disparities are defined in the optic array, that is, the set of light rays passing through the nodal points of each eye. One chooses an angular coordinate system to describe the line of sight from each eye to a point in space; vertical disparity is then defined as the difference in the elevation coordinates. If Helmholtz coordinates are used (Figure 1A; Howard & Rogers, 2002), then for any point in space, the elevation is the same from both eyes. Thus, head-centric Helmholtz elevation disparity is always zero for real objects (Erkelens & van Ee, 1998). This definition is common in the physiology literature, where “vertical disparity” refers to a vertical offset between left and right images on a frontoparallel screen (Cumming, 2002; Durand, Celebrini, & Trotter, 2007; Durand et al., 2002; Gonzalez, Justo, Bermudez, & Perez, 2003; Stevenson & Schor, 1997). In this usage, a “vertical” disparity is always a “non-epipolar” disparity: that is, a two-dimensional disparity that cannot be produced by any real object, given the current eye position. A different definition uses Fick coordinates to describe the angle of the line of sight

Journal of Vision (2009) 0(0):1, 1–37 http://journalofvision.org/0/0/1/ 1

doi: 10.1167/0.0.1. Received August 29, 2009; published 0 0, 2009. ISSN 1534-7362 © ARVO


(Figure 1B). With this definition, vertical disparities occur in natural viewing (e.g., Backus & Banks, 1999; Backus et al., 1999; Bishop, 1989; Hibbard, 2007; Rogers & Bradshaw, 1993), and so non-zero vertical disparities are not necessarily non-epipolar.

Head-centric disparity is independent of eye position, and thus mathematically tractable. However, all visual information arrives on the retinas, and it seems clear that the brain’s initial encoding of disparity is retinotopic (Gur & Snodderly, 1997; Read & Cumming, 2003). Accordingly, the appropriate language for describing the neuronal encoding of disparity must be retinotopic (Garding et al., 1995). The retinal disparity of an object is the two-dimensional vector linking its two images in the two retinas. This depends both on the scene viewed and also on the binocular posture of the eyes viewing it. Thus, two-dimensional retinal disparity can be used to extract eye posture as well as scene structure (Garding et al., 1995; Longuet-Higgins, 1981). It is more complicated to handle than head-centric disparity, but it contains more information.

For retinal disparity, as well, two definitions of vertical disparity are in common use within the literature, stemming from the difficulty of defining a “vertical” direction on a spherical eyeball. One possibility is to define “vertical” as the projection of vertical lines in space onto the retina, so that a line of constant “vertical” position on the retina is the projection of a horizontal line in space. This produces lines of elevation longitude on the

spherical eyeball (Figures 2B–2D, 3A, and 3C). We shall refer to the corresponding vertical coordinate as elevation longitude, η; it is called inclination by Bishop, Kozak, and Vakkur (1962) and is analogous to the Helmholtz coordinates of Figure 1A. Equivalently, one can project the hemispherical retina onto a plane and take the vertical Cartesian coordinate on the plane, y (Figure 2A). This is usual in the computer vision literature. Since there is a simple one-to-one mapping between these two coordinates, y = tan η, we shall not need to distinguish between them in this paper. The natural definition of vertical disparity within this coordinate system is then the difference between the elevation longitude of the images in the two eyes. Papers that have defined vertical disparity to be differences in either η or y include Garding et al. (1995), Hartley and Zisserman (2000), Longuet-Higgins (1982), Mayhew (1982), Mayhew and Longuet-Higgins (1982), and Read and Cumming (2006).

An alternative approach is to define the vertical coordinate as lines of latitude on the sphere (Figures 2C–2E, 3B, and 3D). This is analogous to the Fick coordinates of Figure 1B. We shall refer to the corresponding vertical coordinate as elevation latitude, κ (see Table A2 in Appendix A for a complete list of symbols used in this paper). Studies defining vertical disparity as the difference in elevation latitude κ include Barlow, Blakemore, and Pettigrew (1967), Bishop et al. (1962), and Howard and Rogers (2002).
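These coordinate definitions can be made concrete in a few lines of code. The paper's own scripts are Matlab; the sketch below is Python, and it assumes one standard realization of the four angular coordinates (tan α = X/Z, tan η = Y/Z, sin β = X/R, sin κ = Y/R, where R is the distance to the point), ignoring the retinal sign inversion:

```python
import math

def retinal_coords(X, Y, Z):
    """Angular coordinates of the direction (X, Y, Z), with Z along the
    optic axis. Signs follow the direction in space; the paper's retinal
    convention additionally inverts them (retinal images are inverted)."""
    R = math.sqrt(X*X + Y*Y + Z*Z)
    alpha = math.atan2(X, Z)    # azimuth longitude:   tan(alpha) = X/Z
    eta   = math.atan2(Y, Z)    # elevation longitude: tan(eta)   = Y/Z
    beta  = math.asin(X / R)    # azimuth latitude:    sin(beta)  = X/R
    kappa = math.asin(Y / R)    # elevation latitude:  sin(kappa) = Y/R
    return alpha, eta, beta, kappa

# Planar Cartesian coordinates are then x = tan(alpha), y = tan(eta), and
# the identity S = R cos(alpha) cos(kappa) = R cos(beta) cos(eta) holds,
# where S is the component of the distance along the optic axis:
alpha, eta, beta, kappa = retinal_coords(0.3, 0.4, 1.0)
R = math.sqrt(0.3**2 + 0.4**2 + 1.0**2)
assert abs(R * math.cos(alpha) * math.cos(kappa) - 1.0) < 1e-12
assert abs(R * math.cos(beta) * math.cos(eta) - 1.0) < 1e-12
```

The assertions check the distance relation that the paper states later as Equation 3.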

Figure 1. Two coordinate systems for describing head-centric or optic-array disparity. Red lines are drawn from the two nodal points L, R to an object P. (A) Helmholtz coordinates. Here, we first rotate up through the elevation angle to get us into the plane LRP, and the azimuthal coordinate rotates the lines within this plane until they point to P. The elevation is thus the same for both eyes; no physical object can have a vertical disparity in optic-array Helmholtz coordinates. (B) Fick coordinates. Here, the azimuthal rotation is applied within the horizontal plane, and the elevation then lifts each red line up to point at P. Thus, elevation is in general different for the two lines, meaning that object P has a vertical disparity in optic-array Fick coordinates.
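The caption's claim, that Helmholtz elevation is eye-independent while Fick elevation is not, can be checked numerically. A minimal Python sketch, with a hypothetical object position and interocular distance (illustrative values, not from the paper):

```python
import math

def helmholtz_elevation(eye_x, P):
    # The plane containing P and the interocular (X) axis is the same for
    # every nodal point on that axis, so the elevation that rotates into
    # that plane does not depend on eye_x at all.
    X, Y, Z = P
    return math.atan2(Y, Z)

def fick_elevation(eye_x, P):
    # Fick: the azimuthal rotation is applied first, in the horizontal
    # plane; elevation then depends on this eye's horizontal distance to P.
    X, Y, Z = P
    return math.atan2(Y, math.hypot(X - eye_x, Z))

P = (-6.0, 7.0, 10.0)       # arbitrary object position
I = 6.5                     # hypothetical interocular distance, same units
hL, hR = helmholtz_elevation(+I/2, P), helmholtz_elevation(-I/2, P)
fL, fR = fick_elevation(+I/2, P), fick_elevation(-I/2, P)
assert hL == hR             # Helmholtz elevation disparity is always zero
assert abs(fL - fR) > 1e-3  # Fick elevation disparity is not
```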



Figure 3. Two definitions of vertical retinal disparity. (A, B) A point in space, P, projects to different positions IL and IR on the two retinas. (C, D) The two retinas are shown superimposed, with the two half-images of P shown in red and blue for the left and right retinas, respectively. In (A, C), the retinal coordinate system is azimuth longitude/elevation longitude. In (B, D), it is azimuth longitude/elevation latitude. Point P and its images IL and IR are identical between (A, C) and (B, D); the only difference between the left and right halves of the figure is the coordinate system drawn on the retinas. The eyes are converged 30°, fixating a point on the midline: X = 0, Y = 0, Z = 11. The plane of gaze, the XZ plane, is shown in gray. Lines of latitude and longitude are drawn at 15° intervals. Point P is at X = −6, Y = 7, Z = 10. In elevation-longitude coordinates, the images of P fall at ηL = −30°, ηR = −38°, so the vertical disparity ηΔ is −8°. In elevation latitude, κL = −27°, κR = −34°, and the vertical disparity κΔ = −6°. This figure was generated by Fig_VDispDefinition.m in the Supplementary material.
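The numbers in this caption can be reproduced with a short calculation. The sketch below is Python rather than the paper's Matlab, and it assumes the X axis points to the observer's left (so the left eye sits at +I/2), zero torsion, and gaze in the XZ plane; the sign flip models the retinal inversion (positive η, κ = top of retina):

```python
import math

def eye_angles(eye_x, fix, P):
    """Elevation longitude (eta) and elevation latitude (kappa), in
    degrees, of P's image in an eye whose nodal point is at (eye_x, 0, 0)
    and which fixates the point fix."""
    gx, gz = fix[0] - eye_x, fix[2]                # gaze direction in XZ
    vx, vy, vz = P[0] - eye_x, P[1], P[2]
    z = (vx*gx + vz*gz) / math.hypot(gx, gz)       # optic-axis component
    eta = -math.degrees(math.atan2(vy, z))
    # A horizontal eye rotation changes neither vy nor |v|, so elevation
    # latitude needs no rotation at all, as the text states.
    kappa = -math.degrees(math.asin(vy / math.sqrt(vx*vx + vy*vy + vz*vz)))
    return eta, kappa

fix = (0.0, 0.0, 11.0)
half_I = 11.0 * math.tan(math.radians(15.0))   # 30 deg convergence at fix
P = (-6.0, 7.0, 10.0)
etaL, kappaL = eye_angles(+half_I, fix, P)     # approx -30.3, -27.5
etaR, kappaR = eye_angles(-half_I, fix, P)     # approx -38.3, -33.8
assert round(etaR - etaL) == -8                # elevation-longitude disparity
assert round(kappaR - kappaL) == -6            # elevation-latitude disparity
```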

Figure 2. Different retinal coordinate systems. (A) Cartesian planar. Here, x and y refer to position on the virtual plane behind the retina; the “shadow” shows where points on the virtual plane correspond to on the retina, i.e., where a line drawn from the virtual plane to the center of the eyeball intersects the eyeball. (B) Azimuth longitude/elevation longitude. (C) Azimuth longitude/elevation latitude. (D) Azimuth latitude/elevation longitude. (E) Azimuth latitude/elevation latitude. For the angular coordinate systems (B–E), lines of latitude/longitude are drawn at 15° intervals between ±90°. For the Cartesian system (A), the lines of constant x and y are at intervals of 0.27 = tan 15°. Lines of constant x are also lines of constant α, but lines that are equally spaced in x are not equally spaced in α. In this paper, we use the sign convention that positive x, α, β represent the left half of the retina, and positive y, η, κ represent the top. This figure was generated in Matlab by the program Fig_DiffRetCoords.m in the Supplementary material.



Both definitions of vertical disparity are perfectly valid and in common use. The trouble is that statements that are true of one are not true of the other. For example, elevation-longitude vertical disparity is always zero when the eyes are in primary position, but elevation-latitude vertical disparity is not. Horizontal rotations of the eyes away from primary position change the elevation longitude to which an object projects but leave its elevation latitude unaltered. As a consequence, elevation-latitude vertical disparity is independent of convergence, whereas elevation-longitude vertical disparity increases as the eyes converge. Elevation-latitude vertical disparity is always zero for objects on the mid-sagittal plane, but elevation-longitude vertical disparity is not. Yet despite these crucial differences, papers on vertical disparity often do not spell out which definition they are employing. From personal experience, we believe that the differences between the definitions are not widely appreciated, perhaps because the two definitions become equivalent at the fovea. A key aim of this paper is to lay out the similarities and differences between the two definitions in a single convenient reference.

A second aim is to obtain analytical expressions for both types of disparity that are as general as possible. Most mathematical treatments in the psychology literature make simplifying assumptions, e.g., that the eyes have no torsion, that the object is in the plane of gaze, and that the eyes are correctly fixating on a single point in space. Conversely, the computer vision literature allows for completely general camera positions but does not provide explicit expressions for vertical disparity. Here, we derive unusually general explicit expressions for both types of vertical disparity. We still use the small-baseline approximation required by previous treatments, and we also assume small vergence angles. However, we are able to produce approximate expressions that allow for small amounts of cycloversion, cyclovergence, and vertical misalignment between the eyes.

These general expressions allow us to derive simple expressions for the expected pattern of vertical disparity across the visual field, averaged across all scenes and eye positions. A few previous studies have attempted to estimate the two-dimensional distribution of disparities encountered in normal viewing (Hibbard, 2007; Liu, Bovik, & Cormack, 2008; Read & Cumming, 2004), but these have all averaged results across the entire visual field. This study is the first to examine the expected vertical disparity as a function of position in the visual field, and we hope it will be useful to physiologists studying the neuronal encoding of vertical disparity.

In the vicinity of the fovea, the distinction between latitude and longitude becomes immaterial. The two definitions of vertical disparity are therefore equivalent. We show how eye position can be read off very straightforwardly from this unified vertical disparity. We derive simple analytic expressions giving estimates of each eye position parameter in terms of the vertical

disparity at the fovea and its rate of change there. These are almost all implicit in the existing literature (Backus & Banks, 1999; Backus et al., 1999; Banks et al., 2002; Banks, Hooge, & Backus, 2001; Kaneko & Howard, 1996, 1997b; Koenderink & van Doorn, 1976; Rogers & Bradshaw, 1993, 1995; Rogers & Cagenello, 1989) but are brought together here within a single clear set of naming conventions and definitions so that the similarities and differences between definitions can be readily appreciated.


Methods

All simulations were carried out in Matlab 7.8.0 R2009a (www.mathworks.com), using the code made available in the Supplementary material. Most of the figures in this paper can be generated by this code. To produce a figure, first download all the Matlab (.m) files in the Supplementary material to a single directory. In Matlab, move to this directory and type the name of the file specified in the figure legend.


Results

General expressions for elevation-longitude and elevation-latitude vertical disparities

Figure 3 shows the two definitions of retinal vertical disparity that we consider in this paper. A point P in space is imaged to the points IL and IR in the left and right retinas, respectively (Figures 3A and 3B). Figures 3C and 3D show the left and right retinas aligned and superimposed, so that the positions of the images IL and IR can be more easily compared. The left (A, C) and right-hand (B, D) panels of Figure 3 are identical apart from the vertical coordinate system drawn on the retina: Figures 3A and 3C show elevation longitude η, and Figures 3B and 3D show elevation latitude κ. The vertical disparity of point P is the difference between the vertical coordinates of its two half-images. For this example, the elevation-longitude vertical disparity of P is ηΔ = −8°, while the elevation-latitude disparity is κΔ = −6°.

In the Appendices, we derive approximate expressions for both types of retinal vertical disparity. These are given in Table C2. In general, vertical disparity depends on the position of object P (both its visual direction and its distance from the observer) and on the binocular posture of the eyes. Each eye has three degrees of freedom, which we express in Helmholtz coordinates as the gaze azimuth H, elevation V, and torsion T (Figure 4; Appendix A). Thus, in total the two eyes have potentially 6 degrees of freedom. It is convenient to represent these by the mean



Figure 4. Helmholtz coordinates for eye position, (A) shown as a gimbal, after Howard (2002, Figure 9.10), and (B) shown for the cyclopean eye. The sagittal YZ plane is shown in blue, the horizontal XZ plane in pink, and the gaze plane in yellow. There are two ways of interpreting Helmholtz coordinates: (1) Starting from primary position, the eye first rotates through an angle T about an axis through the nodal point parallel to Z, then through H about an axis parallel to Y, and finally through V about an axis parallel to X. Equivalently, (2) starting from primary position, the eye first rotates downward through V, bringing the optic axis into the desired gaze plane (shown in yellow), then rotates through H about an axis orthogonal to the gaze plane, and finally through T about the optic axis. Panel B was generated by the program Fig_HelmholtzEyeCoords.m in the Supplementary material.



and the difference between the left and right eyes. Thus, we shall parametrize eye position by the three coordinates of an imaginary cyclopean eye (Figure 5), Hc, Vc, and Tc, and the three vergence angles, HΔ, VΔ, and TΔ, where Hc = (HR + HL)/2, HΔ = HR − HL, and so on (Tables A1 and A2). When we refer below to convergence, we mean the horizontal vergence angle HΔ. We shall refer to VΔ as vertical vergence error or vertical vergence misalignment. We call VΔ a misalignment because, in order for the two eyes’ optic axes to intersect at a single fixation point, VΔ

must be zero, and this is empirically observed to be usually the case.

We shall call TΔ the cyclovergence. Non-zero values of TΔ mean that the eyes have rotated in opposite directions about their optic axes. This occurs when the eyes look up or down: if we specify the eyes’ position in Helmholtz coordinates, moving each eye to its final position by rotating through its azimuth H about a vertical axis and then through the elevation V about the interocular axis, we find that, in order to match the observed physical position of each eye, we first have to apply a rotation T about the line of sight. If V > 0, so the eyes are looking down, this initial torsional rotation will be such as to move the top of each eyeball nearer the nose, i.e., incyclovergence. Note that the sign of the cyclovergence depends on the coordinate system employed; if eye position is expressed using rotation

vectors or quaternions, converged eyes excycloverge when looking downward (Schreiber, Crawford, Fetter, & Tweed, 2001).

We shall refer to Tc as cycloversion. Non-zero values of Tc mean that the two eyes are both rotated in the same direction. This happens, for example, when the head tilts to the left; both eyes counter-rotate slightly in their sockets so as to reduce their movement in space, i.e., anti-clockwise as viewed by someone facing the observer (Carpenter, 1988).

As noted, VΔ is usually zero. It is also observed that, for a given elevation, gaze azimuth, and convergence, the torsion of each eye takes on a unique value, which is small and proportional to elevation (Tweed, 1997c). Thus, out of the 6 degrees of freedom, it is a reasonable approximation to consider that the visual system uses only 3: Hc, Vc, and HΔ, with VΔ = 0, and cycloversion Tc and cyclovergence TΔ given by functions of Hc, Vc, and HΔ. Most treatments of physiological vertical disparity have assumed that there is no vertical vergence misalignment or torsion: VΔ = TΔ = Tc = 0. We too shall use this assumption in subsequent sections, but we start by deriving the most general expressions that we can. The expressions given in Table C2 assume that all three vergence angles are small but not necessarily zero. This enables the reader to substitute in realistic values for the cyclovergence TΔ at

Figure 5. Different ways of measuring the distance to the object P. The two physical eyes are shown in gold; the cyclopean eye is in between them, in blue. F is the fixation point; the brown lines mark the optic axes, and the blue line marks the direction of the cyclopean gaze. Point P is marked with a red dot. It is at a distance R from the origin. Its perpendicular projection on the cyclopean gaze axis is also drawn in red (with a corner indicating the right angle); the distance of this projection from the origin is S, marked with a thick red line. This figure was generated by the program Fig_DistancesRS.m in the Supplementary material.



different elevations (Minken & Van Gisbergen, 1994; Somani, DeSouza, Tweed, & Vilis, 1998; Van Rijn & Van den Berg, 1993). The expressions in Table C2 also assume that the interocular distance is small compared to the distance to the viewed object. If the eyes are fixating near object P, then the small-vergence approximation already implies this small-baseline approximation, since if P is far compared to the interocular separation, then both eyes need to take up nearly the same posture in order to view it. While Porrill et al. (1990) extended the results of Mayhew and Longuet-Higgins (1982) to include cyclovergence, we believe that this paper is the first to present explicit expressions for two-dimensional retinal disparity that are valid all over the visual field and which allow for non-zero vertical vergence misalignment and cycloversion as well as cyclovergence.

Under these assumptions, the vertical disparity expressed as the difference in elevation-longitude coordinates is

ηΔ ≈ cos²ηc (sin Tc cos Hc − sin Hc tan ηc) (I/S)
     + (tan αc sin ηc cos ηc cos Tc − sin Tc) HΔ
     − (cos Hc cos Tc + tan αc sin ηc cos ηc cos Hc sin Tc + tan αc cos²ηc sin Hc) VΔ
     − tan αc cos²ηc TΔ,    (1)

assuming that I/S, HΔ, TΔ, and VΔ are all small. Similarly, the vertical disparity expressed as the difference in elevation-latitude coordinates is

κΔ ≈ (cos Tc cos Hc sin αc sin κc − sin Hc cos αc sin κc + sin Tc cos Hc cos κc) (I/R)
     − sin Tc cos αc HΔ
     − (cos Hc cos Tc cos αc + sin αc sin Hc) VΔ
     − sin αc TΔ,    (2)

assuming that I/R, HΔ, TΔ, and VΔ are all small.

The coordinates (αc, ηc) represent the visual direction of the viewed object P in the azimuth-longitude/elevation-longitude coordinate system shown in Figures 3A and 3C, while (αc, κc) represent visual direction in the azimuth-longitude/elevation-latitude system of Figures 3B and 3D. Either (αc, ηc) or (αc, κc) specifies the position of P’s image on an imaginary cyclopean retina midway between the two real eyes, with gaze azimuth Hc, elevation Vc, and torsion Tc.

S and R both represent the distance to the viewed object P. R is the distance of P from the cyclopean point midway between the eyes. S is the length of the component along the direction of cyclopean gaze (Figure 5). These are simply related by the following equation:

S = R cos αc cos κc = R cos βc cos ηc.    (3)
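The leading interocular-separation term of Equation 2 can be checked against exact two-eye projection. A minimal Python sketch (the paper's own scripts are Matlab), assuming primary-position parallel gaze with zero torsion (Hc = Tc = 0, VΔ = TΔ = 0), X measured positive to the observer's left, and a hypothetical interocular distance:

```python
import math

def kappa_retinal(eye_x, P):
    # Elevation latitude (radians) of P's image for an eye at (eye_x, 0, 0)
    # with zero torsion. Horizontal eye rotation cannot change it, so the
    # gaze azimuth need not be specified. Sign flipped: retinal inversion.
    X, Y, Z = P
    return -math.asin(Y / math.sqrt((X - eye_x)**2 + Y**2 + Z**2))

I = 0.065                    # interocular distance in metres (assumed)
P = (0.3, 0.4, 1.0)          # X positive to the observer's left (assumed)
exact = kappa_retinal(-I/2, P) - kappa_retinal(+I/2, P)  # right minus left

# With Hc = Tc = 0 and V_delta = T_delta = 0, Equation 2 keeps only the
# I/R term: kappa_delta ~ sin(alpha_c) sin(kappa_c) I/R.
R = math.sqrt(P[0]**2 + P[1]**2 + P[2]**2)
alpha_c = -math.atan2(P[0], P[2])       # cyclopean retinal coordinates
kappa_c = -math.asin(P[1] / R)
approx = math.sin(alpha_c) * math.sin(kappa_c) * I / R

assert abs(exact - approx) < 1e-4       # agree to first order in I/R
```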

As noted, Equations 1 and 2 assume that I/S, I/R, HΔ, VΔ, and TΔ are all small, and they are correct to first order in these terms. However, they make no assumptions about αc, ηc, κc, Hc, Vc, and Tc. They are thus valid over the entire retina, not just near the fovea, and for all cyclopean eye positions. Under this small-vergence approximation, the total vertical disparity is the sum of four terms, each proportional to one of four possible sources of disparity: (i) the interocular separation as a fraction of object distance, I/R or I/S, (ii) the horizontal vergence HΔ, (iii) the vertical vergence error VΔ, and (iv) the cyclovergence TΔ. Each source of disparity is multiplied by a term that depends on one or both of the components of visual direction (αc and ηc or κc), the gaze azimuth Hc, and the overall torsion Tc. For example, cyclovergence TΔ is multiplied by tan αc or sin αc, and so makes no contribution to vertical disparity on the vertical retinal meridian. None of the four disparity terms explicitly depends on elevation Vc, although elevation would affect the disparity indirectly, because it determines the torsion according to Donders’ law (Somani et al., 1998; Tweed, 1997a).

The contribution of the three vergence sources of disparity is independent of object distance; object distance only enters through the interocular separation term (i). This may surprise some readers used to treatments that are only valid near fixation. In such treatments, it is usual to assume that the eyes are fixating the object P, so the vergence HΔ is itself a function of object distance R. We shall make this assumption ourselves in the section “Average vertical disparity expected at different positions on the retina.” However, the present section does not assume that the eyes are fixating the object P, so the three vergence angles HΔ, VΔ, and TΔ are completely independent of the object’s distance R. Thus, object distance affects disparity only through the explicit dependence on R (or S) in the first term (i). The contribution of the three vergence terms (ii–iv) is independent of object distance, provided that the visual direction and eye posture are held constant. That is, if we move the object away but also increase its distance from the gaze axis such that the object continues to fall at the same point on the cyclopean retina, then the contribution of the three vergence terms to the disparity at that point is unchanged. (If the vergence changed to follow the object as it moved away, then of course this contribution would change.)

Of course, expressing disparity as a sum of independent contributions from four sources is valid only to first order. A higher-order analysis would include interactions between the different types of vergence, between vergence and interocular separation, and so on. Nevertheless, we and others (e.g., Garding et al., 1995) have found that first-order terms are surprisingly accurate, partly because several second-order terms vanish. We believe that the present analysis of disparity as arising from 4 independent sources (interocular separation plus 3 vergence angles) is both new and, we hope, helpful. In the next section, we show how we can use this new analysis to derive

Journal of Vision (2009) 0(0):1, 1–37 Read, Phillipson, & Glennerster 7


expressions for the average vertical disparity experienced in different parts of the visual field.
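The distance-independence of the vergence contributions, point (i) above, can be illustrated numerically with the torsion-free special case given later as Equation 15 (this sketch is our own, not from the paper's Supplementary code). Doubling the object's distance at a fixed cyclopean direction and fixed eye posture halves only the I/S contribution; the vergence contribution is untouched.

```python
import math

def eta_disparity(eta_c, alpha_c, H_c, H_delta, I_over_S):
    """First-order elevation-longitude vertical disparity with zero torsion
    and vertical vergence (the paper's Equation 15):
    eta_D ~ 0.5*sin(2*eta_c) * (-(I/S)*sin(H_c) + H_delta*tan(alpha_c))."""
    distance_term = -I_over_S * math.sin(H_c)
    vergence_term = H_delta * math.tan(alpha_c)
    return 0.5 * math.sin(2 * eta_c) * (distance_term + vergence_term)

# Same cyclopean direction and eye posture, object twice as far away:
eta_c, alpha_c = math.radians(10), math.radians(15)
H_c, H_delta = math.radians(-20), math.radians(3)
near = eta_disparity(eta_c, alpha_c, H_c, H_delta, I_over_S=0.05)
far = eta_disparity(eta_c, alpha_c, H_c, H_delta, I_over_S=0.025)

# Only the interocular-separation contribution changed; the vergence
# contribution 0.5*sin(2*eta_c)*H_delta*tan(alpha_c) is identical, so the
# residuals after removing it scale exactly with I/S.
vergence_part = 0.5 * math.sin(2 * eta_c) * H_delta * math.tan(alpha_c)
assert abs((near - vergence_part) - 2 * (far - vergence_part)) < 1e-12
```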


Average vertical disparity expected at different positions on the retina

There have been a few previous attempts to derive the distribution of vertical disparity encountered during normal viewing (Hibbard, 2007; Liu et al., 2008; Read & Cumming, 2004). However, these studies averaged results across all visual directions. For example, Read and Cumming calculated the distribution of physically possible disparities for all objects whose images fall within 15° of the fovea in both retinas. Critically, they averaged this distribution not only over all possible objects but over the whole 15° parafoveal area. The spread of their distribution thus reflects both variation in the vertical disparities that are possible at different positions on the retina, and variation that is possible at a single retinal location. To make the distinction clear with a toy example, suppose that all eye position parameters are frozen (H_c = T_c = T_Δ = V_Δ = 0), except for vergence, H_Δ, which varies between 0° and 40°, so that elevation-longitude disparity is η_Δ ≈ 0.5 H_Δ tan(α_c) sin(2η_c). Under these circumstances, 10° to the left and 10° up from the fovea, vertical disparity would always be positive, running from 0° to +0.9°. On the opposite side of the retina, 10° right and 10° up, vertical disparity would always be negative, running from 0° to −0.9°. Along the retinal meridians, the vertical disparity would always be zero. Read and Cumming's analysis would lump all these together to report that the range of possible vertical disparity is from −0.9° to +0.9°. In other words, the results of Read and Cumming (2004), like those of Hibbard (2007) and Liu et al. (2008), confound variation in the vertical disparity that is possible at a given retinal location with variation across different locations.

Similarly, physiological studies that have investigated tuning to vertical disparity have not reported where in the visual field individual neurons were, making it impossible to relate the tuning of these neurons to ecological statistics. For example, one would expect the range of vertical disparity tuning to be narrower for neurons located directly above or below the fovea than for neurons to the "northwest." The published physiological literature does not make it possible to examine this prediction.

Deriving a full probability density function for ecological disparity requires making assumptions about the eye postures adopted during natural viewing, the scenes viewed, and the fixations chosen within each scene. Although there have been major steps toward realistic estimates of these quantities (Hibbard, 2007; Liu et al., 2008), there is as yet no definitive study, and the issue is beyond the scope of the present paper. However, the

expressions derived in the previous section do enable us to estimate the mean vertical disparity as a function of position in the visual field. The previous studies (Hibbard, 2007; Liu et al., 2008; Read & Cumming, 2004) say only that the mean vertical disparity, averaged across the visual field, is zero; they do not discuss how the mean varies as a function of position. In this section, making only some fairly limited and plausible assumptions about eye position, we shall obtain expressions showing how mean vertical disparity varies as a function of position in the visual field.

The last term in both Equations 1 and 2 depends on eye position only through T_Δ, the cyclovergence. According to the extended binocular versions of Listing's law (Minken & Van Gisbergen, 1994; Mok, Ro, Cadera, Crawford, & Vilis, 1992; Somani et al., 1998; Tweed, 1997c), this term depends on elevation and more weakly on convergence. Relative to zero Helmholtz torsion, the eyes twist inward (i.e., the top of each eye moves toward the nose) on looking down from primary position and outward on looking up, and this tendency is stronger when the eyes are converged: T_Δ = V_c(ε + μH_Δ), where ε and μ are constants less than 1 (Somani et al., 1998). Humans tend to avoid large elevations: if we need to look at something high in our visual field, we tilt our head upward, thus enabling us to view it in something close to primary position. Thus, cyclovergence remains small in natural viewing, and since both positive and negative values occur, the average is likely to be smaller still. Thus, we can reasonably approximate the mean cyclovergence as zero: ⟨T_Δ⟩ = 0. This means that the last term in the expressions for both kinds of vertical disparity vanishes.

The next-to-last term is proportional to vertical vergence error, V_Δ. We assume that this is on average zero and independent of gaze azimuth or torsion, so that terms like V_Δ cos H_c cos T_c all average to zero. This assumption may not be precisely correct, but vertical vergence errors are likely to be so small in any case that neglecting this term is not likely to produce significant errors in our estimate of mean vertical disparity.

The next term is proportional to the convergence angle H_Δ. This is certainly not zero on average. However, part of its contribution depends on sin(T_c), the sine of the cycloversion. Empirically, this is approximately T_c ≈ −V_c H_c/2 (Somani et al., 1998; Tweed, 1997b). So, although cycloversion can be large at eccentric gaze angles, provided we assume that gaze is symmetrically distributed about primary position, then ⟨V_c H_c⟩ = 0 and so the mean torsion is zero. Again, in the absence of a particular asymmetry, e.g., that people are more likely to look up and left while converging and more likely to look up and right while fixating infinity, we can reasonably assume that ⟨H_Δ sin T_c⟩ = 0. Equation 1 also contains a term in H_Δ cos T_c. This does not average to zero, but under the assumption that convergence and cycloversion are independent, and that cycloversion is always small, the mean


value of this term will be approximately ⟨H_Δ⟩. Thus, under the above assumptions, Equations 1 and 2 become

\[
\langle \eta_\Delta \rangle \approx \cos^2\!\eta_c \left\langle \left( \sin T_c \cos H_c - \sin H_c \tan\eta_c \right) \frac{I}{S} \right\rangle + \left( \tan\alpha_c \sin\eta_c \cos\eta_c \right) \langle H_\Delta \rangle \tag{4}
\]

\[
\langle \kappa_\Delta \rangle \approx \left\langle \left( \cos T_c \cos H_c \sin\alpha_c \sin\kappa_c - \sin H_c \cos\alpha_c \sin\kappa_c + \sin T_c \cos H_c \cos\kappa_c \right) \frac{I}{R} \right\rangle. \tag{5}
\]

Looking at Equation 4, we see that the average elevation-longitude disparity encountered in natural viewing contains terms in the reciprocal of S, the distance to the surface: ⟨sin(T_c)cos(H_c)/S⟩ and ⟨sin(H_c)/S⟩. We now make the reasonable assumption that, averaged across all visual experience, gaze azimuth is independent of distance to the surface. This assumes that there are no azimuthal asymmetries such that nearer surfaces are systematically more likely to be encountered when one looks left, for example. Under this assumption, the term ⟨sin(H_c)/S⟩ averages to zero. Similarly, we assume that ⟨sin(T_c)cos(H_c)/S⟩ = 0. Thus, the entire term in I/S averages to zero. The vast array of different object distances encountered in normal viewing makes no contribution to the mean elevation-longitude disparity at a particular place on the retina. The mean elevation-longitude disparity encountered at position (α_c, η_c) is simply

\[
\langle \eta_\Delta \rangle \approx \langle H_\Delta \rangle \tan\alpha_c \sin\eta_c \cos\eta_c. \tag{6}
\]

We have made no assumptions about the mean convergence, ⟨H_Δ⟩, but simply left it as an unknown. It does not affect the pattern of expected vertical disparity across the retina but merely scales the size of vertical disparities. Convergence is the only eye-position parameter that we cannot reasonably assume is zero on average, and thus it is the only one contributing to mean vertical disparity measured in elevation longitude.

For elevation-latitude disparity, the dependence on object distance S does not immediately average out. Again, we assume that the terms ⟨sin(H_c)/R⟩ and ⟨sin(T_c)cos(H_c)/R⟩ are zero, but this still leaves us with

\[
\langle \kappa_\Delta \rangle \approx \left\langle \frac{I}{R} \cos H_c \cos T_c \right\rangle \sin\alpha_c \sin\kappa_c. \tag{7}
\]

To progress, we need to make some additional assumptions about scene structure. We do this by introducing the fractional distance from fixation, δ:

\[
R = R_0(1+\delta), \tag{8}
\]

where R_0 is the radial distance from the origin to the fixation point (or to the point where the optic axes most nearly intersect, if there is a small vertical vergence error), i.e., the distance OF in Figure 5. This is

\[
R_0 \approx \frac{I\cos H_c}{H_\Delta}. \tag{9}
\]

Thus,

\[
\frac{I}{R}\cos H_c \approx H_\Delta(1-\delta). \tag{10}
\]

Substituting Equation 10 into Equation 7, we obtain

\[
\langle \kappa_\Delta \rangle \approx \langle H_\Delta (1-\delta) \cos T_c \rangle \sin\alpha_c \sin\kappa_c. \tag{11}
\]

We can very plausibly assume that torsion is independent of convergence and scene structure and on average zero, so that ⟨H_Δ(1−δ)cos T_c⟩ averages to ⟨H_Δ(1−δ)⟩. In natural viewing, the distributions of H_Δ and δ will not be independent (cf. Figure 6 of Liu et al., 2008). For example, when H_Δ takes its smallest value, zero, the fixation distance is infinite, and so δ must be negative or zero. Conversely, when the eyes are converged on a nearby object (large H_Δ), most objects in the scene are usually further away than the fixated object, making δ predominantly positive. In the absence of accurate data, we assume that the average ⟨H_Δδ⟩ is close to zero. We then obtain

\[
\langle \kappa_\Delta \rangle \approx \langle H_\Delta \rangle \sin\alpha_c \sin\kappa_c. \tag{12}
\]
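The averaging argument leading to Equation 12 can be illustrated with a small Monte Carlo sketch (our own illustration, not the paper's code). We draw gaze azimuth H_c symmetrically about primary position, draw convergence H_Δ and fractional distance δ independently so that ⟨H_Δδ⟩ ≈ 0, and set torsion to zero; each sample's elevation-latitude disparity then follows from Equations 10 and 16, κ_Δ = H_Δ(1−δ) sin κ_c sin(α_c − H_c)/cos H_c. The sample mean should approach ⟨H_Δ⟩ sin α_c sin κ_c, as Equation 12 predicts. The distributions used here are arbitrary placeholders, chosen only to satisfy the stated symmetry and independence assumptions.

```python
import math
import random

random.seed(1)

alpha_c, kappa_c = math.radians(20), math.radians(10)  # cyclopean retinal position
N = 200_000

total = 0.0
sum_Hd = 0.0
for _ in range(N):
    H_c = random.uniform(math.radians(-30), math.radians(30))  # symmetric gaze azimuth
    H_delta = random.uniform(0.0, math.radians(5))             # convergence >= 0
    delta = random.uniform(-0.3, 0.3)                          # fractional distance, indep. of H_delta
    # Elevation-latitude disparity from Equations 10 and 16 (zero torsion
    # and vertical vergence):
    kappa_disp = (H_delta * (1 - delta) * math.sin(kappa_c)
                  * math.sin(alpha_c - H_c) / math.cos(H_c))
    total += kappa_disp
    sum_Hd += H_delta

mean_disp = total / N
predicted = (sum_Hd / N) * math.sin(alpha_c) * math.sin(kappa_c)  # Equation 12
assert abs(mean_disp - predicted) < 0.05 * abs(predicted)
```

The gaze-dependent factor averages out because ⟨sin(α_c − H_c)/cos H_c⟩ = sin α_c − cos α_c ⟨tan H_c⟩ = sin α_c for any symmetric gaze distribution.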

Equation 12 gives the expected elevation-latitude disparity ⟨κ_Δ⟩ as a function of cyclopean elevation latitude, κ_c, whereas Equation 6 gave the expected elevation-longitude disparity ⟨η_Δ⟩ as a function of cyclopean elevation longitude, η_c. To make it easier to compare the two, we now rewrite Equation 6 to give the expected elevation-longitude disparity ⟨η_Δ⟩ as a function of cyclopean elevation latitude, κ_c. The expected vertical disparity at (α_c, κ_c) is thus, in the two definitions,

\[
\begin{aligned}
\langle \eta_\Delta(\alpha_c,\kappa_c) \rangle &\approx \langle H_\Delta \rangle \sin\alpha_c \tan\kappa_c / \left( \cos^2\!\alpha_c + \tan^2\!\kappa_c \right),\\
\langle \kappa_\Delta(\alpha_c,\kappa_c) \rangle &\approx \langle H_\Delta \rangle \sin\alpha_c \sin\kappa_c.
\end{aligned} \tag{13}
\]

These expressions will strike many readers as familiar. They are the longitude and latitude vertical disparity fields that would be obtained when the eyes adopt their average position, i.e., fixating on the mid-sagittal plane with no cyclovergence, cycloversion, or vertical vergence and with the average convergence ⟨H_Δ⟩, and viewing a spherical


surface centered on the cyclopean point and passing through fixation. The much more general expressions we have considered reduce to this, because vertical-disparity contributions from eccentric gaze, from the fact that objects may be nearer or further than fixation, from cyclovergence and from vertical vergence all cancel out

on average. Thus, they do not affect the average vertical disparity encountered at different points in the visual field (although of course they will affect the range of vertical disparities encountered at each position).

It is often stated that vertical disparity increases as a function of retinal eccentricity. Thus, it may be helpful to give here expressions for retinal eccentricity ρ:

\[
\cos\rho = \cos\alpha_c\cos\kappa_c, \qquad \tan^2\!\rho = \tan^2\!\alpha_c + \tan^2\!\eta_c. \tag{14}
\]

Here, eccentricity ρ is defined as the angle ECV, where E is the point on the retina whose retinal eccentricity is being calculated, C is the center of the eyeball, and V is the center of the fovea (Figure 6).

The mean convergence, ⟨H_Δ⟩, is not known but must be positive. It does not affect the pattern of vertical disparity expected at different points in the visual field but simply scales it. Figure 7 shows the pattern expected for both types of vertical disparity. In our definition, α_c and κ_c represent position on the cyclopean retina, and their signs are thus inverted with respect to the visual field (the bottom of the retina represents the upper visual field). However, conveniently, Equation 13 is unchanged by inverting the signs of both α_c and κ_c, meaning that Figure 7 can be equally well interpreted as the pattern across either the cyclopean retina or the visual field.

Conveniently, within the central 45° or so, the expected vertical disparity is almost identical for the two definitions of retinal vertical disparity we are considering. Near the

Figure 6. Definition of retinal eccentricity ρ: the eccentricity of point E is the angle ECV, where C is the center of the eyeball and V is the center of the fovea.

Figure 7. Expected vertical disparity in natural viewing, as a function of position in the cyclopean retina, for (a) elevation-longitude and (b) elevation-latitude definitions of vertical disparity. Vertical disparity is measured in units of ⟨H_Δ⟩, the mean convergence angle. Because the vertical disparity is small over much of the retina, we have scaled the pseudocolor as indicated in the color bar, so as to concentrate most of its dynamic range on small values. White contour lines show values in 0.1 steps from −1 to 1. This figure was generated by Fig_ExpectedVDisp.m in the Supplementary material.


fovea, the value expected for both types of vertical disparity is roughly ⟨H_Δ⟩α_cκ_c. Throughout the visual field, the sign of vertical disparity depends on the quadrant. Points in the top-right or bottom-left quadrants of the visual field experience predominantly negative vertical disparity in normal viewing, while points in the top-left or bottom-right quadrants experience predominantly positive vertical disparities. Points on the vertical or horizontal meridian experience zero vertical disparity on average, although the range would clearly increase with vertical distance from the fovea. To our knowledge, no physiological studies have yet probed whether the tuning of disparity-sensitive neurons in early visual areas reflects this retinotopic bias.
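Equation 13 is easy to evaluate directly. The sketch below is our own illustration (the paper's Figure 7 comes from its Fig_ExpectedVDisp.m): it computes both expected-disparity fields in units of ⟨H_Δ⟩ and checks three claims made above, namely that the two definitions nearly coincide over the central visual field, that both reduce to ⟨H_Δ⟩α_cκ_c near the fovea, and that the sign follows the quadrant (in retinal coordinates; as noted, the pattern is unchanged by inverting the signs of both coordinates).

```python
import math

def expected_eta(alpha_c, kappa_c):
    """Expected elevation-longitude disparity per unit mean convergence (Eq. 13)."""
    return (math.sin(alpha_c) * math.tan(kappa_c)
            / (math.cos(alpha_c) ** 2 + math.tan(kappa_c) ** 2))

def expected_kappa(alpha_c, kappa_c):
    """Expected elevation-latitude disparity per unit mean convergence (Eq. 13)."""
    return math.sin(alpha_c) * math.sin(kappa_c)

a, k = math.radians(20), math.radians(20)

# Within the central ~45 deg, the two definitions nearly agree...
assert abs(expected_eta(a, k) - expected_kappa(a, k)) < 0.01

# ...and near the fovea both are approximately alpha_c * kappa_c (in radians).
a_small, k_small = math.radians(5), math.radians(5)
approx = a_small * k_small
assert abs(expected_eta(a_small, k_small) - approx) < 1e-4
assert abs(expected_kappa(a_small, k_small) - approx) < 1e-4

# The sign depends on the quadrant: coordinates of opposite sign give
# negative expected disparity, same sign gives positive.
assert expected_kappa(math.radians(10), math.radians(-10)) < 0
assert expected_kappa(math.radians(-10), math.radians(10)) < 0
assert expected_kappa(math.radians(10), math.radians(10)) > 0
```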

Properties of elevation-longitude and elevation-latitude definitions of vertical disparity, in the absence of torsion or vertical vergence

So far, we have provided general expressions for vertical disparity but, in order to make comparisons with previous literature more straightforward, in this and subsequent sections we make the simplifying assumption that cycloversion, cyclovergence, and vertical vergence are all zero (T_c = T_Δ = V_Δ = 0) and demonstrate the consequences of this assumption for the properties of the two types of vertical disparity. In this case, the only degrees of freedom that affect disparity are the horizontal rotation of each eye, expressed as the convergence H_Δ and the gaze angle H_c. From Equation 1, elevation-longitude vertical disparity in the absence of torsion and vertical vergence is

\[
\eta_\Delta \approx \frac{1}{2}\sin 2\eta_c \left( -\frac{I}{S}\sin H_c + H_\Delta \tan\alpha_c \right), \tag{15}
\]

while from Equation 2, elevation-latitude vertical disparity is

\[
\kappa_\Delta \approx \frac{I}{R}\sin\kappa_c \sin(\alpha_c - H_c). \tag{16}
\]

In general, these two types of vertical disparity have completely different properties. We see that elevation-longitude vertical disparity is zero for all objects, irrespective of their position in space, if the eyes are in primary position, i.e., H_c = H_Δ = 0. Elevation-latitude vertical disparity is not in general zero when the eyes are in primary position, except for objects on the midline or at infinite distance. Rotating the eyes into primary position does not affect elevation-latitude disparity because, as noted in the Introduction section, horizontal rotations of the eyes cannot alter which line of elevation latitude each point in space projects to; they can only alter the azimuthal position to which it projects. Thus, κ_Δ is independent of convergence, while gaze azimuth simply sweeps the vertical disparity pattern across the retina, keeping it constant in space. Thus, κ_Δ depends on gaze azimuth H_c only through the difference (α_c − H_c), representing azimuthal position in head-centric space. As a consequence, elevation-latitude vertical disparity is zero for all objects on the midline (X = 0, meaning that α_c = H_c). The elevation-latitude disparity κ_Δ at each point in the cyclopean retina is scaled by the reciprocal of the distance to the viewed object at that point. In contrast, elevation-longitude disparity, η_Δ, is independent of object distance when fixation is on the mid-sagittal plane; it is then proportional to convergence. In Table 1, we summarize the different properties of elevation-longitude and elevation-latitude vertical disparities, under the conditions T_c = T_Δ = V_Δ = 0 to which we are restricting ourselves in this section.

Vertical disparity defined as difference in retinal elevation longitude, η_Δ:
- Is zero for objects in the plane of gaze.
- Is zero when the eyes are in primary position, for objects at any distance anywhere on the retina.
- Increases as the eyes converge.
- May be non-zero even for objects at infinity, if the eyes are converged.
- Is proportional to the sine of twice the elevation longitude.
- Is not necessarily zero for objects on the midline.
- For fixation on the midline, is independent of object distance for a given convergence angle.

Vertical disparity defined as difference in retinal elevation latitude, κ_Δ:
- Is zero for objects in the plane of gaze.
- Is zero for objects at infinity.
- Is inversely proportional to the object's distance.
- Is independent of convergence for objects at a given distance.
- May be non-zero even when the eyes are in primary position.
- Is proportional to the sine of the elevation latitude.
- Is zero for objects on the mid-sagittal plane.

Table 1. Summary of the different properties of the two definitions of retinal vertical disparity in the absence of vertical vergence error and torsion (T_c = T_Δ = V_Δ = 0).
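Several rows of Table 1 can be spot-checked directly from the torsion-free expressions. The sketch below is our own illustration, assuming the paper's small-baseline first-order Equations 15 and 16; the parameter values are arbitrary.

```python
import math

def eta_disp(eta_c, alpha_c, H_c, H_delta, I_over_S):
    """Equation 15: elevation-longitude disparity, T_c = T_D = V_D = 0."""
    return 0.5 * math.sin(2 * eta_c) * (-I_over_S * math.sin(H_c)
                                        + H_delta * math.tan(alpha_c))

def kappa_disp(kappa_c, alpha_c, H_c, I_over_R):
    """Equation 16: elevation-latitude disparity, T_c = T_D = V_D = 0."""
    return I_over_R * math.sin(kappa_c) * math.sin(alpha_c - H_c)

eta_c = kappa_c = math.radians(12)
alpha_c = math.radians(25)

# Primary position (H_c = H_delta = 0): eta-disparity vanishes at any distance.
assert eta_disp(eta_c, alpha_c, 0.0, 0.0, I_over_S=0.1) == 0.0

# Converged eyes, object at infinity (I/S -> 0): eta-disparity is non-zero...
assert eta_disp(eta_c, alpha_c, 0.0, math.radians(4), I_over_S=0.0) != 0.0
# ...but kappa-disparity is zero for objects at infinity (I/R -> 0).
assert kappa_disp(kappa_c, alpha_c, 0.0, I_over_R=0.0) == 0.0

# kappa-disparity is zero on the midline (alpha_c = H_c) at any distance,
# and off the midline it scales with I/R, i.e., inversely with distance.
assert kappa_disp(kappa_c, math.radians(10), math.radians(10), I_over_R=0.2) == 0.0
assert abs(kappa_disp(kappa_c, alpha_c, 0.0, 0.1)
           - 2 * kappa_disp(kappa_c, alpha_c, 0.0, 0.05)) < 1e-15
```

Note that Equation 16 contains no H_Δ at all, which is the "independent of convergence" row for κ_Δ.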


We have seen that η_Δ and κ_Δ depend very differently on convergence and object distance. For midline fixation, η_Δ is proportional to convergence and is independent of object distance, whereas κ_Δ is independent of convergence and is inversely proportional to object distance. However, if we consider only objects close to fixation, then object distance and convergence convey the same information. Under these circumstances, the two definitions of vertical disparity become similar. This is shown in Figure 8, which plots the vertical disparity field for a frontoparallel plane. The two left panels show elevation-longitude vertical disparity; the two right panels show elevation-latitude vertical disparity. In the top row, the eyes are viewing a frontoparallel plane at a distance of 60 cm, and in the bottom row, a plane at 10 m. In each case, the eye position is the same: looking 15° to the left, and converged so as to fixate the plane at 60 cm.

In Figure 8AB, where the eyes are fixating the viewed surface, both definitions of vertical disparity give similar results, especially near the fovea. For both definitions, vertical disparity is zero for objects in the plane of gaze (Y = 0, i.e., η_c = κ_c = 0) and also along a vertical line whose position depends on the gaze angle. For elevation-latitude disparity κ_Δ, this line is simply the line of azimuth longitude α_c = H_c, here 15°. This is the retinal projection of the mid-sagittal plane, X = 0. That is, in the absence of torsion or vertical vergence error, elevation-latitude vertical disparity is zero for objects on the midline, independent of their distance or of the convergence angle. For elevation-longitude vertical disparity η_Δ, no such simple result holds. The locus of zero vertical disparity (vertical white line in Figure 8AC) depends on object distance and the eyes' convergence, as well as gaze angle. However, for objects relatively near fixation, these differences are minor, so the locus of zero η_Δ is also close to 15°.

It is not always the case, however, that the differences between elevation-longitude and elevation-latitude vertical disparities are minor. Figure 8CD shows the two vertical disparity fields for a surface at a much greater distance from the observer than the fixation point. The position of the eyes is the same as in AB, but now the viewed surface is a plane at 10 m from the observer. Now, the pattern of vertical disparities is very different. As we saw from Equation 16, elevation-latitude vertical disparity is zero for all objects at infinity, no matter what the

Figure 8. Vertical disparity field over the whole retina, where the visual scene is a frontoparallel plane, i.e., constant head-centered coordinate Z. AB: Z = 60 cm; CD: Z = 10 m. The interocular distance was 6.4 cm, gaze angle H_c = 15°, and convergence angle H_Δ = 5.7°, i.e., such as to fixate the plane at Z = 60 cm. Vertical disparity is defined as the difference in (AC) elevation longitude and (BD) elevation latitude. Lines of azimuth longitude and (AC) elevation longitude, (BD) elevation latitude are marked in black at 15° intervals. The white line shows where the vertical disparity is zero. The fovea is marked with a black dot. The same pseudocolor scale is used for all four panels. Note that the elevation-longitude disparity, η_Δ, goes beyond the color scale at the edges of the retina, since it tends to infinity as |α_c| tends to 90°. This figure was generated by DiagramOfVerticalDisparity_planes.m in the Supplementary material.


vergence angle H_Δ. Thus, for Z = 10 m, it is already close to zero across the whole retina (Figure 8D). Elevation-longitude vertical disparity does not have this property. It is zero for objects at infinity only if the eyes are also fixating at infinity, i.e., H_Δ = 0. Figure 8C shows results for H_Δ = 5.7°, and here the second term in Equation 15 gives non-zero vertical disparity everywhere except along the two retinal meridians.

In summary, the message of this section is that the elevation-longitude and elevation-latitude definitions of vertical disparity give very similar results for objects near the fovea and close to the fixation distance. However, when these conditions are not satisfied, the two definitions of vertical disparity can produce completely different results.

Epipolar lines: Relationship between vertical and horizontal disparities

Disparity is a two-dimensional quantity, but for a given eye position, not all two-dimensional disparities are physically possible. Figure 9A shows how the physically possible matches for the red dot on the left retina fall along a line in the right retina. The object projecting to the red dot could lie anywhere along the red line shown extending to infinity, and each possible position implies a different projection onto the right retina. The set of all possible projections is known in the literature as an epipolar line (Hartley & Zisserman, 2000).

This definition of epipolar line treats the eyes asymmetrically: one considers a point in one eye, and the corresponding line in the other eye. Everywhere else in this paper, we have treated the eyes symmetrically, rewriting left and right eye coordinates in terms of their sum and difference: position on the cyclopean retina and disparity. So in this section, we shall consider something slightly different from the usual epipolar lines: we shall consider the line of possible disparities at a given point in the cyclopean visual field. Figures 9B–9D show how this differs from an epipolar line. As one moves along an epipolar line (Figure 9B), not only the two-dimensional disparity, but also the cyclopean position, varies. We shall consider how disparity varies while keeping cyclopean position constant (Figure 9D).

To achieve this, we need to express vertical disparity as a function of horizontal disparity. So far in this paper, we have expressed vertical disparity as a function of object distance and eye position. Of course, horizontal disparity is also a function of object distance and eye position. So if we substitute in for object distance using horizontal disparity, we obtain the function relating horizontal and vertical disparities, for a given eye position and location in the visual field. Using the expressions given in the Appendix, it is simple to obtain the line of possible disparities for arbitrary cyclovergence, cycloversion, and vertical vergence error. In this section, for simplicity, we continue to restrict ourselves to T_c = T_Δ = V_Δ = 0. Under

these circumstances, azimuth-longitude horizontal disparity is (Appendix B; Table C5)

\[
\alpha_\Delta \approx -\frac{I}{S}\cos\alpha_c \cos(\alpha_c - H_c) + H_\Delta. \tag{17}
\]

If we use horizontal disparity to substitute for object distance in Equation 15, we obtain the following relationship between horizontal (azimuth-longitude) and vertical (elevation-longitude) disparities:

\[
\eta_\Delta \approx \frac{1}{2}\sin 2\eta_c \sec\alpha_c \left( H_\Delta \sin\alpha_c + (\alpha_\Delta - H_\Delta)\sec(\alpha_c - H_c)\sin H_c \right). \tag{18}
\]

For elevation-latitude vertical disparity, again substituting for object distance in Equation 16, we obtain

\[
\kappa_\Delta \approx \frac{1}{2}\sin 2\kappa_c \, (\alpha_\Delta - H_\Delta)\tan(H_c - \alpha_c). \tag{19}
\]

Thus, the vertical disparities that are geometrically possible at a given position in the visual field are a linear function of the horizontal disparity. This is shown for elevation-latitude disparity by the green line in Figure 10. Where we are on this line depends on object distance. Note that this expression is not valid if either α_c or (α_c − H_c) is 90°, since then horizontal disparity is independent of object distance (Equation 17). So, for example, if we are considering an azimuthal direction of 45° (α_c = 45°) and the eyes are looking off 45° to the right (H_c = −45°), this expression fails. Apart from this relatively extreme situation, it is generally valid.

Note also that the line of possible disparities does not extend across the whole plane of disparities. We have adopted a sign convention in which "far" disparities are positive. The largest possible horizontal disparity occurs for objects at infinity. Then, we see from Equation 17 that the horizontal disparity is equal to the convergence angle, H_Δ. For objects closer than infinity, the horizontal disparity is smaller, becoming negative for objects nearer than the fixation point. Thus, the green line in Figure 10 terminates at α_Δ = H_Δ. The elevation-latitude vertical disparity at this point in the visual field thus has only one possible sign, either negative or positive depending on the sign of κ_c(H_c − α_c) (since (α_Δ − H_Δ) is always negative).

For elevation-latitude vertical disparity, the eye-position parameters have a particularly simple effect on the line of possible disparities. The convergence angle H_Δ controls the intercept on the abscissa, i.e., the horizontal disparity for which the vertical disparity is zero. The gradient of the line is independent of convergence, depending only on the


gaze angle. To avoid any confusion, we emphasize that this "disparity gradient" is the rate at which vertical disparity would change if an object slid nearer or further along a particular visual direction, so that its horizontal disparity varied while its position in the cyclopean visual field remained constant. Thus, we are considering the set of two-dimensional disparities that can be produced by a real object for a given binocular eye position. This might theoretically be used by the visual system in solving the

stereo correspondence problem if eye position were known. This "disparity gradient" is not the same as the disparity derivatives discussed below (see Discussion section) in the context of deriving eye position given the solution of the correspondence problem, which concern the rate at which vertical disparity changes as a function of visual direction in a given scene.

In Figure 10, the gradient of the green line is exaggerated for clarity. In fact, when κ_c and (H_c − α_c)

Figure 9. Epipolar line and how it differs from the "line of possible disparities" shown below in (D). (A) How an epipolar line is calculated: it is the set of all possible points on the right retina (heavy blue curve) that could correspond to the same point in space as a given point on the left retina (red dot). (B) Epipolar line plotted on the planar retina. Blue dots show 3 possible matches in the right eye for a fixed point in the left retina (red dot). The cyclopean location or visual direction (mean of left and right retinal positions, black dots) changes as one moves along the epipolar line. (C) Possible matches for a given cyclopean position (black dot). Here, we keep the mean location constant and consider pairs of left/right retinal locations with the same mean. (D) Line of possible disparities implied by the matches in (B). These are simply the vectors linking left to right retinal positions for each match (pink lines). Together, these build up a line of possible disparities (green line). Panel A was generated by Fig_EpipolarLine.m in the Supplementary material.

Journal of Vision (2009) 0(0):1, 1–37 Read, Phillipson, & Glennerster 14


are both small (i.e., for objects near the midline and near the plane of regard), the gradient is close to zero. Even quite large changes in horizontal disparity produce very little effect on vertical disparity. In these circumstances, it is reasonable to approximate vertical disparity by its value at the chosen distance, ignoring the gradient entirely. We go through this in the next section.


For objects near fixation, vertical disparity is independent of object distance

It is often stated that, to first order, vertical disparity is independent of object distance, depending only on eye position (e.g., Garding et al., 1995; Read & Cumming, 2006). Horizontal disparity, in contrast, depends both on eye position and object distance. Thus, vertical disparity can be used to extract an estimate of eye position, which can then be used to interpret horizontal disparity.

At first sight, these statements appear to conflict with much of this paper. Consider the first-order expressions for vertical disparity given in Equations 15 and 16. Both depend explicitly on the object distance (measured radially from the origin, R, or along the gaze direction, S; Figure 5). Figure 8AB versus Figure 8CD, which differ only in the object distance, show how both types of vertical disparity depend on this value. Elevation-latitude disparity does not even depend on the convergence angle HΔ, making it appear impossible to reconstruct vergence from measurements of elevation-latitude disparity alone.

This apparent contradiction arises because the authors quoted were considering the disparity of objects near to fixation. Equations 15 and 16, in contrast, are valid for all object locations, provided only that the object distance is large compared to the interocular distance (small baseline approximation). We now restrict ourselves to the vicinity of fixation. That is, we assume that the object is at roughly the same distance as the fixation point. We express the radial distance to the object, R, as a fraction of the distance to fixation, R0:

R = R0(1 + δ).   (20)

Under our small vergence angle approximation, the radial distance to the fixation point is

R0 ≈ I cos Hc / HΔ.   (21)

For small values of δ, then, using the approximation (1 + δ)⁻¹ ≈ (1 − δ), we have

I/R ≈ HΔ(1 − δ)/cos Hc,
I/S ≈ HΔ(1 − δ)/(cos αc cos κc cos Hc).   (22)

Note that this breaks down at Hc = 90°. This is the case where the eyes are both directed along the interocular

Figure 10. The thick green line shows the line of two-dimensional disparities that are physically possible for real objects, for the given eye posture (specified by convergence HΔ and gaze azimuth Hc) and the given visual direction (specified by retinal azimuth αc and elevation κc). The green dot shows where the line terminates on the abscissa. For any given object, where its disparity falls on the green line depends on the distance to the object at this visual direction. The white circle shows one possible distance. Although, for clarity, the green line is shown as having quite a steep gradient, in reality it is very shallow close to the fovea. Thus, it is often a reasonable approximation to assume that the line is flat in the vicinity of the distance one is considering (usually the fixation distance), as indicated by the horizontal green dashed line. This is considered in more detail in the next section.



axis. Then, the distance to the fixation point is undefined, and we cannot express R as a fraction of it. The case Hc = 90° is relevant to optic flow, but not to stereo vision. Our analysis holds for all gaze angles that are relevant to stereopsis.

From Equations 15 and 16, using the fact that R = S sec αc sec κc, the two definitions of vertical disparity then become

ηΔ ≈ sin ηc cos ηc [HΔ tan αc − (HΔ(1 − δ)/cos Hc) sec αc sec κc sin Hc],
κΔ ≈ (HΔ(1 − δ)/cos Hc) sin κc sin(αc − Hc).   (23)


The dependence on object distance is contained in the term δ, the fractional distance from fixation. However, by assumption, this is much smaller than 1. The vertical disparities are dominated by terms independent of distance; to an excellent approximation, we have

ηΔ ≈ HΔ sin ηc cos ηc [tan αc − tan Hc sec αc √(1 + tan² ηc cos² αc)],
κΔ ≈ (HΔ/cos Hc) sin κc sin(αc − Hc),   (24)

where we have used tan κ = tan η cos α to substitute for elevation latitude κ in the expression for elevation-longitude vertical disparity, ηΔ.

Thus, for objects at the same distance as fixation, the dependence on object distance can be expressed as the convergence angle. Changes in scene depth produce negligible changes in vertical disparity: to a good approximation, vertical disparity is independent of scene structure, changing only with slow gradients, which reflect the current binocular eye position. This statement is true all across the retina (i.e., for all αc and κc).

For horizontal disparity, the situation is more subtle. It can be shown that, under the same approximations used in Equation 23, azimuth-longitude horizontal disparity is given by

αΔ ≈ −(HΔ(1 − δ)/cos Hc) sec κc cos(αc − Hc) + HΔ.   (25)


This equation resembles Equation 23, so at first sight it seems that we can drop δ as being small in comparison with 1, meaning that horizontal disparity is also independent of object distance. If Hc, αc, and κc are large, this is correct. Under these "extreme" conditions (far from the fovea, large gaze angles), horizontal disparity behaves just like vertical disparity. It is dominated by eye position and location in the visual field, with object distance making

only a small contribution. However, the conditions of most relevance to stereo vision are those within ≈10° of the fovea, where spatial resolution and stereoacuity are high. In this region, a key difference now emerges between horizontal and vertical disparities: vertical disparity becomes independent of scene structure, whereas horizontal disparity does not. The terms in Equation 25 that are independent of object distance δ cancel out nearly exactly, meaning that the term of order δ is the only one left. Thus, horizontal disparity becomes

αΔ ≈ HΔ(δ − αc tan Hc)   (parafoveal approximation).   (26)


This expression is valid near the fixation point (δ, αc, κc all small) and for gaze angles that do not approach 90° (where Equation 25 diverges). Near the fovea, elevation latitude and elevation longitude become indistinguishable (see lines of latitude and longitude in Figure 8). For the near-fixation objects we are considering, therefore, the elevation-latitude and elevation-longitude definitions of vertical disparity we derived previously (Equation 24) become identical and both equal to

ηΔ ≈ κΔ ≈ HΔ κc (αc − Hc)   (parafoveal approximation).   (27)


Critically, this means that for the near-fixation case most relevant to stereo vision, horizontal disparity reflects scene structure as well as eye position, whereas vertical disparity depends only on eye position. This means that estimates of eye position, up to elevation, can be obtained from vertical disparity and used to interpret horizontal disparity. In this section, we have set 3 of the 6 eye position parameters (cyclovergence, cycloversion, and vertical vergence error) to zero, meaning that we only have 2 eye position parameters left to extract. Thus, before proceeding, we shall generalize, in the Discussion section, to allow non-zero values for all 6 eye position parameters. We shall then show how 5 of these parameters can be simply derived from the vertical disparity in the vicinity of the fovea.
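The contrast between Equations 26 and 27 can be checked numerically. The short Python sketch below is ours (the paper's own supplementary code is Matlab); the function names and the chosen angles are illustrative assumptions. It evaluates the two parafoveal approximations while the fractional distance δ is varied: horizontal disparity shifts in proportion to δ, whereas vertical disparity contains no δ term at all.

```python
import math

deg = math.pi / 180
H_delta, H_c = 5 * deg, 10 * deg       # convergence and gaze azimuth (assumed values)
alpha_c, kappa_c = 2 * deg, 2 * deg    # retinal direction of the object, near the fovea

def horiz_disp(delta):
    # Equation 26: alpha_Delta ~ H_Delta * (delta - alpha_c * tan(Hc))
    return H_delta * (delta - alpha_c * math.tan(H_c))

def vert_disp():
    # Equation 27: kappa_Delta ~ H_Delta * kappa_c * (alpha_c - Hc); no delta term
    return H_delta * kappa_c * (alpha_c - H_c)

for delta in (-0.05, 0.0, 0.05):       # object 5% nearer / at / 5% beyond fixation
    print(f"delta = {delta:+.2f}: horizontal {horiz_disp(delta)/deg:+.5f} deg, "
          f"vertical {vert_disp()/deg:+.5f} deg")
```

Only the horizontal column changes from row to row, which is the content of the statement that near fixation, vertical disparity reflects eye position alone.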


Obtaining eye position from vertical disparity and its derivatives at the fovea

In this section, we derive approximate expressions for 5 binocular eye position parameters in terms of the vertical disparity and its derivatives near the fovea. As throughout this paper, we work in terms of retinal disparity, since this is all that is available to the visual system before eye position



has been computed. We do not require any special properties of the viewed surface other than that it is near fixation and smooth, so that all derivatives exist. We allow small amounts of cycloversion, cyclovergence, and vertical vergence error but restrict ourselves to small gaze angles. Mathematically, this means we approximate cos Hc ≈ 1 and sin Hc ≈ Hc. In Figure 12, we show that our results hold up well at least out to Hc = 15°. This is likely to cover most gaze angles adopted during natural viewing. We work in the vicinity of the fovea, so retinal azimuth αc and elevation κc are also both small. In this case, the distinction between latitude and longitude becomes immaterial. We shall write our expressions in terms of elevation latitude κ, but in this foveal approximation, the same expressions would also hold for elevation longitude η. We shall show how our equations for the vertical disparity field, κΔ, can be used to read off gaze angle, convergence, cyclovergence, cycloversion, and vertical vergence.

We begin with Equation 10, which expressed I/R in terms of horizontal vergence HΔ and the fractional distance of an object relative to fixation, δ. If there is a vertical vergence error VΔ, then there will not be a fixation point, because the gaze rays will not intersect. However, Equation 10 is still valid, with δ interpreted as a fraction of the distance to the point where the gaze rays most closely approach each other. We substitute Equation 10 into our most general expression for vertical disparity, Equation 2, and make the additional approximation that the gaze azimuth Hc and overall torsion Tc are both small:

κΔ ≈ (sin αc sin κc − Hc cos αc sin κc + Tc cos κc) HΔ(1 − δ) − (Tc cos αc) HΔ − (cos αc + Hc sin αc) VΔ − (sin αc) TΔ.   (28)


When we finally make the approximation that we are near the fovea, i.e., that αc and κc are also small, we find that the lowest order terms are

κΔ ≈ −VΔ − αc TΔ + HΔ κc (αc − Hc) − δ HΔ Tc − αc Hc VΔ.   (29)


Because we are allowing non-zero torsion Tc, vertical disparity now also depends on object distance, through δ. However, this is a third-order term. To first order, the vertical disparity at the fovea measures any vertical vergence error. Thus, we can read off vertical vergence VΔ simply from the vertical disparity measured at the fovea (Figure 11):

VΔ ≈ −κΔ.   (30)

To derive expressions for the remaining eye position parameters, we will need to differentiate Equation 28 with respect to direction in the visual field. We will use subscripts as a concise notation for differentiation: for example, κΔα indicates the first derivative of the vertical disparity κΔ with respect to azimuthal position in the visual field, αc, holding the visual field elevation κc constant. Similarly, κΔακ is the rate at which this gradient itself alters as one moves vertically:

κΔα ≡ ∂κΔ/∂αc |κc,   κΔακ ≡ (∂/∂κc)|αc (∂/∂αc)|κc κΔ.   (31)


Note that these derivatives examine how vertical disparity changes on the retina as the eyes view a given static scene. This is not to be confused with the gradient discussed in Figure 10, which considered how vertical disparity varies as an object moves in depth along a particular visual direction. We assume that the visual scene at the fovea consists of a smooth surface that remains close to fixation in the vicinity of the fovea. The surface's shape is specified by δ, the fractional difference between the distance to the surface and the distance to fixation. In the Average vertical disparity expected at different positions on the retina section, we were considering a

Figure 11. Partial differentiation on the retina. The cyclopean retina is shown colored to indicate the value of the vertical disparity field at each point. Differentiating with respect to elevation κ while holding azimuth constant means finding the rate at which vertical disparity changes as one moves up along a line of azimuth longitude, as shown by the arrow labeled ∂/∂κ. Differentiating with respect to azimuth α, while holding elevation constant, means finding the rate of change as one moves around a line of elevation latitude. This figure was generated by Fig_DifferentiatingAtFovea.m in the Supplementary material.



single point in the cyclopean retina, and so δ was just a number: the fractional distance at that point. Since we are now considering changes across the retina, δ is now a function of retinal location, δ(αc, κc). The first derivatives of δ specify the surface's slant, its second derivatives specify surface curvature, and so on. δ and its derivatives δα, and so on, are assumed to remain small in the vicinity of the fovea.

After performing each differentiation of Equation 28, we then apply the parafoveal approximation and retain only the lowest order terms. In this way, we obtain the following set of relationships between derivatives of the vertical disparity field and the binocular eye position parameters:

κΔ ≈ −VΔ,   (32)

κΔα ≈ −TΔ,   (33)

κΔκ ≈ −Hc HΔ,   (34)

κΔακ ≈ HΔ,   (35)

κΔκκ ≈ −Tc HΔ,   (36)

assuming that αc, κc, δ, δα, δκ, δαα, δακ, δκκ, Hc, Tc, HΔ, TΔ, and VΔ are all small.

To lowest order, there is no dependence on scene structure: under this near-fixation approximation, vertical disparity and its derivatives depend only on eye position. Each term enables us to read off a different eye-position parameter. Any vertical disparity at the fovea reflects a vertical vergence error (Equation 32; Howard, Allison, & Zacher, 1997; Read & Cumming, 2006). The rate at which vertical disparity changes as we move horizontally across the visual field, sometimes called the vertical shear disparity (Banks et al., 2001; Kaneko & Howard, 1997b), tells us the cyclovergence (Equation 33). A "saddle" pattern, i.e., the second derivative with respect to both horizontal and vertical position in the visual field, tells us the vergence (Equation 35; Backus et al., 1999). The rate at which vertical disparity changes as we move vertically across the visual field tells us the gaze angle (Equation 34; Backus & Banks, 1999; Banks & Backus, 1998; Gillam & Lawergren, 1983; Mayhew, 1982; Mayhew & Longuet-Higgins, 1982). Finally, the second derivative with respect to elevation provides an estimate of cyclopean cycloversion (Equation 36). Although many of these relationships with aspects of eye position have been identified in the past, it is useful to be able to identify the extent to which the approximations hold under a range of eye positions.
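As a sanity check on Equations 32–36, the following Python sketch (ours, not part of the paper's Matlab supplementary material; parameter values and step sizes are illustrative assumptions) builds the vertical disparity field of Equation 28 for a known eye posture with δ = 0, estimates its foveal derivatives by central differences, and reads the five eye position parameters back off:

```python
import numpy as np

deg = np.pi / 180

# Assumed binocular eye posture (radians). The viewed surface is taken to lie
# exactly at the fixation distance everywhere, i.e., delta = 0.
H_c, H_delta = 5 * deg, 3 * deg      # gaze azimuth, convergence
T_c, T_delta = 1 * deg, 0.5 * deg    # cycloversion, cyclovergence
V_delta = 0.2 * deg                  # vertical vergence error

def kappa_disp(a, k):
    """Vertical disparity field of Equation 28 (alpha_c = a, kappa_c = k, delta = 0)."""
    return ((np.sin(a) * np.sin(k) - H_c * np.cos(a) * np.sin(k)
             + T_c * np.cos(k)) * H_delta
            - T_c * np.cos(a) * H_delta
            - (np.cos(a) + H_c * np.sin(a)) * V_delta
            - np.sin(a) * T_delta)

# Central-difference estimates of the foveal derivatives
h = 0.01
f = kappa_disp
d_a  = (f(h, 0) - f(-h, 0)) / (2 * h)                            # kappa_Delta_alpha
d_k  = (f(0, h) - f(0, -h)) / (2 * h)                            # kappa_Delta_kappa
d_ak = (f(h, h) - f(h, -h) - f(-h, h) + f(-h, -h)) / (4 * h**2)  # kappa_Delta_alpha_kappa
d_kk = (f(0, h) - 2 * f(0, 0) + f(0, -h)) / h**2                 # kappa_Delta_kappa_kappa

# Equations 32-36: read off the five eye position parameters
V_est   = -f(0, 0)         # vertical vergence (Equation 32)
T_d_est = -d_a             # cyclovergence (Equation 33)
H_est   = d_ak             # convergence (Equation 35)
H_c_est = -d_k / H_est     # gaze azimuth (Equation 34)
T_c_est = -d_kk / H_est    # cycloversion (Equation 36)
```

The recovered values agree with the true ones to a small fraction of a degree; the largest residual is in the cyclovergence estimate, which also picks up the αcHcVΔ cross term neglected at lowest order.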

The relationships given in Equations 32–36 provide an intuitive insight into how different features of the parafoveal vertical disparity field inform us about eye position. The approximations used lead to some small errors, but they are sufficiently small to ignore under most circumstances. For example, retaining third-order terms in the expression for the second derivative κΔκκ yields

κΔκκ ≈ −HΔ Tc − HΔ δκκ Tc − 2HΔ δκ (αc − Hc) + HΔ (Hc − αc) κc + δ HΔ Tc.   (37)


If torsion Tc is zero, then near the fovea κΔκκ will in fact be dominated by a term depending on the rate of change of δ as we move vertically in the visual field, reflecting surface slant:

κΔκκ ≈ 2HΔ Hc δκ.   (38)


Applying the formula in Equation 36 would lead us to conclude Tc ≈ −2Hc δκ, instead of the correct value of zero. Now Equation 36 was derived assuming small Hc

and δκ, so the misestimate will be small but nevertheless present. In Figure 12, we examine how well our approximations bear up in practice. Each panel shows the eye position parameters estimated from Equations 32–36 plotted against their actual values, for 1000 different simulations. On each simulation run, first of all a new binocular eye posture was generated, by picking values of Hc, Tc, Vc, HΔ, TΔ, and VΔ randomly from uniform distributions. Torsion Tc, cyclovergence TΔ, and vertical vergence error VΔ are all likely to remain small in normal viewing and were accordingly picked from uniform distributions between ±2°. Gaze azimuth and elevation were picked from uniform distributions between ±15°. Convergence was picked uniformly from the range 0 to 15°, representing viewing distances from infinity to 25 cm or so. Note that it is not important, for purposes of testing Equations 32–36, to represent the actual distribution of eye positions during natural viewing, but simply to span the range of those most commonly adopted. A random set of points in space was then generated in the vicinity of the chosen fixation point. The X and Y coordinates of these points were picked from uniform random distributions, and their Z coordinate was then set according to a function Z(X, Y), whose exact properties were picked randomly on each simulation run but which always specified a gently curving surface near fixation (for details, see legend to Figure 12). The points were then projected onto the two eyes, using exact projection geometry with no small baseline or other approximations, and their cyclopean locations and disparities were calculated. In order to estimate derivatives of the local vertical disparity field, the vertical disparities of points within 0.5° of the fovea, of



which there were usually 200 or so, were then fitted with a parabolic function:

κΔ = c0 + c1 αc + c2 κc + c3 αc² + c4 κc² + c5 αc κc.   (39)
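The parabolic fit of Equation 39 amounts to an ordinary least-squares solve. The Python fragment below is an illustrative sketch (the paper's actual implementation is the Matlab program ExtractEyePosition.m; the sample counts and coefficient values here are arbitrary assumptions) that fits the six coefficients to synthetic, noiseless disparity samples near the fovea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth coefficients of Equation 39 (arbitrary small values for the demo)
c_true = np.array([0.001, -0.02, 0.015, 0.3, -0.2, 0.1])

# Synthetic sample directions within 0.5 deg of the fovea (radians)
n = 200
r = np.deg2rad(0.5) * np.sqrt(rng.uniform(size=n))
th = rng.uniform(0, 2 * np.pi, size=n)
alpha, kappa = r * np.cos(th), r * np.sin(th)

# Design matrix matching Equation 39
A = np.column_stack([np.ones(n), alpha, kappa, alpha**2, kappa**2, alpha * kappa])
kappa_delta = A @ c_true          # noiseless vertical disparities at the samples

c_fit, *_ = np.linalg.lstsq(A, kappa_delta, rcond=None)

# Foveal values then follow directly from the fit: kappa_Delta = c0,
# kappa_Delta_alpha = c1, kappa_Delta_kappa = c2,
# kappa_Delta_alpha_kappa = c5, kappa_Delta_kappa_kappa = 2 * c4.
```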

The fitted coefficients ci were then used to obtain estimates of vertical disparity and its gradients at the fovea (κΔα = c1, and so on). Finally, these were used in Equations 32–36 to produce the estimates of eye position shown in Figure 12.

The results in Figure 12 show that most eye position parameters can be recovered with remarkable accuracy. The worst is the cycloversion Tc, which is recovered quite accurately (say to within 10 arcmin) about half the time, but the rest of the time is widely scattered, for the reasons discussed around Equation 38. Nevertheless, overall performance is good, with a median error of <0.3°. This shows that our simple intuitive analytical expressions relating eye position to vertical disparity (Equations 32–36) are reliable under most circumstances.

This is a theoretical paper, and the expressions above (Equations 32–36) are simply a mathematical statement, spelling out the relationships that exist between eye position and vertical disparity. Does the visual system, in fact, use retinal eye position estimates extracted from the disparity field? In the case of vertical vergence, cycloversion, and cyclovergence, the answer seems to be yes. Disparity fields indicating non-zero values of these parameters elicit corrective eye movements tending to null the retinal disparity, suggesting that the retinal disparity field was taken as evidence of ocular misalignment (Carpenter, 1988; Howard, 2002). In the case of gaze azimuth and convergence, the use is more subtle. There is little evidence that the perceived head-centric direction of a stimulus corresponds to the gaze azimuth indicated by its vertical disparity field (Banks et al., 2002;

Berends et al., 2002). Rather, retinal estimates of gaze azimuth and vergence seem to be used to convert horizontal disparity directly into estimates of surface slant and curvature.

Here, we briefly sketch this process, showing how the eye position parameters obtained from Equations 32–36 can be used to interpret horizontal disparity, αΔ. In the domain we are considering, horizontal disparity differs from vertical disparity in that it is affected by the viewed scene as well as eye position (recall that Equation 29 showed that, to lowest order, vertical disparity is independent of scene structure). The horizontal disparity itself depends on the distance of the viewed surface relative to fixation, δ, while its first derivatives reflect the surface's slant. Again retaining terms to lowest order, it can be shown that

αΔ ≈ δ HΔ − Tc VΔ + κc TΔ,
αΔα ≈ δα HΔ − HΔ(Hc − αc) − κc VΔ,
αΔκ ≈ δκ HΔ − HΔ κc + (Hc − αc) VΔ + TΔ,   (40)

where δα is the rate of change of δ as we move horizontally in the visual field, δα = ∂δ/∂α|κ. It is a measure of surface slant about a vertical axis, and δκ, defined analogously, reflects surface slant about a horizontal axis. δ is a dimensionless quantity, the fractional distance from the fixation point, but the derivative δα is approximately equal to Rα/R, where R is the distance to the fixated surface and Rα is the rate at which this distance changes as a function of visual field azimuth. Thus, δα is the tangent of the angle of slant about a vertical axis, while δκ represents the tangent of the angle of slant about a horizontal axis. We can invert Equation 40 to obtain estimates of surface distance and slant in terms of horizontal disparity and eye position, and then substitute in the eye position parameters estimated from vertical

Figure 12. Scatterplots of estimated eye position parameters against actual values, both in degrees, for 1000 different simulated eye positions. Black lines show the identity line. Some points with large errors fall outside the range of the plots, but the quoted median absolute errors are for all 1000 simulations. On each simulation run, eye position was estimated as follows. First, the viewed surface was randomly generated. Head-centered X and Y coordinates were generated randomly near the fixation point (XF, YF, ZF). Surface Z-coordinates were generated from Zd = Σij aij Xd^i Yd^j, where Xd is the X-position relative to fixation, Xd = X − XF (Yd, Zd similarly, all in centimeters), i and j both run from 0 to 3, and the coefficients aij are picked from a uniform random distribution between ±0.02 on each simulation run. This yielded a set of points on a randomly chosen smooth 3D surface near fixation. These points were then projected to the retinas, and the vertical disparity within 0.5° of the fovea was fitted with a parabolic surface. This simulation is Matlab program ExtractEyePosition.m in the Supplementary material.



disparity. Note that the estimates of surface slant are unaffected by small amounts of cycloversion, Tc. This is convenient for us, since cycloversion was the aspect of eye position captured least successfully by our approximate expressions (Figure 12).

From Equations 32–36 and Equation 40, we can solve for δ and its derivatives in terms of horizontal and vertical disparities and their derivatives:

δ ≈ (αΔ + κc κΔα)/κΔακ + κΔκκ κΔ/(κΔακ)²,
δα ≈ (αΔα − κΔκ − κc κΔ)/κΔακ,   (41)

δκ ≈ [(αΔκ + κΔα) κΔακ − κΔ κΔκ]/(κΔακ)² + κc.   (42)


These expressions relate horizontal and vertical retinal disparities directly to surface properties, without any explicit dependence on eye position. It seems that the visual system does something similar. As many previous workers have noted, the visual system appears to use purely local estimates, with no attempt to enforce global consistency in the underlying eye postures implicit in these relationships. Thus, values of surface slant consistent with opposite directions of gaze (left versus right) can simultaneously be perceived at different locations in the visual field (Allison, Rogers, & Bradshaw, 2003; Kaneko & Howard, 1997a; Pierce & Howard, 1997; Rogers & Koenderink, 1986; Serrano-Pedraza, Phillipson, & Read, submitted for publication). Enforcing consistency across different points of the visual field would require lateral connections that might be quite costly in cortical wiring and would be completely pointless, since in the real world eye position must always be constant across the visual field (Adams et al., 1996; Garding et al., 1995).
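The cancellation of the eye position terms in Equations 41 and 42 can be verified directly. In the Python sketch below (ours; all numerical values are illustrative assumptions), the disparities are first generated from the lowest-order forward relations (Equations 32–36 and 40, with αc = 0), and the surface parameters are then recovered from the disparities alone:

```python
# Assumed eye posture and surface parameters (radians; all small, as the
# approximations require). Values are illustrative, not from the paper.
H_c, H_delta = 0.06, 0.05                     # gaze azimuth, convergence
T_c, T_delta, V_delta = 0.01, 0.008, 0.004    # cycloversion, cyclovergence, vertical vergence
delta, d_alpha, d_kappa = 0.02, 0.15, -0.10   # fractional distance and the two slants
kappa_c = 0.01                                # elevation of the retinal point considered

# Forward model: lowest-order disparities at this point (Equations 32-36 and 40)
kD, kD_a, kD_k = -V_delta, -T_delta, -H_c * H_delta
kD_ak, kD_kk = H_delta, -T_c * H_delta
aD   = delta * H_delta - T_c * V_delta + kappa_c * T_delta
aD_a = d_alpha * H_delta - H_delta * H_c - kappa_c * V_delta
aD_k = d_kappa * H_delta - H_delta * kappa_c + H_c * V_delta + T_delta

# Equations 41-42: surface distance and slants from the disparities alone,
# with no explicit mention of eye position
delta_est   = (aD + kappa_c * kD_a) / kD_ak + kD_kk * kD / kD_ak**2
d_alpha_est = (aD_a - kD_k - kappa_c * kD) / kD_ak
d_kappa_est = ((aD_k + kD_a) * kD_ak - kD * kD_k) / kD_ak**2 + kappa_c

print(delta_est, d_alpha_est, d_kappa_est)
```

Under this lowest-order forward model the inversion is exact: every eye position term cancels identically, which is precisely the point of Equations 41 and 42.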

Relationship to previous literature

There is a substantial literature on obtaining metric information about scene structure from two-dimensional disparity. It can be divided into two fairly distinct categories: "photogrammetric" and "psychological." The first comes mainly from the computer vision community (Hartley & Zisserman, 2000; Longuet-Higgins, 1981). Here, one uses the two-dimensional disparities of a limited number of point correspondences to solve for binocular eye position, and then back-projects to calculate each object's 3D location in space. The second approach is more common in the psychological literature (Backus & Banks, 1999; Backus et al., 1999; Banks et al., 2001; Banks et al., 2002; Kaneko & Howard, 1996, 1997b; Koenderink & van Doorn, 1976; Rogers & Bradshaw, 1993, 1995; Rogers & Cagenello, 1989). Here, one calculates quantities such as horizontal and vertical size

ratios, which are effectively local derivatives of disparity, and uses these either to extract estimates of eye position parameters (Banks et al., 2001; Kaneko & Howard, 1997b) or to move directly to scene properties such as surface slant, without computing an explicit estimate of eye position. These two approaches are closely related (Adams et al., 1996; Garding et al., 1995). In the photogrammetric approach, the point correspondences can be anywhere in the visual field (subject to certain restrictions, e.g., not all collinear; Longuet-Higgins, 1981). If the points all happen to be closely spaced together, then they contain the same information as the derivatives of disparity at that location. Thus, in this regard the psychological literature represents a special case of the photogrammetric approach: extracting eye position from a particular set of correspondences.

However, the photogrammetric literature does not provide explicit expressions for eye position in terms of disparity; rather, eye position is given implicitly, in large matrices that must be inverted numerically. Because the treatment is fully general, the distinction between horizontal and vertical disparities is not useful (e.g., because a torsion of 90° transforms one into the other, or because some epipolar lines become vertical as gaze azimuth approaches 90°). Thus, in the machine vision literature, disparity is considered as a vector quantity, rather than analyzed as two separate components. The psychological literature is less general but offers a more intuitive understanding of how eye position affects disparity in the domain most relevant to natural stereo viewing (objects close to the fovea, eyes close to primary position). As we saw in the previous section, in this domain, disparity decomposes naturally into horizontal and vertical components, which have different properties. Critically, in this domain, vertical disparity is essentially independent of scene structure, and eye position can be estimated from this component alone.

However, as far as we are aware, no paper gives explicit expressions for all eye position parameters in terms of retinal vertical disparity. Much of the psychological literature jumps straight from disparity derivatives to properties such as surface slant, without making explicit the eye position estimates on which these implicitly depend. In addition, the psychological literature can be hard to follow, because it does not always make it clear exactly what definition of disparity is being used. Sometimes, the derivation appears to use optic array disparity, so it is not clear how the brain could proceed given only retinal disparity; or the derivation appears to rely on special properties of the scene (e.g., it considers a vertically oriented patch), and it is not clear how the derivation would proceed if this property did not hold. Our derivation makes no assumptions about surface orientation and is couched explicitly in retinal disparity.

Our expression for δα, Equation 41, is a version of the well-known expressions deriving surface slant from horizontal and vertical size ratios (Backus & Banks,


Page 21: Latitude and longitude vertical disparitiessxs05ag/pub/rpg2009/rpg2009_preprint.pdf · 12 Latitude and longitude vertical disparities 3 45 Jenny C. A. Read Institute of Neuroscience,

1999; Backus et al., 1999; Banks et al., 2002; Banks et al., 2001; Kaneko & Howard, 1996, 1997b; Koenderink & van Doorn, 1976; Rogers & Bradshaw, 1993, 1995; Rogers & Cagenello, 1989). "Horizontal size ratio" or HSR is closely related to the rate of change of horizontal disparity as a function of horizontal position in the visual field, whereas "vertical size ratio" reflects the gradient of vertical disparity as a function of vertical position. In the notation of Backus and Banks (1999), for example, which defines HSR and VSR around the fixation point,

ln(HSR) ≈ αΔα,   ln(VSR) ≈ κΔκ.   (43)

In their notation, S is surface slant, so our δα = tan(S), and convergence, our HΔ ≈ κΔακ, is μ. Thus, if there is no vertical disparity at the fovea, Equation 41 becomes

δα ≈ tan S ≈ (αΔα − κΔκ)/κΔακ ≈ (1/μ) ln(HSR/VSR),   (44)

which is Equation 1 of Backus et al. (1999) and Backus and Banks (1999).

This relationship has been proposed as an explanation of the induced effect. In the induced effect, one eye's image is stretched vertically by a factor m about the fixation point, thus adding a term mκc to the vertical disparity field. The vertical disparity at the fovea is still zero, and the only vertical disparity derivative to be affected is κΔκ, which gains a term m. This causes a misestimate of surface slant about a vertical axis:

(∂δ/∂α)_est ≈ (∂δ/∂α)_true − m/H_Δ.   (45)
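Equations 44 and 45 are easy to explore numerically. The sketch below is our own illustration, not code from the paper; the function name and the example values are ours. It estimates slant from HSR and VSR, then shows the bias produced by an induced-effect vertical magnification m:

```python
import math

def slant_from_size_ratios(hsr, vsr, mu):
    """Surface slant S about a vertical axis from horizontal and vertical
    size ratios, following Equation 44: tan S ~ (1/mu) ln(HSR/VSR),
    where mu is the convergence angle in radians."""
    return math.atan(math.log(hsr / vsr) / mu)

mu = 0.06             # convergence of roughly 3.4 deg, e.g., a nearby surface
hsr, vsr = 1.0, 1.0   # frontoparallel surface viewed head-on: no slant
m = 0.02              # induced effect: 2% vertical magnification of one image

true_slant = slant_from_size_ratios(hsr, vsr, mu)
# Vertically magnifying one eye's image multiplies VSR by (1 + m), so the
# slant estimate acquires a bias of roughly m/mu (Equation 45).
biased_slant = slant_from_size_ratios(hsr, vsr * (1 + m), mu)
print(math.degrees(true_slant), math.degrees(biased_slant))
```

With these values, a 2% vertical magnification at a convergence of 0.06 rad mimics a slant of roughly 18 degrees, even though the surface itself is frontoparallel.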

Size-ratio-based theories of the induced effect are often contrasted with the photogrammetric approach based on misestimates of gaze angle (Clement, 1992; Garding et al., 1995; Longuet-Higgins, 1982; Mayhew & Longuet-Higgins, 1982; Ogle, 1952). Size-ratio theories use local disparity derivatives to produce a direct estimate of slant. Photogrammetric theories use a set of point correspondences distributed across the whole retina. These are used to obtain an estimate of eye position, which is then used to interpret horizontal disparity. The treatment here makes clear that the mathematics underlying both theories is really the same. Suppose that in the photogrammetric approach, the point correspondences are close together in the visual field. The points project to slightly different points on the cyclopean retina and have slightly different disparities. We can express these differences as disparity gradients on the cyclopean retina, or equivalently as size ratios. From these disparity gradients we can derive eye posture, and hence the surface distance and slant. Thus, both size-ratio and photogrammetric explanations of the induced effect rely, mathematically, on the fact that vertical magnification can be interpreted as a misestimate

of gaze angle. This is obscured in Equation 44 because there is no explicit mention of gaze angle, but in fact, as we see by comparing Equations 32–36 and Equation 40, the reason that VSR is useful in interpreting horizontal disparity is that it acts as a proxy for gaze angle (Adams et al., 1996).

The real difference between the theories is the scale at which they operate (Adams et al., 1996; Garding et al., 1995). Mayhew and Longuet-Higgins originally described an algorithm for fitting a unique eye posture to the correspondences across the whole retina (Mayhew, 1982; Mayhew & Longuet-Higgins, 1982). This would seem to be the best strategy for obtaining the most reliable estimate of eye position, and for that reason it is the approach used in computer vision. However, as noted in the Discussion section, the brain appears to proceed locally, at least in the case of surface slant. That is, it directly estimates surface slant from local disparity derivatives, as in Equation 41, without checking that the eye postures implied by these local derivatives are globally consistent. There seems to be considerable inter-subject variation in what "local" means, ranging from as large as 30° for some subjects down to 3° for others (Kaneko & Howard, 1997a; Serrano-Pedraza et al., submitted for publication).


Discussion

Vertical disparity has been much discussed in recent years (Adams et al., 1996; Backus & Banks, 1999; Backus et al., 1999; Banks & Backus, 1998; Banks et al., 2002; Banks et al., 2001; Berends & Erkelens, 2001; Berends et al., 2002; Bishop, 1989; Brenner et al., 2001; Clement, 1992; Cumming, Johnston, & Parker, 1991; Garding et al., 1995; Gillam, Chambers, & Lawergren, 1988; Kaneko & Howard, 1997a; Longuet-Higgins, 1981, 1982; Mayhew, 1982; Mayhew & Longuet-Higgins, 1982; Read & Cumming, 2006; Rogers & Bradshaw, 1993; Schreiber et al., 2001; Serrano-Pedraza et al., submitted for publication; Serrano-Pedraza & Read, 2009; Stenton, Frisby, & Mayhew, 1984; Stevenson & Schor, 1997). However, progress has been hampered by the lack of a clear agreed set of definitions. In the Introduction section, we identified no fewer than four definitions of vertical disparity: two types of optic-array disparity, and two types of retinal disparity. Individual papers are not always as clear as they could be about which definition they are using, and the different properties of the different definitions are not widely appreciated. This means that different papers may appear at first glance to contradict one another.

In this paper, we aim to clarify the situation by identifying two definitions of retinal vertical disparity that are in common use in the literature. Vertical disparity is sometimes defined as the difference between the



elevation-longitude coordinates of the two retinal images of an object, sometimes as the difference in elevation latitude. Both definitions are valid and sensible, but they have rather different properties, as summarized in Table 1. The differences between the two types of vertical disparity are most significant for objects not at the fixation distance (Figures 8C and 8D), and in the visual periphery. The periphery is where retinal vertical disparities tend to be largest during natural viewing, which has motivated physiologists to investigate vertical disparity tuning there (Durand et al., 2002). Psychophysically, it has been shown that the perceived depth of centrally viewed disparities (Rogers & Bradshaw, 1993) can be influenced by manipulations of "vertical" disparities in the periphery (i.e., when the field of view is large). Thus, it is particularly important to clarify the difference between the alternative definitions of vertical disparity where stimuli fall on peripheral retina.

For objects close to the fixation point, the images fall close to the fovea in both eyes. Here, latitude and longitude definitions of vertical disparity reduce to the same quantity. In this regime, vertical disparity is much less strongly affected than horizontal disparity by small variations in depth relative to the fixation point. Where this variation is small, it can be treated as independent of surface structure. We have derived expressions giving estimates of each eye position parameter, except elevation, in terms of vertical disparity and its derivatives at the fovea. Although these are only approximations, they perform fairly well in practice (Figure 12). These expressions are closely related to the vertical size ratios discussed in the literature (Backus & Banks, 1999; Backus et al., 1999; Banks et al., 2002; Gillam & Lawergren, 1983; Kaneko & Howard, 1996, 1997b; Koenderink & van Doorn, 1976; Liu, Stevenson, & Schor, 1994; Rogers & Bradshaw, 1993).

Little if anything in this paper will be new to experts in vertical disparity. However, for non-cognoscenti, we hope that it may clarify some points that can be confusing. Even for experts, it may serve as a useful reference. We identify, in particular, four areas where we hope this paper makes a useful contribution.

1. Previous derivations have often been couched in terms of head-centric disparity or have assumed that the surfaces viewed have special properties, such as being oriented vertically. Our derivations are couched entirely in terms of retinal images and do not assume the viewed surface has a particular orientation. We feel this may provide a more helpful mathematical language for describing the properties of disparity encoding in early visual cortex.

2. We present analytical expressions for both elevation-longitude and elevation-latitude vertical disparities that are valid across the entire retina, for arbitrary gaze angles and cycloversion, and for non-zero vertical vergence and cyclovergence. Much previous analysis has relied on parafoveal approximations and has assumed zero vertical vergence, cycloversion, and cyclovergence.

3. We present analytical expressions for the average vertical disparity expected at each position in the visual field, up to a scale factor representing the mean convergence.

4. Explanations relating the perceptual effects of vertical disparity to disparity gradients have sometimes been contrasted with those based on explicit estimates of eye position (Garding et al., 1995; Longuet-Higgins, 1982; Mayhew & Longuet-Higgins, 1982). This paper is the first to give explicit (though approximate) expressions for 5 binocular eye position parameters in terms of retinal vertical disparity at the fovea. The way in which all 5 eye position parameters can be derived immediately from vertical disparity derivatives has not, as far as we are aware, been laid out explicitly before. Thus, this paper clarifies the underlying unity of gaze-angle and vertical-size-ratio explanations of vertical-disparity illusions such as the induced effect.

Binocular eye position is specified by 6 parameters, 5 of which we have been able to derive from the vertical disparity field around the fovea. The exception is elevation. All the other parameters have a meaning as soon as the two optic centers are defined (and a zero-torsion line on the retina), whereas elevation needs an additional external reference frame to say where "zero elevation" is. Disparity is, to first order, independent of how this reference is chosen, meaning that elevation cannot be directly derived from disparity. However, in practice, the visual system obeys Donders' law, meaning that there is a unique relationship between elevation and torsion. The torsional states of both eyes can be deduced from the vertical disparity field, as laid out in Equations 33 and 36. Thus, in practice the brain could derive torsion from the gradient of vertical disparity and use this to obtain an estimate of elevation independent of oculomotor information regarding current eye position (although clearly it would rely on an association between torsion and elevation that would ultimately stem from the oculomotor system). It has already been suggested that the Listing's law relationship between torsion and elevation helps in solving the stereo correspondence problem (Schreiber et al., 2001; Tweed, 1997c). The fact that it enables elevation to be deduced from the two-dimensional disparity field may be another beneficial side effect.

Another benefit of the framework we have laid out is that it leads to a set of predictions about the physiological range of retinal disparities. The existing physiological literature does not test such predictions. For example, Durand et al. (2002) explain that "VD is naturally weak in the central part of the visual field and increases with retinal eccentricity" but then report their results in terms of head-centric Helmholtz disparity (Figure 1A), in which naturally occurring vertical disparities are always zero,



everywhere in the visual field. This makes it impossible to assess whether the results of Durand et al. are consistent with the natural distribution of retinal vertical disparities to which they drew attention in their introduction. This paper has emphasized the importance of calculating neuronal tuning as a function of retinal vertical disparity (whether elevation longitude or latitude). Our expressions for average vertical disparity as a function of position in the visual field predict the expected sign of vertical disparity preference. It is intuitively clear that in natural viewing, early cortical neurons viewing the top-right visual field should receive a diet of inputs in which the left half-image is higher on the retina than the right (Figure 3), and vice versa for those viewing the top-left visual field. One would expect the tuning of neurons to reflect this biased input. This simple qualitative prediction has not yet been discussed or examined in the physiological literature. Our analysis also makes quantitative predictions. For example, consider eccentricities of 5° and 15° in the direction "north-east" from the fovea. Our calculations (Equation 13) predict that the mean vertical disparity tuning of V1 neurons at 15° eccentricity should be 9 times that at an eccentricity of 5°. This too could be tested by appropriate physiological investigations.

There are, of course, limitations to the quantitative predictions we can make from geometric considerations alone. To predict the expected range of vertical disparities at any retinal location requires a knowledge of the statistics of binocular eye movements (especially version and vergence) under natural viewing conditions. As recent studies have pointed out, such statistics are quite difficult to gather (Hibbard, 2007; Liu et al., 2008), but they are crucial if the diet of 2D disparities received by binocular neurons across the retina is to be estimated accurately.
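The factor of 9 quoted above follows from the quadratic growth of mean vertical disparity with eccentricity. As a trivial check (the helper function is ours and assumes the purely quadratic scaling, not the full Equation 13):

```python
def mean_vd_ratio(ecc1_deg, ecc2_deg):
    """Predicted ratio of mean vertical disparity at two eccentricities,
    assuming mean vertical disparity grows as eccentricity squared
    (our reading of the quadratic scaling in Equation 13)."""
    return (ecc1_deg / ecc2_deg) ** 2

print(mean_vd_ratio(15, 5))  # -> 9.0
```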

Conclusion

The term "vertical disparity" is common in the stereo literature, and the impression is often given that it has an established definition and familiar properties. In fact, neither of these assumptions holds. If the terms "vertical" and "horizontal" are to continue to be used in discussions of binocular disparity, and we argue here that there are reasons in favor of doing so, it is critical that the respective definitions and properties should be set out explicitly, as we have done here.

Appendix A: Definitions

Subscripts

Table A1.

Symbols

Table A2.

Coordinate systems

Head-centered coordinate system (X, Y, Z) for object position in space

Figure A1 shows the right-handed head-centered coordinate system used throughout this paper. The X-axis points right, the Y-axis upward, and the Z-axis straight ahead of the observer. By definition, the nodal point of the left eye is at (X, Y, Z) = (i, 0, 0) and the nodal point of the right eye is at (X, Y, Z) = (−i, 0, 0), where i represents half the interocular distance I. The position of a point in space can be described as a vector, P = (X, Y, Z).

Eye posture

Each eye has potentially three degrees of freedom: two to specify the gaze direction (azimuth left/right and elevation up/down) and a third to specify the rotation of the eyeball around this axis (torsion). We adopt the Helmholtz coordinate system for describing eye posture (Figure 4). We start with the eye in primary position, looking straight forward so that its optic axis is parallel to the Z-axis (Figure A1). We define the torsion here to be zero. To move from this reference state, in which all three coordinates are zero, to a general posture with torsion T, azimuth H, and elevation V, we start by rotating the eyeball about the optic axis through the torsion angle T. Next, rotate the eye about a vertical axis, i.e., parallel to the Y-axis, through the gaze azimuth H. Finally, rotate the eye about a horizontal, i.e., interocular, axis through the gaze elevation V. We define these rotation angles to be anticlockwise around the head-centered coordinate axes. This means that we define positive torsion to be clockwise when viewed from behind the head, positive gaze azimuth to be to the observer's left, and positive elevation to be downward.

L   left eye
R   right eye
Δ   difference between left and right eye values, e.g., convergence angle H_Δ = H_R − H_L
δ   half-difference between left and right eye values, e.g., half-convergence H_δ = (H_R − H_L)/2
c   cyclopean eye (mean of left and right eye values), e.g., cyclopean gaze angle H_c = (H_R + H_L)/2

Table A1. Meaning of subscripts.



We use the subscripts L and R to indicate the left and right eyes (Table A1). Thus, V_L is the Helmholtz elevation of the left eye and V_R that of the right eye.

One advantage of Helmholtz coordinates is that it is particularly simple to see whether the eyes are correctly fixating, such that their optic axes intersect at a common fixation point. This occurs if, and only if, the Helmholtz elevations of the two eyes are identical and the optic axes are not diverging. Thus, any difference between V_L and V_R means that the eyes are misaligned. We refer to this as the vergence error, V_R − V_L. The difference in the Helmholtz gaze azimuths is the horizontal vergence angle, H_R − H_L. Negative values mean that the eyes are diverging.

In the mathematical expressions we shall derive below, the vergence angles will usually occur divided by two. We therefore introduce symbols for half the vergence angles. As shown in Table A1, these are indicated with the subscript δ:

H_δ ≡ (H_R − H_L)/2, and so on.   (A1)


We also introduce cyclopean gaze angles, which are the means of the left and right eyes. As shown in Table A1, these are indicated with the subscript c:

H_c ≡ (H_R + H_L)/2.   (A2)


Rotation matrices

Eye posture can be summarized by a rotation matrix M. So, for example, if we have a vector that is fixed with respect to the eye, then if the vector is initially r in head-centered coordinates when the eye is in its reference position, it will move to Mr when the eye adopts the posture specified by rotation matrix M. An eye's rotation matrix M depends on the eye's elevation V, gaze azimuth H, and torsion T. As above, we use subscripts L and R to indicate the left and right eyes. For the left eye, the rotation matrix is M_L = M_VL M_HL M_TL, where

M_VL = | 1     0         0       |
       | 0     cos V_L   −sin V_L |
       | 0     sin V_L    cos V_L |

M_HL = |  cos H_L   0   sin H_L |
       |  0         1   0       |
       | −sin H_L   0   cos H_L |

M_TL = | cos T_L   −sin T_L   0 |
       | sin T_L    cos T_L   0 |
       | 0          0         1 |     (A3)

where V_L, H_L, and T_L are the gaze elevation, gaze azimuth, and torsion of the left eye. The ordering of the matrix multiplication, M_L = M_VL M_HL M_TL, is critical, reflecting the definition of the Helmholtz eye coordinates.
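The Helmholtz construction in Equation A3 translates directly into code. The sketch below is our own helper (the function name is ours; angles are in radians):

```python
import numpy as np

def helmholtz_rotation(H, V, T):
    """Eye rotation matrix M = M_V @ M_H @ M_T for Helmholtz gaze azimuth H,
    elevation V, and torsion T (Equation A3). The multiplication order is
    essential to the Helmholtz convention."""
    MV = np.array([[1, 0, 0],
                   [0, np.cos(V), -np.sin(V)],
                   [0, np.sin(V),  np.cos(V)]])
    MH = np.array([[ np.cos(H), 0, np.sin(H)],
                   [0, 1, 0],
                   [-np.sin(H), 0, np.cos(H)]])
    MT = np.array([[np.cos(T), -np.sin(T), 0],
                   [np.sin(T),  np.cos(T), 0],
                   [0, 0, 1]])
    return MV @ MH @ MT

# Sanity check: any eye posture matrix is orthogonal, M @ M.T = I.
M = helmholtz_rotation(0.1, -0.05, 0.01)
assert np.allclose(M @ M.T, np.eye(3))
```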

I         interocular distance
i         half-interocular distance, i = I/2
k, l      integer counters taking on values 1, 2, 3
M_L, M_R  rotation matrices for the left and right eyes, respectively
M_c       cyclopean rotation matrix, M_c = (M_R + M_L)/2
M_δ       half-difference rotation matrix, M_δ = (M_R − M_L)/2
m         the vectors m_k are the three columns of the corresponding rotation matrix M, e.g., m_c1 = [M_c^11, M_c^21, M_c^31]; m_δ2 = [M_δ^12, M_δ^22, M_δ^32] (Equation A6)
H_L,R,c   gaze azimuth in the Helmholtz system for left, right, and cyclopean eyes
V_L,R,c   gaze elevation in the Helmholtz system for left, right, and cyclopean eyes
T_L,R,c   gaze torsion in the Helmholtz system for left, right, and cyclopean eyes
H_Δ       horizontal convergence angle
V_Δ       vertical vergence misalignment (non-zero values indicate a failure of fixation)
T_Δ       cyclovergence
X, Y, Z   position in space in Cartesian coordinates fixed with respect to the head (Figure A1)
X̂         unit vector parallel to the X-axis
P         vector representing position in space in head-centered coordinates: P = (X, Y, Z)
U, W, S   position in space in Cartesian coordinates fixed with respect to the cyclopean gaze. The S-axis is the optic axis of the cyclopean eye (see Figure 5)
R         distance of an object from the origin. R² = X² + Y² + Z² = U² + W² + S² (see Figure 5)
R₀        distance of the fixation point from the origin (or distance to the point where the gaze rays most nearly intersect, if the eyes are misaligned so that no exact intersection occurs)
δ         fractional difference between the fixation distance, R₀, and the distance to the object under consideration, R. That is, δ = (R − R₀)/R₀
x         horizontal position on the retina in the Cartesian coordinate system (Figure 2A)
y         vertical position on the retina in the Cartesian coordinate system (Figure 2A)
α         azimuth-longitude coordinate for horizontal position on the retina (Figures 2B and 2C)
η         elevation-longitude coordinate for vertical position on the retina (Figures 2B–2D)
β         azimuth-latitude or declination coordinate for horizontal position on the retina (Figures 2D and 2E)
κ         elevation-latitude or inclination coordinate for vertical position on the retina (Figures 2C–2E)
J         retinal eccentricity (Equation 14)

Table A2. Definition of symbols.



Obviously, analogous expressions hold for the right eye. Once again, it will be convenient to introduce the cyclopean rotation matrix, which is defined as the mean of the left- and right-eye rotation matrices:

M_c = (M_R + M_L)/2,   (A4)

and the half-difference rotation matrix:

M_δ = (M_R − M_L)/2.   (A5)

It will also be convenient to introduce vectors m that are the columns of these matrices:

m_c1 = [M_c^11, M_c^21, M_c^31];  m_c2 = [M_c^12, M_c^22, M_c^32];  m_c3 = [M_c^13, M_c^23, M_c^33];
m_δ1 = [M_δ^11, M_δ^21, M_δ^31];  m_δ2 = [M_δ^12, M_δ^22, M_δ^32];  m_δ3 = [M_δ^13, M_δ^23, M_δ^33],   (A6)

where M^kl indicates the entry in the kth row and lth column of matrix M.


Gaze-centered coordinate system for object position in space

We use the vectors m_ck to define a new coordinate system for describing an object's position in space. As well as the head-centered coordinate system (X, Y, Z), we introduce a coordinate system (U, W, S) centered on the direction of cyclopean gaze, as specified by the three Helmholtz angles H_c, V_c, and T_c. Whereas Z is the object's distance from the observer measured parallel to the "straight ahead" direction, S is the object's distance parallel to the line of gaze (Figure 5). The coordinates (U, W, S) are defined by writing the vector P = (X, Y, Z) as a sum of the three m_c vectors:

P = U m_c1 + W m_c2 + S m_c3.   (A7)

Retinal coordinate systems for image position on retina

The retina is at least roughly hemispherical, and treating it as perfectly hemispherical involves no loss of generality, since there is a one-to-one map between a hemisphere and a physiological retina. All the coordinate systems we shall consider are based on the vertical and horizontal retinal meridians. These are great circles on a spherical retina. They are named after their orientations when the eye is in its reference position, looking straight ahead parallel to the Z-axis in Figure A1. By definition, our retinal coordinate systems are fixed with respect to the retina, not the head, so as the eye rotates in the head, the "horizontal" and "vertical" meridians will in general no longer be horizontal or vertical in space. For this reason we shall call the angle used to specify "horizontal" location the azimuth α, and the angle used to specify "vertical" location the elevation η. Both azimuth and elevation can be defined as either latitude or longitude. This gives a total of 4 possible retinal coordinate systems (Figures 2B–2E). The azimuth-latitude/elevation-longitude coordinate system is the same Helmholtz system we have used to describe eye position (cf. Figure 1A). The azimuth-longitude/elevation-latitude coordinate system is the Fick system (cf. Figure 1B). One can also choose to use latitude or longitude for both directions. Such azimuth-longitude/elevation-longitude or azimuth-latitude/elevation-latitude systems have the disadvantage that the coordinates become ill-defined around the great circle at 90° to the fovea. However, this is irrelevant to stereopsis, since it is beyond the boundaries of vision. The azimuth-longitude/elevation-longitude coordinate system is very simply related to the Cartesian coordinate system, which is standard in the computer vision literature. We can imagine this as a virtual plane, perpendicular to the optic axis and at unit distance behind the nodal point (Figure 2A). To find the image of a point P, we imagine drawing a ray from point P through the nodal point N and seeing where this intersects the virtual plane (see Figure 3 of Read & Cumming, 2006). The ray has vector equation p = N + s(P − N), where s represents position along the ray. Points on the retina are given by the vector p = N − MẐ + xMX̂ + yMŶ, where x

Figure A1. Head-centered coordinate system used throughout this paper. The origin is the point midway between the two eyes. The X-axis is defined by the nodal points of the two eyes and points rightward. The orientation of the XZ plane is defined by primary position but is approximately horizontal. The Y-axis points upward and the Z-axis points in front of the observer.



and y are the Cartesian coordinates on the planar retina, and the rotation matrix M describes how this plane is rotated with respect to the head. Equating these two expressions for p, we find that

s(P − N) = −MẐ + xMX̂ + yMŶ.   (A8)

Multiplying the matrix M by the unit vectors simply picks off a column of the matrix, e.g., MX̂ = m_1. Using this, plus the fact that MX̂, MŶ, and MẐ are orthonormal, we find that the ray intersects the retina at the Cartesian coordinates

x = −[m_1 · (P − N)] / [m_3 · (P − N)],   y = −[m_2 · (P − N)] / [m_3 · (P − N)].   (A9)

It is sometimes imagined that the use of planar retinas involves a loss of generality or is only valid near the fovea, but in fact, no loss of generality is involved, since there is a one-to-one map from the virtual planar retina to the hemispherical retina.

Each coordinate system has a natural definition of "horizontal" or "vertical" disparity associated with it. Disparity is defined to be the difference between the horizontal and vertical coordinates of the two retinal images. So we immediately have three different definitions of retinal vertical disparity: (1) Cartesian vertical disparity, y_Δ = y_R − y_L; (2) elevation-longitude disparity, η_Δ = η_R − η_L; and (3) elevation-latitude disparity, κ_Δ = κ_R − κ_L. In Appendix B, we shall derive expressions for all 3 definitions.

It may also be useful to collect together here for reference the relationships between gaze-centered coordinates and the corresponding retinal coordinates. The equations in Table A3 show where an object located at (U, W, S) in gaze-centered coordinates projects to on the cyclopean retina, in different retinal coordinate systems. Table A4 gives the relationships between location on the retina in different coordinate systems.
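The projection of Equation A9 can be sketched in a few lines (our own helper, not code from the paper; the columns of M are m_1, m_2, m_3):

```python
import numpy as np

def project_to_planar_retina(P, N, M):
    """Cartesian image coordinates (x, y) of point P on the virtual planar
    retina at unit distance behind the nodal point N, for an eye with
    rotation matrix M (Equation A9)."""
    m1, m2, m3 = M[:, 0], M[:, 1], M[:, 2]
    d = P - N
    return (-np.dot(m1, d) / np.dot(m3, d),
            -np.dot(m2, d) / np.dot(m3, d))

# An eye in its reference position at the origin (M = identity): a point on
# the optic axis projects to the fovea, (x, y) = (0, 0).
x, y = project_to_planar_retina(np.array([0.0, 0.0, 2.0]),
                                np.zeros(3), np.eye(3))
assert x == 0 and y == 0
```

Note the minus signs: the retinal image is inverted, since the virtual plane sits behind the nodal point.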

Appendix B: Derivations

Relationships between the rotation vectors

The fact that rotation matrices are orthogonal means that certain simple relationships hold between the vectors

m_ck and m_δk defined in Equation A6. First, the inner product of any difference vector m_δk with the corresponding cyclopean vector m_ck is identically zero:

m_δk · m_ck = 0 for k = 1, 2, 3.   (B1)

This is actually a special case of the following more general statement:

m_δk · m_cl = −m_δl · m_ck for k, l = 1, 2, 3.   (B2)
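Since Equations B1 and B2 follow purely from the orthogonality of M_L and M_R, they can be checked numerically for arbitrary postures. The sketch below is our own (the Helmholtz-matrix helper follows Equation A3; posture values are arbitrary):

```python
import numpy as np

def helm(H, V, T):
    """Helmholtz rotation M_V @ M_H @ M_T (Equation A3), angles in radians."""
    MV = np.array([[1, 0, 0], [0, np.cos(V), -np.sin(V)], [0, np.sin(V), np.cos(V)]])
    MH = np.array([[np.cos(H), 0, np.sin(H)], [0, 1, 0], [-np.sin(H), 0, np.cos(H)]])
    MT = np.array([[np.cos(T), -np.sin(T), 0], [np.sin(T), np.cos(T), 0], [0, 0, 1]])
    return MV @ MH @ MT

ML, MR = helm(0.12, -0.03, 0.02), helm(0.05, -0.01, -0.01)
Mc, Md = (MR + ML) / 2, (MR - ML) / 2   # Equations A4 and A5

for k in range(3):
    # B1: each difference column is exactly orthogonal to its cyclopean partner.
    assert abs(np.dot(Md[:, k], Mc[:, k])) < 1e-12
    for l in range(3):
        # B2: the mixed inner products are antisymmetric in k and l.
        assert abs(np.dot(Md[:, k], Mc[:, l]) + np.dot(Md[:, l], Mc[:, k])) < 1e-12
```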

Equations B1 and B2 are exact and do not depend on any approximations at all.

To obtain the values of these dot products, we need to use Equation A3 to derive expressions for M_c and M_δ in terms of the 6 Helmholtz gaze parameters for the two eyes: H_L, V_L, T_L, H_R, V_R, T_R. We can then use trigonometric identities to re-express these in terms of the cyclopean (half-sum) and vergence (half-difference) equivalents: H_c, V_c, T_c, H_δ, V_δ, T_δ. Needless to say, this yields extremely complicated expressions. However, we now introduce the first critical approximation of this paper. We assume that differences in eye posture are small. We therefore work to first order in the half-convergence H_δ, the vertical vergence half-error V_δ, and the half-cyclovergence T_δ, i.e., we replace terms like cos H_δ with 1, and we neglect terms in sin²H_δ, sin H_δ · sin V_δ, and so on. Under these approximations, the 3 m_c and the 3 m_δ are approximately orthonormal, i.e.,

m_ck · m_cl ≈ 1 if k = l and ≈ 0 otherwise;
m_δk · m_δl ≈ 1 if k = l and ≈ 0 otherwise,   (B3)

and we obtain the following simple expressions for inner products of an m_c and an m_δ vector:

m_δ1 · m_c2 = −m_δ2 · m_c1 ≈ T_δ + V_δ sin H_c,
m_δ2 · m_c3 = −m_δ3 · m_c2 ≈ H_δ sin T_c + V_δ cos H_c cos T_c,
m_δ1 · m_c3 = −m_δ3 · m_c1 ≈ −H_δ cos T_c + V_δ cos H_c sin T_c.   (B4)

Notice that if the eyes are correctly fixating (V_δ = 0) and there is no torsion (T_c = T_δ = 0), then the only non-zero inner product is m_δ1 · m_c3 ≈ −H_δ.

U ≈ −S x_c;            W ≈ −S y_c;            R² = U² + W² + S² = X² + Y² + Z²
U = −S tan α_c;        W = −S tan κ_c sec α_c;  S = R cos α_c cos κ_c
U = −S tan β_c sec η_c;  W = −S tan η_c;        S = R cos β_c cos η_c

Table A3. The relationship between the quantities (U, W, S), giving an object's location in gaze-centered coordinates (cf. Figure 5), and that object's projection onto the cyclopean retina. The projection is given in planar Cartesian coordinates (x_c, y_c) and as azimuth longitude α_c, elevation longitude η_c, azimuth latitude β_c, and elevation latitude κ_c. The object's head-centered coordinates (X, Y, Z) will depend on eye position.



Below, we shall also encounter the inner products m_c1 · X̂, m_c2 · X̂, and m_c3 · X̂, where X̂ is a unit vector along the X-axis. These are the entries in the top row of the cyclopean rotation matrix, which under the above approximation are

m_c1 · X̂ = M_c^11 ≈ cos H_c cos T_c,
m_c2 · X̂ = M_c^12 ≈ −sin T_c cos H_c,
m_c3 · X̂ = M_c^13 ≈ sin H_c.   (B5)

We shall also use the following:

2 m_δ1 · P ≈ W (T_Δ + sin H_c V_Δ) + S (−cos T_c H_Δ + cos H_c sin T_c V_Δ),
2 m_δ2 · P ≈ −U (T_Δ + sin H_c V_Δ) + S (sin T_c H_Δ + cos H_c cos T_c V_Δ),
2 m_δ3 · P ≈ U (cos T_c H_Δ − cos H_c sin T_c V_Δ) − W (sin T_c H_Δ + cos H_c cos T_c V_Δ).   (B6)

Deriving expressions for retinal disparity

Disparity in Cartesian coordinates on a planar retina

The utility of the above expressions will now become clear. Suppose that an object's position in space is represented by the vector P = (X, Y, Z) in head-centered coordinates. Then, the object projects onto the left retina at a point given by (x_L, y_L) in Cartesian coordinates, where (Equation A9)

x_L = −[m_L1 · (P − N_L)] / [m_L3 · (P − N_L)],   y_L = −[m_L2 · (P − N_L)] / [m_L3 · (P − N_L)],   (B7)

where N_L is the vector from the origin to the nodal point of the left eye, and m_Lk is the kth column of the left eye's rotation matrix M_L. For the left eye, we have N_L = iX̂, where X̂ is a unit vector along the X-axis and i is half the interocular distance, while for the right eye, N_R = −iX̂. We shall also rewrite the left and right eyes' rotation vectors, m_L and m_R, in terms of the half-sum and half-difference between the two eyes:

\[ \mathbf{m}_L = \mathbf{m}_c - \mathbf{m}_\Delta, \qquad \mathbf{m}_R = \mathbf{m}_c + \mathbf{m}_\Delta. \tag{B8} \]

The image in the left eye is then

\[
x_L = -\frac{(\mathbf{m}_{c1} - \mathbf{m}_{\Delta 1})\cdot(\mathbf{P} - i\hat{\mathbf{X}})}{(\mathbf{m}_{c3} - \mathbf{m}_{\Delta 3})\cdot(\mathbf{P} - i\hat{\mathbf{X}})},\tag{B9}
\]
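The projection rule in Equations B7–B9 is easy to experiment with numerically. The sketch below is ours, not the paper's: it assumes an elevation-then-azimuth-then-torsion rotation order, chosen so that the top row of the resulting matrix reproduces Equation B5 (the paper's Appendix A, not reproduced here, fixes the actual convention).

```python
import numpy as np

def rot_x(a):  # elevation: rotation about the interocular-parallel X axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # azimuth: rotation about the vertical Y axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # torsion: rotation about the straight-ahead Z axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def eye_matrix(H, V, T):
    """Eye rotation matrix for azimuth H, elevation V, torsion T.
    The order (elevation outermost, then azimuth, then torsion) is an
    assumed convention; it reproduces the top row of Equation B5:
    M11 = cos(H)cos(T), M12 = -sin(T)cos(H), M13 = sin(H)."""
    return rot_x(V) @ rot_y(H) @ rot_z(T)

def project(P, N, M):
    """Planar retinal image (Equation B7) of the head-centered point P,
    for an eye with nodal point N and rotation matrix M; m_k = M[:, k-1]."""
    d = P - N
    return (-np.dot(M[:, 0], d) / np.dot(M[:, 2], d),
            -np.dot(M[:, 1], d) / np.dot(M[:, 2], d))
```

For example, the left eye's image of a point P is `project(P, np.array([i, 0, 0]), eye_matrix(HL, VL, TL))`, with i half the interocular distance.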

The five retinal coordinate systems of Figure 2 are planar Cartesian (x, y) (Figure 2A); azimuth longitude, elevation longitude (α, η) (Figure 2B); azimuth longitude, elevation latitude (α, κ) (Fick; Figure 2C); azimuth latitude, elevation longitude (β, η) (Helmholtz; Figure 2D); and azimuth latitude, elevation latitude (β, κ) (Figure 2E). They are related as follows:

\[
\begin{aligned}
x &= \tan\alpha = \tan\beta\sec\eta = \frac{\sin\beta}{\sqrt{\cos^2\kappa - \sin^2\beta}}, &
y &= \tan\eta = \tan\kappa\sec\alpha = \frac{\sin\kappa}{\sqrt{\cos^2\beta - \sin^2\kappa}},\\
\alpha &= \arctan(x) = \arctan(\tan\beta\sec\eta) = \arcsin(\sin\beta\sec\kappa), &
\eta &= \arctan(y) = \arctan(\tan\kappa\sec\alpha) = \arcsin(\sin\kappa\sec\beta),\\
\beta &= \arctan\!\left(\frac{x}{\sqrt{y^2+1}}\right) = \arctan(\tan\alpha\cos\eta) = \arcsin(\sin\alpha\cos\kappa), &
\kappa &= \arctan\!\left(\frac{y}{\sqrt{x^2+1}}\right) = \arctan(\tan\eta\cos\alpha) = \arcsin(\sin\eta\cos\beta).
\end{aligned}
\]

Table A4. Relationships between the different retinal coordinate systems shown in Figure 2.
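The conversions in Table A4 are easy to check numerically. The sketch below (function names are ours, not the paper's) implements the planar-to-angular definitions and two of the inverse mappings:

```python
import numpy as np

def planar_to_angles(x, y):
    """Angular coordinates of the planar retinal point (x, y): azimuth and
    elevation longitude (alpha, eta) and azimuth and elevation latitude
    (beta, kappa), as in Table A4."""
    alpha, eta = np.arctan(x), np.arctan(y)
    beta = np.arctan(x / np.hypot(y, 1.0))    # tan(beta) = x / sqrt(y^2 + 1)
    kappa = np.arctan(y / np.hypot(x, 1.0))   # tan(kappa) = y / sqrt(x^2 + 1)
    return alpha, eta, beta, kappa

def fick_to_planar(alpha, kappa):
    """Invert the Fick pair: x = tan(alpha), y = tan(kappa) sec(alpha)."""
    return np.tan(alpha), np.tan(kappa) / np.cos(alpha)

def helmholtz_to_planar(beta, eta):
    """Invert the Helmholtz pair: x = tan(beta) sec(eta), y = tan(eta)."""
    return np.tan(beta) / np.cos(eta), np.tan(eta)
```

Round-tripping any point through either angular pair recovers (x, y), and the latitude/longitude identities of the table (e.g. sin α = sin β sec κ) hold to machine precision.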


while the expression for the right eye is the same but with the signs of i and m_Δ reversed:

\[
x_R = -\frac{(\mathbf{m}_{c1} + \mathbf{m}_{\Delta 1})\cdot(\mathbf{P} + i\hat{\mathbf{X}})}{(\mathbf{m}_{c3} + \mathbf{m}_{\Delta 3})\cdot(\mathbf{P} + i\hat{\mathbf{X}})}.\tag{B10}
\]

Thus, there are two distinct sources of retinal disparity. One of them arises from the fact that the eyes are in different locations in the head and contributes terms in i. The other arises from the fact that the eyes may point in different directions and contributes terms in m_Δ. We shall see these two sources emerging in all our future expressions for binocular disparity. We now make the approximation that both sources of disparity, i and m_Δ, are small. We carry out a Taylor expansion in which we retain only first-order terms in these quantities. To do this, it is helpful to introduce dummy quantities s and j, where m_{Δj} = εs_j and i = εj, and the variable ε is assumed to be so small that we can ignore terms in ε²:

\[
x_L = -\frac{(\mathbf{m}_{c1} - \epsilon\mathbf{s}_1)\cdot(\mathbf{P} - \epsilon j\hat{\mathbf{X}})}{(\mathbf{m}_{c3} - \epsilon\mathbf{s}_3)\cdot(\mathbf{P} - \epsilon j\hat{\mathbf{X}})}
\approx -\frac{\mathbf{m}_{c1}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}}
\left[1 - \frac{\epsilon j\,\mathbf{m}_{c1}\cdot\hat{\mathbf{X}}}{\mathbf{m}_{c1}\cdot\mathbf{P}} - \frac{\epsilon\,\mathbf{s}_1\cdot\mathbf{P}}{\mathbf{m}_{c1}\cdot\mathbf{P}} + \frac{\epsilon j\,\mathbf{m}_{c3}\cdot\hat{\mathbf{X}}}{\mathbf{m}_{c3}\cdot\mathbf{P}} + \frac{\epsilon\,\mathbf{s}_3\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}} + O(\epsilon^2)\right].\tag{B11}
\]

Now removing the dummy variables, we have an expression for x_L under the small-eye-difference approximation:

\[
x_L \approx -\frac{\mathbf{m}_{c1}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}}
\left[1 + \frac{i\,\mathbf{m}_{c3}\cdot\hat{\mathbf{X}} + \mathbf{m}_{\Delta 3}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}} - \frac{i\,\mathbf{m}_{c1}\cdot\hat{\mathbf{X}} + \mathbf{m}_{\Delta 1}\cdot\mathbf{P}}{\mathbf{m}_{c1}\cdot\mathbf{P}} + O(\epsilon^2)\right].\tag{B12}
\]

Again, the expression for x_R is the same but with the signs of i and m_Δ reversed. The expressions for y are the same except with subscripts 1 replaced with 2. We can therefore derive the following expressions for the cyclopean position of the image:

\[
x_c = \frac{x_R + x_L}{2} \approx -\frac{\mathbf{m}_{c1}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}},\qquad
y_c = \frac{y_R + y_L}{2} \approx -\frac{\mathbf{m}_{c2}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}},\tag{B13}
\]
while for the Cartesian disparity, we obtain

\[
\begin{aligned}
x_\Delta = x_R - x_L &\approx 2\,\frac{\mathbf{m}_{c1}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}}
\left[\frac{i\,\mathbf{m}_{c3}\cdot\hat{\mathbf{X}} + \mathbf{m}_{\Delta 3}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}} - \frac{i\,\mathbf{m}_{c1}\cdot\hat{\mathbf{X}} + \mathbf{m}_{\Delta 1}\cdot\mathbf{P}}{\mathbf{m}_{c1}\cdot\mathbf{P}}\right],\\
y_\Delta = y_R - y_L &\approx 2\,\frac{\mathbf{m}_{c2}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}}
\left[\frac{i\,\mathbf{m}_{c3}\cdot\hat{\mathbf{X}} + \mathbf{m}_{\Delta 3}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}} - \frac{i\,\mathbf{m}_{c2}\cdot\hat{\mathbf{X}} + \mathbf{m}_{\Delta 2}\cdot\mathbf{P}}{\mathbf{m}_{c2}\cdot\mathbf{P}}\right].
\end{aligned}\tag{B14}
\]

Expressions for m_{cj}·X̂ were given in Equation B5. Now, instead of specifying P = (X, Y, Z) in head-centered coordinates, we move to the gaze-centered coordinate system (U, W, S) in which an object's position is specified relative to the cyclopean gaze direction (Equation A7):

\[ \mathbf{P} = U\mathbf{m}_{c1} + W\mathbf{m}_{c2} + S\mathbf{m}_{c3}. \tag{B15} \]

Now recall that the inner product of any difference vector m_{Δj} with the corresponding cyclopean vector m_{cj} is identically zero (Equation B1). Thus, the term m_{Δ3}·P is independent of the object's distance measured along the cyclopean gaze direction, S:

\[ \mathbf{m}_{\Delta 3}\cdot\mathbf{P} = U\,\mathbf{m}_{\Delta 3}\cdot\mathbf{m}_{c1} + W\,\mathbf{m}_{\Delta 3}\cdot\mathbf{m}_{c2}. \tag{B16} \]

Using the relationships between the various m vectors (Equations B1–B3), we obtain

\[ x_c \approx -\frac{U}{S}, \qquad y_c \approx -\frac{W}{S}, \tag{B17} \]

which is in fact obvious given the definition of the cyclopean retina and the cyclopean gaze-centered coordinate system. For the disparity, we obtain

\[
\begin{aligned}
x_\Delta &\approx \frac{I}{S^2}\left(U M^c_{13} - S M^c_{11}\right) - \frac{2}{S^2}\left[\left(U^2 + S^2\right)\mathbf{m}_{\Delta 1}\cdot\mathbf{m}_{c3} + UW\,\mathbf{m}_{\Delta 2}\cdot\mathbf{m}_{c3} + SW\,\mathbf{m}_{\Delta 1}\cdot\mathbf{m}_{c2}\right],\\
y_\Delta &\approx \frac{I}{S^2}\left(W M^c_{13} - S M^c_{12}\right) - \frac{2}{S^2}\left[\left(W^2 + S^2\right)\mathbf{m}_{\Delta 2}\cdot\mathbf{m}_{c3} + UW\,\mathbf{m}_{\Delta 1}\cdot\mathbf{m}_{c3} - SU\,\mathbf{m}_{\Delta 1}\cdot\mathbf{m}_{c2}\right].
\end{aligned}\tag{B18}
\]

Expressions for the vector inner products, valid under the approximation we are considering, were given in Equations B4 and B5. Substituting these, using the small-angle approximation for the Δ quantities, we obtain the following expressions for an object's horizontal and vertical disparities in Cartesian planar coordinates, expressed as a function of its spatial location in gaze-centered coordinates:

\[
\begin{aligned}
x_\Delta \approx{}& \frac{I}{S}\left[\frac{U}{S}H_c^{\sin} - H_c^{\cos}T_c^{\cos}\right]
+ \left[\left(\frac{U^2}{S^2}+1\right)T_c^{\cos} - \frac{UW}{S^2}T_c^{\sin}\right]H_\Delta\\
&- \left[\left(\frac{U^2}{S^2}+1\right)H_c^{\cos}T_c^{\sin} + \frac{UW}{S^2}H_c^{\cos}T_c^{\cos} + \frac{W}{S}H_c^{\sin}\right]V_\Delta - \frac{W}{S}T_\Delta,\\
y_\Delta \approx{}& \frac{I}{S}\left[\frac{W}{S}H_c^{\sin} + T_c^{\sin}H_c^{\cos}\right]
+ \left[\frac{UW}{S^2}T_c^{\cos} - \left(\frac{W^2}{S^2}+1\right)T_c^{\sin}\right]H_\Delta\\
&+ \left[\frac{U}{S}H_c^{\sin} - \left(\frac{W^2}{S^2}+1\right)H_c^{\cos}T_c^{\cos} - \frac{UW}{S^2}H_c^{\cos}T_c^{\sin}\right]V_\Delta + \frac{U}{S}T_\Delta,
\end{aligned}\tag{B19}
\]


where to save space we have again defined T_c^{cos} = cos T_c, and so on. Here, the disparity is expressed as a function of the object's position in space, (U, W, S). However, this is not very useful, since the brain has no direct access to this. It is more useful to express disparities in terms of (x_c, y_c), the position on the cyclopean retina or equivalently the visual direction currently under consideration, together with the distance to the object along the cyclopean gaze, S. The brain has direct access to the retinal position (x_c, y_c), leaving distance S as the sole unknown, to be deduced from the disparity. Then we obtain the following expressions for an object's horizontal and vertical disparities in Cartesian planar coordinates, expressed as a function of its retinal location in Cartesian planar coordinates:

\[
\begin{aligned}
x_\Delta \approx{}& -\left[x_c H_c^{\sin} + H_c^{\cos}T_c^{\cos}\right]\frac{I}{S}
+ \left[\left(x_c^2+1\right)T_c^{\cos} - x_c y_c T_c^{\sin}\right]H_\Delta\\
&+ \left[y_c H_c^{\sin} - \left(x_c^2+1\right)H_c^{\cos}T_c^{\sin} - x_c y_c H_c^{\cos}T_c^{\cos}\right]V_\Delta + y_c T_\Delta,\\
y_\Delta \approx{}& -\left[y_c H_c^{\sin} - T_c^{\sin}H_c^{\cos}\right]\frac{I}{S}
+ \left[x_c y_c T_c^{\cos} - \left(y_c^2+1\right)T_c^{\sin}\right]H_\Delta\\
&- \left[x_c H_c^{\sin} + \left(y_c^2+1\right)H_c^{\cos}T_c^{\cos} + x_c y_c H_c^{\cos}T_c^{\sin}\right]V_\Delta - x_c T_\Delta.
\end{aligned}\tag{B20}
\]
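Equation B20 can be checked against an exact two-eye projection. The sketch below is ours: the rotation order and sign conventions (elevation, then azimuth, then torsion; H_Δ = H_R − H_L; left eye at +i on the X-axis) are assumptions chosen to reproduce Equations B5 and B8. Since the formula is first order, halving all the small quantities should roughly quarter the discrepancy with the exact disparity.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def eye_matrix(H, V, T):
    # Assumed convention; top row matches Equation B5.
    return rot_x(V) @ rot_y(H) @ rot_z(T)

def exact_y_disparity(U, W, S, Hc, Vc, Tc, Hd, Vd, Td, I):
    """Exact y_R - y_L for the point at gaze-centered (U, W, S)."""
    Mc = eye_matrix(Hc, Vc, Tc)
    P = Mc @ np.array([U, W, S])              # back to head-centered coords
    i = I / 2.0
    NL, NR = np.array([i, 0, 0]), np.array([-i, 0, 0])
    ML = eye_matrix(Hc - Hd/2, Vc - Vd/2, Tc - Td/2)
    MR = eye_matrix(Hc + Hd/2, Vc + Vd/2, Tc + Td/2)
    def y(P, N, M):
        d = P - N
        return -np.dot(M[:, 1], d) / np.dot(M[:, 2], d)
    return y(P, NR, MR) - y(P, NL, ML)

def first_order_y_disparity(U, W, S, Hc, Tc, Hd, Vd, Td, I):
    """Equation B20 (y component); note it is independent of elevation Vc."""
    xc, yc = -U / S, -W / S
    sH, cH, sT, cT = np.sin(Hc), np.cos(Hc), np.sin(Tc), np.cos(Tc)
    return (-(yc*sH - sT*cH) * I/S
            + (xc*yc*cT - (yc**2 + 1)*sT) * Hd
            - (xc*sH + (yc**2 + 1)*cH*cT + xc*yc*cH*sT) * Vd
            - xc * Td)
```

The quadratic shrinkage of the residual as the small quantities scale down is the signature that the expression is correct to first order.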

Disparity in retinal longitude

Azimuth longitude and elevation longitude on the retina, α and η, are simply related to the planar coordinates x and y:

\[ \alpha = \arctan(x), \qquad \eta = \arctan(y). \tag{B21} \]
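Since α = arctan(x) (Equation B21), a small left/right difference in x maps onto azimuth longitude through the derivative dα/dx = 1/(1 + x²) = cos²α. A two-line numerical illustration (the values are arbitrary):

```python
import numpy as np

# Small left/right difference in planar azimuth maps onto azimuth longitude
# through d(alpha)/dx = 1/(1 + x^2) = cos^2(alpha) at the cyclopean position.
xL, xR = 0.30, 0.31                          # left and right planar azimuths
xc = 0.5 * (xL + xR)                         # cyclopean position
alpha_exact = np.arctan(xR) - np.arctan(xL)
alpha_first = (xR - xL) * np.cos(np.arctan(xc)) ** 2
```

The first-order estimate agrees with the exact difference to well under a microradian here; the same cos² factor is what converts Equation B18 into Equation B27 below.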

From the approximation to x_L given in Equation B12, we have

\[
\alpha_L \approx -\arctan\left[\frac{\mathbf{m}_{c1}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}}
\left(1 + \frac{i\,\mathbf{m}_{c3}\cdot\hat{\mathbf{X}} + \mathbf{m}_{\Delta 3}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}} - \frac{i\,\mathbf{m}_{c1}\cdot\hat{\mathbf{X}} + \mathbf{m}_{\Delta 1}\cdot\mathbf{P}}{\mathbf{m}_{c1}\cdot\mathbf{P}}\right)\right].\tag{B22}
\]

With the Taylor expansion for arctan, this becomes

\[
\alpha_L \approx -\arctan\left(\frac{\mathbf{m}_{c1}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}}\right)
- \left[\frac{i\,\mathbf{m}_{c3}\cdot\hat{\mathbf{X}} + \mathbf{m}_{\Delta 3}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}} - \frac{i\,\mathbf{m}_{c1}\cdot\hat{\mathbf{X}} + \mathbf{m}_{\Delta 1}\cdot\mathbf{P}}{\mathbf{m}_{c1}\cdot\mathbf{P}}\right]
\frac{(\mathbf{m}_{c1}\cdot\mathbf{P})(\mathbf{m}_{c3}\cdot\mathbf{P})}{(\mathbf{m}_{c3}\cdot\mathbf{P})^2 + (\mathbf{m}_{c1}\cdot\mathbf{P})^2}.\tag{B23}
\]

As before, the analogous expression for α_R is the same but with the signs of i and m_Δ swapped. Thus, we obtain

\[ \alpha_c \approx -\arctan\left(\frac{\mathbf{m}_{c1}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}}\right) \approx -\arctan\frac{U}{S} \tag{B24} \]

and

\[
\alpha_\Delta \approx \left[\frac{i\,\mathbf{m}_{c3}\cdot\hat{\mathbf{X}} + \mathbf{m}_{\Delta 3}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}} - \frac{i\,\mathbf{m}_{c1}\cdot\hat{\mathbf{X}} + \mathbf{m}_{\Delta 1}\cdot\mathbf{P}}{\mathbf{m}_{c1}\cdot\mathbf{P}}\right]
\frac{(\mathbf{m}_{c1}\cdot\mathbf{P})(\mathbf{m}_{c3}\cdot\mathbf{P})}{(\mathbf{m}_{c3}\cdot\mathbf{P})^2 + (\mathbf{m}_{c1}\cdot\mathbf{P})^2}.\tag{B25}
\]

We similarly obtain the following equations for the elevation-longitude cyclopean position and disparity:

\[
\begin{aligned}
\eta_c &\approx -\arctan\left(\frac{\mathbf{m}_{c2}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}}\right) \approx -\arctan\frac{W}{S},\\
\eta_\Delta &\approx \left[\frac{i\,\mathbf{m}_{c3}\cdot\hat{\mathbf{X}} + \mathbf{m}_{\Delta 3}\cdot\mathbf{P}}{\mathbf{m}_{c3}\cdot\mathbf{P}} - \frac{i\,\mathbf{m}_{c2}\cdot\hat{\mathbf{X}} + \mathbf{m}_{\Delta 2}\cdot\mathbf{P}}{\mathbf{m}_{c2}\cdot\mathbf{P}}\right]
\frac{(\mathbf{m}_{c2}\cdot\mathbf{P})(\mathbf{m}_{c3}\cdot\mathbf{P})}{(\mathbf{m}_{c3}\cdot\mathbf{P})^2 + (\mathbf{m}_{c2}\cdot\mathbf{P})^2}.
\end{aligned}\tag{B26}
\]

Again substituting for m, we obtain, in terms of an object's spatial location in gaze-centered coordinates:

\[
\begin{aligned}
\alpha_\Delta \approx{}& \frac{1}{S^2+U^2}\Big\{\left[U H_c^{\sin} - S H_c^{\cos}T_c^{\cos}\right]I
+ \left[\left(S^2+U^2\right)T_c^{\cos} - UW\,T_c^{\sin}\right]H_\Delta\\
&- \left[\left(S^2+U^2\right)H_c^{\cos}T_c^{\sin} + UW\,H_c^{\cos}T_c^{\cos} + WS\,H_c^{\sin}\right]V_\Delta - WS\,T_\Delta\Big\},\\
\eta_\Delta \approx{}& \frac{1}{W^2+S^2}\Big\{\left[W H_c^{\sin} + S\,T_c^{\sin}H_c^{\cos}\right]I
+ \left[-\left(W^2+S^2\right)T_c^{\sin} + UW\,T_c^{\cos}\right]H_\Delta\\
&- \left[\left(W^2+S^2\right)H_c^{\cos}T_c^{\cos} + UW\,H_c^{\cos}T_c^{\sin} - US\,H_c^{\sin}\right]V_\Delta + US\,T_\Delta\Big\}.
\end{aligned}\tag{B27}
\]

We now re-express these disparities in terms of the object's retinal location in azimuth-longitude/elevation-longitude coordinates. From U ≈ -S tan α_c, we have

\[ \frac{S^2}{S^2+U^2} \approx \frac{S^2}{S^2 + S^2\tan^2\alpha_c} = \cos^2\alpha_c. \tag{B28} \]
Similarly, S²/(S² + W²) ≈ cos²η_c.

We therefore arrive at the following expressions for longitude disparity expressed as a function of position on the cyclopean retina in azimuth-longitude/elevation-longitude coordinates:

\[
\begin{aligned}
\alpha_\Delta \approx{}& -\frac{I}{S}\cos\alpha_c\left[H_c^{\cos}T_c^{\cos}\cos\alpha_c + H_c^{\sin}\sin\alpha_c\right]
+ \left[T_c^{\cos} - \sin\alpha_c\cos\alpha_c\tan\eta_c\,T_c^{\sin}\right]H_\Delta\\
&- \left[H_c^{\cos}T_c^{\sin} + \sin\alpha_c\cos\alpha_c\tan\eta_c\,H_c^{\cos}T_c^{\cos} - \cos^2\alpha_c\tan\eta_c\,H_c^{\sin}\right]V_\Delta + \cos^2\alpha_c\tan\eta_c\,T_\Delta,\\
\eta_\Delta \approx{}& \cos^2\eta_c\left[T_c^{\sin}H_c^{\cos} - H_c^{\sin}\tan\eta_c\right]\frac{I}{S}
+ \left[\tan\alpha_c\sin\eta_c\cos\eta_c\,T_c^{\cos} - T_c^{\sin}\right]H_\Delta\\
&- \left[H_c^{\cos}T_c^{\cos} + \tan\alpha_c\sin\eta_c\cos\eta_c\,H_c^{\cos}T_c^{\sin} + \tan\alpha_c\cos^2\eta_c\,H_c^{\sin}\right]V_\Delta - \tan\alpha_c\cos^2\eta_c\,T_\Delta.
\end{aligned}\tag{B29}
\]


Alternatively, we may wish to express azimuth-longitude disparity as a function of retinal location in an azimuth-longitude/elevation-latitude coordinate system. Elevation latitude κ is related to azimuth longitude α and elevation longitude η as
\[ \tan\kappa = \tan\eta\cos\alpha. \tag{B30} \]

Thus, it is easy to replace η_c in Equation B29 with κ_c:

\[
\begin{aligned}
\alpha_\Delta \approx{}& -\frac{I}{S}\cos\alpha_c\left[H_c^{\cos}T_c^{\cos}\cos\alpha_c + H_c^{\sin}\sin\alpha_c\right]
+ \left[T_c^{\cos} - \sin\alpha_c\tan\kappa_c\,T_c^{\sin}\right]H_\Delta\\
&- \left[H_c^{\cos}T_c^{\sin} + \sin\alpha_c\tan\kappa_c\,H_c^{\cos}T_c^{\cos} - \cos\alpha_c\tan\kappa_c\,H_c^{\sin}\right]V_\Delta
+ \left[\cos\alpha_c\tan\kappa_c\right]T_\Delta.
\end{aligned}\tag{B31}
\]

Similarly, we can express elevation-longitude disparity η_Δ as a function of retinal location in an azimuth-latitude/elevation-longitude coordinate system (β, η). Using tan β = tan α cos η, Equation B29 becomes

\[
\begin{aligned}
\eta_\Delta \approx{}& \cos^2\eta_c\left[T_c^{\sin}H_c^{\cos} - H_c^{\sin}\tan\eta_c\right]\frac{I}{S}
+ \left[\tan\beta_c\sin\eta_c\,T_c^{\cos} - T_c^{\sin}\right]H_\Delta\\
&- \left[H_c^{\cos}T_c^{\cos} + \tan\beta_c\sin\eta_c\,H_c^{\cos}T_c^{\sin} + \tan\beta_c\cos\eta_c\,H_c^{\sin}\right]V_\Delta - \tan\beta_c\cos\eta_c\,T_\Delta.
\end{aligned}\tag{B32}
\]

Disparity in retinal latitude

Azimuth latitude and elevation latitude on the retina, β and κ, are related to the planar coordinates x and y as

" ¼ arctanðx=¾ðy2 þ 1ÞÞ; . ¼ arctanðy=¾ðx2 þ 1ÞÞ:ðB33Þ

19971998

From the approximations to x_L and y_L given in Equation B12, we have

"L , jarctanmL1: PjNLð Þffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi

mL2: PjNLð Þ½ �2 þ mL3: PjNLð Þ½ �2q264

375:

ðB34Þ20012002

Again doing the Taylor expansion, we obtain the following expressions for horizontal and vertical retinal latitude disparities in terms of an object's spatial location in gaze-centered coordinates:

"$ ¼ 2Sm%3:mc1 þWm%2:mc1ð Þffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi

W2 þ S2p

þ IM12

c UW þM13c USjM11

c W2 þ S2ð Þ� �U2 þW2 þ S2ð Þ ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi

W2 þ S2p

.$ ¼ 2Sm%3:mc2 þ Um%1:mc2ð Þffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi

U2 þ S2p

þ IM11

c UW þM13c WS j M12

c U2 þ S2ð Þ� �U2 þW2 þ S2ð Þ ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi

U2 þ S2p :

ðB35Þ

20072008

Substituting for the various entries in the rotation matrix M^c, m_Δ, and m_c, we obtain

\[
\begin{aligned}
2\mathbf{m}_{\Delta 1}\cdot\mathbf{m}_{c2} &\approx T_\Delta + H_c^{\sin}V_\Delta,\\
2\mathbf{m}_{\Delta 2}\cdot\mathbf{m}_{c3} &\approx T_c^{\sin}H_\Delta + H_c^{\cos}T_c^{\cos}V_\Delta,\\
2\mathbf{m}_{\Delta 1}\cdot\mathbf{m}_{c3} &\approx -T_c^{\cos}H_\Delta + H_c^{\cos}T_c^{\sin}V_\Delta,\\
\beta_\Delta &= -\frac{I\left[T_c^{\sin}H_c^{\cos}UW - H_c^{\sin}US + T_c^{\cos}H_c^{\cos}\left(W^2+S^2\right)\right]}{\left(U^2+W^2+S^2\right)\sqrt{W^2+S^2}}
- \frac{-S\,T_c^{\cos}H_\Delta + S\,H_c^{\cos}T_c^{\sin}V_\Delta + W\,H_c^{\sin}V_\Delta + W\,T_\Delta}{\sqrt{W^2+S^2}},\\
\kappa_\Delta &= \frac{I\left[H_c^{\cos}T_c^{\cos}UW + H_c^{\sin}WS + T_c^{\sin}H_c^{\cos}\left(U^2+S^2\right)\right]}{\left(U^2+W^2+S^2\right)\sqrt{U^2+S^2}}
- \frac{S\,T_c^{\sin}H_\Delta + S\,H_c^{\cos}T_c^{\cos}V_\Delta - U\,H_c^{\sin}V_\Delta - U\,T_\Delta}{\sqrt{U^2+S^2}}.
\end{aligned}\tag{B36}
\]

To express azimuth-latitude disparity as a function of position on the cyclopean retina in azimuth-latitude/elevation-longitude coordinates (β_c, η_c), we use the following relationships:
\[ W = -S\tan\eta_c, \qquad U = -S\tan\beta_c\sec\eta_c. \tag{B37} \]

This yields

"$ ¼ jI

Scos"ccos)cðTsin

c Hcosc sin"csin)c þ Hsin

c sin"ccos)c

þ Tcosc Hcos

c cos"cÞ þ ðTcosc cos)cÞH$

þ cos)cðtan)cHsinc jHcos

c Tsinc ÞV$ þ ðsin)cÞT$: ðB38Þ

20202021

Similarly, if we express elevation-latitude disparity as a function of position on the cyclopean retina in


Horizontal disparity: most general expressions.

In planar Cartesian retinal coordinates, as a function of spatial position in gaze-centered coordinates: x_Δ of Equation B19.

In planar Cartesian retinal coordinates, as a function of retinal location in planar Cartesian coordinates: x_Δ of Equation B20.

In azimuth longitude, as a function of spatial location in gaze-centered coordinates: α_Δ of Equation B27.

In azimuth longitude, as a function of retinal location in azimuth-longitude/elevation-longitude coordinates: α_Δ of Equation B29.

In azimuth longitude, as a function of retinal location in azimuth-longitude/elevation-latitude coordinates: α_Δ of Equation B31, or equivalently
\[
\alpha_\Delta \approx -\frac{I}{R}\sec\kappa_c\left[H_c^{\cos}T_c^{\cos}\cos\alpha_c + H_c^{\sin}\sin\alpha_c\right] + \left[T_c^{\cos} - \sin\alpha_c\tan\kappa_c\,T_c^{\sin}\right]H_\Delta - \left[H_c^{\cos}T_c^{\sin} + \sin\alpha_c\tan\kappa_c\,H_c^{\cos}T_c^{\cos} - \cos\alpha_c\tan\kappa_c\,H_c^{\sin}\right]V_\Delta + \left(\cos\alpha_c\tan\kappa_c\right)T_\Delta.
\]

In azimuth latitude, as a function of spatial location in gaze-centered coordinates: β_Δ of Equation B36.

In azimuth latitude, as a function of retinal location in azimuth-latitude/elevation-longitude coordinates: β_Δ of Equation B38, or equivalently
\[
\beta_\Delta = -\frac{I}{R}\left[T_c^{\sin}H_c^{\cos}\sin\beta_c\sin\eta_c + H_c^{\sin}\sin\beta_c\cos\eta_c + T_c^{\cos}H_c^{\cos}\cos\beta_c\right] + \left(T_c^{\cos}\cos\eta_c\right)H_\Delta + \cos\eta_c\left(\tan\eta_c\,H_c^{\sin} - H_c^{\cos}T_c^{\sin}\right)V_\Delta + \left(\sin\eta_c\right)T_\Delta.
\]

Table C1. Expressions for horizontal disparity in different coordinate systems. These are correct to first order in interocular distance I/S (I/R) and in the vergence angles H_Δ, V_Δ, and T_Δ. They hold all over the retina and for any cyclopean gaze H_c, elevation V_c, or overall cycloversion T_c.


Vertical disparity: most general expressions.

In planar Cartesian retinal coordinates, as a function of spatial position in gaze-centered coordinates: y_Δ of Equation B19.

In planar Cartesian retinal coordinates, as a function of retinal location in planar Cartesian coordinates: y_Δ of Equation B20.

In elevation longitude, as a function of spatial location in gaze-centered coordinates: η_Δ of Equation B27.

In elevation longitude, as a function of retinal location in azimuth-longitude/elevation-longitude coordinates: η_Δ of Equation B29.

In elevation longitude, as a function of retinal location in azimuth-latitude/elevation-longitude coordinates: η_Δ of Equation B32, or equivalently
\[
\eta_\Delta \approx \frac{I}{R}\frac{\cos\eta_c}{\cos\beta_c}\left[T_c^{\sin}H_c^{\cos} - H_c^{\sin}\tan\eta_c\right] + \left[\tan\beta_c\sin\eta_c\,T_c^{\cos} - T_c^{\sin}\right]H_\Delta - \left[H_c^{\cos}T_c^{\cos} + \tan\beta_c\sin\eta_c\,H_c^{\cos}T_c^{\sin} + \tan\beta_c\cos\eta_c\,H_c^{\sin}\right]V_\Delta - \tan\beta_c\cos\eta_c\,T_\Delta.
\]

In elevation latitude, as a function of spatial location in gaze-centered coordinates: κ_Δ of Equation B36.

In elevation latitude, as a function of retinal location in azimuth-longitude/elevation-latitude coordinates: κ_Δ of Equations B39 and B41.

Table C2. Expressions for vertical disparity in different coordinate systems. These are correct to first order in interocular distance I/S (I/R) and in the vergence angles H_Δ, V_Δ, and T_Δ. They hold all over the retina and for any cyclopean gaze H_c, elevation V_c, or overall cycloversion T_c.


Horizontal disparity with zero overall cycloversion, T_c = 0.

In planar Cartesian retinal coordinates, as a function of spatial position in gaze-centered coordinates:
\[ x_\Delta \approx \frac{I}{S}\left[\frac{U}{S}H_c^{\sin} - H_c^{\cos}\right] + \left(\frac{U^2}{S^2}+1\right)H_\Delta - \left[\frac{UW}{S^2}H_c^{\cos} + \frac{W}{S}H_c^{\sin}\right]V_\Delta - \frac{W}{S}T_\Delta \]

In planar Cartesian retinal coordinates, as a function of retinal location in planar Cartesian coordinates:
\[ x_\Delta \approx -\left[x_c H_c^{\sin} + H_c^{\cos}\right]\frac{I}{S} + \left(x_c^2+1\right)H_\Delta + \left[y_c H_c^{\sin} - x_c y_c H_c^{\cos}\right]V_\Delta + y_c T_\Delta \]

In azimuth longitude, as a function of spatial location in gaze-centered coordinates:
\[ \alpha_\Delta \approx \frac{1}{S^2+U^2}\left\{\left[U H_c^{\sin} - S H_c^{\cos}\right]I + \left(S^2+U^2\right)H_\Delta - \left[UW\,H_c^{\cos} + WS\,H_c^{\sin}\right]V_\Delta - WS\,T_\Delta\right\} \]

In azimuth longitude, as a function of retinal location in azimuth-longitude/elevation-longitude coordinates:
\[ \alpha_\Delta \approx -\frac{I}{S}\cos\alpha_c\cos(H_c-\alpha_c) + H_\Delta + V_\Delta\cos\alpha_c\tan\eta_c\sin(H_c-\alpha_c) + T_\Delta\cos^2\alpha_c\tan\eta_c \]

In azimuth longitude, as a function of retinal location in azimuth-longitude/elevation-latitude coordinates:
\[ \alpha_\Delta \approx -\frac{I}{S}\cos\alpha_c\cos(H_c-\alpha_c) + H_\Delta + V_\Delta\tan\kappa_c\sin(H_c-\alpha_c) + T_\Delta\cos\alpha_c\tan\kappa_c = -\frac{I}{R}\sec\kappa_c\cos(H_c-\alpha_c) + H_\Delta + V_\Delta\tan\kappa_c\sin(H_c-\alpha_c) + \left(\cos\alpha_c\tan\kappa_c\right)T_\Delta \]

In azimuth latitude, as a function of spatial location in gaze-centered coordinates:
\[ \beta_\Delta = -\frac{I\left[-H_c^{\sin}US + H_c^{\cos}\left(W^2+S^2\right)\right]}{\left(U^2+W^2+S^2\right)\sqrt{W^2+S^2}} - \frac{-S H_\Delta + W H_c^{\sin}V_\Delta + W T_\Delta}{\sqrt{W^2+S^2}} \]

In azimuth latitude, as a function of retinal location in azimuth-latitude/elevation-longitude coordinates:
\[ \beta_\Delta = -\frac{I}{R}\left[H_c^{\sin}\sin\beta_c\cos\eta_c + H_c^{\cos}\cos\beta_c\right] + \cos\eta_c\,H_\Delta + \sin\eta_c\,H_c^{\sin}V_\Delta + \sin\eta_c\,T_\Delta \]

Table C3. Expressions for horizontal disparity in different coordinate systems. These are correct to first order in interocular distance I/S (I/R) and in the vergence angles H_Δ, V_Δ, and T_Δ. They hold all over the retina and for any cyclopean gaze H_c or elevation V_c, provided there is no overall cycloversion, T_c = 0.

Vertical disparity with zero overall cycloversion, T_c = 0.

In planar Cartesian retinal coordinates, as a function of spatial position in gaze-centered coordinates:
\[ y_\Delta \approx I\frac{W}{S^2}H_c^{\sin} + H_\Delta\frac{UW}{S^2} + \left[\frac{U}{S}H_c^{\sin} - \left(\frac{W^2}{S^2}+1\right)H_c^{\cos}\right]V_\Delta + \frac{U}{S}T_\Delta \]

In planar Cartesian retinal coordinates, as a function of retinal location in planar Cartesian coordinates:
\[ y_\Delta \approx -\frac{I}{S}y_c H_c^{\sin} + H_\Delta x_c y_c - \left[x_c H_c^{\sin} + \left(y_c^2+1\right)H_c^{\cos}\right]V_\Delta - x_c T_\Delta \]

In elevation longitude, as a function of spatial location in gaze-centered coordinates:
\[ \eta_\Delta \approx \frac{1}{W^2+S^2}\left\{W H_c^{\sin} I + UW\,H_\Delta - \left[\left(W^2+S^2\right)H_c^{\cos} - US\,H_c^{\sin}\right]V_\Delta + US\,T_\Delta\right\} \]

In elevation longitude, as a function of retinal location in azimuth-longitude/elevation-longitude coordinates:
\[ \eta_\Delta \approx -\frac{I}{S}H_c^{\sin}\sin\eta_c\cos\eta_c + H_\Delta\tan\alpha_c\sin\eta_c\cos\eta_c - V_\Delta\left[H_c^{\cos} + \tan\alpha_c\cos^2\eta_c\,H_c^{\sin}\right] - \tan\alpha_c\cos^2\eta_c\,T_\Delta \]

In elevation longitude, as a function of retinal location in azimuth-latitude/elevation-longitude coordinates:
\[ \eta_\Delta \approx -\frac{I}{S}H_c^{\sin}\sin\eta_c\cos\eta_c + H_\Delta\tan\beta_c\sin\eta_c - V_\Delta\left[H_c^{\cos} + \tan\beta_c\cos\eta_c\,H_c^{\sin}\right] - \tan\beta_c\cos\eta_c\,T_\Delta = -\frac{I}{R}H_c^{\sin}\frac{\sin\eta_c}{\cos\beta_c} + H_\Delta\tan\beta_c\sin\eta_c - V_\Delta\left[H_c^{\cos} + \tan\beta_c\cos\eta_c\,H_c^{\sin}\right] - \tan\beta_c\cos\eta_c\,T_\Delta \]

In elevation latitude, as a function of spatial location in gaze-centered coordinates:
\[ \kappa_\Delta = \frac{I\left[H_c^{\cos}UW + H_c^{\sin}WS\right]}{\left(U^2+W^2+S^2\right)\sqrt{U^2+S^2}} - \frac{S H_c^{\cos}V_\Delta - U H_c^{\sin}V_\Delta - U T_\Delta}{\sqrt{U^2+S^2}} \]

In elevation latitude, as a function of retinal location in azimuth-longitude/elevation-latitude coordinates:
\[ \kappa_\Delta = -\frac{I}{R}\sin\kappa_c\sin(H_c-\alpha_c) - V_\Delta\cos(H_c-\alpha_c) - \sin\alpha_c\,T_\Delta \]

Table C4. Expressions for vertical disparity in different coordinate systems. These are correct to first order in interocular distance I and in the vergence angles H_Δ, V_Δ, and T_Δ. They hold all over the retina and for any cyclopean gaze H_c or elevation V_c, provided there is no overall cycloversion, T_c = 0.


Horizontal disparity for zero torsion and vertical vergence error.

In planar Cartesian retinal coordinates, as a function of retinal location in planar Cartesian coordinates:
\[ x_\Delta \approx -\left[x_c H_c^{\sin} + H_c^{\cos}\right]\frac{I}{S} + \left(x_c^2+1\right)H_\Delta \]

In planar Cartesian retinal coordinates, as a function of spatial position in gaze-centered coordinates:
\[ x_\Delta \approx \frac{I}{S^2}\left[U H_c^{\sin} - S H_c^{\cos}\right] + \frac{U^2+S^2}{S^2}H_\Delta \]

In azimuth longitude, as a function of spatial location in gaze-centered coordinates:
\[ \alpha_\Delta \approx \frac{I}{S^2+U^2}\left[U H_c^{\sin} - S H_c^{\cos}\right] + H_\Delta \]

In azimuth longitude, as a function of retinal location in azimuth-longitude/elevation-longitude coordinates:
\[ \alpha_\Delta \approx -\frac{I}{S}\cos\alpha_c\cos(\alpha_c - H_c) + H_\Delta \]

In azimuth longitude, as a function of retinal location in azimuth-longitude/elevation-latitude coordinates: same as above, since α_Δ is then independent of retinal elevation.

In azimuth latitude, as a function of spatial location in gaze-centered coordinates:
\[ \beta_\Delta = \frac{I\left[H_c^{\sin}US - H_c^{\cos}\left(W^2+S^2\right)\right]}{\left(U^2+W^2+S^2\right)\sqrt{W^2+S^2}} + \frac{S H_\Delta}{\sqrt{W^2+S^2}} \]

In azimuth latitude, as a function of retinal location in azimuth-latitude/elevation-longitude coordinates:
\[ \beta_\Delta = -\frac{I}{S}\cos\beta_c\cos\eta_c\left[H_c^{\sin}\sin\beta_c\cos\eta_c + H_c^{\cos}\cos\beta_c\right] + \cos\eta_c\,H_\Delta \]

Table C5. Expressions for horizontal disparity in different coordinate systems. These are correct to first order in interocular distance I/S (I/R) and in the convergence angle H_Δ. They assume cycloversion, cyclovergence, and vertical vergence are all zero: T_c = T_Δ = V_Δ = 0. They hold all over the retina and for any cyclopean gaze H_c or elevation V_c.

Vertical disparity for zero torsion and vertical vergence error.

In planar Cartesian retinal coordinates, as a function of retinal location in planar Cartesian coordinates:
\[ y_\Delta \approx y_c\left[-H_c^{\sin}\frac{I}{S} + x_c H_\Delta\right] \]

In planar Cartesian retinal coordinates, as a function of spatial position in gaze-centered coordinates:
\[ y_\Delta \approx \frac{W}{S^2}\left[I H_c^{\sin} + U H_\Delta\right] \]

In elevation longitude, as a function of spatial location in gaze-centered coordinates:
\[ \eta_\Delta \approx \frac{W}{W^2+S^2}\left(I\sin H_c + U H_\Delta\right) \]

In elevation longitude, as a function of retinal location in azimuth-longitude/elevation-longitude coordinates:
\[ \eta_\Delta \approx \sin\eta_c\cos\eta_c\left[-\frac{I}{S}\sin H_c + H_\Delta\tan\alpha_c\right] \]

In elevation longitude, as a function of retinal location in azimuth-latitude/elevation-longitude coordinates:
\[ \eta_\Delta \approx \sin\eta_c\left[-\frac{I}{S}\sin H_c\cos\eta_c + H_\Delta\tan\beta_c\right] \]

In elevation latitude, as a function of spatial location in gaze-centered coordinates:
\[ \kappa_\Delta = \frac{I W\left(U\cos H_c + S\sin H_c\right)}{\left(U^2+W^2+S^2\right)\sqrt{U^2+S^2}} \]

In elevation latitude, as a function of retinal location in azimuth-longitude/elevation-latitude coordinates:
\[ \kappa_\Delta = \frac{I}{S}\sin\kappa_c\cos\kappa_c\cos\alpha_c\sin(\alpha_c-H_c) = \frac{I}{R}\sin\kappa_c\sin(\alpha_c-H_c) \]

Table C6. Expressions for vertical disparity in different coordinate systems. These are correct to first order in interocular distance I/S (I/R) and in the convergence angle H_Δ. They assume cycloversion, cyclovergence, and vertical vergence are all zero: T_c = T_Δ = V_Δ = 0. They hold all over the retina and for any cyclopean gaze H_c or elevation V_c.


azimuth-longitude/elevation-latitude coordinates (α_c, κ_c), we obtain

\[
\begin{aligned}
\kappa_\Delta ={}& \frac{I}{S}\cos\alpha_c\cos\kappa_c\left(T_c^{\cos}H_c^{\cos}\sin\alpha_c\sin\kappa_c - H_c^{\sin}\cos\alpha_c\sin\kappa_c + T_c^{\sin}H_c^{\cos}\cos\kappa_c\right)
- T_c^{\sin}H_\Delta\cos\alpha_c\\
&- \left[H_c^{\cos}T_c^{\cos}\cos\alpha_c + \sin\alpha_c\,H_c^{\sin}\right]V_\Delta - \sin\alpha_c\,T_\Delta.
\end{aligned}\tag{B39}
\]

This expression simplifies slightly if we replace S, the distance component along the cyclopean line of sight, with R, the shortest distance from the origin to the viewed point. R² = U² + W² + S², and hence

\[ S = R\cos\alpha_c\cos\kappa_c = R\cos\beta_c\cos\eta_c. \tag{B40} \]
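A quick numerical check of Equation B40, using the cyclopean projection x_c = -U/S, y_c = -W/S and the angle definitions of Equations B21 and B33:

```python
import numpy as np

# With R^2 = U^2 + W^2 + S^2 and the cyclopean projection xc = -U/S,
# yc = -W/S, both angle pairs recover S = R cos(alpha_c) cos(kappa_c)
#                                       = R cos(beta_c) cos(eta_c).
U, W, S = 0.3, -0.2, 0.9
R = np.sqrt(U**2 + W**2 + S**2)
xc, yc = -U / S, -W / S
alpha, eta = np.arctan(xc), np.arctan(yc)        # longitudes (Equation B21)
beta = np.arctan(xc / np.hypot(yc, 1.0))         # azimuth latitude (B33)
kappa = np.arctan(yc / np.hypot(xc, 1.0))        # elevation latitude (B33)
fick = R * np.cos(alpha) * np.cos(kappa)
helmholtz = R * np.cos(beta) * np.cos(eta)
```

Both products equal S to machine precision, since cos α cos κ = cos β cos η = 1/√(1 + x_c² + y_c²) = S/R.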

Then we have

\[
\begin{aligned}
\kappa_\Delta ={}& \frac{I}{R}\left(T_c^{\cos}H_c^{\cos}\sin\alpha_c\sin\kappa_c - H_c^{\sin}\cos\alpha_c\sin\kappa_c + T_c^{\sin}H_c^{\cos}\cos\kappa_c\right)
- T_c^{\sin}H_\Delta\cos\alpha_c\\
&- \left[H_c^{\cos}T_c^{\cos}\cos\alpha_c + \sin\alpha_c\,H_c^{\sin}\right]V_\Delta - \sin\alpha_c\,T_\Delta.
\end{aligned}\tag{B41}
\]

Appendix C: Tables of expressions for horizontal and vertical disparities in different coordinate systems

Most general

The expressions in Tables C1 and C2 assume that the cyclovergence between the eyes, T_Δ, is small. They do not assume anything about the overall cycloversion, T_c. Cycloversion rotates the eyes in the head, mixing up vertical and horizontal disparities. This can occur when the head tilts over, so that the interocular axis is no longer horizontal with respect to gravity. In the extreme case of T_c = 90°, the vertical and horizontal directions have actually swapped over (y → x and x → -y). One can verify from the above results that the expressions for vertical and horizontal disparities also swap over (i.e., x_Δ with T_c = 90° is the same as y_Δ with T_c = 0, after replacing y with x and x with -y), a quick "sanity check" on the results.
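This sanity check can be run mechanically on Equation B20. The sketch below codes both components of B20 and confirms that horizontal disparity at T_c = 90° equals vertical disparity at T_c = 0 after the stated swap (the parameter values are arbitrary):

```python
import numpy as np

def x_disparity(xc, yc, Hc, Tc, IoverS, Hd, Vd, Td):
    """Equation B20, horizontal component."""
    sH, cH, sT, cT = np.sin(Hc), np.cos(Hc), np.sin(Tc), np.cos(Tc)
    return (-(xc*sH + cH*cT) * IoverS
            + ((xc**2 + 1)*cT - xc*yc*sT) * Hd
            + (yc*sH - (xc**2 + 1)*cH*sT - xc*yc*cH*cT) * Vd
            + yc * Td)

def y_disparity(xc, yc, Hc, Tc, IoverS, Hd, Vd, Td):
    """Equation B20, vertical component."""
    sH, cH, sT, cT = np.sin(Hc), np.cos(Hc), np.sin(Tc), np.cos(Tc)
    return (-(yc*sH - sT*cH) * IoverS
            + (xc*yc*cT - (yc**2 + 1)*sT) * Hd
            - (xc*sH + (yc**2 + 1)*cH*cT + xc*yc*cH*sT) * Vd
            - xc * Td)

# x_disparity at (xc, yc) with Tc = 90 deg should equal y_disparity with
# Tc = 0 evaluated at the swapped point (y -> x, x -> -y).
args = dict(Hc=0.3, IoverS=0.05, Hd=0.02, Vd=0.01, Td=0.015)
xc, yc = 0.2, -0.3
lhs = x_disparity(xc, yc, Tc=np.pi/2, **args)
rhs = y_disparity(-yc, xc, Tc=0.0, **args)
```

The two values agree to machine precision, as the text's swap argument predicts.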

Zero overall cycloversion

Table C3 and Table C4.

Zero overall cycloversion, cyclovergence, and vertical vergence error

Table C5 and Table C6.

Acknowledgments

This research was supported by the Royal Society (University Research Fellowship UF041260 to JCAR), MRC (New Investigator Award 80154 to JCAR), the Wellcome Trust (Grant 086526/A/08/Z to AG), and the EPSRC (Neuroinformatics Doctoral Training Centre studentship to GPP).

Commercial relationships: none.
Corresponding author: Jenny C. A. Read.
Email: [email protected]
Address: Henry Wellcome Building, Framlington Place, Newcastle upon Tyne NE2 4HH, UK.

20762077References

2078Adams, W., Frisby, J. P., Buckley, D., Garding, J.,2079Hippisley-Cox, S. D., & Porrill, J. (1996). Pooling2080of vertical disparities by the human visual system.2081Perception, 25, 165–176. [PubMed]

2082Allison, R. S., Rogers, B. J., & Bradshaw, M. F. (2003).2083Geometric and induced effects in binocular stereopsis2084and motion parallax. Vision Research, 43, 1879–1893.2085[PubMed]

2086Backus, B. T., & Banks, M. S. (1999). Estimator2087reliability and distance scaling in stereoscopic slant2088perception. Perception, 28, 217–242. [PubMed]

2089Backus, B. T., Banks, M. S., van Ee, R., & Crowell, J. A.2090(1999). Horizontal and vertical disparity, eye posi-2091tion, and stereoscopic slant perception. Vision2092Research, 39, 1143–1170. [PubMed]

2093Banks, M. S., & Backus, B. T. (1998). Extra-retinal and2094perspective cues cause the small range of the induced2095effect. Vision Research, 38, 187–194. [PubMed]

2096Banks, M. S., Backus, B. T., & Banks, R. S. (2002). Is2097vertical disparity used to determine azimuth? Vision2098Research, 42, 801–807. [PubMed]

2099Banks, M. S., Hooge, I. T., & Backus, B. T. (2001).2100Perceiving slant about a horizontal axis from stereopsis.2101Journal of Vision, 1(2):1, 55–79, http://journalofvision.2102org/1/2/1/, doi:10.1167/1.2.1. [PubMed] [Article]

2103Barlow, H. B., Blakemore, C., & Pettigrew, J. D.2104(1967). The neural mechanisms of binocular depth2105discrimination. The Journal of Physiology, 193,2106327–342.

Journal of Vision (2009) 0(0):1, 1–37 Read, Phillipson, & Glennerster 35

Page 36: Latitude and longitude vertical disparitiessxs05ag/pub/rpg2009/rpg2009_preprint.pdf · 12 Latitude and longitude vertical disparities 3 45 Jenny C. A. Read Institute of Neuroscience,

Berends, E. M., & Erkelens, C. J. (2001). Strength of depth effects induced by three types of vertical disparity. Vision Research, 41, 37–45. [PubMed]

Berends, E. M., van Ee, R., & Erkelens, C. J. (2002). Vertical disparity can alter perceived direction. Perception, 31, 1323–1333. [PubMed]

Bishop, P. O. (1989). Vertical disparity, egocentric distance and stereoscopic depth constancy: A new interpretation. Proceedings of the Royal Society of London B: Biological Sciences, 237, 445–469. [PubMed]

Bishop, P. O., Kozak, W., & Vakkur, G. J. (1962). Some quantitative aspects of the cat’s eye: Axis and plane of reference, visual field co-ordinates and optics. The Journal of Physiology, 163, 466–502. [PubMed] [Article]

Brenner, E., Smeets, J. B., & Landy, M. S. (2001). How vertical disparities assist judgements of distance. Vision Research, 41, 3455–3465. [PubMed]

Carpenter, R. H. (1988). Movements of the eyes (2nd ed.). Pion Ltd.

Clement, R. A. (1992). Gaze angle explanations of the induced effect. Perception, 21, 355–357. [PubMed]

Cumming, B. G. (2002). An unexpected specialization for horizontal disparity in primate primary visual cortex. Nature, 418, 633–636. [PubMed]

Cumming, B. G., Johnston, E. B., & Parker, A. J. (1991). Vertical disparities and perception of three-dimensional shape. Nature, 349, 411–413. [PubMed]

Duke, P. A., & Howard, I. P. (2005). Vertical-disparity gradients are processed independently in different depth planes. Vision Research, 45, 2025–2035. [PubMed]

Durand, J. B., Celebrini, S., & Trotter, Y. (2007). Neural bases of stereopsis across visual field of the alert macaque monkey. Cerebral Cortex, 17, 1260–1273. [PubMed]

Durand, J. B., Zhu, S., Celebrini, S., & Trotter, Y. (2002). Neurons in parafoveal areas V1 and V2 encode vertical and horizontal disparities. Journal of Neurophysiology, 88, 2874–2879. [PubMed] [Article]

Erkelens, C. J., & van Ee, R. (1998). A computational model of depth perception based on head-centric disparity. Vision Research, 38, 2999–3018. [PubMed]

Friedman, R. B., Kaye, M. G., & Richards, W. (1978). Effect of vertical disparity upon stereoscopic depth. Vision Research, 18, 351–352. [PubMed]

Frisby, J. P., Buckley, D., Grant, H., Garding, J., Horsman, J. M., Hippisley-Cox, S. D., et al. (1999). An orientation anisotropy in the effects of scaling vertical disparities. Vision Research, 39, 481–492. [PubMed]

Garding, J., Porrill, J., Mayhew, J. E., & Frisby, J. P. (1995). Stereopsis, vertical disparity and relief transformations. Vision Research, 35, 703–722. [PubMed]

Gillam, B., Chambers, D., & Lawergren, B. (1988). The role of vertical disparity in the scaling of stereoscopic depth perception: An empirical and theoretical study. Perception & Psychophysics, 44, 473–483. [PubMed]

2170Gillam, B., & Lawergren, B. (1983). The induced effect,2171vertical disparity, and stereoscopic theory. Perception2172& Psychophysics, 340, 121–130. [PubMed]

Gonzalez, F., Justo, M. S., Bermudez, M. A., & Perez, R. (2003). Sensitivity to horizontal and vertical disparity and orientation preference in areas V1 and V2 of the monkey. Neuroreport, 14, 829–832. [PubMed]

Gur, M., & Snodderly, D. M. (1997). Visual receptive fields of neurons in primary visual cortex (V1) move in space with the eye movements of fixation. Vision Research, 37, 257–265. [PubMed]

Hartley, R., & Zisserman, A. (2000). Multiple view geometry in computer vision. Cambridge, UK: Cambridge University Press.

Helmholtz, H. v. (1925). Treatise on physiological optics. Rochester, NY: Optical Society of America.

Hibbard, P. B. (2007). A statistical model of binocular disparity. Visual Cognition, 15, 149–165.

Howard, I. P. (2002). Seeing in depth, volume 1: Basic mechanisms. Ontario, Canada: I. Porteous.

Howard, I. P., Allison, R. S., & Zacher, J. E. (1997). The dynamics of vertical vergence. Experimental Brain Research, 116, 153–159. [PubMed]

Howard, I. P., & Rogers, B. J. (2002). Seeing in depth, volume 2: Depth perception. Ontario, Canada: I. Porteous.

Kaneko, H., & Howard, I. P. (1996). Relative size disparities and the perception of surface slant. Vision Research, 36, 1919–1930. [PubMed]

Kaneko, H., & Howard, I. P. (1997a). Spatial limitation of vertical-size disparity processing. Vision Research, 37, 2871–2878. [PubMed]

Kaneko, H., & Howard, I. P. (1997b). Spatial properties of shear disparity processing. Vision Research, 37, 315–323. [PubMed]

Koenderink, J. J., & van Doorn, A. J. (1976). Geometry of binocular vision and a model for stereopsis. Biological Cybernetics, 21, 29–35. [PubMed]

Liu, L., Stevenson, S. B., & Schor, C. M. (1994). A polar coordinate system for describing binocular disparity. Vision Research, 34, 1205–1222. [PubMed]

Liu, Y., Bovik, A. C., & Cormack, L. K. (2008). Disparity statistics in natural scenes. Journal of Vision, 8(11):19, 1–14, http://journalofvision.org/8/11/19/, doi:10.1167/8.11.19. [PubMed] [Article]

Longuet-Higgins, H. C. (1981). A computer algorithm for reconstructing a scene from two projections. Nature, 293, 133–135.



Longuet-Higgins, H. C. (1982). The role of the vertical dimension in stereoscopic vision. Perception, 11, 377–386. [PubMed]

Matthews, N., Meng, X., Xu, P., & Qian, N. (2003). A physiological theory of depth perception from vertical disparity. Vision Research, 43, 85–99. [PubMed]

Mayhew, J. E. (1982). The interpretation of stereo-disparity information: The computation of surface orientation and depth. Perception, 11, 387–403. [PubMed]

Mayhew, J. E., & Longuet-Higgins, H. C. (1982). A computational model of binocular depth perception. Nature, 297, 376–378. [PubMed]

Minken, A. W., & Van Gisbergen, J. A. (1994). A three-dimensional analysis of vergence movements at various levels of elevation. Experimental Brain Research, 101, 331–345. [PubMed]

Mok, D., Ro, A., Cadera, W., Crawford, J. D., & Vilis, T. (1992). Rotation of Listing’s plane during vergence. Vision Research, 32, 2055–2064. [PubMed]

Ogle, K. N. (1952). Space perception and vertical disparity. Journal of the Optical Society of America, 42, 145–146. [PubMed]

Pierce, B. J., & Howard, I. P. (1997). Types of size disparity and the perception of surface slant. Perception, 26, 1503–1517.

Porrill, J., Mayhew, J. E. W., & Frisby, J. P. (1990). Cyclotorsion, conformal invariance and induced effects in stereoscopic vision. In S. Ullman & W. Richards (Eds.), Image understanding 1989 (pp. 185–196). Norwood, NJ: Ablex Publishing.

Read, J. C. A., & Cumming, B. G. (2003). Measuring V1 receptive fields despite eye movements in awake monkeys. Journal of Neurophysiology.

Read, J. C. A., & Cumming, B. G. (2004). Understanding the cortical specialization for horizontal disparity. Neural Computation, 16, 1983–2020. [PubMed] [Article]

Read, J. C. A., & Cumming, B. G. (2006). Does visual perception require vertical disparity detectors? Journal of Vision, 6(12):1, 1323–1355, http://journalofvision.org/6/12/1/, doi:10.1167/6.12.1. [PubMed] [Article]

Rogers, B. J., & Bradshaw, M. F. (1993). Vertical disparities, differential perspective and binocular stereopsis. Nature, 361, 253–255. [PubMed]

Rogers, B. J., & Bradshaw, M. F. (1995). Disparity scaling and the perception of frontoparallel surfaces. Perception, 24, 155–179. [PubMed]

Rogers, B. J., & Cagenello, R. (1989). Disparity curvature and the perception of three-dimensional surfaces. Nature, 339, 135–137. [PubMed]

Rogers, B. J., & Koenderink, J. (1986). Monocular aniseikonia: A motion parallax analogue of the disparity-induced effect. Nature, 322, 62–63. [PubMed]

Schreiber, K., Crawford, J. D., Fetter, M., & Tweed, D. (2001). The motor side of depth vision. Nature, 410, 819–822. [PubMed]

Serrano-Pedraza, I., Phillipson, G. P., & Read, J. C. A. (submitted for publication). A specialization for vertical disparity discontinuities. Journal of Vision.

Serrano-Pedraza, I., & Read, J. C. A. (2009). Stereo vision requires an explicit encoding of vertical disparity. Journal of Vision, 9(4):3, 1–13, http://journalofvision.org/9/4/3/, doi:10.1167/9.4.3. [PubMed] [Article]

Somani, R. A., DeSouza, J. F., Tweed, D., & Vilis, T. (1998). Visual test of Listing’s law during vergence. Vision Research, 38, 911–923. [PubMed]

Stenton, S. P., Frisby, J. P., & Mayhew, J. E. (1984). Vertical disparity pooling and the induced effect. Nature, 309, 622–623. [PubMed]

Stevenson, S. B., & Schor, C. M. (1997). Human stereo matching is not restricted to epipolar lines. Vision Research, 37, 2717–2723. [PubMed]

Tweed, D. (1997a). Kinematic principles of three-dimensional gaze control. In M. Fetter, T. P. Haslwanter, H. Misslisch, & D. Tweed (Eds.), Three-dimensional kinematics of eye, head and limb movements. Amsterdam: Harwood Academic Publishers.

Tweed, D. (1997b). Three-dimensional model of the human eye-head saccadic system. Journal of Neurophysiology, 77, 654–666. [PubMed] [Article]

Tweed, D. (1997c). Visual-motor optimization in binocular control. Vision Research, 37, 1939–1951. [PubMed]

Van Rijn, L. J., & Van den Berg, A. V. (1993). Binocular eye orientation during fixations: Listing’s law extended to include eye vergence. Vision Research, 33, 691–708. [PubMed]

Westheimer, G. (1978). Vertical disparity detection: Is there an induced size effect? Investigative Ophthalmology & Visual Science, 17, 545–551. [PubMed] [Article]

Williams, T. D. (1970). Vertical disparity in depth perception. American Journal of Optometry and Archives of American Academy of Optometry, 47, 339–344. [PubMed]



