Martin S Banks | martybanks@berkeley.edu | http://bankslab.berkeley.edu/ | Optometry & Vision Science, University of California, Berkeley, CA, USA |
Martin S Banks: Expert Impact
Concepts for which Martin S Banks has direct influence: Focus cues, Stereo displays, Eye movements, Slant contrast, Visual discomfort, Binocular vision, Motion artifacts, Contrast sensitivity.
Martin S Banks: KOL impact
Concepts related to the work of other authors for which Martin S Banks has influence: Optic flow, Multisensory integration, Visual perception, Eye movements, Virtual reality, Contrast sensitivity, Binocular vision.
KOL Resume for Martin S Banks
Year | Affiliation |
---|---|
2021 | http://bankslab.berkeley.edu/. Optometry UC Berkeley, United States University of California |
2020 | UC Berkeley |
2019 | School of Optometry, University of California at Berkeley |
2018 | Optometry & Vision Science, University of California, Berkeley, Berkeley, CA, USA |
2017 | University of California, Berkeley, CA |
2016 | UC Berkeley – UCSF Graduate Program in Bioengineering, Berkeley, CA 94720, USA University of California, Berkeley, Berkeley CA |
2015 | School of Optometry, University of California, Berkeley, Berkeley, CA 94720, USA. UC Berkeley - UCSF Graduate Program in Bioengineering, Berkeley, CA, USA |
2014 | Vision Science Program, University of California, Berkeley, California 94720; and UC Berkeley – UCSF Graduate Program in Bioengineering Univ. of California, Berkeley (United States) |
2013 | Vision Science Program, School of Optometry, University of California, Berkeley, Berkeley, CA, USA |
2012 | Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA Vision Science Program, University of California, Berkeley |
2011 | Vision Science Program, UC Berkeley, USA Department of Psychology, University of California, Berkeley, Berkeley, California 94720, |
2010 | Vision Science Program, University of California, Berkeley, Berkeley, California 94720, and School of Optometry, Department of Psychology, Helen Wills Neuroscience Center, UC Berkeley, Berkeley, CA, USA |
2009 | Vision Science Program, Department of Psychology and Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA |
2008 | UC Berkeley, School of Optometry, Berkeley, CA, USA University of California, Berkeley. |
2006 | Vision Science Program, University of California, 94720, Berkeley, CA, USA |
2005 | Vision Science Program, School of Optometry and Helen Wills Neuroscience Institute, Department of Psychology, University of California, Berkeley, CA, USA |
2004 | Vision Science Program, School of Optometry, and Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720-2020, USA. Department of Psychology and Wills Neuroscience Institute, University of California, Berkeley, CA, USA |
2003 | University of California, Berkeley, Vision Science Program, School of Optometry, Berkeley, CA 94720-2020 USA |
2002 | Department of Psychology, University of California, Berkeley, CA 94720-1650, USA |
2001 | Vision Science Program and Department of Psychology, University of California, Berkeley, CA, USA |
2000 | Vision Science Program & School of Optometry, University of California, 94720-2020, Berkeley, California, USA University of California, Berkeley, California |
1999 | Department of Psychology, University of California at Berkeley Author to whom all correspondence and requests for reprints should be addressed. |
1998 | Department of Psychology, School of Optometry, University of California, Berkeley, CA 94720–2020, USA |
1996 | School of Optometry and the Department of Psychology, University of California, Berkeley, California |
1994 | School of Optometry and Department of Psychology, University of California—Berkeley, Berkeley, CA 94720, U.S.A. |
1993 | School of Optometry, University of California, 94720, Berkeley, CA |
Concept | World rank |
---|---|
contributions preneural mechanisms | #1 |
optics well-focused eye | #1 |
smaller errors acuity | #1 |
Nat Neurosci sentences | #1 |
color-opponent pathways | #1 |
retinal coordinates points | #1 |
optics receptor properties | #1 |
fixation preference paradigm | #1 |
plane displays | #1 |
intended saccade amplitude | #1 |
vision science state | #1 |
mismatch visual discomfort | #1 |
maximum-likelihood integrator model | #1 |
horizontally panoramic view | #1 |
gratings differing | #1 |
radially-oriented gratings implications | #1 |
visibility motion artifacts | #1 |
convexity female form | #1 |
stereopsis natural environment | #1 |
Hering's laws | #1 |
averaging conflict | #1 |
closer footholds | #1 |
heading perception conditions | #1 |
footholds locomotion | #1 |
visual direction task | #1 |
points horopter | #1 |
extraretinal cyclovergence | #1 |
presented depth cues | #1 |
early nonlinearity | #1 |
realistic depth-dependent blur | #1 |
reducing vergence | #1 |
neonatal contrast sensitivity | #1 |
gravity monocular cue | #1 |
vertical-shear disparities signals | #1 |
intended scene blur | #1 |
peripheral spatial vision | #1 |
focus cues accommodation | #1 |
pairings patterns | #1 |
spectrum facelike | #1 |
stereopsis explained | #1 |
temporal frequencies conditions | #1 |
portrayed object | #1 |
single vep paradigm | #1 |
vergence accommodation mismatch | #1 |
effective focus cues | #1 |
contrast spatial scale | #1 |
movements fuse | #1 |
vergence-accommodation conflicts | #1 |
discrimination observer | #1 |
Prominent publications by Martin S Banks
How does the visual system combine information from different depth cues to estimate three-dimensional scene parameters? We tested a maximum-likelihood estimation (MLE) model of cue combination for perspective (texture) and binocular disparity cues to surface slant. By factoring the reliability of each cue into the combination process, MLE provides more reliable estimates of slant than would be available from either cue alone. We measured the reliability of each cue in isolation across a ...
Known for Cue Combination | Mle Model | Texture Disparity | Slant Estimation | Distance Increases |
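The MLE combination rule summarized above weights each cue by its reliability (the inverse of its noise variance). A minimal numerical sketch follows; the slant values and noise standard deviations are illustrative assumptions, not data from the study:

```python
# Reliability-weighted cue combination (MLE for independent Gaussian cues).
# All numbers below are made up for illustration.

def mle_combine(estimates, sigmas):
    """Combine single-cue estimates weighted by reliability r = 1/sigma^2.

    Returns the combined estimate and its standard deviation, which is
    never larger than the smallest single-cue sigma.
    """
    reliabilities = [1.0 / s**2 for s in sigmas]
    total_r = sum(reliabilities)
    combined = sum(r * e for r, e in zip(reliabilities, estimates)) / total_r
    combined_sigma = (1.0 / total_r) ** 0.5
    return combined, combined_sigma

# Hypothetical example: texture cue signals 30 deg slant (sigma = 4 deg),
# disparity cue signals 26 deg (sigma = 2 deg).
slant, sigma = mle_combine([30.0, 26.0], [4.0, 2.0])
# The combined estimate lies closer to the more reliable disparity cue,
# and its sigma is smaller than either single-cue sigma.
```

As the abstract notes, this is why the combined estimate is more reliable than either cue alone: the combined variance is 1 / (r_texture + r_disparity), which is smaller than either single-cue variance.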
Depth information from focus cues--accommodation and the gradient of retinal blur--is typically incorrect in three-dimensional (3-D) displays because the light comes from a planar display surface. If the visual system incorporates information from focus cues into its calculation of 3-D scene parameters, this could cause distortions in perceived depth even when the 2-D retinal images are geometrically correct. In Experiment 1 we measured the direct contribution of focus cues to perceived ...
Known for Focus Cues | Perceived Depth | Binocular Vision | Focal Distance | Slant Estimates |
Vestibular Heading Discrimination and Sensitivity to Linear Acceleration in Head and World Coordinates
Effective navigation and locomotion depend critically on an observer's ability to judge direction of linear self-motion, i.e., heading. The vestibular cue to heading is the direction of inertial acceleration that accompanies transient linear movements. This cue is transduced by the otolith organs. The otoliths also respond to gravitational acceleration, so vestibular heading discrimination could depend on (1) the direction of movement in head coordinates (i.e., relative to the otoliths), ...
Known for Heading Discrimination | Linear Acceleration | Movement Direction | Female Head | Motion Perception |
Estimating depth from binocular disparity is extremely precise, and the cue does not depend on statistical regularities in the environment. Thus, disparity is commonly regarded as the best visual cue for determining 3D layout. But depth from disparity is only precise near where one is looking; it is quite imprecise elsewhere. Away from fixation, vision resorts to using other depth cues, e.g., linear perspective, familiar size, aerial perspective. But those cues depend on statistical ...
Known for Blur Disparity | Complementary Cues | Binocular Vision | Fixation Depth | Visual Space |
The ability to judge heading during tracking eye movements has recently been examined by several investigators. To assess the use of retinal-image and extra-retinal information in this task, the previous work has compared heading judgments with executed as opposed to simulated eye movements. For eye movement velocities greater than 1 deg/sec, observers seem to require the eye-velocity information provided by extra-retinal signals that accompany tracking eye movements. When those signals ...
Known for Eye Movements | Rotational Flow | Simulated Rotation | Heading Estimates | Extraretinal Signals |
Three-dimensional (3D) displays have become important for many applications including vision research, operation of remote devices, medical imaging, surgical training, scientific visualization, virtual prototyping, and more. In many of these applications, it is important for the graphic image to create a faithful impression of the 3D structure of the portrayed object or scene. Unfortunately, 3D displays often yield distortions in perceived 3D structure compared with the percepts of the ...
Known for Visual Performance | Accommodation Conflicts | Focus Cues | Discomfort Fatigue | Depicted Scene |
When a person walks through a rigid environment while holding eyes and head fixed, the pattern of retinal motion flows radially away from a point, the focus of expansion (Fig. 1a) [1,2]. Under such conditions of translation, heading corresponds to the focus of expansion and people identify it readily [3]. But when making an eye/head movement to track an object off to the side, retinal motion is no longer radial (Fig. 1b) [4]. Heading perception in such situations has been modelled in two ways. ...
Known for Eye Movements | Heading Perception | Retinal Motion | Head Movement | Flow Field |
Effective movement planning should take into account the consequences of possible errors in executing a planned movement. These errors can result from either sensory uncertainty or variability in movement planning and production. We examined the ability of humans to compensate for variability in sensory estimation and movement production under conditions in which variability is increased artificially by the experimenter. Subjects rapidly pointed at a target region that had an adjacent ...
Known for Movement Variability | Stimulus Configurations | Finger Position | Subjects Target Region | Sensory Uncertainty |
Recent increased usage of stereo displays has been accompanied by public concern about potential adverse effects associated with prolonged viewing of stereo imagery. There are numerous potential sources of adverse effects, but we focused on how vergence-accommodation conflicts in stereo displays affect visual discomfort and fatigue. In one experiment, we examined the effect of viewing distance on discomfort and fatigue. We found that conflicts of a given dioptric value were slightly less ...
Known for Stereo Displays | Visual Discomfort | Fatigue Experiment | Ocular Adult | Mobile Devices |
Natural-Scene Statistics Predict How the Figure–Ground Cue of Convexity Affects Human Depth Perception
The shape of the contour separating two regions strongly influences judgments of which region is "figure" and which is "ground." Convexity and other figure-ground cues are generally assumed to indicate only which region is nearer, but nothing about how much the regions are separated in depth. To determine the depth information conveyed by convexity, we examined natural scenes and found that depth steps across surfaces with convex silhouettes are likely to be larger than steps across ...
Known for Depth Perception | Scene Statistics | Figure Ground | Binocular Disparity | Psychophysical Experiment |
The otoliths are stimulated in the same fashion by gravitational and inertial forces, so otolith signals are ambiguous indicators of self-orientation. The ambiguity can be resolved with added visual information indicating orientation and acceleration with respect to the earth. Here we present a Bayesian model of the statistically optimal combination of noisy vestibular and visual signals. Likelihoods associated with sensory measurements are represented in an orientation/acceleration ...
Known for Bayesian Model | Visual Cues | Gravitoinertial Force | Otolith Signal | Motion Platform |
The slant of a stereoscopically defined surface cannot be determined solely from horizontal disparities or from derived quantities such as horizontal size ratio (HSR). There are four other signals that, in combination with horizontal disparity, could in principle allow an unambiguous estimate of slant: the vergence and version of the eyes, the vertical size ratio (VSR), and the horizontal gradient of VSR. Another useful signal is provided by perspective slant cues. The determination of ...
Known for Eye Position | Stereoscopic Slant | Perspective Cues | Horizontal Disparity | Human Observers |
We examined the ability to use optic flow to judge heading when different parts of the retina are stimulated and when the specified heading is in different directions relative to the display. To do so, we manipulated retinal eccentricity (the angle between the fovea and the center of the stimulus) and heading eccentricity (the angle between the specified heading and the center of the stimulus) independently. Observers viewed two sequences of moving dots that simulated translation through ...
Known for Optic Flow | Retinal Eccentricity | Visual Fields | Retina Heading | Moving Dots |
When a small frontoparallel surface (a test strip) is surrounded by a larger slanted surface (an inducer), the test strip is perceived as slanted in the direction opposite to the inducer. This has been called the depth-contrast effect, but we call it the slant-contrast effect. In nearly all demonstrations of this effect, the inducer's slant is specified by stereoscopic signals; and other signals, such as the texture gradient, specify that it is frontoparallel. We present a theory of ...
Known for Slant Contrast | Test Strip | Surface Inducer | Linear Combination | Depth Perception |
Humans and many animals make frequent saccades requiring coordinated movements of the eyes. When landing on the new fixation point, the eyes must converge accurately or double images will be perceived. We asked whether the visual system uses statistical regularities in the natural environment to aid eye alignment at the end of saccades. We measured the distribution of naturally occurring disparities in different parts of the visual field. The central tendency of the distributions was ...
Known for Eye Movements | Natural Environment | Visual Field | Statistical Regularities | Horizontal Vergence |