Describe the steps of the auditory pathway from the ganglion cells.
- 1. Spiral ganglion cells send axons through the auditory nerve to terminate at the cochlear nucleus.
- 2. Cochlear nucleus --> Superior Olivary Complex (SOC) in the brainstem
- here, signals from both ears are integrated (quite a big difference from the visual system, where integration only happens at the cortex)
- 3. SOC --> Inferior colliculus (in the midbrain, involved in localisation)
- 4. Inferior colliculus --> Medial geniculate nucleus (in the thalamus)
- 5. Medial geniculate nucleus --> primary auditory cortex
- Binaural combination of signals occurs at all stages from the SOC up.
Tonotopy in the BM (basilar membrane) is preserved in the __ __ such that the __ __ __ is ___ organised in a way that mirrors the BM.
- ascending pathway
- primary auditory cortex
The main cues we use for sound localisation come from...?
- aural disparities
- (because the ears sit on opposite sides of the head, we hear slightly different things with each ear)
What coordinate frame can we use to describe the location of points relative to the head and ears?
- Sphere centred around the intersection of the interaural axis (geometrical construct of straight line passing through both ears) and the midline of head
- we can describe any point relative to this by using azimuth and elevation and its distance.
- This shows how sound sources located away from the midline are at different physical distances from the two ears. Implications for intensity and time of arrival.
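The coordinate frame above can be sketched in code. This is a minimal Python conversion from (azimuth, elevation, distance) to Cartesian coordinates; the axis convention and all names are illustrative choices, not from the source:

```python
import math

def interaural_coords(azimuth_deg, elevation_deg, distance):
    """Convert (azimuth, elevation, distance), measured from the centre of
    the interaural axis, to Cartesian coordinates: x toward the right ear,
    y straight ahead, z upward. Conventions vary; this is one common choice."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.sin(az)
    y = distance * math.cos(el) * math.cos(az)
    z = distance * math.sin(el)
    return x, y, z

# A source 2 m away at 30 deg azimuth (to the right), 0 deg elevation:
# it sits off the midline, so it is physically closer to the right ear,
# which is what produces intensity and time-of-arrival differences.
x, y, z = interaural_coords(30, 0, 2.0)
```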
What are the two main different ways aural disparities can help with the localisation of sound? Which is better for which frequencies?
- 1. Interaural intensity difference (IID)
- Effect is greater for high frequency sounds
- 2. Interaural timing difference (ITD)
- Better for low frequency sounds (ambiguous at high frequencies, >1500 Hz); maximum difference is about 650 microseconds
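The maximum ITD figure can be checked with a standard spherical-head approximation (Woodworth's formula). A minimal sketch, assuming a head radius of ~8.75 cm and speed of sound 343 m/s; the function name is illustrative:

```python
import math

def itd_woodworth(azimuth_deg, head_radius=0.0875, speed_of_sound=343.0):
    """Interaural time difference (seconds) via Woodworth's spherical-head
    approximation: ITD = (a/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius / speed_of_sound) * (theta + math.sin(theta))

# Maximum ITD occurs with the sound directly to one side (90 deg azimuth),
# giving roughly 650 microseconds for a typical head.
max_itd = itd_woodworth(90)

# A 1500 Hz tone has a period of ~667 microseconds, comparable to the
# maximum ITD, which is why phase comparisons become ambiguous above that.
period_1500hz = 1 / 1500
```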
How does the sound intensity differ between different ears?
- Inverse square law: sound intensity falls in proportion to the square of distance. However, unless the sound is very close, this alone will not change the intensity between the ears much.
- Acoustic shadow: the head creates a 'shadow' - ear on side of head away from source will receive less intense sound because head gets in the way (max diff about 20dB)
- Magnitude of shadowing effect greater for higher freq sounds because they get attenuated more easily by obstacles.
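The inverse-square-law point can be made concrete: because intensity falls as 1/r², the level difference between the ears from path length alone is 20·log10(r_far/r_near) dB. A small sketch with illustrative distances (an ear separation of ~17.5 cm is assumed):

```python
import math

def level_difference_db(distance_near, distance_far):
    """Interaural level difference from the inverse square law alone:
    intensity falls as 1/r^2, so the dB difference is 20*log10(r_far/r_near)."""
    return 20 * math.log10(distance_far / distance_near)

# A source 3 m away along the interaural axis: the far ear is only ~17.5 cm
# further, so the difference is ~0.5 dB, i.e. negligible.
far_source = level_difference_db(3.0, 3.175)

# A source 10 cm from the near ear: now the difference is large (~9 dB),
# so distance alone matters only for very close sounds.
near_source = level_difference_db(0.10, 0.275)
```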
Who is important in coming up with a theory about localising sound and what was the theory?
- Lord Rayleigh
- Duplex theory
- interaural intensity differences for high frequency sounds, and interaural timing differences for low frequency sounds
What did Lord Rayleigh do to test the idea that interaural timing differences can be enough to produce the perception of auditory location?
- Used 2 tuning forks, one of which was slightly mistuned, one presented to each ear
- Binaural beats: subjects experienced the sound moving around inside the head, showing that small interaural timing (phase) differences provide vital location cues
Why are timing differences unreliable at higher frequencies?
- because of the aliasing (phase ambiguity) problem
- when the sound's period is shorter than the maximum interaural delay, the brain cannot tell which sound peaks from one ear correspond to which peaks from the other
What do further psychophysical experiments show?
- Using headphone presentation, the two cues can be studied in isolation
- show that ability to discriminate angle depends very much on frequency (eg. at 30deg azimuth at 9000Hz, discrimination is about 13 degrees) (From Mills 1958)
- pure tone/directly in front: differences of 1 degree can be detected at 1000Hz (ITD - 10microsec; IID - 0.5dB)
- The cues can be combined together or put into opposition (cue trading)
- Relative importance of both cues still debated. In reality, we probably experience both as we often hear broadband sounds that have both high and low freq (eg speech).
How does the Superior Olivary Complex support sound localisation?
- Medial SOC: timing differences. Individual cells prefer particular delays between the left and right ears
- Lateral SOC: intensity differences. Different firing rates of inputs from left and right cochlear nuclei.
Who originally proposed a neural mechanism for detecting timing differences between sounds from different ears? What is it like?
- Lloyd Jeffress (early 1950s)
- based on the idea that signals from the contralateral cochlear nucleus have to travel further, and neural transmission has finite speed.
- A coincidence detector will fire when it receives concurrent inputs from both ears
- The extra delay from a particular ear would allow a given coincidence detector to signal the azimuth of the sound source.
- Subsequent research has revealed that the anatomical structure of the medial superior olive is similar to the wiring diagram predicted by Jeffress.
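Jeffress's coincidence-detector array can be sketched as a toy simulation: each detector delays one ear's spikes by a fixed amount and counts how many then coincide with the other ear's spikes; the delay with the most coincidences estimates the ITD. All names and the integer time units are illustrative, not from the source:

```python
def jeffress_best_delay(left_spikes, right_spikes, candidate_delays):
    """Toy Jeffress model: each candidate delay represents one coincidence
    detector's internal conduction delay on the left-ear line. The detector
    whose delay matches the interaural lag gets the most coincident spikes."""
    def coincidences(delay):
        delayed_left = {t + delay for t in left_spikes}
        return len(delayed_left & set(right_spikes))
    return max(candidate_delays, key=coincidences)

# Source on the left: sound reaches the left ear first, so right-ear spikes
# lag by 3 time units. The detector with an internal delay of 3 fires most.
estimated_itd = jeffress_best_delay([10, 20, 30], [13, 23, 33], range(6))
```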
Simply relying on IID and ITDs, however, may not be enough to figure out where the sound comes from. Why?
- Interaural differences do not indicate whether sounds are coming from in front or behind, or above or below.
- This aural ambiguity is called the cone of confusion: all points on a cone centred on the interaural axis produce the same interaural differences
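The cone of confusion falls straight out of the geometry: any positions at the same distances from the two ears produce the same ITD. A minimal free-field sketch (point ears on the x-axis, no head shadowing; names and distances are illustrative):

```python
import math

def itd_geometric(x, y, z, ear_sep=0.175, c=343.0):
    """Free-field geometric ITD: difference in straight-line path length to
    two point ears on the x-axis, divided by the speed of sound.
    Ignores head shadowing; a sketch of the geometry only."""
    left = (-ear_sep / 2, 0.0, 0.0)
    right = (ear_sep / 2, 0.0, 0.0)
    return (math.dist((x, y, z), left) - math.dist((x, y, z), right)) / c

# Mirror positions in front of (y > 0) and behind (y < 0) the head, and a
# position above the interaural axis, all give exactly the same ITD:
front = itd_geometric(1.0, 1.0, 0.0)
back = itd_geometric(1.0, -1.0, 0.0)   # identical: front/back ambiguity
above = itd_geometric(1.0, 0.0, 1.0)   # identical: the cone of confusion
```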
Broadly speaking, what two cues can we talk about when considering localisation?
- Binaural cues (from the two ears), especially for azimuth
- Spectral cues (info about localisation contained in differences in the distribution/spectrum of frequencies that reach ear from different locations) esp for elevation
How do we deal with this ambiguity?
- Moving the head: provides more information
- Head Related Transfer Function: information gained from the patterns of reflection, filtering and resonance that come from...
- the outer ear (pinna cues), the shape of the head, the shape of the upper torso (we can also call these spectral cues)
- A listener calibrated to the filtering properties of their own auditory system can use this to localise sounds (especially sound source elevation)
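Applying an HRTF to a sound amounts to filtering it with a pair of direction-specific impulse responses, one per ear. A minimal sketch using plain FIR convolution; the impulse-response values here are made up for illustration (a real system uses measured responses per direction):

```python
def convolve(signal, impulse_response):
    """Plain FIR convolution - applying an HRTF filter amounts to this."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

# Hypothetical impulse responses for one source direction (not measured data):
hrir_left = [1.0, 0.3]          # nearer ear: stronger, arrives earlier
hrir_right = [0.0, 0.5, 0.15]   # farther ear: delayed and attenuated
mono_click = [1.0, 0.0, 0.0]
left_ear = convolve(mono_click, hrir_left)
right_ear = convolve(mono_click, hrir_right)
```

Convolving one mono recording with different measured response pairs is how a single set of HRTF measurements can simulate sounds at different locations over headphones.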
How do Pinna cues work? Give some studies.
- Batteau (1967): normal stereophonic recordings played over headphones made listeners feel the source was inside their heads. However, if the microphones were encased in casts of the pinnae, the sound felt like it came from outside the head.
- Gardner & Gardner (1973): elevation discrimination much poorer if pinnae were filled with molding compound
- King (2001): placing microphones inside ear. Broadband sound at elevations 15deg above and below head. ITD and IID are same (obviously), but microphone picked up different patterns of frequencies for the two locations.
- Hofman et al (1998): moulds inserted to change pinna shape. For the first few days, elevation localisation is terrible, but after 19 days estimation is pretty good. When the moulds are removed, localisation with the original ears is still good. Training with the moulds seems to create a new set of correlations between spectral cues and location.
We can simulate different physical __ for any sound based on one set of measurements of the __ __ __ __.
- Head Related Transfer Function
What are other benefits of having 2 ears?
- Binaural unmasking: signals are more detectable because noise that is identical at the two ears can be cancelled out
- If the noise is similar in both ears, detection improves when the signal is slightly different in each ear. In particular...
- 1. phase of signal change in one ear
- 2. signal turned off in one ear
- this disparity between signal and background noise will mean they are attributed to different locations in space.
- the cocktail party effect