Sound Localization

by Elaine Fu and Audra Hansard

 


 

 

Abstract:

This research investigated the relationship between the frequency of a sound and a listener's ability to localize it.  Several blindfolded subjects listened to tones at different frequencies and indicated where each sound was coming from.  Their answers were recorded, and their localization ability was determined by measuring the angle by which they were off.  We found that the lower frequency was easier to localize than the higher frequency.


Introduction:

In this paper, we explore how the frequency of a sound affects the listener's ability to locate its source.  For the purposes of this experiment, we call the point in time when the sound waves are created a sound event.  This distinguishes the creation of the sound from the moment when the acoustic nerve in the ear is stimulated, that is, when the sound is heard, which we call an auditory event (Blauert 1983).

Sound is created when a vibration travels through a medium; in this experiment the medium is air.  The rate at which the particles of the medium vibrate is what we call frequency: the number of complete vibrations a particle makes during a given unit of time.  As sound waves travel through air, the frequency remains constant, because each particle's vibration is driven by its nearest neighbor.  In other words, when the first particle vibrates at 500 Hz, it drives the second particle, which also vibrates at 500 Hz.  This matters for our experiment because if the frequency changed while a subject was listening, it might enhance or undermine their ability to localize the sound.
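To make the definition concrete, here is a small Python sketch that computes the period and wavelength of the 500 Hz tone in the example above.  It assumes a textbook speed of sound of about 343 m/s in room-temperature air, which is not a value measured in this experiment:

    SPEED_OF_SOUND = 343.0  # m/s, approximate speed of sound in room-temperature air (assumed)

    frequency_hz = 500.0                          # complete vibrations per second
    period_s = 1.0 / frequency_hz                 # time for one complete vibration: 0.002 s
    wavelength_m = SPEED_OF_SOUND / frequency_hz  # distance between successive wavefronts: ~0.69 m
    print(period_s, wavelength_m)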

Sound localization, as defined by Glorig, is the "determination, by auditory responses alone, of the direction from which sound comes to the ear."  Research in sound localization began over a century ago.  The purpose of the earlier studies was to determine which of the nerve signals the brain receives from the acoustic nerve carry information about the origin of the sound event (Gourevitch 1987).  These early researchers encountered many difficulties, such as controlling the many variables present in a subject's auditory space.  For example, each person has a different musical background, which affects their sound perception.  The problem of localization blur is also hard to avoid: the position of the auditory event is less sharply defined than the position of the sound source itself (Blauert 1983).

An important aspect of this experiment is the anatomy of the human ear.  How the sound signals in the ear canals are processed is the most important part of spatial hearing.  The ear is made up of three parts: the external ear, which we can see on a person's head; the middle ear, which equalizes air pressure and dampens sound shock in order to protect the inner ear; and the inner ear, which gathers the sensory data to send to the brain (Glorig 1958).  The inner ear contains the acoustic nerve and is the organ we will be testing: we will try to determine its accuracy in locating the origins of various sound events.

Humans can generally hear pitches from as low as 16 Hz up to about 20 kHz.  Our problem is to determine the accuracy of localization at a high frequency versus a low frequency.  Our hypothesis is that localization of sound events is more precise at a lower frequency (4,500 Hz) than at a higher frequency (10,000 Hz).


Materials / Procedure:

Here is how the experiment was done.  We used four subjects, a blindfold, a speaker, a frequency generator, a meter stick, and tape.  We used one speaker rather than several in order to concentrate the origin of the sound; multiple speakers would have made the sound harder to locate because there would have been multiple sources.  An X was taped to the floor 40.25 in away from the speaker and frequency generator.  Each subject stood on the X, blindfolded, spun around in both directions, and stopped at random, facing an arbitrary direction.

Then the frequency generator was turned up slowly to eliminate any noise caused by the on-off switch.  Two frequencies were tested: a high frequency of 10,000 Hz and a low frequency of 4,500 Hz.  Each frequency was played three times for each subject, for a total of twelve trials per frequency.  The subjects were asked to listen for the direction of the sound and to point, with the tip of the foot that was on the X, toward the direction it seemed to come from.  We laid the meter stick on the floor so that one end touched the tip of the subject's foot and the stick ran parallel to the foot, along the pointed direction, and the distance from the subject's foot to the speaker was measured along that direction.  After each measurement, the subject was asked to rate their certainty in determining the origin of the sound on a scale of 1 to 10, with 1 being the most uncertain and 10 the most certain.

If a subject located the direction of the sound exactly, the measured distance from the tip of their foot to the speaker was 40.25 in, the distance from the X to the speaker.  If the subject was off, the line they pointed along and the line from the X to the speaker formed a triangle: the measured distance along the pointed direction was the hypotenuse, and the distance between the speaker and the X was a leg of the triangle.
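We used a hardware frequency generator; anyone wishing to reproduce the two test tones without one could synthesize them in software.  Here is a minimal sketch, not part of the original setup, that writes each tone to a mono WAV file using only the Python standard library:

    import math
    import struct
    import wave

    SAMPLE_RATE = 44100  # samples per second; well above twice the highest test frequency

    def write_tone(path, freq_hz, seconds=3.0, amplitude=0.5):
        """Write a pure sine tone to a 16-bit mono WAV file."""
        n_samples = int(SAMPLE_RATE * seconds)
        with wave.open(path, "wb") as wav:
            wav.setnchannels(1)       # mono: a single speaker, as in the experiment
            wav.setsampwidth(2)       # 16-bit samples
            wav.setframerate(SAMPLE_RATE)
            frames = b"".join(
                struct.pack("<h", int(amplitude * 32767 *
                                      math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)))
                for i in range(n_samples))
            wav.writeframes(frames)

    # The two test tones used in the experiment:
    write_tone("tone_4500.wav", 4500.0)
    write_tone("tone_10000.wav", 10000.0)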

Using the hypotenuse and the leg, the angle by which the subject was off was calculated (figure 3).  If the subject located the origin of the sound perfectly, the angle off is zero; any error in localization gives an angle greater than zero, and the larger the angle, the further off the localization.  The equation we used to calculate the angle θ is

cos θ = (leg) / (hypotenuse)

cos θ = (40.25 in) / (hypotenuse)

θ = cos⁻¹ [(40.25 in) / (hypotenuse)]
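Note that the inverse cosine was evaluated in radian mode, so the angles reported in the data tables below are in radians (multiply by 180/π ≈ 57.3 to convert to degrees).  As a check on the calculation, here is a minimal Python sketch of the angle computation; the 40.25 in baseline comes from the procedure above, and the function name is ours:

    import math

    LEG_IN = 40.25  # fixed distance (in) from the X to the speaker: the leg of the triangle

    def angle_off(hypotenuse_in):
        """Angle between the pointed direction and the true direction, in radians."""
        return math.acos(LEG_IN / hypotenuse_in)

    # Example: a measured hypotenuse of 42.5 in gives about 0.3268 rad (about 18.7 degrees),
    # matching the first entry in the 10,000 Hz table.
    print(angle_off(42.5))
    print(math.degrees(angle_off(42.5)))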


Data (10,000 Hz):

Subject    Hypotenuse (in)    Angle off (rad)    Certainty level
A          42.5               0.3268             8
A          40.3               0.0498             8
A          41.25              0.2206             8.5
B          41.75              0.2689             8
B          41.5               0.2461             8
B          40.75              0.1568             8
C          40.25              0.0                7
C          42                 0.2896             8
C          41.5               0.2461             10
D          40.25              0.0                9
D          40.25              0.0                9
D          41.7               0.2645             8


Data (4,500 Hz):

Subject    Hypotenuse (in)    Angle off (rad)    Certainty level
A          40.7               0.1488             9
A          40.25              0.0                10
A          40.8               0.1644             10
B          40.25              0.0                8
B          40.9               0.1785             9
B          41.2               0.2152             8
C          42                 0.2896             8.5
C          41                 0.1915             8.5
C          40.25              0.0                9
D          40.25              0.0                9
D          40.9               0.1785             9
D          41.2               0.2152             10


 


Graphs:

(Graphs of the data above appeared here.)


Analysis / Conclusion:

When we started the project, we had a hard time settling on a hypothesis.  We assumed that a high frequency would produce a louder sound (more decibels) than a low frequency, making the high frequency easier to detect.  Consequently, our initial hypothesis was that a higher pitch would be easier to localize.  This changed, however, when we considered why humans cannot hear dog whistles.  Through research, we found that dog whistles produce sounds that are too high (above 20,000 Hz) for the human ear to hear; above that frequency the wavelengths become too short for humans to sense.

Lower frequencies, on the other hand, can be localized well for two reasons.  First, low frequencies have long wavelengths, which bend around obstacles, such as the head itself, more easily than short wavelengths do.  Second, because the ears are separated by the head, there is a delay between the sound reaching the ear closer to the speaker and the ear farther from it.  This delay creates an interaural time difference that helps in sound localization: if the sound reaches one ear a fraction of a millisecond before the other, the brain takes this as an indication that the sound is coming from the side of the ear that hears it first.  At high frequencies, however, this cue breaks down: once the wavelength becomes shorter than the distance between the ears, the phase difference between the two ears becomes ambiguous, and the sound seems to arrive at both ears almost simultaneously.  This lack of a usable delay creates the confusion that the sound is coming from many different directions.  Based on these facts, our final hypothesis was that a lower frequency is easier to localize than a higher frequency.
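To make these scales concrete, here is a short sketch that computes the wavelength of each test tone and the worst-case interaural time difference.  It assumes textbook values, a speed of sound of 343 m/s in room-temperature air and an ear-to-ear separation of roughly 18 cm, neither of which was measured in this experiment:

    SPEED_OF_SOUND = 343.0  # m/s in room-temperature air (textbook value, assumed)
    EAR_SEPARATION = 0.18   # m, rough ear-to-ear distance (assumed, not measured here)

    # Worst case: sound arriving directly from one side must travel the full
    # ear-to-ear distance farther to reach the far ear (a simple approximation).
    max_itd_ms = EAR_SEPARATION / SPEED_OF_SOUND * 1000
    print(f"max interaural time difference: {max_itd_ms:.3f} ms")  # ~0.525 ms

    for freq_hz in (4500.0, 10000.0):
        wavelength_cm = SPEED_OF_SOUND / freq_hz * 100
        print(f"{freq_hz:.0f} Hz -> wavelength {wavelength_cm:.1f} cm")  # ~7.6 cm and ~3.4 cm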

From the data, it looks like the lower frequency (4,500 Hz) barely wins the contest of which is easier to localize over the higher frequency (10,000 Hz).  This surprised us; the data almost make it appear that low and high frequencies are equal in a subject's ability to localize the sound.  The mean angles off for 10,000 Hz and 4,500 Hz are 0.172 and 0.132 radians, respectively.  The mean certainty levels for 10,000 Hz and 4,500 Hz are 8.29 and 9, respectively.  The standard deviations of the angle off for 10,000 Hz and 4,500 Hz are 0.120 and 0.099 radians, respectively.  All of the higher-frequency values were extremely close to the lower-frequency values; we had expected a much greater difference between the two frequencies.

Even though the data did not fully confirm our expectations, we still believe our initial expectations were correct, for the following reasons.  In conducting this research, we discovered many uncontrollable variables and sources of error.  The least controllable aspects, and the largest causes of error, in our experiment were the environment and the equipment.  Due to budget cutbacks to our already low funding, we were unable to obtain the most desirable environment and equipment.  In the most scientifically accurate setup, the experiment would be conducted in a cube-shaped room without any reflective surfaces: evenly constructed on all sides and free of clutter and protruding surfaces (i.e., no furniture, no columns, etc.).  Instead, our low budget led us to use a high school science classroom that was rectangular and contained large objects of various shapes and sizes, such as countertops, desks, and other scientific equipment.

The shape of the room detracted from the accuracy of our data: because the room is longer than it is wide, sound waves traveling along its length travel farther, and so take longer to reflect off the walls than waves traveling across its width.  The subjects therefore heard the width-reflected waves before the length-reflected ones, which distorted the reflection pattern they heard.  The objects in the room also changed the subjects' auditory events relative to the sound event.  When sound waves hit objects such as desks and counters, they bounced off in all directions, disturbing the paths of the waves: waves that were originally supposed to bounce off the walls and into a subject's ear were sent elsewhere, and the reflections off the objects also caused sound waves to collide with one another, interfering with their paths even more.
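The summary statistics can be recomputed directly from the tables; here is a sketch using Python's statistics module (population standard deviation), with the angle-off values transcribed from the data above:

    from statistics import mean, pstdev

    # Angle-off values (radians) transcribed from the data tables above
    high_10000 = [0.3268, 0.0498, 0.2206, 0.2689, 0.2461, 0.1568,
                  0.0, 0.2896, 0.2461, 0.0, 0.0, 0.2645]
    low_4500 = [0.1488, 0.0, 0.1644, 0.0, 0.1785, 0.2152,
                0.2896, 0.1915, 0.0, 0.0, 0.1785, 0.2152]

    for label, data in (("10,000 Hz", high_10000), ("4,500 Hz", low_4500)):
        print(f"{label}: mean {mean(data):.3f} rad, std dev {pstdev(data):.3f} rad")
    # 10,000 Hz: mean 0.172 rad, std dev 0.120 rad
    # 4,500 Hz:  mean 0.132 rad, std dev 0.099 rad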

The other uncontrolled environmental disturbance was the temperature of the room.  Air molecules move more slowly at cooler temperatures than at warmer ones, and the room we used was cool, which means the sound waves traveled more slowly than they would have under warmer conditions.  Since sound is carried by vibrations of the air, and the air molecules were moving relatively slowly due to the temperature of the room, the sound reached the two ears with a slightly longer delay between them.  As discussed above, a delay between the two ears helps in localizing sound; since this delay was extended by the cool temperature, the ability to localize sound also increased.  In other words, localization should be easiest when the 4,500 Hz tone is played in a cool environment and hardest when the 10,000 Hz tone is played in a warm one.  Because the 10,000 Hz waves were slowed down, the subjects were better able to localize the 10,000 Hz tone than we had hypothesized, making the data for 4,500 Hz and 10,000 Hz similar.
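To put rough numbers on the temperature effect, here is a sketch using the standard linear approximation for the speed of sound in dry air, with the same assumed 18 cm ear separation as above.  The two temperatures are illustrative; the room temperature was not recorded in this experiment:

    def speed_of_sound(temp_c):
        """Approximate speed of sound in dry air (m/s) for a temperature in Celsius."""
        return 331.3 + 0.606 * temp_c

    EAR_SEPARATION = 0.18  # m, assumed ear-to-ear distance (same assumption as above)

    for temp_c in (16.0, 24.0):  # a cool room vs. a warm room (illustrative values)
        v = speed_of_sound(temp_c)
        itd_ms = EAR_SEPARATION / v * 1000  # worst-case interaural delay, ms
        print(f"{temp_c:.0f} C: {v:.1f} m/s, max ITD {itd_ms:.4f} ms")
    # 16 C: 341.0 m/s, max ITD 0.5279 ms
    # 24 C: 345.8 m/s, max ITD 0.5205 ms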

The airflow and noise from the room's ventilation system also interfered with the sound waves.  Moving air carries the sound vibrations along in the direction the ventilation is blowing, so some of what was supposed to be heard by the subject was carried away by the air currents.  The noise from the ventilation system also changed the pattern of the test tone's waves: some of the ventilation noise clashed with the tone, while some mixed in with it, making the source of the sound harder to identify.  We tried to eliminate this problem by blocking the airflow with large pieces of cardboard, but found that although the moving air was strong enough to disturb the sound, the currents were too weak for us to feel and trace.  We could have partitioned off an area of the room, but air would still have traveled in through cracks and over the top of the partition.

The uncontrollable source of error in our subjects was their previously established perception of sound.  The subjects had different prior experiences with music and sound.  Some were music students, which gave them better background knowledge of sound perception and localization; because of this background, they were more experienced at using their ears as a tool of observation.  Other subjects, by contrast, had weakened their ability to hear with precision by listening to extremely loud sounds through headphones, which harms the small hair cells in the inner ear that pick up sound waves.  Consequently, some subjects had a prior advantage over others in sound localization.


Bibliography:

Blauert, Jens.  Spatial Hearing: The Psychophysics of Human Sound Localization.  Cambridge, MA:  MIT Press.  1983.

Franke, J.L.  "Sound Localization."  Dec. 23, 1996.  Accessed Dec. 8, 2002.  <http://whistlepig.cs.indiana.edu:31415/q700/node2.html#SECTION00011000000000000000>.

Gelfand, Stanley A.  Hearing: An Introduction to Psychological and Physiological Acoustics.  3rd ed.  New York, NY:  Marcel Dekker, Inc.  1998.

Glorig, Aram.  Noise and Your Ear.  New York, NY:  Grune & Stratton.  1958.

Howard Leight Company.  "Education."  1998.  Accessed Dec. 7, 2002.  <http://www.howardleight.com/Industrial/education/NoiseLevelsAndFrequency.html>.

Pickles, James O.  An Introduction to the Physiology of Hearing.  2nd ed.  San Diego, CA:  Academic Press Limited.  1988.

Snyder, Jeff.  "Sound Waves."  Accessed Dec. 30, 2002.  <http://csunix1.lvc.edu/~snyder/1ch2.html>.

Yost, William A. and Gourevitch, George.  Directional Hearing.  New York, NY: Springer-Verlag.  1987.


Links:

Synchronized Structured Sound: This site delivers excellent factual information on sound localization.  It shows pictures of different sound-localization experiments and equations that calculate the amplitude of a sound based on the speaker's angle from the listener.

 

Basic Studies of Human Sound Localization: As the title says, this is a short but informative paragraph on the very basics of human sound localization.  It touches on all the important aspects of sound localization.

 

3 Sound Localization: A very informative site that explains interaural intensity difference, interaural time difference, head-related transfer functions, techniques for calculating interaural time delay, and much more.

 

Sound Localization PPT: A PowerPoint presentation that gives easy-to-follow information on why sound localization is important.  Each slide presents a different aspect of sound localization in a way that is informative yet easy to understand.

 

Monaural Hearing and Sound Localization: This site discusses human hearing as well as sound localization.  It cites many different experiments done on monaural hearing and sound localization, and it is very effective in explaining both subjects.
