
Transcript of Senior Thesis 7-2-16

Page 1: Senior Thesis 7-2-16

Washington State University

PHASED ARRAY OF SPEAKERS

A senior thesis submitted to the faculty of the

Department of Physics and Astronomy

by

Alan S. Hartquist

Fred Gittes, Advisor

Submitted in Partial Fulfillment

of the Requirements

for the Degree of

Bachelor of Science

Spring 2016

Page 2: Senior Thesis 7-2-16

Abstract

A phased array takes advantage of the principles of wave interference. We will develop the classical model of the interference of waves, and explore how it can be extended to a phased array. Once we have established the model in detail, we will use it to explore and predict the intensity patterns of acoustic waves propagating from an array of speakers.


Page 3: Senior Thesis 7-2-16

Table of Contents

List of Figures  iv

Chapter 1  1
1.1 Introduction  1
1.2 Overview  1

Chapter 2  4
2.1 Formulation  4

Chapter 3  10
3.1 Theoretical Results  10

Chapter 4  16
4.1 Custom Apparatus  16
4.2 Innards of Arduino  19

Chapter 5  28
5.1 Findings  28
5.2 Assumptions  28
5.3 Conclusion  29

Page 4: Senior Thesis 7-2-16

List of Figures

1.1 Basic model  2
1.2 Two point source interference  3

2.1 Out of phase  5
2.2 In phase  5
2.3 N point source interference  6

3.1 9 sources at 1000 Hz  12
3.2 9 sources at 5000 Hz  13
3.3 50 cm spacing at 1000 Hz  14

4.1 Custom apparatus  16
4.2 Oscilloscope 1  17
4.3 Arduino Due microcontroller  18
4.4 Remote control (Top)  18
4.5 First Attempted Handler Function  21
4.6 Quarter Phaseshift Diagram  23
4.7 Quarter Phaseshift Handler Function  23
4.8 Half Phaseshift Diagram  24
4.9 Half Phaseshift Function  24
4.10 Oscilloscope Screenshot With Resolution of 2  25
4.11 Phaseshift Mode Function  26
4.12 Final Handler Function  27

5.1 Zero Tdelay  29

Page 5: Senior Thesis 7-2-16

Chapter 1

1.1 Introduction

For my project I constructed a phased array of speakers. The purpose of this

project was to understand, and then observe, the effects of the interference of

sound waves.

A phased array takes advantage of the simple idea of the interference of waves.

By setting up an array of sources, one can offset each of their phases such that some

parts of the wave front cancel out, while other parts add up. Once you have used

this interference to construct a beam, you can change the relative phase between

adjacent sources, which allows you to steer the beam.

Modern radar often employs phased arrays; in the case of radar one uses electromagnetic waves. Without localizing the wave front to a small area, radar would not be as useful, because it would be harder to distinguish what you are detecting: is it the airplane in the sky, or is it the building across the street? We will cover this in a bit more detail later. To start, we need to understand waves at the most basic level.

1.2 Overview

In Figure 1.1a we see a schematic example of the most basic type of phased array,

here constructed out of eight coherent point sources a distance d away from each

other. Coherence means that all of the sources have the same amplitude and rate.

In Figure 1.1a all of the point sources are in phase, which means they all pulse

Page 6: Senior Thesis 7-2-16


Figure 1.1. Basic model

at the same rate and at the same time. This creates a wave front which is parallel to their orientation

and travels to the right. In Figure 1.1b the array has been fitted with a time delay,

τ, applied successively between each source from the top one to the bottom one. This

means that the array has been phased, and the effect is to change the direction of

the wave front.

Interfering waves can add together constructively or destructively. At point p1 we have constructive interference, and at point p2 we have destructive interference. Look again at Figure 1.1b: by changing the time delay between each source, you can control the direction of the wave front, and by adding more sources to your array you can make the wave front narrower, until it becomes a very powerful beam in one direction with nearly complete destructive interference in all other directions. We will come back to this; for now, we are just trying to paint a more intuitive picture. In my project the speakers are to be thought of as point sources like these, and they emit sound waves, which are pressure waves in the medium of air.

In Figure 1.2 we see a diagram depicting two point sources (green dots at the

bottom) interfering with each other. Imagine that these are the first two speakers

Page 7: Senior Thesis 7-2-16


on our phased array from Figure 1.1. The red dots are associated with places

which have constructive interference. The blue dots represent areas where we have destructive interference. These correspond to zones of high and low pressure, respectively.

Figure 1.2. Two point source interference

Page 8: Senior Thesis 7-2-16

Chapter 2

2.1 Formulation

The interference of two pressure waves can be demonstrated mathematically by

adding the wave functions of the two individual waves. The wave functions

are given as:

P1(x, t) = P0 cos(kx− ωt) (2.1)

P2(x, t) = P0 cos(kx− ωt+ φ) (2.2)

where P(x, t) is the pressure of the wave at position x and time t, P0 is the maximum amplitude of the wave, k is the wave number, ω is the angular frequency, and φ is the relative phase. If we graph these two functions together we can observe

how they would superpose.

In Figure 2.1 and Figure 2.2 we see blue, orange, and green lines; these correspond to P1, P2, and their sum, respectively. Figure 2.1 shows the two waves out of phase (a phase shift of π has been added to P2). As you can see, the green line demonstrates that they cancel each other out. In Figure 2.2 there is no phase shift, and the two waves add constructively (in Figure 2.2 the blue line has been covered up by the orange).
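As a quick numerical check of this superposition, a small standalone C++ program (not part of the thesis apparatus; the sample point and units are arbitrary) can evaluate the two wave functions and their sum:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI = 3.14159265358979;
        const double P0 = 1.0;            // maximum amplitude
        const double k  = 2.0 * PI;       // wave number (arbitrary units)
        const double w  = 2.0 * PI;       // angular frequency (arbitrary units)
        const double x  = 0.37, t = 0.12; // an arbitrary sample point

        // Out of phase (phi = pi): the two waves cancel.
        double sumOut = P0 * std::cos(k * x - w * t) + P0 * std::cos(k * x - w * t + PI);
        // In phase (phi = 0): the two waves add to twice one wave.
        double sumIn  = P0 * std::cos(k * x - w * t) + P0 * std::cos(k * x - w * t);

        std::printf("out of phase sum: %.6f\n", sumOut);  // ~0 (cancellation)
        std::printf("in phase sum:     %.6f\n", sumIn);   // 2*P0*cos(kx - wt)
        return 0;
    }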

The interference of these two waves is the essence of what we are doing with

the phased array, but instead of just two waves we are taking into account all of

the interference due to the many sources interacting with each other. This may

seem like a daunting task, but the result can be derived mathematically.

Figure 2.3 shows a general schematic which reflects my experimental apparatus.

Imagine that you are hovering above and looking down at the phased array. There

Page 9: Senior Thesis 7-2-16


Figure 2.1. Out of phase

Figure 2.2. In phase

are N point sources with a distance d between each one. My objective was to find the amplitude of the pressure wave at an angle θ off of the middle axis. The variables r1, r2, and rN represent the path lengths from the individual sources to the point of observation, and ∆r is the path length difference between adjacent sources. D is some distance to where we would observe the pressure wave, and Y is that distance off of the middle

Page 10: Senior Thesis 7-2-16


axis. Geometry demands that these paths become parallel as D goes to infinity. The length ∆r is given by

d sin θ = ∆r (2.3)

Figure 2.3. N point source interference

This is where we make our first assumption, that D >> d, which is the far

field approximation. ∆r plays a very important role in this experiment. This is

because if the path difference is equal to a whole wavelength, then the waves from adjacent sources arrive in phase. Therefore we can now define a new variable δ as

δ = k∆r (2.4)

where k = 2π/λ. This is what we call the phase difference between adjacent sources. Assuming that all of the sources

are coherent with each other, from the diagram you can see that the second source

is δ off from the first, the third source is 2δ off from the first source, and so on.

We could now put in a lag between each source so that we could then control the

direction of the beam, as discussed in chapter one. We would define this total lag

as:

tn = t0 + nτ   (2.5)

where τ is the time delay between each adjacent source. We can add this to our

phase term in Eq 2.4, where f is the frequency of the source, to get

δ = k∆r + 2πfτ (2.6)

Page 11: Senior Thesis 7-2-16


Our phase variable then can be defined as:

δ = 2π[(d/λ) sin θ + fτ]   (2.7)
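For example (using round numbers rather than exact experimental values): with d = 10 cm, f = 1000 Hz, and vs ≈ 333 m/s, we have λ = vs/f ≈ 0.33 m and d/λ ≈ 0.3. At θ = 30° with no time delay (τ = 0), this gives

δ = 2π[(0.3)(0.5) + 0] ≈ 0.94 rad,

so each source lags its neighbor by about 15% of a cycle.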

Now we have everything we need to make a single equation that expresses the

amplitude of the sound wave given the angle θ off of the x axis.

The first, second, and last (Nth) source can be defined as

P1 = P0 cos(kx1 − ωt) (2.8)

P2 = P0 cos(kx2 − ωt) (2.9)

PN = P0 cos(kxN − ωt) (2.10)

P2 and PN can then be redefined in terms of P1:

P2 = P0 cos(kx1 − ωt + δ)   (2.11)

PN = P0 cos(kx1 − ωt + (N − 1)δ)   (2.12)

Now we can sum them all up into one term:

P = P1 + P2 + ...+ PN−1 + PN (2.13)

P = Σ_{n=0}^{N−1} P0 cos(kx1 − ωt + nδ)   (2.14)

Now that we have an expression for the amplitude of the combination of all of

the separate waves, we want to now manipulate this expression more conveniently

by treating it as the real part of a sum of complex exponentials:

P = Σ_{n=0}^{N−1} P0 e^{i(kx1 − ωt + nδ)} = P0 e^{i(kx1 − ωt)} Σ_{n=0}^{N−1} e^{inδ}   (2.15)

Here we use an exponential trick,

P = P0 e^{i(kx1 − ωt)} Σ_{n=0}^{N−1} (e^{iδ})^n   (2.16)

Page 12: Senior Thesis 7-2-16


following which our P can be expressed as a truncated geometric series by

Σ_{n=0}^{N−1} x^n = (1 − x^N) / (1 − x)   (2.17)

where x = e^{iδ}. We can express our full equation as:

P = P0 e^{i(kx1 − ωt)} (1 − (e^{iδ})^N) / (1 − e^{iδ})   (2.18)

where now 1 − (e^{iδ})^N and 1 − e^{iδ} can be expanded into

e^{iNδ/2} (e^{−iNδ/2} − e^{iNδ/2})   (2.19)

e^{iδ/2} (e^{−iδ/2} − e^{iδ/2})   (2.20)

The quantities in parentheses turn into −2i sin(Nδ/2) and −2i sin(δ/2), respectively, due to Euler's formula, which states

2i sin(x) = e^{ix} − e^{−ix}   (2.21)

The two factors of −2i cancel when we take the ratio.

Plugging this all back into Eq 2.18 gives us:

P = P0 e^{i(kx1 − ωt)} × sin(Nδ/2) / sin(δ/2)   (2.22)

where we have dropped the constant phase factor e^{i(N−1)δ/2}. We can then re-express the real part of this complex expression as a cosine wave,

P = P0 [sin(Nδ/2) / sin(δ/2)] cos(kx1 − ωt)   (2.23)

so that the amplitude of the wave is

P0 sin(Nδ/2) / sin(δ/2)   (2.24)

Another important property of all waves is that the intensity of the wave is propor-

tional to the amplitude of the wave squared. Therefore we now define the intensity

Page 13: Senior Thesis 7-2-16


of N sources as:

I(δ) = Imax sin²(Nδ/2) / sin²(δ/2)   (2.25)

If we plug in our previous expression for δ, we will obtain our main result:

I(θ) = Imax sin²(Nπ[(d/λ) sin θ + fτ]) / sin²(π[(d/λ) sin θ + fτ])   (2.26)

Page 14: Senior Thesis 7-2-16

Chapter 3

3.1 Theoretical Results

Equation 2.26, which restated is

I(θ) = Imax sin²(Nδ/2) / sin²(δ/2),   δ = 2π[(d/λ) sin θ + fτ]   (3.1)

gives the intensity of the sound wave at an angle θ off of the perpendicular axis. Using the small angle approximation we could then find I(θ) in terms of Y and D (defined earlier in Chapter 2). However, let's not make that assumption, and keep it in terms of θ. We will also assume for the moment that τ = 0, i.e. there is no time delay between sources.

With τ = 0, both θ and δ = 2π(d/λ) sin θ become zero at zero angle. In this case the squared sine functions in the numerator and the denominator of Equation 3.1 both become zero. Additionally, the denominator sin²(δ/2) becomes zero at the additional angles for which δ/2 = ±π, ±2π, etc.

In the case of θ → 0, if we use L'Hospital's Rule for both a numerator and a denominator becoming zero, we find that

sin(Nx) / sin(x) → N as x → 0   (3.2)

which implies that

I(θ) → N² Imax as θ → 0.   (3.3)

Therefore by adding just a few more sources (increasing N), we can dramatically

Page 15: Senior Thesis 7-2-16


increase the intensity of the wave in the forward direction.

In the other cases where the denominator becomes zero, δ/2 = ±π, ±2π, etc., the numerator will fortunately also become zero. These are actually just like the situation at δ = 0. For example, near δ/2 = π we can write δ/2 = π + x and use a trig identity to write

sin²(N(π + x)) / sin²(π + x) = [cos(Nπ) sin(Nx)]² / [cos(π) sin(x)]² = sin²(Nx) / sin²(x)   (3.4)

which implies that

I → N² Imax as δ/2 → π   (3.5)

so there is a peak with the same intensity as at θ = 0.

In a functioning phased array, however, one should choose d/λ small enough

that these other peaks of intensity lie (using Equation 3.1) at angles |θ| > π/2,

i.e. they are pushed beyond the actual physical angular range in front of the array.

This way one has a single peak to work with.

The phase offset between adjacent speakers, including an imposed sequential

time delay of τ , is

δ = 2π[(d/λ) sin θ + fτ]

The product fτ expresses the time delay as a fraction of a full cycle. (A delay of

fτ = 1 would be indistinguishable from the situation without delay.) With f = 10

kHz and d = 0.1 m (and N = 9), we have

d/λ = fd/vs ≈ (10^4 Hz · 10^−1 m) / ((1/3) × 10^3 m/s) = 3 × (f / 10 kHz)

Using the summation

Σ_{n=0}^{N−1} x^n = (1 − x^N) / (1 − x),

the distant resultant wave at any given angle is

ψ(δ) = Σ_{n=0}^{N−1} e^{−inδ} = (1 − e^{−iNδ}) / (1 − e^{−iδ})

Page 16: Senior Thesis 7-2-16


Figure 3.1. 9 sources at 1000Hz

= [e^{−iNδ/2} (e^{iNδ/2} − e^{−iNδ/2})] / [e^{−iδ/2} (e^{iδ/2} − e^{−iδ/2})]

If we drop the prefactor, the overall phase becomes that along a line out of the array center. Then

ψ(δ) = sin(Nδ/2) / sin(δ/2)

With no time delay (fτ = 0), principal peaks (of amplitude ψ = N) appear at δ = 0, ±2π, etc., meaning

sin θ = 0, λ/d, 2λ/d, ...

Page 17: Senior Thesis 7-2-16


Figure 3.2. 9 sources at 5000Hz

= (10 kHz/f) × {0, 1/3, 2/3, ...}

⇒ θ ≈ 0°, 20°, 42°, 90° (at 10 kHz)

⇒ θ ≈ 0°, 42° (at 5 kHz)

⇒ θ ≈ 0◦, 42◦(at5kHz)

Values of sin θ greater than 1 will not give an observed peak. A time delay τ will shift each

principal peak by approximately a fraction fτ of the angle to the next peak.

Each spike is bounded by adjacent zeros at Nδ/2 = ±π, i.e. at δ/2π = ±1/N ,

or

(d/λ) sin θ + fτ = ±1/N.

Page 18: Senior Thesis 7-2-16


Figure 3.3. 50cm spacing at 1000Hz

The full width ∆θFW of a peak is found from

2/N ≈ (d/λ) cos θ × ∆θFW

∆θFW = 2(λ/d) / (N cos θ) = (2/27) / cos θ × (10 kHz / f)   (in radians)

The width of the central peak (where cos θ = 1) is then

∆θFW ≈ (2/27) rad × (10 kHz / f) ≈ 4.25° × (10 kHz / f)

Page 19: Senior Thesis 7-2-16


The next few figures were generated using Mathematica. The horizontal axis is θ in degrees, and the vertical axis is intensity. Figure 3.1 predicts what should be observed for nine coherent point sources (speakers) in phase with zero time delay at a frequency of 1000 Hz and a spacing of 10 cm. Figures 3.2 and 3.3 show how changing the frequency or the distance separating the sources affects the intensity as a function of θ. We find that more sources dramatically increase the intensity. We also find that increasing the separation between sources gives narrower beams; the same happens when we increase the frequency.
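The same curves can be reproduced numerically; the following short standalone C++ program (an illustrative sketch, not the Mathematica code used for the figures) evaluates Equation 3.1 for nine sources at 1000 Hz with 10 cm spacing and zero time delay:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI = 3.14159265358979;
        const int    N   = 9;       // number of sources
        const double d   = 0.10;    // spacing between sources [m]
        const double f   = 1000.0;  // frequency [Hz]
        const double vs  = 343.0;   // speed of sound [m/s]
        const double tau = 0.0;     // time delay between adjacent sources [s]
        const double lambda = vs / f;

        // Evaluate I(theta)/Imax from -90 to +90 degrees.
        for (int deg = -90; deg <= 90; deg += 5) {
            double theta = deg * PI / 180.0;
            double delta = 2.0 * PI * ((d / lambda) * std::sin(theta) + f * tau);
            double num = std::sin(N * delta / 2.0);
            double den = std::sin(delta / 2.0);
            // Near delta = 0 the ratio approaches N (L'Hospital), so guard the division.
            double amp = (std::fabs(den) < 1e-9) ? N : num / den;
            std::printf("%4d deg   I/Imax = %8.3f\n", deg, amp * amp);
        }
        return 0;
    }

At θ = 0 the printed value is N² = 81, in agreement with Equation 3.3.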

Page 20: Senior Thesis 7-2-16

Chapter 4

4.1 Custom Apparatus

To test these predictions a custom phased array of speakers was constructed. I

investigated other phased array implementations and found that to keep my project

down to a reasonable budget I would have to build it out of less sophisticated

parts than those of commercially available phased arrays. I decided to use the Arduino prototyping platform.

Figure 4.1. Custom apparatus

The Arduino platform is designed to make prototyping and building a project accessible to people who do not have a rigorous background in electronics and software coding. It also has a wide variety of electronic components already prebuilt and programmable. A great way to think about the Arduino is as a very advanced programmable Lego set. The first step was to obtain and mount nine speakers onto a wooden base. I then purchased an Arduino Due, which is a microcontroller board based on the Atmel SAM3X8E ARM Cortex-M3 CPU.

Page 21: Senior Thesis 7-2-16


Figure 4.2. Oscilloscope 1

I then wrote a program which allowed me to send a high or low voltage to each of the individual speakers.

To generate sound the Arduino is instructed to rapidly turn individual speakers on and off. In this scheme each speaker has just two states: on or off. In real audio applications an analog voltage is used to drive a speaker; in this case I elected to simplify driving the speakers by utilizing just two states, a high voltage and a low voltage. The Arduino Due uses a crystal oscillator for precise timing. The processor includes several programmable timers that can be configured to generate interrupts at precise intervals. The phased array application utilizes a hardware timer to trigger the generation of sound from each speaker. The sound frequency is one of two critical variables that the Arduino is programmed to manipulate, the other being the time delay between each speaker.
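As a bare-bones illustration of this two-state driving scheme, a single speaker could be driven as a square wave using only core Arduino calls (the pin number and frequency here are arbitrary examples; the actual array uses a timer interrupt rather than this blocking loop):

    // Minimal two-state example: drive one speaker pin as a square wave.
    // Pin number and frequency are arbitrary illustrative choices.
    const int speakerPin = 2;
    const unsigned long freqHz = 1000;
    const unsigned long halfPeriodUs = 1000000UL / (2UL * freqHz);

    void setup() {
        pinMode(speakerPin, OUTPUT);
    }

    void loop() {
        digitalWrite(speakerPin, HIGH);    // "on" state
        delayMicroseconds(halfPeriodUs);
        digitalWrite(speakerPin, LOW);     // "off" state
        delayMicroseconds(halfPeriodUs);
    }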

Now that I had an array of speakers with which I could manipulate the frequency produced, I had to check to make sure that they were coherent, that is, synchronized in time and emitting the same pressure waves. I found that the best

Page 22: Senior Thesis 7-2-16


Figure 4.3. Arduino Due microcontroller

Figure 4.4. Remote control (Top)

way to test for synchronization would be to hook them up to an oscilloscope and observe whether the output pins to the speakers were in sync. The reason we cannot assume this is that it takes time for the processor to execute the commands

given to it. I hooked the oscilloscope up to the first two speaker outputs, and

coded in a time delay of 30 microseconds between them. From Figure 4.2 it is

Page 23: Senior Thesis 7-2-16


observed that the ∆x reads 30.26 microseconds. This means that the microcontroller takes about 0.26 microseconds to execute its commands, so the speakers are not exactly coherent. But 0.26 microseconds is small enough that its effects will be relatively small. The 0.26 microseconds could be compensated for in software; however, I elected not to do so. The second thing I needed to check was that all of the speakers had approximately the same volume. This would ensure that the wave intensity would be about the same. I did this by turning each speaker on independently and then using the program Audacity with a directional microphone. Audacity is an audio software package which can be used as a decibel recorder as well as a spectrogram. By placing the microphone the same distance away in front of each individual speaker I was able to calibrate all of them to have about the same decibel reading. At this point I had a phased array which could play at any frequency between about 20 Hz and about 20,000 Hz (roughly the upper limit of human hearing), and all of the speakers were found to be coherent in amplitude and timing (as well as can be done with the equipment at hand).

4.2 Innards of Arduino

In this project I wrote a very extensive Arduino sketch. The overall objective

was to use an Arduino Mini Pro as a remote controller which would use radios

to communicate with the Arduino Due, located on the phased array. The remote

control used a couple of rotary encoders, switches, and buttons to allow the user to

traverse the user interface I created on an LCD screen fitted to the top of the remote control. Much of this work was not essential to modeling the interference of sound waves, but it made the phased array more accessible to the user.

Almost all of the code in my project was used to control this user interface. It is full of countless checks designed so that the user doesn't do anything by mistake. The point I am trying to make is that the code which actually controls the speakers and inputs the time delay is not very big. It is small and simple; however, it takes advantage of a very sophisticated idea in hardware. We will now discuss what is called the interrupt routine. This is the single most important part of my project, which enables the individual speakers to become a

Page 24: Senior Thesis 7-2-16


single phased array.

In a basic Arduino sketch you have your main loop function. The Arduino executes all of the commands you give it, in the order in which you placed them. For example, if you had a simple circuit of three LED lights, you could write a sketch that turns on the first, second, and then third light in a row. It may seem like they all turned on at the same time; however, this is not the case. They actually turned on individually, one at a time. In many applications this is not important, but it makes things more complicated when you want to control things simultaneously. One way to get around this problem is by using the interrupt routine. The whole idea behind an interrupt routine is that you can, in a sense, do more than one thing at a time. From a programmer's perspective, the interrupt routine provides an illusion of more than one task executing at a time. In this project the Arduino is controlling nine speakers and managing the user interface. In reality, the Arduino is only performing one of those two functions at any given instant in time.

In Figure 4.5 we see a clip of the code I wrote for the Arduino Due. The function is called firstHandler, and it was the first attempt at using software to control the array. This function is passed into the interrupt routine of the Arduino and continues to run in the background until it is sent a different command which either updates or disables it. The main purpose of this function is to send a pulse to each speaker and then stall for some amount of time called Tdelay, given in microseconds, using the delayMicroseconds function. The rate at which firstHandler is called by the interrupt routine determines the frequency of the sound coming from the speakers. The Tdelay and the rate of calling the firstHandler function (which we defined as the frequency) can be directly manipulated by the user's input via the remote control.
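A rough sketch of the idea behind firstHandler might look like the following (the DueTimer library, the pin numbers, and the use of digitalWrite in place of the direct register writes discussed below are all assumptions for illustration, not the actual code shown in Figure 4.5):

    #include <DueTimer.h>   // assumption: a timer library such as DueTimer supplies the interrupt

    const int speakerPins[9] = {2, 3, 4, 5, 6, 7, 8, 9, 10};   // assumed pin mapping
    volatile unsigned int Tdelay = 30;   // time delay between adjacent speakers [microseconds]
    volatile bool state = false;

    // Called by the timer interrupt. Each call flips every speaker once, staggering
    // adjacent speakers by Tdelay; the tone frequency is half the interrupt rate.
    void firstHandler() {
        state = !state;
        for (int i = 0; i < 9; i++) {
            digitalWrite(speakerPins[i], state);
            delayMicroseconds(Tdelay);   // stalling inside the interrupt (this leads to the lock-up problem discussed later)
        }
    }

    void setup() {
        for (int i = 0; i < 9; i++) pinMode(speakerPins[i], OUTPUT);
        double toneHz = 1000.0;   // desired sound frequency
        Timer3.attachInterrupt(firstHandler).start(1000000.0 / (2.0 * toneHz));
    }

    void loop() {
        // The remote-control user interface is handled here in the real sketch.
    }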

In a normal Arduino sketch you would call a command like digitalWrite(1, HIGH), which would set the voltage at pin 1 to HIGH (around 3.3 V in the case of the Arduino Due). This command is a prebuilt function located in the Arduino library, and it is used universally across all platforms of Arduino devices. In cases where one does not need speed and an almost immediate response, this function is reliable for almost all applications. However, if one is concerned with cutting down as much computing time as possible and also maximizing efficiency,

Page 25: Senior Thesis 7-2-16


Figure 4.5. First Attempted Handler Function

then it may be useful to find a different strategy for controlling the Arduino. So instead of using the digitalWrite function, we elected to use a different method, using commands similar to the first line of the firstHandler function shown in Figure 4.5:

REG_PIOC_ODSR ^= 0x1 << 29;

You may notice that this is not actually a function call, but a direct manipulation of a variable. This variable is the register that controls pin 1 on the Arduino Due, which is connected to the first speaker. This is, in essence, direct manipulation, versus using a built-in function to execute a command. This strategy serves to cut down computing time as much as possible. This method proved to be dramatically more efficient than my earliest attempts and increased our resolution of simultaneity by several orders of magnitude. The command works by toggling the state of the pin to HIGH or to LOW each time the interrupt routine is called. In other words, a speaker that is currently driven with a low voltage will be driven with a high voltage. This means that the interrupt routine must be called twice for the speaker to be turned on and then turned off, that is, for a pulse to be sent to the speaker. The sound frequency of each speaker is therefore half of the rate at which we call the interrupt routine. We can then input a slight time delay between each consecutive speaker to direct the sound intensity lobes coming from the array. There is a subtle problem that I found

Page 26: Senior Thesis 7-2-16


when using this method for controlling the array. If we tried to input a time delay which was more than a ninth of the period, the software would lock up and the apparatus would no longer respond. This is because the interrupt routine would be triggered again before the firstHandler function was finished running. The sum of all of the delayMicroseconds calls made us stay in the interrupt routine for a large portion of the total time, and if the Tdelay variable became too big, we would stay in the interrupt routine indefinitely. Being stuck in the interrupt routine meant we could no longer instruct the apparatus with the remote control, because the remote's instructions are handled in the main loop of the program, which would no longer be reached. To get around this problem we found that we had to trigger the speakers individually so that we would not get stuck in the interrupt routine. We realized that by drawing event phase shift diagrams we would be able to find a working pattern which had a longer phase shift than just one ninth of the period of the waveform, because the waveforms should eventually become repetitious. By analyzing the diagrams we would be able to hard-code the pattern into the interrupt routine. Figure 4.6 shows the quarter-phase-shift waveform.

The vertical numbers on the left correspond to the speaker number, and the numbers at the top correspond to events in time. Event 0 can be identified as the initial conditions of the state of the speakers. For example, speakers 1 through 4 start low, speakers 5 through 8 start high, and speaker 9 starts low. Event 1 shows that we would want to toggle speakers 1, 5, and 9. The same goes for events 2 through 5. For a full period we would need to go all the way to event 9; however, since we are toggling, event 1 and event 5 are equivalent. This means that we only need to account for events 1 through 5, and then call this twice as fast to obtain a full period.

Figure 4.7 shows how we can rewrite our firstHandler from before to become a hard-coded version of a quarter phase shift, which I renamed PhaseShiftHandlerA. As you can see, I have rewritten it so that each time we come into the interrupt routine we handle only one event, while toggling more than one speaker at a time. Also notice that we no longer need the delayMicroseconds function. Due to the way we structured the code, it is no longer needed, and the

Page 27: Senior Thesis 7-2-16


Figure 4.6. Quarter Phaseshift Diagram

frequency of the sound which is produced is now determined by how fast we call the interrupt routine. Without the use of the delayMicroseconds function we no longer waste time sitting in the interrupt routine; we now spend the majority of the time in the main loop of the program. This has potentially improved the performance of the communications between the remote control and the array. As another example, Figures 4.8 and 4.9 show the same process for creating a phase shift of one half. As you can see from both of those figures, we do not need to do anything for event 2 or for event 4.

Figure 4.7. Quarter Phaseshift Handler Function
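A hard-coded quarter-shift handler in this style might look roughly like the following (assumed pin numbers; a toggleSpeaker helper stands in for the direct register toggles of the actual code):

    // Illustrative hard-coded quarter-phase-shift handler.
    const int speakerPins[9] = {2, 3, 4, 5, 6, 7, 8, 9, 10};
    bool speakerState[9] = {false};   // per Figure 4.6, speakers 5-8 would start HIGH in setup()
    volatile int event = 0;

    void toggleSpeaker(int i) {
        speakerState[i] = !speakerState[i];
        digitalWrite(speakerPins[i], speakerState[i]);
    }

    // One event per interrupt call; events 1-4 repeat (event 5 is the same as event 1).
    // Attach this to the timer interrupt in place of firstHandler.
    void PhaseShiftHandlerA() {
        event = event % 4 + 1;
        switch (event) {
            case 1: toggleSpeaker(0); toggleSpeaker(4); toggleSpeaker(8); break;  // speakers 1, 5, 9
            case 2: toggleSpeaker(1); toggleSpeaker(5); break;                    // speakers 2, 6
            case 3: toggleSpeaker(2); toggleSpeaker(6); break;                    // speakers 3, 7
            case 4: toggleSpeaker(3); toggleSpeaker(7); break;                    // speakers 4, 8
        }
    }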

Figure 4.10 shows the oscilloscope display of speaker 1 and speaker 2 using the one half phase shift, where the yellow, teal, and red waveforms represent speaker 1,

Page 28: Senior Thesis 7-2-16


Figure 4.8. Half Phaseshift Diagram

Figure 4.9. Half Phaseshift Function

speaker 2, and the sum of the two, respectively. Now that we had found that we could make fully working phase shifts by drawing a diagram and then hard-coding the pattern into the Arduino program, we looked for a way to write general code which could produce these patterns without having to hard-code them into the interrupt routines. We did this by writing down and comparing which speakers we needed to toggle during each event for all of the patterns we found using the phase shift diagrams. We came up with Table 4.1, where the phase shift of the waveform is one over the resolution, so a resolution of 4 corresponds to a quarter phase shift. From this we found that we could create a program that makes a general pattern for a given resolution by using

Page 29: Senior Thesis 7-2-16


Figure 4.10. Oscilloscope Screenshot With Resolution of 2

Table 4.1. Toggled Speakers for Various Events by Resolution

the modulo operator. Take a look at Tables 4.2 and 4.3 and compare them to Table 4.1. The value of speaker mod resolution picks out the same sets of speakers to toggle in a specific event as given in Table 4.1.

Table 4.2. Modulus Table for Resolution of 4

For example, when the resolution is 4, speakers 1, 5, and 9 all give speaker mod resolution = 1, so they are toggled together in the same event, just as in Table 4.1.

Page 30: Senior Thesis 7-2-16

Table 4.3. Modulus Table for Resolution of 3

Figure 4.11. Phaseshift Mode Function

At this point we were able to pick whatever phase shift we wished, without any limitations besides the maximum frequency of about 72 kHz.

This is a dramatic improvement. Since humans can only hear up to about 20 kHz, and my microphone is claimed to be sensitive only up to about 16 kHz, it seems that we are no longer limited by the phased array apparatus.
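A general handler based on this modulo rule might look roughly like the following sketch (variable names, pin numbers, the DueTimer library, and the digitalWrite-based toggling are illustrative assumptions, not the code shown in Figures 4.11 and 4.12):

    #include <DueTimer.h>   // assumption: a timer library such as DueTimer supplies the interrupt

    const int NUM_SPEAKERS = 9;
    const int speakerPins[NUM_SPEAKERS] = {2, 3, 4, 5, 6, 7, 8, 9, 10};   // assumed pin mapping
    bool speakerState[NUM_SPEAKERS] = {false};
    volatile int resolution = 4;   // 4 reproduces the quarter-shift grouping of Table 4.1
    volatile int event = 0;

    // One event per interrupt call: speaker s (1-based) is toggled whenever
    // s mod resolution matches the current event number mod resolution.
    void phaseShiftHandler() {
        event = (event + 1) % resolution;
        for (int s = 0; s < NUM_SPEAKERS; s++) {
            if ((s + 1) % resolution == event) {
                speakerState[s] = !speakerState[s];
                digitalWrite(speakerPins[s], speakerState[s]);
            }
        }
    }

    void setup() {
        for (int s = 0; s < NUM_SPEAKERS; s++) pinMode(speakerPins[s], OUTPUT);
        // In this sketch each speaker toggles once every `resolution` calls and needs
        // two toggles per cycle, so the timer fires at 2 * resolution * (tone frequency).
        double toneHz = 1000.0;
        Timer3.attachInterrupt(phaseShiftHandler).start(1000000.0 / (2.0 * resolution * toneHz));
    }

    void loop() {
        // Remote-control and user-interface handling would live here.
    }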

Page 31: Senior Thesis 7-2-16


Figure 4.12. Final Handler Function

Page 32: Senior Thesis 7-2-16

Chapter 5

5.1 Findings

Now that the phased array was built, it was time to start experimenting and see if we could observe any indication of interference. Figure 5.1 shows a plot of a rough decibel reading vs. location parallel to the array. The microphone was placed 62 cm in front of the array and moved parallel to it; the 96 cm position is the center of the array, 38 cm is one side, and 154 cm is the other side (the array is about a meter across). It was very clear that when one stands in front of the array and moves around, you can detect that you are moving in and out of places of high and low intensity. This indicates that there is indeed constructive and destructive interference. However, Figure 5.1 does not accurately match what we expected from the Chapter 3 simulations. One would expect a maximum peak located at the center of the array regardless of the observation distance (in this case at the location of about 96 cm).

5.2 Assumptions

When it came to trying to make a physical model of the interference of sound waves, we did not get exactly what we expected; however, it was encouraging to see that there was indeed some sort of interference, even if it was not as predictable as we hoped. This is probably due to the many assumptions we made throughout this project. One of the main assumptions was that we had nine coherent point sources. That might have been too big an assumption to make for our cheap speakers. I'm not sure how you could quantify how close they were to

Page 33: Senior Thesis 7-2-16


Figure 5.1. Zero Tdelay

ideal point sources. We also noticed earlier that it is almost impossible to make our sources have exactly the same intensity, as well as be completely in sync with each other. We also assumed that we are in the far field, which would make our wave paths parallel with each other; at a distance of a couple of meters this is probably not correct. We also used the small angle approximation in our derivation, which would break down as we approach larger and larger angles. We are also approximating the cosine wave function with the square wave which the Arduino produces (high voltage to low voltage). The last major thing that may have affected our results was reflection. In most of the experiments we were in a large room; however, the waves emitted by the array would bounce off the walls and then further interfere with the waves being emitted later. This could produce very complicated interference patterns all over the room in which we were testing.

5.3 Conclusion

In order to model the interference of sound waves more appropriately, I would recommend getting very sophisticated (and very expensive) microcontrollers that would produce signals that are almost completely synchronized. I would also recommend doing these experiments in a room with sound absorbing material, like

Page 34: Senior Thesis 7-2-16


in a recording studio, to minimize waves reflecting off of the surrounding area. I also think it would be very important to redesign how the sound waves are created. I passed a high and a low voltage to the speakers, which made a click, and I took the rate at which the speakers clicked as the frequency of the sound produced. However, this probably caused overtones, because the clicks are made up of many frequencies besides the fundamental. It would be better to generate a pure fundamental tone digitally and then pass that to the array of speakers. This would ensure that we would only be dealing with the fundamental frequency.