ACOUSTO-OPTICS

OPTICAL ENGINEERING

Series Editor
Brian J. Thompson
Distinguished University Professor
Professor of Optics
Provost Emeritus
University of Rochester
Rochester, New York
1. Electron and Ion Microscopy and Microanalysis: Principles and Applications, Lawrence E. Murr
2. Acousto-Optic Signal Processing: Theory and Implementation, edited by Norman J. Berg and John N. Lee
3. Electro-Optic and Acousto-Optic Scanning and Deflection, Milton Gottlieb, Clive L. M. Ireland, and John Martin Ley
4. Single-Mode Fiber Optics: Principles and Applications, Luc B. Jeunhomme
5. Pulse Code Formats for Fiber Optical Data Communication: Basic Principles and Applications, David J. Morris
6. Optical Materials: An Introduction to Selection and Application, Solomon Musikant
7. Infrared Methods for Gaseous Measurements: Theory and Practice, edited by Joda Wormhoudt
8. Laser Beam Scanning: Opto-Mechanical Devices, Systems, and Data Storage Optics, edited by Gerald F. Marshall
9. Opto-Mechanical Systems Design, Paul R. Yoder, Jr.
10. Optical Fiber Splices and Connectors: Theory and Methods, Calvin M. Miller with Stephen C. Mettler and Ian A. White
11. Laser Spectroscopy and Its Applications, edited by Leon J. Radziemski, Richard W. Solarz, and Jeffrey A. Paisner
12. Infrared Optoelectronics: Devices and Applications, William Nunley and J. Scott Bechtel
13. Integrated Optical Circuits and Components: Design and Applications, edited by Lynn D. Hutcheson
14. Handbook of Molecular Lasers, edited by Peter K. Cheo
15. Handbook of Optical Fibers and Cables, Hiroshi Murata
16. Acousto-Optics, Adrian Korpel
17. Procedures in Applied Optics, John Strong
18. Handbook of Solid-State Lasers, edited by Peter K. Cheo
19. Optical Computing: Digital and Symbolic, edited by Raymond Arrathoon
20. Laser Applications in Physical Chemistry, edited by D. K. Evans
21. Laser-Induced Plasmas and Applications, edited by Leon J. Radziemski and David A. Cremers
22. Infrared Technology Fundamentals, Irving J. Spiro and Monroe Schlessinger
23. Single-Mode Fiber Optics: Principles and Applications, Second Edition, Revised and Expanded, Luc B. Jeunhomme
24. Image Analysis Applications, edited by Rangachar Kasturi and Mohan M. Trivedi
25. Photoconductivity: Art, Science, and Technology, N. V. Joshi
26. Principles of Optical Circuit Engineering, Mark A. Mentzer
27. Lens Design, Milton Laikin
28. Optical Components, Systems, and Measurement Techniques, Rajpal S. Sirohi and M. P. Kothiyal
29. Electron and Ion Microscopy and Microanalysis: Principles and Applications, Second Edition, Revised and Expanded, Lawrence E. Murr
30. Handbook of Infrared Optical Materials, edited by Paul Klocek
31. Optical Scanning, edited by Gerald F. Marshall
32. Polymers for Lightwave and Integrated Optics: Technology and Applications, edited by Lawrence A. Hornak
33. Electro-Optical Displays, edited by Mohammad A. Karim
34. Mathematical Morphology in Image Processing, edited by Edward R. Dougherty
35. Opto-Mechanical Systems Design: Second Edition, Revised and Expanded, Paul R. Yoder, Jr.
36. Polarized Light: Fundamentals and Applications, Edward Collett
37. Rare Earth Doped Fiber Lasers and Amplifiers, edited by Michel J. F. Digonnet
38. Speckle Metrology, edited by Rajpal S. Sirohi
39. Organic Photoreceptors for Imaging Systems, Paul M. Borsenberger and David S. Weiss
40. Photonic Switching and Interconnects, edited by Abdellatif Marrakchi
41. Design and Fabrication of Acousto-Optic Devices, edited by Akis P. Goutzoulis and Dennis R. Pape
42. Digital Image Processing Methods, edited by Edward R. Dougherty
43. Visual Science and Engineering: Models and Applications, edited by D. H. Kelly
44. Handbook of Lens Design, Daniel Malacara and Zacarias Malacara
45. Photonic Devices and Systems, edited by Robert G. Hunsperger
46. Infrared Technology Fundamentals: Second Edition, Revised and Expanded, edited by Monroe Schlessinger
47. Spatial Light Modulator Technology: Materials, Devices, and Applications, edited by Uzi Efron
48. Lens Design: Second Edition, Revised and Expanded, Milton Laikin
49. Thin Films for Optical Systems, edited by Francois R. Flory
50. Tunable Laser Applications, edited by F. J. Duarte
51. Acousto-Optic Signal Processing: Theory and Implementation, Second Edition, edited by Norman J. Berg and John M. Pellegrino
52. Handbook of Nonlinear Optics, Richard L. Sutherland
53. Handbook of Optical Fibers and Cables: Second Edition, Hiroshi Murata
54. Optical Storage and Retrieval: Memory, Neural Networks, and Fractals, edited by Francis T. S. Yu and Suganda Jutamulia
55. Devices for Optoelectronics, Wallace B. Leigh
56. Practical Design and Production of Optical Thin Films, Ronald R. Willey
57. Acousto-Optics: Second Edition, Adrian Korpel

Additional Volumes in Preparation
SECOND EDITION

ADRIAN KORPEL
Korpel Arts and Sciences
Iowa City, Iowa

Marcel Dekker, Inc.
New York - Basel - Hong Kong
Library of Congress Cataloging-in-Publication Data

Korpel, Adrian
  Acousto-optics / Adrian Korpel. -- 2nd ed.
    p. cm. -- (Optical engineering; 57)
  Includes index.
  ISBN 0-8247-9771-X (alk. paper)
  1. Acoustooptics. I. Title. II. Series: Optical engineering
  (Marcel Dekker, Inc.); v. 57
  QC220.5.K67 1996
  621.382'84--dc20                                96-41103
                                                       CIP
The publisher offers discounts on this book when ordered in bulk quantities. For
more information, write to Special Sales/Professional Marketing at the address
below.

This book is printed on acid-free paper.

Copyright © 1996 by MARCEL DEKKER, INC. All Rights Reserved.

Neither this book nor any part may be reproduced or transmitted in any form or
by any means, electronic or mechanical, including photocopying, microfilming,
and recording, or by any information storage and retrieval system, without
permission in writing from the publisher.

MARCEL DEKKER, INC.
270 Madison Avenue, New York, New York 10016

Current printing (last digit):
10 9 8 7 6 5 4 3 2 1

PRINTED IN THE UNITED STATES OF AMERICA
To
Loni and Pat
in loving memory
From the Series Editor
Acousto-optics is an important subfield of optical science and engineering,
and as such is quite properly well represented in our series on Optical
Engineering. This representation includes the fundamentals and important
applications including signal processing, scanning, spectrum analyzers,
tuned filters, and imaging systems. The fundamentals and the underpinning
for these and other applications were thoroughly covered in the first edition
of Acousto-Optics, which was published in our series in 1988 (volume 16).
Now we are pleased to bring out a new edition of this important treatise that
covers both the basic concepts and how these basic concepts are used in
practical devices, subsystems, and systems. Many new sections have been
added to both the theory and practice parts of the book.
As the editor of this series I wrote in the foreword of another book that
acousto-optics is a confusing term to the uninitiated, but it, in fact, refers to
a well-documented subfield of both optics and acoustics. It refers, of course,
to the interactions of light and sound, between light waves and sound waves,
and between photons and phonons. More specifically, it refers to the control
and modification of a light beam by an acoustic beam. Thus, we have
devices that use sound to modulate, deflect, refract, and diffract light.
Acousto-Optics takes the mystery and confusion out of the subject and
provides a firm foundation for further study and applications.
Brian J. Thompson
Preface to the Second Edition
Since 1988, when the first edition of this book appeared, acousto-optics -
at one point declared moribund by many - has gone through a minor
renaissance, both in theory and in practice. Part of this renaissance is due to
much increased contact with scientists from the former Soviet Union, whose
original and sophisticated research gave a new impetus to the field. Another
important factor is the increased interest in signal processing and acousto-optic
tunable filters. Finally, the development of more powerful formalisms
and algorithms has also contributed substantially.
In this second edition I have tried to take these developments into account.
At the same time I have put more emphasis on applications and numerical
methods. The aim of the book is, however, unchanged: to generate more
insight rather than supply convenient recipes.
The heuristic approach (Chapter 3) has remained largely unchanged,
except for a new, separate sub-section on the near-Bragg regime. In the
formal approach (Chapter 4), I have added a section on interaction with
curved sound wave fronts, because far more is known now than in 1988. In
Chapter 5, four new sections on the numerical approach have been added to
the original six. These include the carrier-less split step method, which is the
most powerful and convenient simulation tool available. The number of
selected applications discussed (Chapter 6) has also been increased by four,
and a substantial treatment of fundamental signal processing is now
included. Similarly, a fairly complete discussion of beam steering has been
added to the section on light deflectors. The Quasi-Theorem, a particular
favorite of mine, has been given its own section. The remaining new
application sections deal with spectrum analysis, optical phase and
amplitude measurements, and schlieren imaging. Another important
application, acousto-optic tunable filters, is treated in a new section
following anisotropic Bragg diffraction in Chapter 8. An overview of
spectral formalisms commonly used in the analysis of interaction has also
been added to this chapter. The appendix now contains, in addition to the
summary of research and design formulas, a brief description of the
stationary phase method.
I would like to thank all my colleagues, especially in Russia, for interesting
and sometimes heated discussions. Together with my students' persistent
questioning, these have greatly helped me in writing this second edition.
Adrian Korpel
Preface to the First Edition
It is only fair that a writer show his prejudices to the reader so as to give
timely warning. My prejudices are those of an engineer/applied physicist: I
prefer Babylonian over Euclidean mathematics, to borrow Richard
Feynman's classification. Simple pictures and analogies fascinate me, while I
abhor existence theorems, uniqueness proofs, and opaque equations.
On the other hand, I do realize that one person's mathematics is another
person's physics, and some of my best friends are Euclideans. Many a time
they have shown my intuition to be wrong and humbled me, but each time I
managed to turn their symbolism into my reality by way of revenge. In that
same context, I owe a debt of gratitude to my students who, not yet having
acquired glib and dangerous intuition, ask me many painful but pertinent
questions.
During my apprenticeship, I was fortunate in having teachers and
associates who insisted on heuristic explanations, so we could get on with
the exciting business of inventing. It was only later, after the first excitement
had worn off, that we began the serious business of figuring out with formal
mathematics why we were right or wrong, and what could be done to further
improve or save the situation.
It is in this spirit that I decided to write this book: first to show the reader
heuristically how simple it really is, next to present the essentials of the
formal theory, and finally, in a kind of dialectical synthesis, to develop new
ideas, concepts, theories, inventions, and devices. Following this scheme, I
have left out many details that, although important in their own right,
obscure the essence of the matter
as I see it. However, I have tried to give the
appropriate references, so that the reader can readily pursue certain lines of
inquiry that are of particular
relevance.
In my professional career, I have found that having two or more points of
view at one's disposal is a necessary condition for both the release and
domestication of wild ideas. With a bit of luck added, a sufficient condition
will then exist for making inventions and discoveries. I never feel this so
keenly as when, reading about someone else's invention, the thought occurs:
"How beautifully simple: why didn't I think of it?" If that turns out to be
also the reader's reaction, I will have succeeded in what I set out to do.
The question naturally comes up
as to what audience this book is aimed at.
It is perhaps easier to answer this first in a negative sense: the book may not
be suitable - in the sense of providing quick answers - for readers, such as
project managers, system engineers, etc., whose interest in acousto-optics is
somewhat peripheral to their main activities and responsibilities. The book
should, however, be of value to the seriously involved engineer at the device
or research level and to the graduate student. More generally it should
interest anyone who wishes to really understand acousto-optics (as opposed
to knowing about it), because of either scientific curiosity or creative
necessity.
I never intended to write a book on the subject at all, being on the whole
inclined to avoid honorary, time-consuming commitments. Looking back,
however, writing this book was actually a great deal of fun, so I would like
to thank my wife, Loni, who made me do it. I also want to thank my Belgian
Euclidean friends, Bob Mertens and Willy Hereman, for their critical
support, and my students David Mehrl and Hong Huey Lin for their
hesitantly incisive comments.
In addition, I would like to express my appreciation to the Von Humboldt
Foundation and Adolph Lohmann for providing me with some quiet time
and space for reflection, and
to the National Science Foundation for funding
a substantial part of my own research described in this book.
As regards the preparation of the manuscript, I thank the computer
industry for inventing word processors, Kay Chambers and Margaret
Korpel for doing the drawings, Joost Korpel for helping me with computer
programming, and Rosemarie Krist, the production editor, for making me
stick to my Calvinist work ethic.
Most of all, perhaps, I ought to thank my dog, Bear, whose fierce loyalty
and boundless admiration kept me going during times of doubt and
frustration.
Adrian Korpel
Contents
From the Series Editor   Brian J. Thompson
Preface to the Second Edition
Preface to the First Edition

1. Introduction

2. Historical Background
   2.1 The Pre-Laser Era
   2.2 The Post-Laser Era
   References

3. The Heuristic Approach
   3.1 The Sound Field as a Thin Phase Grating
   3.2 The Sound Field as a Thick Phase Grating
   3.3 The Sound Field as a Plane-Wave Composition
   References

4. The Formal Approach
   4.1 Introduction
   4.2 Coupled Mode Analysis
   4.3 Normal Mode Analysis
   4.4 The Generalized Raman-Nath Equations
   4.5 Weak Scattering Analysis
   4.6 Weak Plane-Wave Interaction Analysis
   4.7 Strong Plane-Wave Interaction Analysis
   4.8 Feynman Diagram Path Integral Method
   4.9 Eikonal Theory of Bragg Diffraction
   4.10 Strong Interaction with Curved Sound Wavefronts
   4.11 Vector Analysis
   References

5. The Numerical Approach
   5.1 Truncation of the Raman-Nath Equations
   5.2 Numerical Integration
   5.3 Exact Solutions
   5.4 Multiple Bragg Incidence
   5.5 The NOA Method
   5.6 Successive Diffraction
   5.7 Cascaded Bragg Diffraction
   5.8 The Carrierless Split-Step Method
   5.9 The Fourier Transform Approach
   5.10 Monte Carlo Simulation
   References

6. Selected Applications
   6.1 Weak Interaction of Gaussian Beams
   6.2 Strong Bragg Diffraction of a Gaussian Light Beam by a Sound Column
   6.3 Bandwidth and Resolution of Light Deflector
   6.4 Resolution of Spectrum Analyzer
   6.5 Bandwidth of Modulator
   6.6 The Quasi Theorem
   6.7 Optical Phase and Amplitude Measurement
   6.8 Bragg Diffraction Intermodulation Products
   6.9 Bragg Diffraction Imaging
   6.10 Bragg Diffraction Sampling
   6.11 Schlieren Imaging
   6.12 Probing of Surface Acoustic Waves
   6.13 Signal Processing
   References

7. Related Fields and Materials
   7.1 Acoustics
   7.2 Optical Anisotropy
   7.3 Elasto-Optics
   References

8. Special Topics
   8.1 Anisotropic Bragg Diffraction
   8.2 Acousto-Optic Tunable Filters
   8.3 Large Bragg Angle Interaction
   8.4 Acousto-Optic Sound Amplification and Generation
   8.5 Three-Dimensional Interaction
   8.6 Spectral Formalisms
   References

Appendix A: Summary of Research and Design Formulas
   General Parameters
   Raman-Nath Diffraction
   Bragg Diffraction
   Bragg Modulator
   Bragg Deflector
   Weak Interaction of Arbitrary Fields in Terms of Plane Waves
   Strong Interaction of Arbitrary Fields
   Eikonal Theory
   Vector Equations
   References

Appendix B: The Stationary Phase Method
   References

Appendix C: Symbols and Definitions

Index
1
Introduction
Hyperbolically speaking, the development of acousto-optics has been
characterized by misunderstanding, confusion, and re-invention. Even the
very name "acousto-optics" is misleading: It evokes images of audible
sounds focused by lenses, or light emitted by loudspeakers. As to re-invention,
perhaps in no other field have so few principles been rediscovered,
re-applied, or re-named by so many people.
The principal reason for all of this is that acousto-optics, born from the
desire to measure thermal sound fluctuations, developed very quickly into a
purely mathematical, academic subject and then, 40 years later, changed
rather suddenly into a technological area of major importance.
As an academic subject, acousto-optics has been, and continues to be,
studied by countless mathematicians using a dazzling variety of beautiful
analytic tools. It has been forced into a framework of rigid boundary
conditions and stately canonical equations. Existence and uniqueness proofs
abound, and solutions are obtained by baroque transformations, sometimes
for parameter regions that have little connection with physical reality.
It is true that during the academic phase a few physical experiments were
performed, but one cannot escape the feeling that these served only to check
the mathematics - that they were carried
out with physical quantities (i.e.,
real sound, real light) only because computers were
not yet available.
Then, in the 1960s, when laser light was ready to be used, it became evident
that photons, having no charge, were difficult to control and that acousto-optics
provided a way out of the difficulty. After 40 years, the academic bride
was ready for her worldly groom, but, alas, her canonical garb was too
intimidating for him. He reneged on the wedding and started looking around
for girls of his own kind. In less metaphoric language: Scientists and
engineers began to reformulate the theory of acousto-optics in terms that
were more meaningful to them and more relevant to their purpose.
The ultimate irony is that acousto-optics started out as a very earthy
discipline. The original prediction of light scattering by sound was made in
1922 by Brillouin, a well-known physicist who followed up some work that
had been begun earlier by Einstein. Brillouin’s analysis of the predicted
effect was stated entirely in terms of physics, unobscured by any esoteric
mathematical notions. However, as very few contemporary scientists have
the leisure to read papers that are more than 40 years old, and even fewer
are able to read them in French, Brillouin’s theories also were re-invented.
It
is no exaggeration to say that most concepts used in modern acousto-optics
(synchronous plane wave interaction, weak scattering, re-scattering,
frequency shifting, etc.) originated with Brillouin.
Santayana, the American philosopher, once said, “Those who cannot
remember the past are condemned to repeat it.” The context in which he
made that remark is definitely broader than acousto-optics, but I take the
liberty of nominating the latter subject as a particularly illustrative example
of Santayana’s dictum.
To follow up on our earlier metaphor, the present situation in the 1990s is
that both bride and groom have gone their separate ways. Occasionally they
meet, but their very idioms have diverged to the point where communication
is almost impossible. This is a real pity, because both have much to learn
from each other, if they would only spend the effort and the time. The latter
quantity, however, is in short supply generally nowadays, and the former is,
as usual, highly directed by temperament: To a mathematician, the real
world is not necessarily fascinating; to an engineer, the conceptual world not
necessarily relevant.
This book aims at reconciliation or, at least, improved mutual
understanding. It provides neither cookbook-type recipes for doing acousto-optics
nor esoteric theories irrelevant to engineering. Rather, it attempts to
show to the practicing engineer the physics behind the mathematics and so
enlarge his mental store of heuristic concepts. In my own experience, it is
exactly this process that makes invention, the essence of engineering,
possible by free association of the heuristic concepts so acquired.
Of necessity, the above philosophy implies that this book will be most
useful to those people who have a deep interest or a compelling need to
really understand acousto-optics at some intuitive level, either to satisfy
their curiosity or stimulate their creativity in research
or design.
As remarked before, a lot of what gave birth to present-day acousto-optics
lies thoroughly forgotten in the dusty volumes of the not too distant past,
together with the names of the people who created it. This book therefore
opens with a kind of "Roots" chapter, a historical background that attempts
to rectify this situation. This is not just to re-establish priority (although I
do mention the names of individual contributors rather than hide them
between the brackets of locally anonymous references), but also because
what happened in the past is worth knowing, can save much time and effort,
and may lead to genuinely new extrapolation rather than stale duplication.
It is, of course, impossible to do complete justice to everybody involved in
the evolution of acousto-optics. I have therefore concentrated on what, in
my opinion, are seminal ideas, genuine "firsts," but this procedure, of
necessity, reflects my own preferences and biases. Yet any other method is
out of the question for a book of finite size. Thus, I apologize to those who
feel that they or others they know of have unjustly been left out. Please do
write to me about it; perhaps such omissions can be corrected in a future
edition.
In regard to the more technical contents of the book, I decided from the
beginning that I did not wish to duplicate either the many already existing
practical guides to acousto-optics or the thorough theoretical treatises on
the subject. Rather, I have selected and presented the material in such a way
as to maximize the acquisition of practical insight, if that is not too pedantic
a phrase. In keeping with what was said in the preface,
I have therefore relied
heavily on analogies, case histories, and multiple points of view whenever
possible. By the same token, I have tried to present a heuristic explanation
prior to giving a rigorous analysis.
Chapter 3 provides a heuristic discussion of acousto-optics based on the
mixture of ray and wave optics
so familiar to most investigators in a hurry to
tentatively explain experimental results. Surprisingly, this method turns out
to be versatile enough to explain, even quantitatively, most of acousto-optics,
including the weak interaction of light and sound fields of arbitrary
shape. It also gives insight into the physics of many acousto-optics
cookbook rules and models, such as the diffraction criteria, the validity of
the grating model, the applicability of x-ray diffraction analogs, etc.
Chapter 4 presents the formal treatment of acousto-optics. Many of the
heuristic results of the previous chapter will be confirmed here, and the
normal mode and coupled mode approach will be contrasted to each other
and to more general methods applicable to arbitrary sound and light fields.
Among the latter are the eikonal, or ray description of interaction, and the
Feynman diagram/path integral formulation.
Chapter 5 is devoted to numerical methods. Although this is somewhat
removed from the main emphasis of the book, I feel that computers do
present unique and convenient opportunities for simulation - and therefore
acquisition of insight - so that a discussion of numerical methods is
warranted and even desirable.
Chapter 6 presents selected applications. These are not necessarily chosen
for their technological relevance, but rather must be seen as case histories
that illustrate the concepts treated in the preceding chapters. An effort has
been made to give at least two explanations for each device
or phenomenon
described in order to bring out the complementary nature of the physical
concepts.
Chapter 7 is entitled “Materials” and gives a brief overview of the
essential elements of acoustics, optical anisotropy, and elasto-optics. The
treatment is not intended to be exhaustive, but should give the reader
enough of a background to critically read literature of a more specialized
nature and address pertinent questions to the experts in these areas.
Chapter 8 discusses some miscellaneous items that fall somewhat outside
the scope of the book, but are relevant to device applications or are
otherwise of intrinsic interest.
In Appendix A, I have given a summary of useful research and design
formulas with references to relevant sections and equations in the main text.
In order to avoid symbol proliferation, I have, where necessary, sacrificed
mathematical rigor to notational simplicity in a manner familiar to most
physicists and engineers. In addition, a list of symbols with definitions has
been provided.
Finally, some technical notes about conventions used in this book are
needed. As most acousto-optic applications are connected with electrical
engineering, I have followed the electrical engineering convention
throughout, i.e., the harmonic real quantity e(t, r) is represented by the
phasor E(r) such that

e(t, r) = Re[E(r) exp(jωt)],   where j = √(-1)                    (1.1)
This is somewhat awkward for physicists who have to get used to the fact
that a diverging spherical wave is represented by exp(-jkr) rather than
exp(jkr). I feel, however, that this is still more practical than electrical
engineers having to refer to the admittance of a capacitance as -jωC.
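For readers who prefer to see the convention in executable form, the following minimal Python sketch illustrates eq. (1.1); the numerical values (frequency, phasor) are arbitrary illustrations, not taken from the text.

```python
import numpy as np

# Minimal sketch of the engineering phasor convention of eq. (1.1):
# the real harmonic quantity is recovered as e(t) = Re[E exp(j*omega*t)].
# The numbers below are arbitrary illustrations, not values from the text.

omega = 2 * np.pi * 1.0e6          # angular frequency [rad/s]
E = 2.0 * np.exp(-1j * np.pi / 4)  # complex phasor E(r) at some fixed point r

t = np.linspace(0.0, 2.0e-6, 5)    # a few sample instants [s]
e_real = np.real(E * np.exp(1j * omega * t))   # eq. (1.1)
print(e_real)

# Under this convention a diverging spherical wave carries exp(-j*k*r)/r,
# so the full real field varies as cos(omega*t - k*r)/r, i.e., it propagates
# outward -- the point made in the paragraph above.
k, r = 2 * np.pi / 0.5e-6, 1.0e-3
print(np.exp(-1j * k * r) / r)
```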
2
Historical Background
2.1 THE PRE-LASER ERA
In 1922 Léon Brillouin, the French physicist, was dealing with the question
of whether the spectrum of thermal sound fluctuations in liquids or solids
could perhaps be determined by analyzing the light or x-rays they scattered
[1]. The model he used was one in which the sound induces density
variations, and these, in turn, cause fluctuations in the dielectric constant.
Using a small perturbation approximation to the wave equation (physically,
weak interaction), he formulated the problem in terms of a distribution of
scattering sources, polarized by the incident light and modulated in space
and time by the thermal sound waves. (This simple model is feasible because
the sound frequency is assumed to be much smaller than the light
frequency.) Each scatterer emits a spherical
wave and the total scattered field
is obtained by summing the contributions of all the sources, emitted at the
right instants to reach the points of observation at the time of observation.
In short, he used what we would now
call a retarded potential Green’s
function method, in the first
Born approximation.
Before applying this method to the entire spectrum of thermal sound
fluctuations, Brillouin tried it out on the simplified case of a single sound
wave interacting with a single lightwave. Indeed, even before doing that, he
used a still simpler geometric picture of this interaction because, in his
own
words, "One may already anticipate results which will be confirmed later
by
a more precise calculation." Figure 2.1showsa modern version [2]of
Brillouin's geometric picture. It shows the sound waves propagating upward
and the light incident from the left. The angles of incidence must be so
chosen as to ensure constructive interferenceof the light beams reflected off
the crests of the sound wave. This is, of course, also the condition for x-ray
diffraction, the derivationof which may be found in any elementary physics
textbook. It leads to critical angles of incidence

sin φ_p = pλ/2Λ                                                   (2.1)

where p is an integer, Λ the sound wavelength, and λ the light wavelength in
the medium. The angle φ_p with p = 1 is called the Bragg angle. The angle of
Figure 2.1 Acoustic Bragg diffraction showing critical angles for down-shifted
interaction (top) and upshifted interaction (bottom). (From Ref. 2.) © 1981 IEEE.
reflection is twice the Bragg angle. (For simplicity, Fig. 2.1 shows no
refraction at the boundary of the medium.)
Brillouin himself refers to the analogy of optimal diffraction by a grating
in discussing eq. (2.1). He makes the crucial observation, however,
that a
sound wave is a sinusoidal grating and that therefore we should only expect
two critical angles, i.e., for p = +1 and p = -1. As for the sound, its velocity is
so small compared to that of the light that, for purposes of analysis, we may
suppose it to stand still. Its only effect, according to Brillouin, is to impart a
Doppler shift that he calculates to be equal to the sound frequency, positive
for p = +1 (lower part of Fig. 2.1, sound moving toward the observer of the
scattered beam) and negative for p = -1 (upper part of Fig. 2.1, sound
moving away from the observer). In modern terminology, we speak of the
upshifted, or plus one order; the downshifted, or minus one order; and the
undiffracted, or zero order. The phenomenon as such is called Bragg
diffraction.
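As a quick numerical illustration of eq. (2.1) and the frequency shifts just described, the short Python sketch below computes the two critical (Bragg) angles and the associated Doppler shifts. The material parameters (wavelength, refractive index, sound velocity, sound frequency) are assumed, water-like example values, not data from Brillouin's paper.

```python
import numpy as np

# Sketch of eq. (2.1), sin(phi_p) = p*lambda/(2*Lambda), for p = +1 and p = -1,
# together with the Doppler shift of +/- the sound frequency discussed above.
# All material numbers are assumed, water-like example values.

lam_vac = 633e-9          # vacuum light wavelength [m] (assumed)
n = 1.33                  # refractive index of the medium (assumed)
v_sound = 1500.0          # sound velocity [m/s] (assumed)
f_sound = 50e6            # sound frequency [Hz] (assumed)

lam = lam_vac / n         # light wavelength in the medium
Lam = v_sound / f_sound   # sound wavelength

for p in (+1, -1):
    phi_p = np.arcsin(p * lam / (2 * Lam))   # critical angle of incidence, eq. (2.1)
    print(f"p = {p:+d}: Bragg angle = {np.degrees(phi_p):+.3f} deg, "
          f"Doppler shift = {p * f_sound / 1e6:+.1f} MHz")
```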
Following the analysis of the geometrical picture, Brillouin carries
out his
perturbation calculation and finds essentially the same results, provided the
volume of interaction is taken to be sufficiently large. The condition he
derives is one of what would now be called synchronous interaction,
meaning that successive contributions to the scattered beam be all in phase.
The geometrical interpretation of that condition is given by eq. (2.1) for
|p| = 1.
Another result found by Brillouin is that, with the assumption of a simple
isotropic change in refractive index through density variations, the scattered
light is of the same polarization as the incident light. In a later monograph
[3], he shows that this is true to a very good approximation, even in the case
of strong interaction where the perturbation theory fails. The underlying
physical reason for this behavior is that only the induced change in density of
dipoles is considered, not the change in collective orientation or in direction
of the individual dipole moment. If we take the latter phenomena into
account, we find that, in the most general case, there will also occur an
induced birefringence that, as we will see later, may sometimes be put to good
use. Nevertheless, most of the fundamental principles of acousto-optics may
be demonstrated by ignoring polarization and using scalar formulations.
Finally, Brillouin suggested that the results he had obtained be verified by
using manmade sound, according to the piezo-electric method invented by
Langevin. The range of sound wavelengths usable for this purpose, he
remarked, stretched from infinity to half the wavelength of light (for smaller
sound wavelengths, synchronous interaction is no longer possible); for a
typical liquid, one would require electrical signals with an electromagnetic
free-space wavelength longer than 9 cm. “These conditions are perfectly
realizable,” he concluded.
In spite of Brillouin's optimism, it took 10 more years before the
experiments he had suggested were actually performed. In 1932 the
Americans Debye and Sears [4] and the French team of Lucas and Biquard
[5] found that Brillouin's predictions were wrong.
Figure 2.2 shows a modern drawing of the essential experiment [2]. It was
found that (1) the predicted critical angles did not appear to exist and (2) by
increasing the sound strength, numerous orders appeared rather than the
two expected on the basis of Brillouin's calculations. In regard to (1), Debye
and Sears correctly surmised and calculated that the nonexistence of critical
angles was due to the fact that the interaction length was too small. In their
own words: "Taking into account, however, that the dimensions of the
illuminated volume of the liquid are finite it can easily be shown that in our
case Bragg's reflection angle is not sharply defined and that reflection should
occur over an appreciable angular range.” In this context, it is interesting
that, because of this fact, modern Bragg diffraction devices have any useful
bandwidth at all.
Figure 2.2 Multiple orders generated in typical Debye-Sears, Lucas-Biquard
experiment. (From Ref. 2.) © 1981 IEEE.
Debye and Sears also worked out a criterion for what is now called the
Bragg regime: The ratio Lλ/Λ² should be large compared to unity. In their
case, when working with toluene at a frequency of 10 MHz, a light
wavelength of 0.5 μm, and an interaction length of 1 cm, the ratio was about
0.5; this they said with considerable understatement "cannot be considered
as large." In 1967 Klein and Cook did some numerical computer
simulations and found a somewhat more quantitative criterion [6]. Defining
a quality factor Q

Q = K²L/k = 2πLλ/Λ² = 2π × Debye-Sears ratio                      (2.2)
they concluded that more than 90% of the incident light could be Bragg-diffracted,
i.e., diffracted into one order, if Q were larger than about 2π. We
shall see later that the actual fractional tolerance around the Bragg angle
equals about 4π/Q. A value of Q ≥ 2π or 4π is therefore often used as a
criterion for Bragg-angle operation.
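The Klein-Cook criterion of eq. (2.2) is easy to evaluate; the Python sketch below does so for the Debye-Sears configuration quoted above. The toluene sound velocity and refractive index are assumed values (the text itself only quotes the resulting ratio of about 0.5, and the exact number depends on the constants assumed here).

```python
import numpy as np

# Sketch of the Klein-Cook quality factor of eq. (2.2), Q = K^2*L/k = 2*pi*L*lambda/Lambda^2,
# evaluated for the 1932 Debye-Sears numbers quoted in the text.  The toluene sound
# velocity and refractive index are assumed values (lambda is taken in the medium).

def klein_cook_Q(L, lam_medium, Lam):
    """Klein-Cook quality factor, 2*pi times the Debye-Sears ratio L*lambda/Lambda^2."""
    return 2 * np.pi * L * lam_medium / Lam**2

L = 1e-2                  # interaction length [m]
lam_vac = 0.5e-6          # light wavelength [m]
n = 1.5                   # refractive index of toluene (assumed)
v_sound = 1300.0          # sound velocity in toluene [m/s] (assumed)
f_sound = 10e6            # sound frequency [Hz]

Lam = v_sound / f_sound   # sound wavelength
Q = klein_cook_Q(L, lam_vac / n, Lam)

print(f"Debye-Sears ratio = {Q / (2 * np.pi):.2f}")
print(f"Q = {Q:.2f}   (Q > 2*pi is the usual Bragg-regime criterion)")
print(f"fractional Bragg-angle tolerance ~ 4*pi/Q = {4 * np.pi / Q:.2f}")
```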
Returning now to 1932, Debye and Sears could not satisfactorily explain
the presence of multiple orders. They did use a model of the sound column
as a periodic phase grating, but did not obtain results that agreed with their
measurements. We now know that, although their sound column was too
thin to be treated as an analog of an x-ray diffraction crystal, it was too
thick to be considered a simple phase grating.
Noting that the angular spacing between orders was a multiple of the
primary deflection angle 2φ_B (shown in Fig. 2.2 as λ/Λ for small angles),
Debye and Sears next surmised that even and odd harmonics of the sound
frequency were present in the medium. Lucas and Biquard, however,
pointed out that the vibration modes of the quartz transducer favored odd
harmonics only. They themselves favored a model based on calculated ray
trajectories. A drawing of such ray trajectories is shown in Fig. 2.3 [5,7]. In
this model, the crests of the
waves act as lenses for the incident light, thereby
creating a nonsinusoidal amplitude distribution in the exit plane. This, in
turn, should lead to multiple orders. As Lucas and Biquard did
not calculate
the intensity of the orders, their theory could
not be confirmed. To a certain
extent, they were on the right track, though, as was later confirmed by
Nomoto [8], who, following up the original idea, managed to obtain
intensity distributions of orders that were in rough agreement with
experiment. The missing element in both theories, however, was the phase
of
the rays. A complete ray theory that included the phase was ultimately
developed by Berry [9]. Although Berry’s theory gives implicit expressions
for the amplitude of the orders (even taking into account the number of
caustics a particular ray has crossed), his method is by his own admission
too involved for numerical calculations. It does, however, offer a beautiful
Figure 2.3 Ray trajectories in sound field. The sound is propagating upward.
(From Ref. 7.)
example of the elegant solution of a problem by an, at first glance,
impossible method.
Let us now return to the situation in 1932. Neither Debye-Sears nor
Lucas-Biquard had succeeded in explaining the appearance of multiple
orders. This was leftto Brillouin, who, ina 1933 monograph [3], put forward
the hypothesisthat multiple orders werethe result of rescattering. It was not
until much later that this suggestion was followed up quantitatively. In 1980
Korpel and Poon formulated an explicit, physics-oriented theory based on
the multiple scattering of the plane waves of light composing an arbitrary
light field, by the plane waves of sound composing an arbitrary sound field
[10]. Previously, in 1960 Berry had used the same concept in a formal
mathematical operator formalism [9], applied to the rectangular sound
column shown in Figs. 2.1 and 2.2. Both theories made use of so-called
Feynman diagrams-Berry to illustrate his formalism, and Korpel-Poon to
visualize their physical picture.
In the preceding paragraph, I have purposely juxtaposed the physics-oriented
and mathematics-oriented approaches. The latter became dominant
around 1932 and is characterized by, among other things, using the model
of a rectangular sound column with perfectly straight wavefronts, as shown
in Figs. 2.1 and 2.2. Although the model appears to give results that agree
fairly well with experiment in certain situations, this is somewhat of a
surprise, because it certainly does not represent the true nature of the sound
field. The latter is characterized by diffraction spreading, with the
wavefronts gradually acquiring phase curvature. Even close to the
transducer, where the wavefronts are reasonably flat, there exist appreciable
variations of amplitude due to Fresnel diffraction. The effect of this can be
clearly seen in a 1941 schlieren picture by Osterhammel [11] shown in
Fig. 2.4. The first weak scattering calculation for a "real" sound field was
carried out by Gordon [12] in 1966.
Even Brillouin, who had started out from very general physical configurations, adopted the rectangular sound column in his 1933 monograph [3]
referred to above. In such a guise, the problem reduces to one of wave
propagation in perfectly periodic media with straight parallel boundaries.
Consequently, Brillouin assumed a solution for the entire light field that
consisted of an infinite sum of waves with corrugated wavefronts (i.e.,
periodic in the sound propagation direction) traveling in the direction of
incident light (normal to the sound column in his case), eachwave traveling
with its own phase velocity. He found that each corrugated wavefront could
be expressed by a Mathieu function. Hence, in principle, the problem was
solved exactly in terms of the eigenmodes of the perturbed medium. In an
engineering context, however, that statement does not mean very much,
because, due to the recalcitrant nature of Mathieu functions, no numerical
results are readily available; they certainly were
not in Brillouin’s time. With
some exaggeration, it could be said that the only positive statement to be
Figure 2.4 Schlieren picture of sound field close to transducer. (From Ref. 11.)
made about Mathieu functions is that they are defined by the differential
equation of which they are a solution.
In reality, of course, something more is known about their properties
[13].
For instance, they are characterized by two parameters and are only periodic
when these two parameters bear a certain relation to each other. In the
physics of Brillouin's model, this meant that there existed a relation between
the phase velocity of a wavefront whose corrugation was characterized
by a
certain Mathieu function and the strength of the sound field. This relation
was different for each wavefront in the eigenmode expansion, and no ready
analytic expressions were available. The only remedy left was to decompose
each Mathieu function (i.e., corrugated wavefront) into a Fourier series, take
the phase shift due to its propagation into account, and of all wavefronts
add up the Fourier coefficients pertaining to the same periodicity. As is not
difficult to guess, these coefficients represent exactly the amplitudes of the
orders of scattered light. The trouble with this procedure is again that no
analytic expressions are available for the coefficients (i.e., amplitudes of the
orders) in terms of one
of the parameters representing the Mathieu function
(i.e., strength of the sound). So Brillouin had to fall back on certain
asymptotic expansions that were only valid for weak sound fields and
gave results that can be calculated much more easily by the weak scattering
method he himself had used in his first paper [1]. The only additional
information obtained was that the higher-order terms in the asymptotic
expansion could be interpreted as representing the higher orders that were
indeed seen in the experiments.
I have treated the history of this development in some detail because it
explains why other investigators kept looking for easier
ways to calculate the
amplitude of the orders, in spite of the fact that Brillouin had found an
"exact" solution. Brillouin himself was perfectly well aware of this, because
in a foreword to a paper by Rytov [14] that described such an attempt, he
remarks that ". . . these Mathieu functions are terribly inconvenient." Rytov
himself observes that, because of the very nature of the problem (spatial
periodic modulation of the medium), every rigorous method to calculate the
field must somehow or other lead to Mathieu's equation.
The next generation of researchers by and large abandoned the attempt to
find an exact expression for the total field and concentrated instead on
finding relations between the amplitudes of the various orders. In other
words, they investigated the coupling between plane waves traversing the
medium (i.e., the normal modes of the unperturbed medium) rather
than the
orthogonal eigenmodes of Brillouin (i.e., the normal modes of the perturbed
medium). It was not until 1968 that Kuliasko, Mertens, and Leroy [15],
using these very same coupled plane wave relations, returned to the total
field concept that in their mathematical formalism was represented by a
“generating function.” A more complete theory along the same lines was
given by Plancke-Schuytens and Mertens [16], and the final step was taken
by Hereman [17], who, starting directly from Maxwell's equations and using
a more general approach than Brillouin, derived the same generating
function and the same exact solution as Mertens and co-workers. As Rytov
had predicted, the solution was expressed in terms of Mathieu functions.
However, by this time, more tables were available for these functions and
their Fourier coefficients, so that they were no longer so intractable as in
Brillouin’s time.
It should be noticed in passing that the development sketched above still
adhered strictly to the rectangular sound column model and hence
represents a continuation of the "mathematical" school of acousto-optics. In
fact, until the invention of the laser abruptly pushed acousto-optics into the
real world, no "physics" school could be said to exist.
As remarked before, after 1932 the main emphasis shifted to finding more
direct ways of calculating amplitudes of separate orders, and a variety of
ingenious methods was proposed. I shall limit myself here to seminal
developments, of which the first one is undoubtedly the epochal work of
Raman and Nath. In a series of papers [18-22] written during 1935-1936,
their theory evolved from a simple thin grating approximation
to an exact
derivation of the recurrence relations between orders: the celebrated
Raman-Nath equations.
It is of interest to consider their contribution at some length, because it
shows aspects of both mathematical and physical ingenuity as well as rather
surprising naivetk. In their first paper [18], they treat
athin sound column as
a phase grating that the rays traverse in straight lines. Because of the phase
shift suffered by each ray, the total wavefront is corrugated as it leaves the
sound field. A simple Fourier decomposition then leads to the amplitude of
the various orders. In modern terms, they calculate the angular spectrum of
a field subjected to a phase filter [23].
Before beginning their analysis, Raman and Nath acknowledge that their
theory bears a very close analogy to the theory of the diffraction of a plane
wave (optical or acoustical) incident normally on a periodic surface,
developed by Lord Rayleigh [24]. (In retrospect this is a remarkable
statement, especially when we see how cavalierly Rayleigh is treated
nowadays by his re-discoverers.) They also invoke Rayleigh in order to
object to Brillouin's picture of the process as one of reflection and argue that
reflection is negligible if the variation of the refractive index is gradual
compared with the wavelength of light.
Therefore, they themselves prefer “simple consideration of the regular
transmission of light in the medium and the phase changes accompanying
it.”
In their second paper [19], Raman and Nath develop the case of oblique
incidence and present some clever physical reasoning that explains why the
whole effect disappears for certain angles of incidence: The accumulated
phase shift vanishes, because the ray traverses equal regions of enhanced and
diminished refractive index.
The third paper [20] deals with the Doppler shift imparted to the various
orders and also treats the case of a standing sound wave. The latter case is
dealt with in a rather complicated way in order to show that even and odd
orders show even and odd harmonics of the sound frequency in their
Doppler shift and to calculate their contribution. It seems to have totally
escaped Raman and Nath that a standing sound wave can be considered a
fixed grating whose phase-modulation index varies slowly (i.e., compared to
the light frequency) in time. Consequently, the results could have been
derived directly from the previous case by substituting a time-varying
accumulated phase shift in the place of a fixed one.
In their fourth paper [21], Raman and Nath took as a starting point
Helmholtz’s scalar equation with spatio/temporal variation of the
propagation constant; their fifth paper [22] was similar but dealt with
oblique incidence. Making use of the periodic nature of the sound field, they
decomposed the total field into orders propagating into specific directions
and derived the now famous Raman-Nath equations that describe the
mutual coupling of these plane waves by sound. Note that their model was
still the mathematical one of a rectangular column of sound. This model is
very similar to that of a hologram, with the exception that holographic
fringes do not move and may be slanted relative to the sides of the column.
In this context, it is of interest that many years later Raman-Nath type
diffraction calculations were repeated for thick holograms [25]. In regard to
more general configurations, in 1972 Korpel derived recursion relations
(generalized Raman-Nath relations) for the various frequency components
present in arbitrary sound- and light-field interaction [26]. This was later
formulated in terms of coupling between individual components of the
angular plane wave spectra of both fields [27].
In the last two papers of the series, Raman and Nath also pointed out that
their latest results indicated that, in general, the emerging wavefront was not
only corrugated in phase, but, if the grating was thick enough, also in
amplitude. As we have already seen before, the latter effect was considered
previously by Lucas and Biquard on the basis of ray bending [5], the former
effect was used by Raman-Nath themselves in their first paper on straight,
phase-delayed rays [18], and the two effects were ultimately combined by
Berry in a rigorous ray theory [9].
In a follow-up paper [28], Nath, starting from Maxwell’s equations,
showed that a scalar formulation such as had been used before was allowed
in view of the great difference between light and sound velocity. He also
considered the asymmetry of the diffraction phenomena at oblique incidence
and developed some approximate expressions based on the Raman-Nath
relations. In the same paper, there is a succinct statement about the
difference between the Raman-Nath approach and that of Brillouin. Nath
admits that the latter’s analysis is perfect but
. . . leads to complicated difficulties for, to find the diffraction effects in
any particular direction, one will have to find the effects due to all the
analysed waves. On the other hand, we have analysed the emerging
corrugated wave into a set of plane waves inclined to one another at the
characteristic diffracted angles. To find the diffraction effects in any
particular direction, one has only to consider the plane wave travelling
in that direction.
It is difficult to find a more lucid summary of the two basic approaches to
the problem; it is also unfortunate that this kind of verbal explanation has
largely fallen into disuse with the terse scientific “specialese” of the present
time.
To derive the recursion relations between orders of diffracted light,
Raman and Nath, using a fairly extensive mathematical analysis, needed
about 14 pages. Van Cittert, using a heuristic physical approach, did it in
two pages [29]. His method was simplicity itself: divide the sound field into
infinitesimally thin slices perpendicular to the direction of light propagation.
Each slice will act as a thin phase grating and, because of its infinitesimal
thickness, will generate from each incident plane wave only two additional
ones. The amplitudes of the two new waves will be proportional to the
amplitude of the incident wave, the strength of the sound field, and the
(infinitesimal) thickness of the grating along the angle of incidence. Carry
out this prescription for each plane wave in the distribution and the result is
an infinite set of differential recursion relations, the Raman-Nath equations.
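Van Cittert's slicing picture translates almost directly into a few lines of code. The Python sketch below integrates the resulting recursion relations in the simplest setting that can be stated with confidence: normal incidence in the thin-grating (Raman-Nath) limit, where the inter-order phase mismatch is neglected and the coupled equations reduce to dE_n/dv = (1/2)(E_{n-1} - E_{n+1}); their exact solution is then E_n = J_n(v). The symbols and normalization here are illustrative assumptions, not the book's notation.

```python
import numpy as np
from scipy.special import jv

# Van Cittert's picture, sketched numerically: slice the sound column into thin
# phase gratings; each slice couples order n only to orders n-1 and n+1, with a
# strength proportional to slice thickness and sound amplitude.  In the thin-grating
# (Raman-Nath) limit at normal incidence the phase mismatch between orders is
# neglected and the coupled equations reduce to dE_n/dv = (1/2)(E_{n-1} - E_{n+1}),
# whose exact solution is E_n(v) = J_n(v).  Symbols and normalization are assumed.

N = 15                      # orders kept: n = -N ... +N (truncation)
v_total = 2.0               # total peak phase delay through the whole column
slices = 4000               # number of thin slices
dv = v_total / slices       # phase delay per slice

E = np.zeros(2 * N + 1, dtype=complex)
E[N] = 1.0                  # only the incident (zero-order) plane wave at the input

for _ in range(slices):     # successive diffraction, slice by slice (Euler step)
    E_prev = np.roll(E, 1)  # E_{n-1}
    E_next = np.roll(E, -1) # E_{n+1}
    E_prev[0] = 0.0         # no wrap-around at the truncation edges
    E_next[-1] = 0.0
    E = E + 0.5 * dv * (E_prev - E_next)

for n in range(-2, 3):      # compare with the Bessel-function solution J_n(v)
    print(f"n = {n:+d}: |E_n| = {abs(E[N + n]):.4f},  |J_n(v)| = {abs(jv(n, v_total)):.4f}")
```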
Van Cittert is seldom quoted these days; I suspect that his lack of mathematical sophistication would be considered in very poor taste by many
ambitious young scientists. His method, however, has been adopted by
Hargrove [30] and later by Nomoto and Tarikai [31] in the form of a
numerical algorithm based on successive diffraction.
From 1936 until the invention of the laser, a great many researchers
concentrated on various aspects of what was basically the same mathematical problem: the diffraction of light into discrete orders by a rectangular
column of sound. Most of the work concerned itself with obtaining
approximations to either the Brillouin or the Raman-Nath formulation of
the problem.
An exception is the work of Bhatia and Noble [32], who used the novel
approach of expressing the total field by the sum of the incident field and the
scattered field. The latter, of course, can be expressed as the contribution of
the scatterers (i.e., the sound field) acting on the total field itself. Thus, this
approach leads to an integral (actually integro-differential) equation that,
under the assumption that the scatterers act on the incident field only (Born
approximation), had already been solved to first order by Brillouin.
As for other investigators, lack of space limits our discussion to the few
whose contributions contained some really novel elements.
Extermann and Wannier [33], for instance, derived algebraic recursion
relations between the Fourier coefficients of the corrugated wavefronts of
Brillouin’s eigenmodes. The condition for solution of these equations leads
to the so-called Hill’s (infinite) determinant whose eigenvalues are related
to
the phase velocities of the eigenmodes. Mertens used a method
of separation
of variables [34] leading once more to Mathieu functions, and Phariseau
extended this theory to include oblique incidence [35]. Finally, Wagner gave
a rigorous treatment starting from Maxwell's equations [36].
What about solutions? At the beginning of the 1960s, the following were
known: (1) the strong interaction multiple-order solution for a thin phase
grating, derived by Raman and Nath [19]; (2) the strong interaction two-order
solution near the Bragg angle for a thick phase grating, derived by
Bhatia and Noble [32] and Phariseau [37]; and (3) the weak interaction +1
and -1 order solution for an arbitrary thickness sound column, first given
by David [38]. In addition, various approximations for other regions existed
that are, however, not relevant to our present purpose.
Concerning applications and techniques developed during the pre-laser
era, Mayer has given a concise review [39]. The standard work remains
Bergmann's Der Ultraschall [40] for those who read German and are lucky
enough to obtain one of the rare copies. An abbreviated English version has
also been published [41].
2.2 THE POST-LASER ERA
During the 1960s, the character of acousto-optics changed completely. The
invention of the laser created a need for electronically manipulating coherent
light beams, for instance deflecting them. As photons have no charge, it is
obvious that this can only be achieved by electronically varying the
refractive index of the medium in which the light travels. This can be
accomplished directly through the electro-optic effect, or indirectly through
the acousto-optic effect. The latter method, however, has certain advantages,
which are almost immediately obvious. Deflection, for instance, is as it were
built in through the dependence of the diffraction angle on acoustic
wavelength and, hence, acoustic frequency. Frequency shifting, extremely
important for heterodyning applications, is similarly inherent in the
diffraction process through the Doppler shift. Modulation should be
possible by varying the amplitude of the electrical signal that excites the
acoustic wave. And, what is perhaps the most important aspect, the sound
cell, used with a modulated carrier, carries an optical replica of an electronic
signal that is accessible simultaneously for parallel optical processing. All of
these aspects were ultimately incorporated in devices during the period
1960-1980, a period that is characterized by a truly explosive growth of
research and development in acousto-optics.
It is usually forgotten, however, that most of these aspects were in some
less sophisticated form already used in measurements or applications prior
to the invention of the laser. Debye-Sears [4] and Lucas-Biquard [5], for
instance, had measured sound velocities by measuring angles of diffraction.
A particularly ingenious and beautiful method of displaying two-dimensional
loci of sound velocities in any direction and for any of the three
modes of sound propagation in arbitrary crystals was developed by Schaefer
and Bergmann in 1934 [40]. It was based on exciting sound waves in as many
modes and as many directions as possible by the use of a crystal of
somewhat irregular shape. The resulting diffracted beams (one for each
mode and direction) were focused in the back focal plane of a lens. In this
plane then, each point of light corresponds to a particular mode in a specific
direction. A series of pictures so obtained is shown in Fig. 2.5.
As for the Doppler shift, Ali had measured this in 1936 by spectroscopic
methods [42], a measurement that was later repeated by Cummins and
Knable using laser heterodyning techniques [43]. Concerning modulation,
according to Rytov [14], an acousto-optic light modulator was conceived in
1934 by Mandelstam and co-workers, and around the same time a similar
device was apparently patented by Carolus in Germany. The first published
description of a light modulator this author is aware of was given by Lieben
in 1962 [44].
Parallel processing for display purposes was pioneered by Ocolicsanyi [46]
and used in the Scophony large-screen TV system of 1939 [46,47]. A modern
version of the latter was demonstrated in 1966 by Korpel and co-workers
[48], first in red and black using a He-Ne laser and later in color [49] with
the help of a Krypton laser. In order to increase the bandwidth of the sound
cell in these experiments (limited by virtue of the fact that long efficient
interaction lengths decrease the tolerance about the Bragg angle, as already
pointed out by Debye and Sears [4]), a device now called a beam-steering
deflector [48,50] had to be invented. In such a deflector, the acoustic beam is
made to track the required Bragg angle by means of an acoustic transducer
phased array.
The first device for signal processing as such was developed by Rosenthal
Figure 2.5 Schaefer-Bergmann patterns of sound waves propagating in X, Y, and
Z plane of a quartz crystal. (From Ref40.)
Historical Background
19
[51] who proposed, among manyother things, a (laserless) correlator using a
sound cell and a fixed mask.Later, signal processingusing optical
heterodyning was demonstrated independentlyby King and co-workers[52]
and by Whitman, Korpel, and Lotsoff [53,54]. Since then, interest in this
particular application has increased exponentially; extensive tutorialheview
articles and references may be found in [55-571.
The mechanism of beam deflection was analyzed by Korpel and co-workers in 1965 [58]. They derived the now well-known result that the number of achievable resolvable angles is equal to the product of frequency swing and transit time of the sound through the light beam. For linear (i.e., nonrandom) scanning, Foster later demonstrated that this number could be increased by about a factor of 10 through the use of a traveling wave lens [59]. We have already mentioned beam steering for larger bandwidth in scanners [48,50]. Lean, Quate, and Shaw proposed a different approach that increased frequency tolerance through the use of a birefringent medium [60]. A scanner of this kind was realized by Collins, Lean, and Shaw [61]. A good review of scanning applications may be found in Ref. 62.
The idea of deflecting a beam of light by changing the frequency of the sound leads naturally to the concept of an acousto-optic frequency analyzer. The only difference between a beam deflector and a spectrum analyzer is that in the former the various frequencies are applied sequentially, whereas in the latter they are introduced simultaneously. In the area of optical signal processing, the spectrum analyzer concept was adopted rapidly, with the result that this field is now characterized by two methods of approach: image field processing and Fourier plane processing. As was pointed out by Korpel [S], and later demonstrated with Whitman [63], these two methods (at least when heterodyning is used) are completely equivalent. They differ only in the experimental configuration, because a single lens suffices to transform an image plane into a Fourier transform plane and vice versa [23].

In the field of image display, the spectrum analyzer concept has also found application. Successive samples of a TV signal, for instance, can first be transformed electronically into simultaneous radio frequency (RF) bursts, whose frequency encodes position and whose amplitude encodes brightness. If these samples are now fed into a sound cell, an entire TV line (or part of a line) will be displayed simultaneously in the focal plane of a lens focusing the diffracted beams at positions according to their frequency [64]. It is clear that this method is the complement of the Scophony system, which visualized an image of the sound cell contents [47]. In yet another context, the entire subject of frequency-position representation is the complete analog of that of frequency-time representation pioneered by Gabor [65].
So far I have said little about the further development of theory during the post-laser era. This is not so much an oversight as a consequence of the fact that during this period theory and experiment cannot be put into strictly separated categories. Each new device stimulated further theoretical development and vice versa. This is perhaps best illustrated by the development of the coupled plane-wave concept and its applications.

It has already been remarked that Brillouin, in his original work [1], stated the conditions for phase-synchronous interaction that make acousto-optic diffraction possible. In graphical form, this condition is best illustrated by the so-called wave vector diagram, already indicated by Debye-Sears [4], but more formally developed by Kroll [66]. Figure 2.6 shows such a diagram for upshifted interaction in two dimensions. It is obvious that it represents the wave vector condition

k+ = k0 + K    (2.3)

where k0 (sometimes written ki) represents the incident plane wave of light in the medium, k+ the upshifted plane wave of light, and K the plane wave of sound responsible for the process. In physical terms, (2.3) means that there exists a one-to-one correspondence between plane waves of sound and plane waves of light. Now, it is well known that each field satisfying Helmholtz's equation can be uniquely decomposed into plane waves (if we neglect evanescent waves for the moment): the angular plane-wave spectrum [23]. This is, of course, also true for the sound field and offers a possibility to study (weak) interaction geometries by means of this concept.
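As an illustration of the wave vector condition (2.3), the short sketch below (a minimal numerical example added here, not from the original text; the wavelengths and sound velocity are assumed values) closes the momentum triangle of Fig. 2.6 and recovers the familiar small Bragg angle sin θB = K/2k = λ/2Λ.

```python
import numpy as np

# Assumed example values (not from the text): green light in a water-like medium,
# 50 MHz sound at 1500 m/s.
lam = 532e-9 / 1.33        # optical wavelength inside the medium [m]
V, F = 1500.0, 50e6        # sound velocity [m/s] and frequency [Hz]
Lam = V / F                # acoustic wavelength Lambda [m]

k = 2 * np.pi / lam        # optical wave number
K = 2 * np.pi / Lam        # acoustic wave number

# For upshifted interaction, k_plus = k_0 + K with |k_plus| ~ |k_0|;
# the triangle closes when both optical wave vectors make the Bragg angle
# with the sound wavefronts.
theta_B = np.arcsin(K / (2 * k))
print(f"Bragg angle inside the medium: {np.degrees(theta_B):.3f} deg")

# Check the vector condition (2.3) explicitly (two-dimensional geometry).
k0 = k * np.array([np.sin(-theta_B), np.cos(-theta_B)])   # incident light, tilted by -theta_B
Kv = K * np.array([1.0, 0.0])                             # sound propagating along x
k_plus = k0 + Kv
print("|k_plus| / |k_0| =", np.linalg.norm(k_plus) / k)   # ~1: the momentum triangle closes
```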
Figure 2.6 Wave vector diagram for upshifted interaction.

A semi-quantitative consideration of deflection and modulation devices was carried out along these lines by Gordon [12]. According to these considerations, the angular sensitivity of diffraction devices is intimately tied up with the angular spectrum of the acoustic transducer. If, for instance, the primary light beam is incident at an angle for which no sound waves of appropriate strength are present in the angular spectrum, then no appreciable diffraction effects will be observed. Because the required angle between plane waves of sound and plane waves of light is dependent on the sound frequency, these same considerations can be used to give a rough prediction of the frequency bandwidth of acousto-optic devices. Also, many predictions and observations made many years ago all of a sudden acquire an obvious physical interpretation. The reader may remember the Debye and Sears calculation that indicated that the tolerance about the Bragg angle decreased with increasing interaction length. This makes excellent sense from a wave interaction point of view, because the width of the angular spectrum is inversely proportional to the length of the acoustic transducer (interaction length). Hence, the larger this length, the smaller the possibility of finding a suitable sound wave to interact with when the direction of the incident light is varied.
Debye and Sears had also noticed that, on turning their sound cell, the intensity of the diffracted light would go through many maxima and minima and grow gradually weaker. We now realize that what they were seeing was actually the plane-wave angular spectrum of the sound, which they sampled by turning the sound cell. In 1965 this was confirmed by Cohen and Gordon [67], who repeated the experiment quantitatively with modern equipment.
There is another implication hidden in eq. (2.3). As already discussed, there exists a one-to-one correspondence of plane waves of sound and light. Also, it turns out that the amplitude of the diffracted plane wave is proportional to the product of the amplitude of the incident light wave and that of the interacting sound wave. It therefore follows that the angular plane-wave spectrum of the diffracted light should be similar to that of the sound if the incident light has a uniform, wide angular plane-wave spectrum from which interacting waves can be selected. However, as remarked before, the angular plane-wave spectrum of the sound is what makes up the sound field. Hence, if we illuminate a sound field with a converging or diverging wedge of light (i.e., a wide uniform angular spectrum), then the diffracted light should carry in some way an image of the sound field. By proper optical processing, it should then be possible to make this image visible. This method of visualization of acoustic fields, now called Bragg diffraction imaging, was proposed and demonstrated by Korpel in 1966 [68]. Some of the first images obtained in this way are shown in Fig. 2.7. Almost as if to illustrate the convergence of ideas, the same method was independently developed by Tsai and co-workers [69,70] and by Wade [71].
Figure 2.7 Acoustic images obtained by Bragg diffraction. (From Ref. 68.)

The reader should note that Bragg diffraction imaging is not the same as schlieren imaging. With the latter, one visualizes an axial cross section of the sound, with the former a transverse cross section. (In fact, Bragg diffraction imaging is really spatial imaging, because phase is preserved in the process.) Schlieren imaging is, of course, very important in its own right and plays an important role in modern optical signal processing, for instance when the contents of a sound cell have to be imaged on a mask or another sound cell. It was first used in an acousto-optic context by Hiedemann and Osterhammel in 1937 [72].
In their first experiment, they still used a conventional schlieren stop; later, Hiedemann [73] made use of the fact that ray bending already created an amplitude image, as predicted by Bachem and co-workers [74]. But even without ray bending, a diffraction image (Fresnel image) exists in front of the sound cell, as was first demonstrated by Nomoto [75]. The latter's technique has been used in a modern setting by Maloney [76] and by Korpel, Laub, and Sievering [77].
Returning now to Bragg diffraction imaging: in order to explain the method more satisfactorily, it was necessary to develop a plane-wave weak interaction theory for arbitrary sound and light fields. This was carried out by Korpel [26,78,79] and later by Carter [80]. The former also developed a formal eikonal theory [26,81] that predicted the amplitude of diffracted rays and confirmed the initially heuristic ray-tracing method [68]. A further experimental evolution of Bragg diffraction imaging is Bragg diffraction sound probing, developed by Korpel, Kessler, and Ahmed [82]. This technique uses a focused light beam as an essentially three-dimensional, phase-sensitive probe of the sound field. A multitransducer sound field recorded in this way is shown in Fig. 2.8. The perhaps final step in this field was taken by Kessler, who invented a pulsed version of Bragg diffraction imaging that provided better depth discrimination [83].
Figure 2.8 Recording of a multiple-transducer sound field obtained by Bragg diffraction sampling. (From Ref. 82.)

I have chosen to describe at some length the evolution of Bragg diffraction imaging as a prime example of the interplay between theory and practice. The reason is that I am fairly familiar with it through my own involvement, and also that it typifies the stimulating and hectic research environment of that period.
Other, more theoretical subjects also evolved at a rapid rate. As for the evolution of the plane-wave interaction theory, for instance, in 1976 Chu and Tamir analyzed the strong interaction of a Gaussian light beam and a rectangular sound beam using plane-wave concepts [84]. This analysis was later extended to optical beams of arbitrary profile by Chu and Kong in 1980 [85]. These two theories still used the nonphysical model for the sound field. A general theory of plane-wave strong interaction for arbitrary sound and light fields had at about the same time been formulated by Korpel in Ref. 27 and was later cast in explicit form by Korpel and Poon [10]. This theory, in turn, was used by Pieper and Korpel to calculate interaction with curved sound wavefronts [86].
The wave vector diagram of Fig. 2.6 illustrates phase synchronism leading to upshifted interaction. If all vectors are multiplied by h/2π (where h is Planck's constant), then that same diagram illustrates momentum conservation in photon-phonon collision processes, as first pointed out by Kastler in 1964 [87]. In the same quantum mechanical context, the Doppler shift is a consequence of the quantum energy conservation inherent in the process

hf+ = hfi + hF    (2.4)

where fi and f+ denote light frequencies and F the sound frequency. It is clear that, in the upshifted process, one phonon of sound is lost for every diffracted photon generated. When the photons are of thermal origin, this phenomenon is called Brillouin scattering [88].
The downshifted diffraction process is characterized by the phase synchronism condition

k- = k0 - K    (2.5)

illustrated in the diagram of Fig. 2.9. In quantum-mechanical terms, the conservation of momentum is described by [26]

(h/2π)k0 + (h/2π)K = (h/2π)k- + 2(h/2π)K    (2.6)

which equation is readily seen to be equivalent to (2.5). The physical interpretation of (2.6) is that every incident photon interacting with a phonon stimulates the release of a second phonon. Consequently, the sound is
amplified and the diffracted photon has a lower energy consistent with its lower frequency. If the sound is of thermal origin, the phenomenon is called stimulated Brillouin scattering. It requires powerful beams of coherent light and was first observed by Chiao, Townes, and Stoicheff [89]. The identical effect with man-made sound was observed at the same time by Korpel, Adler, and Alpiner [90]. The latter, in the same experiment, also generated sound by crossing laser beams of different frequencies within a sound cell. That this should be possible had been predicted by Kastler [87], who also gave a classical explanation of the effect based on radiation pressure. For the sake of historical interest, Fig. 2.10 shows the attenuation and amplification of the sound inherent in acousto-optic interaction and observed in the experiment described in [90].

Figure 2.9 Wave vector diagram for downshifted interaction.

Figure 2.10 (a) Light-induced amplification of sound (increase is downward). (b) Induced attenuation. The bottom trace represents the light pulse. The time difference between peaks is due to sound travel time. (From Ref. 90.)
From the above, it will be clear to the reader that the concept of plane-wave interaction has been very fruitful in generating new physical insights leading to novel devices, theories, and experiments. It is, however, not the only way to approach acousto-optics and, concurrently with its development, the older classical methods were modified in an effort to better account for the physical reality of nonbounded sound fields and finite incident beams of light. Thus, in 1969 McMahon calculated weak interaction of both Gaussian and rectangular sound and light beams by using a scattering integral [91], Korpel formulated a generalized coupled-order theory for arbitrary sound and light fields [26], and Leroy and Claeys extended the Raman-Nath theory to Gaussian sound fields [92]. Numerous other theories were developed, but most of these used modern formulations of either the normal-mode approach (Brillouin) or the coupled-mode approach (Raman-Nath) in the context of a rectangular periodic column. They are, in a sense, continuations of the pre-laser era mathematical approach and are presently more relevant to diffraction by thick holograms than to modern acousto-optics. We will therefore not discuss them further, other than pointing out that some of these theories contain very useful algorithms for numerical calculations in those situations where the rectangular column model applies. Thus, Blomme and Leroy [93] have developed a numerical method based on the eigenvalues for a system of a finite number of orders. The essence of this method was originally proposed by Mertens, Hereman, and Ottoy [94].
With the interest in practical applications of acousto-optics, a demand arose for more sensitive acousto-optic materials. Whereas before, acousto-optic parameters had mainly been studied in the context of crystallography [95], the emphasis now shifted to device optimization. Smith and Korpel proposed a figure of merit for diffraction efficiency (now called M2) [97] and demonstrated a simple dynamic measurement technique, later modified by Dixon and Cohen to include shear-wave effects [98]. Other figures of merit addressing different aspects, such as bandwidth optimization, were also proposed [12,99]. At the same time, a beginning was made by Pinnow to predict elasto-optic constants from first principles [100,101] in a more general way than the classical derivation from the Lorentz-Lorenz equation [102]. The latter relates the dielectric constant to the density of dipoles, and it is a simple matter to calculate the change due to isotropic pressure. It is much more difficult to calculate the variation in intrinsic polarizability, especially where shear forces are involved. In the latter case, the existing phenomenological description [103,104] is actually insufficient to account for anisotropic materials, as pointed out by Nelson and Lax [105,106]. Tabulated values of constants and figures of merit may be found in many places [57,62,97,99,101,107-112].
Many of the materials investigated were anisotropic, and it was soon discovered that anisotropic operation led to special advantages in certain cases, such as, for instance, the birefringent scanner mentioned before [61]. A systematic investigation of this entire subject area was undertaken by Dixon [113]. Harris used birefringent collinear Bragg diffraction in a tunable optical filter [114], and Havlice and co-workers used the same mode of operation for Bragg diffraction imaging [115]. Birefringence was also used by Carleton for the purpose of acousto-optic signal processing [116] and by Korpel and co-workers for sound field probing [81]. In the latter technique, one of the variants made use of the curious phenomenon of dynamic birefringence in liquids, previously studied by Riley and Klein [117].
Fortuitously coinciding with the upsurge of interest in acousto-optics during the 1960s was an accelerated development of ultrasonics technology [118]. In particular, the area of surface acoustic waves (SAW) was found to have very important applications in the processing of electrical signals [119]. This, in turn, stimulated new applications of acousto-optics to the measurement and detection of such waves. It also made possible entirely new acousto-optic configurations involving the interaction between acoustic surface waves and optical surface waves [120] in overlay-type waveguides. These configurations were found to have large diffraction efficiencies because of the high power density of both interacting waves, the energy in both being confined to thin layers about one wavelength deep. Another important consideration is that such configurations are inherently two-dimensional and should ultimately lend themselves to incorporation into integrated electronics and integrated optics [121,122]. As the reader may have guessed, the theory of these kinds of interactions is fundamentally the same as that of bulk interaction of sound and light. The actual calculations, though, are usually more complicated, not only because detailed depth dependencies have to be taken into account, but also because the substrates are frequently piezoelectric (which necessitates considering indirect acousto-electric-optic effects) and birefringent. The heuristic approach we emphasize in this book applies, however, equally well, and hence we will not treat the subject of integrated acousto-optics separately.
As for the use of acousto-optics to measure surface acoustic waves, it should be noted that such waves act as a periodic surface grating, diffracting obliquely incident light in the same way as a thin phase grating [24]. Measuring the diffracted light (either in transmission or reflection) then readily leads to an evaluation of strain in, or surface perturbations of, the substrate. Such a direct measurement of diffracted orders was first performed by Ippen [123]. However, from a signal-to-noise point of view, it is preferable to measure weak light signals by heterodyning rather than directly [124]. This may be accomplished in various ways. Korpel, Laub, and
Sievering used a simple Fresnel imaging technique and detected the running fringes with a grating of the same period [77]. Whitman and co-workers used frequency-shifted reference beams for direct interferometric visualization [125] or electro-optic heterodyning [126], as did Massey [127]. Adler, Korpel, and Desmares developed a technique in which a focused light beam was deflected by the acoustic surface perturbations and the deflection subsequently detected by a knife edge-photodiode combination [128]. Figure 2.11 shows an image, obtained with this method, of an acoustic surface wave entering a double-groove waveguide.
Figure 2.11 Image of an acoustic surface wave guide obtained by coherent light deflection. (From Ref. 128.) © 1968 IEEE.

It is natural to ask whether a technique in which the light beam is narrower than the wavelength of sound should still be classified as acousto-optic diffraction. After all, the light is refracted by the gradient of refractive index rather than diffracted by the periodicity inherent in the sound wave. There is, however, a venerable precedent for classifying this phenomenon as acousto-optic diffraction: Lucas and Biquard studied it in their first paper on the subject [5]. They noticed that for such a narrow beam, the separate diffraction orders would disappear, to be replaced by a smeared-out version of the incident beam. Based on their ray-tracing theory, they correctly ascribed this to a periodic refraction of the light as the sound wave moved through the beam. Much later, Whitman and Korpel proposed a unified theory of acoustic surface wave detection by optical means and showed that the narrow-beam case could be treated equally well on the basis of diffraction as of refraction [129].
Refraction effects were also used by DeMaria and Danielson to construct a laser Q-spoiler [130]. The periodic reflection effects on which Fig. 2.11 is based were later applied by Korpel and Desmares to build an acoustic holographic camera [131] and by Korpel, Kessler, and Palermo to build a laser-scanning acoustic microscope [132].

As indicated before, we have, in this historical survey, concentrated on seminal contributions only. The interested reader may find more detailed information about past and current developments in numerous review articles and books [2,12,26,57,62,107-111,133-147].
REFERENCES
1. Brillouin, L., Ann. Phys. (Paris), 17, 88 (1922).
2. Korpel, A., Proc. IEEE, 69, 48 (1981).
3. Brillouin, L., Actual. Sci. Ind., 59 (1933).
4. Debye, P., and Sears, F. W., Proc. Nat. Acad. Sci. U.S., 18, 409 (1932).
5. Lucas, R., and Biquard, P., J. Phys. Rad., 3, 464 (1932).
6. Klein, W. R., and Cook, B. D., IEEE Trans., SU-14, 123 (1967).
7. Lucas, R., and Biquard, P., Comptes Rendus, 195, 1066 (1932).
8. Nomoto, O., Bull. Kobayashi Inst. Phys. Res., 1, 42 (1951).
9. Berry, M. V., The Diffraction of Light by Ultrasound, Academic Press, New York (1966).
10. Korpel, A., and Poon, T. C., J. Opt. Soc. Am., 70, 817 (1980).
11. Osterhammel, K., Akust. Zeit., 6, 73 (1941).
12. Gordon, E. I., Proc. IEEE, 54, 1391 (1966).
13. Blanch, G., "Mathieu Functions," Handbook of Mathematical Functions (M. Abramowitz and I. A. Stegun, eds.), Dover Publications, New York, p. 721 (1965).
14. Rytov, S., Actual. Sci. Ind., 613 (1938).
15. Kuliasko, F., Mertens, R., and Leroy, O., Proc. Ind. Acad. Sci. A, 67, 295 (1968).
16. Plancke-Schuyten, G., and Mertens, R., Physica, 62, 600 (1972).
17. Hereman, W., Academiae Analecta, 48, 26 (1986).
18. Raman, C. V., and Nath, N. S. N., Proc. Indian Acad. Sci., 2, 406 (1935).
19. Raman, C. V., and Nath, N. S. N., Proc. Indian Acad. Sci., 2, 413 (1935).
20. Raman, C. V., and Nath, N. S. N., Proc. Indian Acad. Sci., 3, 75 (1936).
21. Raman, C. V., and Nath, N. S. N., Proc. Indian Acad. Sci., 3, 119 (1936).
22. Raman, C. V., and Nath, N. S. N., Proc. Indian Acad. Sci., 3, 459 (1936).
23. Goodman, J. W., Introduction to Fourier Optics, McGraw-Hill, New York (1968).
24. Rayleigh, J. W. S., Theory of Sound, Vol. II, Dover, New York (1945).
25. Kogelnik, H., Bell Syst. Tech. J., 48, 2909 (1969).
26. Korpel, A., "Acousto-Optics," Applied Solid State Science, Vol. 3 (R. Wolfe, ed.), Academic Press, New York, p. 71 (1972).
27. Korpel, A., J. Opt. Soc. Am., 69, 678 (1979).
28. Nath, N. S. N., Proc. Indian Acad. Sci., 4, 222 (1937).
29. Van Cittert, P. H., Physica, 4, 590 (1937).
30. Hargrove, L. E., J. Ac. Soc. Am., 34, 1547 (1962).
31. Nomoto, O., and Torikai, Y., Acustica, 24, 284 (1971).
32. Bhatia, A., and Noble, W. J., Proc. Roy. Soc. Ser. A, 220, 356 (1953).
33. Extermann, R., and Wannier, G., Helv. Phys. Acta, 9, 520 (1936).
34. Mertens, R., Simon Stevin, 27, 212 (1949/50).
35. Phariseau, P., Simon Stevin, 33, 72 (1959).
36. Wagner, E. H., Z. Phys., 141, 604 (1955); also, 142, 249 (1955).
37. Phariseau, P., Proc. Indian Acad. Sci. A, 44, 165 (1956).
38. David, E., Phys. Zeit., 38, 587 (1937).
39. Mayer, W. G., Ultrasonic News, 9 (1961).
40. Bergmann, L., Der Ultraschall, Hirzel Verlag, Stuttgart (1954).
41. Bergmann, L., Ultrasonics, Wiley, New York (1938).
42. Ali, L., Helv. Phys. Acta, 8, 503 (1935); also, 9, 63 (1936).
43. Cummins, H. Z., and Knable, N., Proc. IEEE, 51, 1246 (1963).
44. Lieben, W., J. Ac. Soc. Am., 34, 860 (1962).
45. Jeffree, J. H., Television and Short Wave World, May 1936, p. 260.
46. Okolicsanyi, F., Wireless Engineer, 14, 527 (1937).
47. Robinson, D. M., Proc. I.R.E., 27, 483 (1939).
48. Korpel, A., Adler, R., Desmares, P., and Watson, W., Proc. IEEE, 54, 1429 (1966).
49. Watson, W. H., and Korpel, A., Appl. Opt., 9, 1176 (1970).
50. Korpel, A., U.S. Patent 3,424,906 (1969).
51. Rosenthal, A. H., I.R.E. Trans., UE-8, 1 (1961).
52. King, M., Bennett, W. R., Lambert, L. B., and Arm, M., J. Appl. Opt., 6, 1367 (1967).
53. Whitman, R., Korpel, A., and Lotsoff, S., "Application of Acoustic Bragg Diffraction to Optical Processing Techniques," Proc. Symp. Mod. Opt., Polytechnic Press, Brooklyn, New York, p. 243 (1967).
54. Korpel, A., U.S. Patent 3,544,795 (1970).
55. Korpel, A., "Acousto-Optic Signal Processing," Optical Information Processing (Yu. E. Nesterikhin and G. W. Stroke, eds.), Plenum Press, New York, p. 171 (1976).
56. Special issue of Proc. IEEE on Acousto-Optic Signal Processing, Jan. 1981, Vol. 69, No. 1.
57. Acousto-Optic Signal Processing (N. J. Berg and J. N. Lee, eds.), Marcel Dekker, New York (1983).
58. Korpel, A., Adler, R., Desmares, P., and Smith, T. M., IEEE J. Quant. El., QE-1, 60 (1965).
59. Foster, L. C., Crumly, C. B., and Cohoon, R. L., J. Appl. Opt., 9, 2154 (1970).
60. Lean, E. G. H., Quate, C. F., and Shaw, H. J., Appl. Phys. Lett., 10, 48 (1967).
61. Collins, J. H., Lean, E. G. H., and Shaw, H. J., Appl. Phys. Lett., 11, 240 (1967).
62. Gottlieb, M., Ireland, C. L. M., and Ley, J. M., Electro-Optic and Acousto-Optic Scanning and Deflection, Marcel Dekker, New York (1983).
63. Korpel, A., and Whitman, R. L., Appl. Opt., 8, 1577 (1969).
64. Korpel, A., Lotsoff, S. N., and Whitman, R. L., Proc. IEEE, 57, 160 (1969).
65. Korpel, A., Appl. Opt., 21, 3624 (1982).
66. Kroll, N. M., Phys. Rev., 127, 1207 (1962).
67. Cohen, M. G., and Gordon, E. I., Bell System Tech. J., 44, 693 (1965).
68. Korpel, A., Appl. Phys. Lett., 9, 425 (1966).
69. Tsai, C. S., and Hance, H. V., J. Ac. Soc. Am., 42, 1345 (1967).
70. Hance, H. V., Parks, J. K., and Tsai, C. S., J. Appl. Phys., 38, 1981 (1967).
71. Wade, G., Landry, C. J., and de Souza, A. A., "Acoustical Transparencies for Optical Imaging and Ultrasonic Diffraction," Acoustical Holography, Vol. 1 (A. F. Metherell, H. M. A. El-Sum, and L. Larmore, eds.), Plenum, New York, p. 159 (1969).
72. Hiedemann, E., and Osterhammel, K., Zeit. Phys., 87, 273 (1937).
73. Hiedemann, E., Asbach, H. R., and Hoesch, K. H., Zeit. Phys., 90, 322 (1934).
74. Bachem, C., Hiedemann, E., and Asbach, H. R., Zeit. Phys., 87, 734 (1934); also, Nature (London), 133, 176 (1934).
75. Nomoto, O., Proc. Phys. Math. Soc. Jap., 18, 402 (1936).
76. Maloney, W. T., Meltz, G., and Gravel, R. L., IEEE Trans., SU-15, 167 (1968).
77. Korpel, A., Laub, L., and Sievering, H. C., Appl. Phys. Lett., 10, 295 (1967).
78. Korpel, A., IEEE Trans., SU-15, 153 (1968).
79. Korpel, A., J. Ac. Soc. Am., 49, 1059 (1971).
80. Carter, W. H., J. Opt. Soc. Am., 60, 1366 (1970).
81. Korpel, A., "Eikonal Theory of Bragg Diffraction Imaging," Acoustical Holography, Vol. 2 (A. F. Metherell and L. Larmore, eds.), Plenum Press, New York (1970).
82. Korpel, A., Kessler, L. W., and Ahmed, M., J. Ac. Soc. Am., 51, 1582 (1972).
83. Kessler, L. W., IEEE Trans., SU-19, 425 (1972).
84. Chu, R. S., and Tamir, T., J. Opt. Soc. Am., 66, 220 (1976).
85. Chu, R. S., and Kong, J. A., J. Opt. Soc. Am., 70, 1 (1980).
86. Pieper, R., and Korpel, A., J. Opt. Soc. Am., 2, 1435 (1985).
87. Kastler, M. A., Comptes Rendus Acad. Sc. Paris, 260, 77 (1965).
88. Fleury, P. A., "Light Scattering as a Probe of Phonons and Other Excitations," Physical Acoustics, Vol. 6 (W. P. Mason and R. N. Thurston, eds.), Academic Press, New York (1970).
89. Chiao, R. Y., Townes, C. H., and Stoicheff, B. P., Phys. Rev. Lett., 12, 592 (1964).
90. Korpel, A., Adler, R., and Alpiner, B., Appl. Phys. Lett., 5, 86 (1964).
91. McMahon, D. H., IEEE Trans., SU-16, 41 (1969).
92. Leroy, O., and Claeys, J. M., Acustica, 55, 21 (1984).
93. Blomme, E., and Leroy, O., Acustica, 57, 168 (1985).
94. Mertens, R., Hereman, W., and Ottoy, J. P., Proc. Ultrasonics International 85, p. 185 (1985).
95. Bhagavantam, S., and Suryanarayana, D., Proc. Indian Acad. Sci. A, 26, 97 (1947).
96. Bergmann, L., and Fues, E., Nature (London), 24, 492 (1936).
97. Smith, T. M., and Korpel, A., IEEE J. Quant. El., QE-1, 283 (1965).
98. Dixon, R. W., and Cohen, M. G., Appl. Phys. Lett., 8, 205 (1966).
99. Dixon, R. W., J. Appl. Phys., 38, 5149 (1967).
100. Pinnow, D. A., IEEE J. Quant. El., QE-6, 223 (1970).
101. Pinnow, D. A., "Electro-Optical Materials," CRC Handbook of Lasers (R. J. Pressley, ed.), Chemical Rubber Co., Cleveland, Ohio (1971).
102. Von Hippel, A. R., Dielectrics and Waves, Wiley, New York (1954).
103. Pockels, F., Lehrbuch der Kristalloptik, B. G. Teubner, Leipzig (1906).
104. Nye, J. F., Physical Properties of Crystals, Oxford Univ. Press (Clarendon), New York (1960).
105. Nelson, D. F., and Lax, M., Phys. Rev. Lett., 24, 379 (1970).
106. Nelson, D. F., and Lax, M., Phys. Rev. B, 3, 2778 (1971).
107. Uchida, N., and Niizeki, N., Proc. IEEE, 61, 1073 (1973).
108. Damon, R. W., Maloney, W. T., and McMahon, D. H., "Interaction of Light with Ultrasound: Phenomena and Applications," Physical Acoustics, Vol. VII (W. P. Mason and R. N. Thurston, eds.), Academic Press, New York, p. 273 (1970).
109. Sapriel, J., L'Acousto-Optique, Masson, Paris (1976).
110. Chang, I. C., IEEE Trans., SU-23, 2 (1976).
111. Musikant, S., Optical Materials, Marcel Dekker, New York (1985).
112. Narasimhamurty, T. S., Photoelastic and Elasto-Optic Properties in Crystals, Plenum, New York (1981).
113. Dixon, R. W., IEEE J. Quant. El., QE-3, 85 (1967).
114. Harris, S. E., Nieh, S. T. K., and Feigelson, R. S., Appl. Phys. Lett., 17, 223 (1970).
115. Havlice, J., Quate, C. F., and Richardson, B., "Visualization of Sound Beams in Quartz and Sapphire Near 1 GHz," IEEE Symp. Sonics Ultrason., Vancouver, paper 1-4 (1967).
116. Carleton, H. R., Maloney, W. T., and Meltz, G., Proc. IEEE, 57, 769 (1969).
117. Riley, W. A., and Klein, W. R., J. Ac. Soc. Am., 45, 578 (1969).
118. Special Issue on Ultrasonics, Proc. IEEE, 53 (1965).
119. Acoustic Surface Waves (A. A. Oliner, ed.), Springer, New York (1978).
120. Kuhn, L., Dakss, M. L., and Heidrich, P. F., Appl. Phys. Lett., 17, 265 (1970).
121. Tsai, C. S., IEEE Trans., CAS-26, 1072 (1979).
122. Lean, E. G., Progress in Optics, Vol. XI (E. Wolf, ed.), North-Holland, Amsterdam, p. 123 (1973).
123. Ippen, E. P., Proc. IEEE, 55, 248 (1967).
124. Yariv, A., Optical Electronics, Holt, Rinehart and Winston, New York (1985).
125. Whitman, R. L., J. Appl. Opt., 9, 1375 (1970).
126. Whitman, R. L., Laub, L. J., and Bates, W. J., IEEE Trans., SU-15, 186 (1968).
127. Massey, G. A., Proc. IEEE, 56, 2157 (1969).
128. Adler, R., Korpel, A., and Desmares, P., IEEE Trans., SU-15, 157 (1968).
129. Whitman, R. L., and Korpel, A., J. Appl. Opt., 8, 1567 (1969).
130. DeMaria, A. J., and Danielson, G. E., IEEE J. Quant. El., QE-2, 157 (1966).
131. Korpel, A., and Desmares, P., J. Ac. Soc. Am., 45, 881 (1969).
132. Korpel, A., Kessler, L. W., and Palermo, P. R., Nature, 232, 110 (1971).
133. Adler, R., IEEE Spectrum, 4, 42 (1967).
134. Quate, C. F., Wilkinson, C. D. W., and Wilson, D. K., Proc. IEEE, 53, 1604 (1965).
135. Gulyaev, Y. V., Proklov, V. V., and Shkerdin, G. N., Sov. Phys. Usp., 21, 29 (1978).
136. Sittig, E. K., "Elasto-Optic Light Modulation and Deflection," Progress in Optics (E. Wolf, ed.), North-Holland, Amsterdam (1972).
137. Mahajan, V. N., Wave Electron., 2, 309 (1976).
138. Defebvre, A., Rev. Opt., 46, 557 (1967); also, Rev. Opt., 47, 149, 205 (1968).
139. Mertens, R., "Fifty Years of Acousto-Optics," Proc. 11th Int. Congr. Acoust., Paris, pp. 101-113 (1983).
140. Hereman, W., Mertens, R., Verheest, F., Leroy, O., Claeys, J. M., and Blomme, E., Physicalia Mag., 6, 213 (1984).
141. Korpel, A., "Acousto-Optics," Applied Optics and Optical Engineering, Vol. 6 (R. Kingslake and B. J. Thompson, eds.), Academic Press, New York, p. 89 (1980).
142. Solymar, L., and Cooke, D. J., Volume Holography and Volume Gratings, Academic Press, New York (1981).
143. Nelson, D. F., Electric, Optic and Acoustic Interactions in Dielectrics, Wiley, New York (1979).
144. McSkimmin, H. J., "Ultrasonic Methods for Measuring the Mechanical Properties of Liquids and Solids," Physical Acoustics, Vol. 1-A (W. P. Mason, ed.), Academic Press, New York (1964).
145. Spencer, E. G., Lenzo, P. V., and Ballman, A. A., Proc. IEEE, 55, 2074 (1967).
146. Tsai, C. S., Guided Wave Acousto-Optic Interactions, Devices and Applications, Springer Verlag, Berlin (1980).
147. Alphonse, G. A., RCA Review, 33, 543 (1972).
3
The Heuristic Approach
In this chapter we shall approach the problem of light-sound interaction in a heuristic way, based largely on physical intuition, analogies, and a curious but fruitful mixture of ray and wave optics. Some of our assumptions may appear somewhat ad hoc and sometimes not very compelling, but all of them can, in fact, be justified a posteriori by performing the appropriate experiments. In short, as in real experimental practice, we will use many hand-waving arguments, secure in the knowledge that the outcome is already known. When later on we develop the subject in greater depth, our methods will be formalized and our assumptions refined, but right now we need to develop a feeling for what is likely to happen and for the physical reasons why it happens.
3.1 THE SOUND FIELD AS A THIN PHASE GRATING
Before starting the actual analysis, we shall first discuss the notation, assumptions, and conventions used throughout most of the text.

The interaction configurations we will deal with are mostly two-dimensional, with the sound propagating nominally in the X direction and the light in the Z direction. We will therefore use the angular conventions shown in Fig. 3.1, where directions connected with the light field are indicated by the angle φ, counted positive counterclockwise from the Z axis. For the sound field, we use an angle positive clockwise from the X axis. A typical configuration is seen in Fig. 3.2. This shows an idealized sound beam, extending from x = −W to x = +W, having straight wavefronts separated by the acoustic wavelength Λ and being contained between the planes z = 0 and z = L inside a medium of nominal refractive index n0. A plane wave of light, of angular frequency ω, characterized inside the medium by a wave vector k0, is incident from the left, normal to the sound column.
A configuration as described above does not reflect physical reality, not so much because it is a two-dimensional abstraction, but rather because sound will spread by diffraction, and a true single plane wave of light cannot be generated. Nevertheless, many physical configurations approach this ideal one, for instance one in which a not too wide beam of coherent light traverses the sound field close to a uniform-amplitude acoustic transducer. In more general terms, we mean a configuration in which, within the interaction region, the sound and the light spread so little that they may effectively be considered plane waves. Such considerations already make us suspect that for a true physical description a decomposition of both fields into plane waves may be useful. We will discuss this point later in Sec. 3.3.
Returning now to the idealized configuration of Fig. 3.2, we shall further simplify our picture by assuming that the sound field propagates in a medium that is both acoustically and optically isotropic. The sound field will be represented by the symbol s, taken to denote longitudinal strain in a solid or fractional density change (condensation) in a liquid:

s(x, z, t) = Re[S(x, z) exp(jΩt)]    (3.1)

where S(x, z) is a phasor with associated frequency Ω.

Figure 3.1 Angle conventions for the sound field (γ) and the light field (φ).

Figure 3.2 Plane wave normally traversing a thin sound column, becoming corrugated in the process, and giving rise to multiple orders of diffracted light.

It should be noted at this point that we sometimes (especially in the case of optical quantities) use
the concept of a time-varying phasor, e.g., S(x, z, t). In such cases, the time variation of this phasor is assumed to be slow compared to the associated frequency. Thus, in a later chapter, we refer to a sound field whose basic sound frequency Ω has been shifted by δΩ as S(x, z, t) = S(x, z) exp(jδΩt), where it is understood that the associated phasor frequency equals Ω.

In general, we shall use lowercase letters for the (obviously real) values of real physical quantities and capital letters for complex quantities. [Note that the latter are not always phasors as defined by (3.1).] If time and/or space dependence is not explicitly expressed by the notation, indicated by the text, or obvious from the context, it will be assumed that the quantity is constant.

Typical examples of notation are, for a plane wave traveling in the +X direction,

S(x, z) = S exp(−jKx)    (3.2)

where the propagation constant K and the radian frequency Ω are related through the sound velocity V
Ω = KV    (3.3)

For a standing wave, we may write

S(x, z) = S cos(Kx + α)    (3.4)

In both eqs. (3.2) and (3.4), S is a complex time- and space-independent amplitude:

S = |S| exp(jφs)    (3.5)

Although the notation outlined above is not completely unambiguous in the strictest mathematical sense (for instance, the quantities S(x, z, t), S(x, z), and S are denoted by the same symbol), it is compact and convenient for our purposes. At any rate, the correct interpretation will always be clear from the context.
At this point, we shall not consider in detail the potentially very complicated way in which the sound affects the optical properties of the medium. Rather, we shall assume the simplest case, in which a small isotropic change δn in refractive index occurs, proportional to the sound amplitude:

δn(x, z, t) = C′ s(x, z, t)    (3.6)

where C′ is a real materials constant. We may then write, with (3.1),

δn(x, z, t) = Re[Δn(x, z) exp(jΩt)] = Re[C′ S(x, z) exp(jΩt)]    (3.7)

For the plane wave (3.2),

Δn(x, z) = Δn exp(−jKx)    (3.8)

and for the standing wave of eq. (3.4)

Δn(x, z) = Δn cos(Kx + α)    (3.9)

where, from (3.5),

Δn = C′S = |Δn| exp(jφs)    if C′ > 0    (3.10a)
Δn = C′S = |Δn| exp[j(φs + π)]    if C′ < 0    (3.10b)

In what follows, we choose C′ > 0. The other case is then readily obtained by replacing φs by φs + π. Note that, in agreement with our capital-letter convention, Δn may be complex.
As for the light, we adopt the convention that optical parameters such as the wavelength λ, velocity c, and propagation constant k describe the field inside the medium. If vacuum values are meant, we will use a subscript v, as in

k = n0 kv = n0 ω/cv    (3.11)

where ω denotes the radian frequency of the light and n0 the refractive index. We further assume that the medium is nonmagnetic, i.e., μ = μv.

In order to treat the cases of a traveling wave and a standing wave simultaneously in what follows, we will, for the configuration of Fig. 3.2, write (3.6) as

δn(x, z, t) = b(t) cos[Kx − β(t)]    (3.12)

where for a traveling wave

b(t) = |Δn|    (3.13a)
β(t) = Ωt + φs    (3.13b)

and for a standing wave

b(t) = |Δn| cos(Ωt + φs)    (3.14a)
β(t) = −α    (3.14b)

The importance of writing δn in the form (3.12) lies in the fact that the temporal variation of b and β is assumed to be extremely slow compared to the transit time of the light through the sound field. Consequently, the interaction analysis, for both traveling and standing waves, may proceed on the basis of a "snapshot" of the sound field, during which b and β are constant. Their time dependence may then be inserted a posteriori in the results of the analysis.
The light is represented by its electric field:

e(x, z, t) = Re[E(x, z, t) exp(jωt)]    (3.15)

Note that for light we use the concept of generalized time-varying phasors from the beginning. This is particularly useful for the optical fields we deal with, because many of these are up- or downshifted by multiples nΩ of the sound frequency. Thus, En(x, z, t) = En(x, z) exp(jnΩt) is an often used expression for such a phasor field, where it is understood that the associated frequency equals ω.

For simplicity, we shall at present not take polarization into account; hence, e is a scalar quantity. We shall also use a normalization such that for the plane wave E(x, z) = E exp(−jkxx − jkzz) the intensity is given by

I = EE*    (3.16)

where the asterisk denotes the complex conjugate.

As for the frequencies of sound and light, it will be assumed throughout that Ω/ω ≪ 1. With the exception of one application in Chapter 8, we will also adhere to the so-called small Bragg angle assumption, i.e., K/k ≪ 1. Together with the assumption of paraxial propagation of the incident sound and light, this means that the scattered light can be treated in the same way. Hence, in all relevant calculations, γ ≪ 1 and φ ≪ 1, so that sin φ ≈ φ, sin γ ≈ γ, cos φ ≈ 1 − φ²/2, and cos γ ≈ 1 − γ²/2.

Following the above preliminaries, we are now ready to analyze the phenomenon itself. For that purpose, we first consider the case where the sound column is thin enough to be considered a thin phase grating. (Exactly what is meant by "thin enough" in this context will be discussed later.) The effect of such a thin phase grating may be analyzed by the identical methods used in optical processing, i.e., by calculating the phase shift of each optical "ray" and neglecting ray bending and diffraction [1]. We shall call this technique the straight undiffracted ray approach (SURA). The results so obtained are characteristic of so-called Raman-Nath or Debye-Sears diffraction.
3.1.1 Normal Incidence
Let Ei denote the complex electric field strength of the normally incident plane wave in Fig. 3.2, i.e., E(x, 0, t) = E(x, 0) = Ei. By our previous assumptions, the total phase shift of the light "ray" (i.e., the partial wavefront of width Δx) at x is given by

Φ(x, L, t) = −kv ∫₀ᴸ δn(x, z, t) dz − kL    (3.17)

With (3.12) we find readily

Φ(x, L, t) = −kv L b(t) cos[Kx − β(t)] − kL    (3.18)

Thus, the electric field at z = L is given by

E(x, L, t) = Ei exp(−jkL) exp{−jkv L b(t) cos[Kx − β(t)]}    (3.19)
Equation (3.19) represents a spatially corrugated wavefront at the exit of the sound field. This is schematically indicated by the wavy line in Fig. 3.2. Equation (3.19) is strongly reminiscent of electronic phase modulation which, as most electrical engineers are well aware, results in many sidebands, the amplitudes of which are given by Bessel functions [2]. Following this lead, it is readily shown that [3]

E(x, L, t) = exp(−jkL) Σ (−j)ⁿ Ei Jn[kv L b(t)] exp[−jnKx + jnβ(t)]    (3.20)

where the summation runs from n = −∞ to n = +∞ and Jn denotes the nth order Bessel function.

The physical interpretation of (3.20) is that each term in the sum gives rise to one of the set of plane waves or orders into which the exit field can be decomposed. More specifically, the nth term causes a plane wave to propagate in a direction kn such that knx = nK, i.e., the angle of propagation φn is given by

sin φn = nK/k = nλ/Λ    (3.21)

Unless indicated otherwise, paraxial propagation will be assumed for all fields involved, so that (3.21) may be written

φn = nK/k = nλ/Λ    (3.22)
As schematically indicated in Fig. 3.2, the exiting light field thus gives rise to a discrete spectrum of plane waves or orders of diffracted light, propagating in the directions specified by (3.21). In an actual experiment, each plane wave is of necessity a beam of finite width.
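As a quick numerical check of (3.21) and (3.22), the short sketch below (illustrative only; the optical wavelength, sound velocity, and frequency are assumed values, not taken from the text) lists the first few diffraction angles φn = nλ/Λ together with the corresponding frequency shifts nF discussed further below in connection with eq. (3.29).

```python
import numpy as np

# Assumed example values (not from the text)
lam = 0.6328e-6 / 1.5      # optical wavelength inside the medium [m]
V, F = 4200.0, 80e6        # sound velocity [m/s] and frequency [Hz]
Lam = V / F                # acoustic wavelength Λ [m]

for n in range(-2, 3):
    sin_phi = n * lam / Lam            # exact grating equation (3.21)
    phi_parax = n * lam / Lam          # paraxial form (3.22): φn ≈ sin φn
    f_shift = n * F                    # frequency shift of the nth order [Hz]
    print(f"order {n:+d}: sin φ = {sin_phi:+.4e}, "
          f"φ ≈ {np.degrees(phi_parax):+.3f} deg, Δf = {f_shift/1e6:+.1f} MHz")
```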
In retrospect, (3.21) is quite plausible, as it is also the basic equation for light diffraction by a grating of period Λ [4]. The sound field in our experiments acts essentially like such a grating, with the proviso that its nature is that of a (sinusoidal) phase, rather than amplitude, grating. This point is actually nontrivial and merits some discussion.

The typical gratings mentioned in elementary physics textbooks are amplitude gratings and only generate a multitude of orders because they consist of sharply defined lines (e.g., a Ronchi ruling). A sinusoidal amplitude grating would only generate two orders in addition to the incident light [1]. Our sound wave generates many orders, in spite of its sinusoidal character, because it is a phase grating. There is a complete analogy here to the case of amplitude modulation and phase modulation in electrical engineering.
Returning now to (3.20), let us first consider the case of a traveling sound wave. With (3.13) we find that the nth order contribution to the exit field is given by

En(x, L, t) = En exp(−jkx sin φn − jkL cos φn + jnΩt)    (3.23)

where we have written En(x, L, t) in the form of an oblique plane wave with the origin as a phase reference point, traveling in a direction φn defined by

knx = k sin φn = nK    (3.24a)
knz = k cos φn = [k² − (nK)²]^(1/2)    (3.24b)

so that, to within a small phase error of order K²L/k (= Q, as we shall show later),

En = (−j)ⁿ Ei Jn(v) exp(jnφs)    (3.25)

The so-called Raman-Nath parameter v [5] denotes the sound-induced peak phase shift

v = kv L |Δn|    (3.26)

The relative intensity of the various orders is thus given by

In / Ii = Jn²(v)    (3.27)

as shown in Fig. 3.3.
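The Bessel-function weighting of eq. (3.27) is easy to tabulate numerically. The sketch below (an illustration added here, not part of the original text) reproduces the behavior of Fig. 3.3 by printing the relative intensities Jn²(v) of the first few orders for a few values of the Raman-Nath parameter v; note that the total diffracted power Σ Jn²(v) = 1 is conserved.

```python
import numpy as np
from scipy.special import jv   # Bessel function of the first kind, J_n

orders = np.arange(-5, 6)                 # orders n = -5 ... +5
for v in (0.5, 1.0, 2.4, 3.8):            # a few Raman-Nath parameters
    I_rel = jv(orders, v) ** 2            # relative intensities, eq. (3.27)
    total = np.sum(jv(np.arange(-50, 51), v) ** 2)   # ~1: power conservation
    print(f"v = {v}: I0 = {I_rel[orders == 0][0]:.3f}, "
          f"I+1 = {I_rel[orders == 1][0]:.3f}, sum over all orders ≈ {total:.4f}")
```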
With eqs. (3.15) and (3.23), we may write the complete expression for the plane wave in the region z > L to which En(x, L, t) gives rise:

en(x, z, t) = Re{En exp[j(ω + nΩ)t − jkx sin φn − jkz cos φn]}    (3.28)

It is seen from (3.28) that the nth order is shifted in frequency by nΩ, i.e., by n times the sound frequency. The classical explanation of this is the Doppler shift, which may be seen as follows. An observer looking at the nth
order sees the sound-induced radiating dipoles moving upward with the sound velocity V. The velocity component in his direction is given by V sin φn; hence, with eq. (3.21) we find for the Doppler shift

Δωn = kV sin φn = nKV = nΩ    (3.29)

Note also from (3.25) that the nth order is shifted in phase by −nπ/2 + nφs due to the scattering process.

Figure 3.3 Intensity of the orders diffracted by a thin sound column, as a function of the peak phase delay v (Raman-Nath parameter).
Now consider the sound field to be a standing wave. With (3.14) and (3.20), we find that the nth order contribution to the exit field is given by

En(x, L, t) = En(t) exp(−jkx sin φn − jkL cos φn)    (3.30)

where, to within the same small phase error as in (3.25),

En(t) = (−j)ⁿ exp(−jnα) Jn[v cos(Ωt + φs)] Ei    (3.31)

Note that, because Jn(0) = 0 for n ≠ 0, each order vanishes twice per cycle. This is, of course, no surprise, because the entire sound grating, being a standing wave, vanishes twice per sound cycle.
Finally, it is of interest to investigate the limit of weak interaction, i.e., when v → 0. Using the small-argument approximation for the Bessel functions and neglecting second-order terms in v, we find readily that only three orders (−1, 0, +1) remain and that for a traveling wave

E0(x, L, t) ≈ Ei exp(−jkL)    (3.32)

E+1(x, L, t) = −jEi (v/2) exp(jφs) exp(−jkx sin φ1 − jkL cos φ1 + jΩt)    (3.33)

E−1(x, L, t) = −jEi (v/2) exp(−jφs) exp(−jkx sin φ−1 − jkL cos φ−1 − jΩt)    (3.34)

and for a standing wave

E0(x, L, t) ≈ Ei exp(−jkL)    (3.35)

E+1(x, L, t) = −jEi (v/4) exp(−jα) [exp(−jkx sin φ1 − jkL cos φ1 + jΩt + jφs) + exp(−jkx sin φ1 − jkL cos φ1 − jΩt − jφs)]    (3.36)

E−1(x, L, t) = −jEi (v/4) exp(jα) [exp(−jkx sin φ−1 − jkL cos φ−1 + jΩt + jφs) + exp(−jkx sin φ−1 − jkL cos φ−1 − jΩt − jφs)]    (3.37)
Note that, in the standing wave case, both orders contain up- and downshifted components. This is in agreement with the amplitude modulation evident in (3.31). There exists, however, an equivalent and useful physical interpretation, as follows. The standing wave of refractive index variation may be thought of as two counterpropagating waves, each giving rise to a peak phase shift of v/2 when acting alone. In the limit of weak interaction, the effects of each of these two waves may be analyzed separately and the results added. Thus, for the +1 order, the upward traveling wave will upshift the frequency, while the downward traveling wave will cause downshifting. The two components so generated can be calculated from (3.33) and (3.34) by replacing v with v/2. The result will be given by (3.36) and (3.37), provided the two traveling waves have been given the right phase shift to account for α.
3.1.2 Oblique Incidence
This configuration is shown in Fig. 3.4, where the light is incident at an angle φ0. The ray bundle entering the sound beam at x suffers a total phase shift upon traversal to x′

Φ(x, L, t) = −kv ∫ δn′(ζ, t) dζ − kL/cos φ0    (3.38)

where δn′(ζ, t) denotes the sound-induced refractive index change along the oblique path coordinate ζ, and the integral extends over the path from ζ = 0 to ζ = L/cos φ0. From Fig. 3.4, we see readily that

δn′(ζ, t) = δn(x + ζ sin φ0, ζ cos φ0, t)    (3.39)

or, using the paraxial approximation,

δn′(ζ, t) = δn(x + φ0ζ, ζ, t)    (3.40)

With (3.12),

δn′(ζ, t) = b(t) cos[K(x + φ0ζ) − β(t)]    (3.41)

Figure 3.4 Obliquely incident light and diffracted orders for a thin sound column.
Substituting (3.41) into (3.38) and using cos φ0 ≈ 1 − φ0²/2, we find, to second order in φ0,

Φ(x, L, t) = −kv L b(t) sinc(Kφ0L/2π) cos[Kx − β(t) + Kφ0L/2] − kL(1 + φ0²/2)    (3.42)

where sinc X = sin(πX)/(πX).

The obliquely incident electric field may be written

E(x, 0) = Ei exp(−jkφ0x)    (3.43)

The exit field E(x, L, t) may now be calculated in a way analogous to eq. (3.19). Taking into account the ray displacement, i.e., replacing x by x′ − Lφ0, and leaving out the prime on x′ in the final result, we find after some tedious algebra

E(x, L, t) = Ei exp(−jkL − jkφ0x + jkLφ0²/2) exp{−jkv L b(t) sinc(Kφ0L/2π) cos[Kx − β(t) − Kφ0L/2]}    (3.44)

Like eq. (3.19), eq. (3.44) also represents a corrugated wavefront, which may be written as

E(x, L, t) = exp(−jkx sin φ0 − jkL cos φ0) Σ (−j)ⁿ Ei Jn[kv L b(t) sinc(Kφ0L/2π)] exp[−jnKx + jnβ(t) + jnKφ0L/2]    (3.45)

We note that (3.45) is identical to (3.20) but for a term, exp(−jkx sin φ0 − jkL cos φ0), expressing oblique incidence; a phase shift, nKφ0L/2, referring to the center of the sound beam; and a multiplying factor, sinc(Kφ0L/2π), in the argument of the Bessel functions.
The factor sinc(Kφ0L/2π) = sinc(φ0L/Λ) describes the effect of phase cancellations along the oblique path of the incident ray. Note that this factor vanishes whenever φ0 = nΛ/L, i.e., whenever the incident ray traverses an integral number of sound wavefronts. In that case, the accumulated phase shift averages out to zero and the diffraction effects disappear.
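The angular selectivity expressed by this sinc factor is easy to evaluate numerically. The sketch below (an added illustration; the sound column width and acoustic wavelength are assumed values) tabulates the reduction factor sinc(φ0L/Λ) that enters the effective Raman-Nath parameter v′ of eq. (3.47), and shows the nulls at φ0 = nΛ/L.

```python
import numpy as np

# Assumed example values (not from the text)
Lam = 75e-6       # acoustic wavelength Λ [m]
L = 5e-3          # interaction (column) width L [m]

def reduction(phi0):
    """sinc(φ0 L / Λ) with sinc X = sin(πX)/(πX); np.sinc uses this convention."""
    return np.sinc(phi0 * L / Lam)

null_angle = Lam / L                     # first null: φ0 = Λ/L
for phi0 in np.array([0.0, 0.25, 0.5, 1.0, 1.5]) * null_angle:
    print(f"φ0 = {phi0*1e3:6.3f} mrad  ->  v'/v = {reduction(phi0):+.3f}")
```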
Because of the similarity of (3.20) and (3.45), all qualitative conclusions drawn from the normal incidence case apply to oblique incidence. In particular, the incident light is split up into many orders, located symmetrically around it. This is shown in Fig. 3.4, where

φn = φ0 + nK/k = φ0 + nλ/Λ    (3.46)

Defining

v′ = v sinc(Kφ0L/2π)    (3.47)

we find, with (3.12) and (3.13), that for a traveling sound wave the nth order contribution to the exit field, written as a plane wave, is given by

En(x, L, t) = En exp(−jkL cos φn − jkx sin φn + jnΩt)    (3.48)

where, to within a negligibly small phase error,

En = (−j)ⁿ exp(−jnKφ0L/2) Jn(v′) exp(jnφs) Ei    (3.49)
For a standing wave, we find with (3.12) and (3.14)

En(x, L, t) = En(t) exp(−jkL cos φn − jkx sin φn)    (3.50)

where, to within the same negligible phase error,

En(t) = (−j)ⁿ exp(−jnα) exp(−jnKφ0L/2) Jn[v′ cos(Ωt + φs)] Ei    (3.51)

It should be noted that in the region z > L, the field is found, as in (3.28), by substituting z for L in (3.48).

For later use, it is of interest to explicitly write the amplitudes En of the orders −1, 0, +1 in the case of weak interaction with a traveling sound wave:
E0 ≈ Ei    (3.52)

E−1 = −jEi (v′/2) exp(−jφs + jKφ0L/2)    (3.53)

E+1 = −jEi (v′/2) exp(jφs − jKφ0L/2)    (3.54)

Using (3.26) and (3.10), we may write for (3.53) and (3.54)

E−1 = −jEi (kvL/2) C′S* sinc(Kφ0L/2π) exp(jKφ0L/2)    (3.55)

E+1 = −jEi (kvL/2) C′S sinc(Kφ0L/2π) exp(−jKφ0L/2)    (3.56)

where the sign of C′ is accounted for.
As we will see later, the factor C′S sinc(Kφ0L/2π) exp(−jKφ0L/2) in (3.56) represents the angular plane-wave spectrum (radiation pattern) S(γ), evaluated at γ = −φ0, of the sound field emanating from a uniform transducer of width L, displaced by L/2 from the origin [1]. A physical interpretation of (3.56) is that, of the entire spectrum, only that particular plane wave of sound propagating upward at an angle γ = −φ0 [dashed and labeled S(−φ0) in Fig. 3.4], and being perpendicular to the incident light, causes diffraction effects that result in +1 order generation, as indicated by (3.56). In addition, the same plane wave causes a contribution to the −1 order, with an amplitude related to the complex conjugate of the sound plane-wave amplitude, as indicated by (3.55).

We will return to these aspects later when we develop our plane-wave interaction theory. There, it will become clear that plane waves of sound and light can only interact if their wave vectors intersect almost perpendicularly. In our present treatment, the "almost" is lost because of the simplifying picture of straight undiffracted rays traversing a thin phase grating.
3.1.3 Criteria for Raman-Nath Diffraction

We have seen that the use of the SURA approach describes Raman-Nath diffraction as defined by (3.25), (3.31), (3.49), and (3.50). We will now investigate under which conditions this approach is valid and Raman-Nath behavior is to be expected.
In our analysis so far, we have made the following assumptions:

1. The sound field is thin enough to ignore optical diffraction effects.
2. The sound is weak enough to ignore optical ray-bending effects.

Let us now try to quantify these assumptions in order to find their limits of validity.

In regard to assumption 1, consider the ray bundle of width Δx (Fig. 3.2) with which we have "probed" the sound field for local phase delays. In our analysis, this ray bundle is assumed to be parallel; in reality it spreads by diffraction. The angle of spread is on the order of λ/Δx; hence, the width of the bundle at z = L is approximately Δx + Lλ/Δx. A minimum value is obtained when Δx = (Lλ)^(1/2) and results in a width of 2(Lλ)^(1/2) at the exit. It will be clear that this width must, in fact, be much smaller than, say, 1 rad (≈Λ/2π) of the sound wavelength in order for our phase sampling to be sufficiently localized. It is readily seen that this leads to the condition

L ≪ Λ²/2πλ    (3.57)

or, in terms of the so-called Klein-Cook parameter Q = LK²/k [6],

Q ≪ 1    (3.58)

A physical interpretation of (3.58) along the lines we have followed here was first given by Adler [1].

Condition (3.58) is sometimes called the Raman-Nath or Debye-Sears criterion. It should be realized, however, that it is incomplete, because it only states the conditions under which diffraction effects may be ignored; to also ignore ray bending, a second criterion is required, which we shall now discuss.
Figure 3.5 shows a parallel ray bundle of width Δx′ propagating at an angle αr over a distance Δz′. Because the gradient of refractive index causes a difference Δn (where Δn is real) across the ray bundle, the lower part of the bundle, where the velocity is greatest, has traversed a greater length than the upper part. The difference Δl is found readily from the differential phase shift

kΔl = kv Δn Δz′    (3.59)

As a result of this effect, the original wavefront "a" has rotated to become the wavefront "b" after traversing the distance Δz′. The ray bundle now propagates in the direction αr + Δαr, where for small Δαr

Δαr = Δl/Δx′    (3.60)

Figure 3.5 Ray, normal to wavefront "a," bent normal to wavefront "b" by a gradient of refractive index.
From (3.59) and (3.60) we find, letting Δn, Δx′, Δz′, Δαr, Δl → dn, dx′, dz′, dαr, dl,

dαr/dz′ = (kv/k)(dn/dx′)    (3.61)

or, invoking the paraxial approximation assumptions,

dαr/dz = (1/n0)(dn/dx)    (3.62)

As αr = dx/dz, where the ray trajectory is denoted by x(z) and n only depends on x, we find finally

d²x/dz² = (1/n0)(dn/dx)    (3.63)

as the equation for the ray trajectory.

Equation (3.63) may also be derived, in a somewhat less obvious but more rigorous manner, from geometrical optics [8]. A nonparaxial, exact treatment of ray bending can be developed in terms of elliptic functions [9]; for our purposes, it suffices to apply (3.63) to some special regions of the sound field.
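Equation (3.63) can also be integrated numerically to visualize the ray bending discussed in the following paragraphs. The sketch below (an added illustration with assumed values; it is not part of the original derivation) steps a few rays through a frozen sinusoidal index profile n(x) = n0 + Δn cos(Kx) and reports their transverse displacement at the exit plane z = L.

```python
import numpy as np

# Assumed example values (not from the text)
n0, dn = 1.5, 1e-5            # background index and peak index change Δn
Lam, L = 100e-6, 2e-3         # acoustic wavelength Λ and column width L [m]
K = 2 * np.pi / Lam

def trace_ray(x0, steps=2000):
    """Integrate d²x/dz² = (1/n0) dn/dx, eq. (3.63), for a ray entering at x0 with zero slope."""
    dz = L / steps
    x, alpha = x0, 0.0                       # transverse position and slope dx/dz
    for _ in range(steps):
        dndx = -K * dn * np.sin(K * x)       # gradient of n(x) = n0 + Δn cos(Kx)
        alpha += (dndx / n0) * dz            # eqs. (3.62)/(3.63)
        x += alpha * dz
    return x

for x0 in np.array([0.0, 0.25, 0.5]) * Lam:  # rays entering at different phases of the sound
    disp = (trace_ray(x0) - x0) * 1e6
    print(f"x0 = {x0*1e6:6.2f} um  ->  displacement at z = L: {disp:+.3f} um")
```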
For simplicity, consider normal incidence with the refractive index variation given by (3.9), with Δn real and α = 0. The maximum gradient of the refractive index is

(dn/dx)max = K Δn    (3.64)

Near the straight part of the sinusoidal variation of n, at a suitably chosen instant of time, the ray trajectory is then given, from (3.63) and (3.64), by

x = (K Δn / 2n0) z²    (3.65)

where x = 0 refers to the zero point of the waveform. It will be clear that the total displacement of the ray during traversal of the width L of the sound field must be much smaller than, say, 1 rad (≈Λ/2π) of the sound wavelength in order for our ray bundle to be sufficiently straight. Hence, the criterion for ignoring ray bending can be derived readily:

K² Δn L² / 2n0 ≪ 1    (3.66)

In terms of the Klein-Cook parameter Q and the Raman-Nath parameter v, (3.66) may be written

Qv ≪ 2    (3.67)

The ray displacement interpretation of (3.67) was first given by Rytov [10].

Another type of ray bending occurs near the maxima of the refractive index variation. Here, the sound wave acts as a lens focusing the incident rays. For our straight-ray analysis to be valid, it is necessary that the focal point lie well outside the sound field. The effect may be calculated as follows. Near the maximum of n, at a suitably chosen instant of time, the refractive index variation is given by Δn cos(Kx), and hence

dn/dx = −K Δn sin(Kx) ≈ −K² Δn x    (3.68)

in the immediate neighborhood of the maximum, defined by x = 0 in the coordinate system used. The ray trajectories follow from (3.63) and (3.68)
52
Chapter 3
(3.69)
Equation (3.69) indicates periodic focusing for rays close to the axis, a
phenomenon that also follows from the rigorous ray theory[9] and is clearly
visible in Fig. 2.3. The first focus occur&
at a distance
3
~ K ( A ~ Z I"~ ) " ~
LP=[
It
(3.70)
where it has been assumed that An4no.
Imposing the conditionthat L,bL lead6 to the criterion
QV(<,
It2
(3.71)
Essentially, (3.67) and (3.71) express the same kind of condition, with
(3.71) being somewhat less stringent. The criterion for ray bending to be
negligible may thus be written conservatively
Qv4
1
(3.72)
In summary then, the criteria that have to be satisfied in order to be able
to analyze the sound field with the straight undiffracted ray approach are
given by Q 4 1 and Qv4 1. Ipso facto, these are also criteria for Raman-Nath
behavior to occur.
It seems natural to ask whether the ray analysis can be extended by
exact ray trajectories. This
calculating the accumulated phase shift along the
is indeed possible, and relevant theories may be found in Refs. 1 1 and 12.
Because such theories take ray bending into account but not diffraction,
they are still subjectto the criterionQ 4 1.
3.2
THE SOUND FIELD AS A THICK PHASE GRATING
If the conditions Q 4 l and Q v a l are no longer satisfied, then the SURA
method may no longer be applied. However, the width of the sound column
may be subdivided in thin slices parallel
to X , each of which satisfies the
conditions for applying the SURA approach. This is the basis ofVan
Cittert's analysis [13]. The slices are taken to be of infinitesimal thicknessdz
and hence act as weak thin phase gratings that generate two additional
The Heuristic Approach
53
orders (3.53-3.56) for each incident one. This scheme, which
we shall call the
cascaded thin grating approach (CTGA), is symbolically illustrated in
Fig. 3.6, where two slicesare shown, separatedby a finite distance for clarity.
The beauty of Van Cittert's scheme is that use can be made of the thin
phase grating results already known, and
that these may now be considered
exact because the gratings are infinitesimally thin. For the traveling wave
case (the only one treated here),
we may thus go directly to (3.55) and 3.56),
realizing that inVan Cittert's scheme the nth order at the point z is
contributed to by downshifting from the(n+ 1)th orderand upshifting from
the (n- 1)th orderat that same point.
The contribution d-En(z), downshifted from En+l(z),may be obtained
directly from (3.55) by letting
E- 1+exp( -jkz cos 4,,)d-En(z)
(3.73a)
E,+exp( -jkz cos qh + l)En+l (z)
(3.73b)
L+dz
(3.73c)
Hence, to within a first order indz,
exp( -jkz cos qn)d-En(z)
= -0.5jEn+l(z)k,C'S*dz-exp(-jkzcos @,+l)
t
X
Figure 3.6
approach.
Successivediffractionmodelusedinthecascadedthingrating
(3.74)
Chapter 3
54
The forward-projecting phase terms in (3.73a) and (3.73b) are needed
because the slice dz underconsideration is located at z,not at the origin as in
the formulations (3.55) and 3.56).
The contribution d+En, upshifted from En-1, is round in the same way
from (3.56).
exp(-jkz cos &)d+En(z)
=-O.SjEn-l(z)k&’S
dz.exp(-jkz
COS 6 - 1 )
(3.75)
To conform withthe existing literature [14],we shall adopt the notation
kC
k,C’ = 2
(3.76a)
so that
(3.76b)
Adding the two contributions specified in (3.74) and (3.75), we find with
(3.76a)
dEn = -0.25jkCSE,,
dz
exp[- jkz(cos@,-, -cos@,,)]
(3.77)
- 0.25jkCS* Entlexp[-jkz(cos@ntl -cos@,,)]
Note that in (3.77) the En’s denote the amplitude of plane waves with a
phase reference point at the origin. Thus, for C=O the solution of (3.77) is
given by E,=constant. In our formulation, the propagation through the
unperturbed medium is already taken into account. It is useful to define a
virtualfield at the origin in terms of such “back projected” waves:
E(&, t; z)=XEn(z)
exp(-jk
sin
&x+jnflt)
(3.78)
Note that in (3.78), z appearsas aparameter that refers to the position (cross
section at z) of the actually existing field that is to be back-projected. This
actually existing field is, of course, given by forward projecting the waves
again:
E(x, z, t)=XEn(z) exp(-jkz
cos
&-jkx sin &+jnnt)
We will elaborate on the concept
of virtual fields later in Sec.
3.3.3.
(3.79)
55
The Heuristic Appmach
In most formulations of the problem (including Van Cittert's),a
somewhat artificial fieldY is used instead of a plane wave
En(z)
exp(-jkz
$n)=Y,,(z)
cos exp(-jkz
cos
$0)
(3.80)
In termsof Yn,(3.77) becomes
dyn
dz
jk(cos$,,
"
-cos$,,)Y,,
= -0.25jkCSY,,-, - 0.25jkCS * 'P,,-]
(3.81)
Written as in(3.81), the equations are similar to the well-known
Raman-Nath equations [l 5,161. Klein and Cook [l71 state them in the
following form, accurateto a second order inQin:
(3.82)
where it is assumed that S=-~lq,denoting a sound field of the form IS1
sin(S2t-fi).
The unwary reader, trying to read the original source material, should be
warned against the notational chaos that awaits him. Not onlyis the
definition (3.80) often used, but in addition, Raman and Nath wrote n for
-n (a practice still used by most European researchers) and used a rotated
coordinate system with the 2 axis aligned with the incident light. To make
matters worse, A* often stands for A, p
1 for I A n l , o* for Q, etc. In addition,
equations are frequently normalized, with {=vz/2Lor c=vz/L, and various
parameters are introduced such as p (or n)=Q/v (sometimes called regime
parameters), a=-OSqb/$~,~=v/2L.Also, it is quite customary not to state
the phasor convention used,nor which subscripts refer to the medium and
which (if any)to free space.
To write(3.82)in canonical form, many investigators, starting with
Raman-Nath, define
{=-vz
(3.83a)
p=Q
(3.83b)
L
V
so that (3.82) becomes
Chapter 3
56
(
2d Y,,
+(Y,,-Yn+l)
,
= j n p n + -' Y ) y n
d5
(3.84)
In this book, we shall stick to the formulation (3.77) because ( 1 ) it already
takes conventional propagation effects into account; (2) it is physically
obvious in the sense that itidentifies the contributions to E, from
neighboring orders, with the difference of the cosine terms indicating the
degree of phase mismatch; and ( 3 ) it is a special case of a generalized
equation valid for any sound field,
not just a sound column.
The latter point is due to the fact that although the CTGA method is
apparently heuristic, it does not really make any simplifying assumptions
other than the scalar nature of the field and the paraxial modeof
propagation. Neither ray bending nor diffraction are ignored. It should
therefore not come as a surprise that the final equations (3.77) may be
directly derived from the scalar wave equation (in fact, this was already
done
by Raman and Nath in their original treatment [15,16]),nor that they may
be generalized so as to apply to arbitrary sound fields. We shall return to this
point later.
3.2.1 TheRaman-NathRegime
As already pointed out, the difference of cosine terms in
(3.77) indicates the
lack of phase matchingthat characterizes the interaction.To a second order
in Klk and qb, we may write with (3.46)
):(
cos &+l -cos $, = - - $0 -
cos (pn-, -cos
(-
(3 +(7)($1
en= - -
$0
-
(3.85)
(3.86)
Let us now assume that the phase mismatch between neighboring orders
over the maximum length L ismuch smaller than, say, 1 rad for all
significant orders n, i.e.,
2
2
k($)
Lc(l
and
kn($).L((l
(3.87)
Which is the highest order that we should take into account in (3.87)? We
anticipate that, in the absence of phase mismatch, we will obtain much the
The Heuristic Approach
57
same resultas before for small L. Because the order amplitudes
will be given
by Jn(v), we can guess that the numberofsignificant
orders willbe
proportional to v. (This, by the way,is the exact analog of frequency
modulation in radio engineeringwhere the number of significant sidebands
is proportional to the phase modulation index [2].) With that assumption
and remembering that Q=K2/Llk,we may write for (3.87)
Qv41
Q41
and
(3.88)
Condition (3.88) is, however, the exact condition for which the SURA
method is applicable. Hence, we conclude (1) that assuming perfect phase
synchronism in the interaction is equivalent to using straight undiffracted
rays and (2) that the results of the present analysis should be identical to
those of the previous one.
With (3.85-3.87), we may writefor (3.77)
dEn - -0.25jkCSE,,-, exp(j&b0z)
dz
- 0.25jkCS * E,,+,exp(jK&,z)
"
(3.89)
together with the boundary values
Eo(O)=Ei
(3.90)
and
En(O)=O
Denoting
we may write (3.89) with (3.5)
(3.92)
If we use the recurrence relations for the Bessel functions [3], it may be
shown [l41 after tedious algebrathat the functions
sin
Tn=exp(jnbz)J,(a
satisfy the recurrence relation
dTn - 0 . 5 exp(2jbz)Tn-,
~
- 0 . 5 ~exp(-2jbz)Tn+,
dz
"
(3.94)
Chapter 3
58
Comparison of (3.94) and (3.92) suggests equating
Tn= EL;
a = 0.5kCISI;
2b
= -K$,,
(3.95)
so that the solution of (3.92) may be given directly in terms of EA or, with
(3.91), in terms of E n
(3.96)
where we have used (3.47) and (3.76).
As surmised, our present solution (3.96) is identical with the one (3.49)
derived by the SURA method, thus confirming the equivalence of the
criteria for phase synchronism with those for straight undiffracted ray
analysis.
3.2.2 The BraggRegime
If the interaction length is long enough, the phase mismatch expressed by
will prevent any cumulative contribution
the cosine difference terms (3.77)
in
from neighboring orders, and the net effect is that no diffraction occurs at
all. There exist, however, two conditions in which there is phase matching
between the incident light (at angle h) and one neighboring order. This
occurs when either
cos
$1
(3.97)
in which case, the + l order interacts synchronously with the incident light
or when
cos $o=cos
$-l
(3.98)
in which case,the interaction takes place with the
- 1 order.
With (3.46) we find that (3.97) is satisfied when
(3.99)
and (3.98) when
$o
+K
=”=
2k
+$B
(3.100)
59
The Heuristic Approach
Equations (3.99) and (3.100) correspond to the configurations for upand downshifted Bragg diffraction. They are illustrated schematically in
Figs. 3.7a and 3.7b. From the symmetry of the interaction, it looksas if the
diffracted order is reflected off the moving wavefronts of sound in the same
way that x-rays are reflected off atomic planes. Indeed, if one uses this
analogy and imposes the condition that rays reflected off successive crests
add up constructively, conditions (3.99)and (3.100) are derived readily[lS].
Nevertheless, this simple heuristic model should be viewed with suspicion
because, unlike a crystal where the x-rays .are scattered off well-defined
discrete planes, a sound column is, in essence, a continuous distribution of
“reflectors” Hence, there isno a priori physical reasonwhy it should behave
in the same fashion as a crystal. In fact, we have already seen in Ch. 2 that
acousto-optic scattering doesnot (at least to a first order) show the multiple
Bragg angles typical of x-ray diffraction.
Substituting (3.97) into (3.77) and neglecting all order but 0 and +l, we
find the equations for upshifted scattering
I
I
a
EO
4
L
E
I’i
E -1
b
Figure 3.7 (a) Upshifted Bragg diffraction. (b) Downshifted
diffraction.
.110)
Chapter 3
60
d-4 = -0.25jkCSEo
-
(3.101)
dEo - 4 . 2 5 jkCS* E,
(3.102)
dz
"
dz
It isnow readily found with (3.76), by imposing the proper boundary
conditions, that inside the sound column
El = -j exp(j4,)Ei sin
(3.103)
(3.104)
Similarly, with (3.98) the equationsfor downshifted scattering are
- -0.25 jkCS * Eo
"
dz
dE0 = -0.25jkCSE-,
dz
(3.105)
(3.106)
with the solutions inside the sound column
E-, = -j exp(- jqjs)Eisin ( O . 3
(3.107)
Eo = E, cos(") 0.5vz
(3.108)
Eo to Ei
Note that a complete periodic transfer of power takes place from
and vice versa. The intensities of the two orders leaving the sound column
i.e., at z=L, are in both cases given by
ZO'Zi
COS*(0.5V)
1+1,-1=Ii sin2(0.5v)
This behavior isshown in Fig. 3.8.
(3.109)
The Heuristic Approach
61
1.2
1.o
0.8
0.6
0.4
0.2
0.0
4
2
0
V
6
8
10
Figure 3.8 Power exchange betweenorders in pure Bragg diffraction.
3.2.3
TheNear-BraggRegime
In many cases, it is of interest to investigate Bragg diffraction behavior near,
but not at, perfect Bragg angle incidence. For upshifted interaction, this may
be done be setting
11)
&o=-@B+A@
It is readily shownthat in this case
&-COS
COS
$1
=AHNk)
(3.112)
Using (3.77), the equations coupling the orders
0 and + 1become
dEl = -0.25 jkCSE,, exp(- jK.zA@)
dz
(3.1 13)
dE0 = -0.25jkCS * E , exp(jKzA@)
-
(3.1 14)
dz
We now introduce new dependent variablesEO’ and El’:
E , = E; exp(
T)
(3.115)
Chapter 3
62
(3.1 16)
Substituting (3.115) and (3.116) into (3.113) and (3.114), we find
(3.117)
dE6
-+
dz
AI$
jK-EE,=-jaS*E;
2
(3.118)
where a= KC14= -kpno2I4.
Substituting a time dependenceexp(jr.z) for both El' and Eo' into (3.117)
and (3.1 18),
Eh=A exp(irz)+B exp(-jrz)
(3.i 19)
E{ = C exp(irz) +D exp( -jrz)
(3.120)
and setting the determinantof the resulting set of linear equations equalto
zero, we find
(3.121)
The constants A , B, C, and D may be found from the boundary conditions.
After some algebraand application of (3.1 15)and (3.1 16), we finally findfor
El and EO
(3.122)
-jkA4
2r
]
sin(rz)
exp( jKF4)
(3.123)
Using the notation alSlL=v/2, we find for the intensities at the output of
the sound cell (z=L)
63
The Heuristic Approach
h(;)
(3.124)
- z,
(3.125)
2
I, =
Io = Zj"
Note that the power transfer from IO to I1 and vice versa is now quasiperiodic and initially incomplete. This is shown in Fig. 3.9 for the intensities
z=L.
at the exit of the sound column
It is readily checkedthat the interactionbetween l o and 1-1is expressed by
exactly the same relations (3.124) and (3.125) if the angular deviation is
defined such that
(3.126)
h=q ~ + A q '
3.2.4
Criteria far Bragg Diffraction Behavior
Our ignoring of orders other than those that interact phase-synchronously is
based on the assumption that the term (Klk)2k in (3.85) and (3.86),
1.d
0.8
0.6
0.4
0.2
0.0
0
2
4
V
6
l0
Figure 3.9 Partialpowerexchangeinoff-angleBraggdiffraction.Thedeviation
from the Bragg angle equals0.25 NL.
64
Chapter 3
multiplied with an effective interaction lengthLe, must be much larger than
1 rad. If the sound field is weak, then the minimum
Le=L, and the condition
becomes simply
Q 91
(3.127)
In general, however,the effective interaction length shouldbe defined as that
length in which complete power transfer has taken place in pure Bragg
diffraction. According to (3.108), this occurs when 0.5vLdL=d2, i.e., when
L, is of the order of L h . It is readily seen that the condition then becomes
Q/vS1
(3.128)
In summary, wemaysay
that for Bragg diffraction to occur, both
conditions must be satisfied, i.e.,Q%l and Q/v%l.
A more mathematically oriented discussion of the various criteria may be
found in Refs. 19-21.
It should be notedthat in practical devices, v e x so that (3.127) suffices as
the sole criterion. Klein and Cook 161 have calculated the power in the first
diffraction peak as a function of Q. This is shown in Fig. 3.10, in which it
follows that more than 90% of the light is diffracted when Q>6.
1 .o
0.8
0.6
0.4
0.2
0
2 3 4 6 810
Q
20
40
l00
Figure 3.10 Maximumdiffractionefficiencyas a function of Q. (From Ref 6.)
0 1967 IEEE.
I
The Heuristic Approach
3.2.5
65
WeakInteraction
If we make the assumptionthat the interactionis very weakso that only the
+ l and - 1 orders are generated, and the 0 order is unaffected, then (3.77)
becomes with (3.85) and (3.86)
dJ% = -0.25jkCSE, exp
-
(3.129)
dz
dE-1
- -0.25jkCS * Ei exp
(3.130)
"
dz
These equations may be integratedto give with (3.76b)
E-1( L )= -jEi
('f')* [
C'S
sinc KL(@&-
[
x exp
@B
'3
(3.131)
(3.132)
+gB)]
It is interesting to compare (3.131) and (3.132) and (3.55) and (3.56), and
with the subsequent interpretation following the latter equations. It is clear
that this interpretation should be revised slightly in the following way. In
(3.132) the sinc term represents the angular plane-wave spectrum of the
sound field evaluated at Y=-(@+@B), not at y=-& as in (3.56). It is that
particular plane wave in the spectrum that could be interpreted as being
responsible for generating the E1 order. As shown in Fig. 3.1 1 the wave
vector of this wave, labeled ,!?[-(+O+@B)], makes an angle of d 2 - @ B with the
incident light, not d 2 as would follow from (3.56). The angle @B is the slight
(but very important) correction remarked on in the discussion (3.55).
of This
time the correctionwas not lost in the assumptions, because
(3.77) does take
diffraction into account, in contrast to the SURA method that led to (3.56).
It will be seen in the next section that the angle d2-@Bis precisely the one
required by conservation of momentum conditions. Using the same
reasoning as above in connection with (3.131), we find that an angle of
66
Chapter 3
Figure 3.11 Interpretation of diffraction by a thin sound column, using angular
plane-wave spectrum concepts.
d2+&I is required to generate the - 1 order. This also will be seen to be
required by quantum-mechanical considerations of the photon-phonon
collision process.
3.3 THE SOUND FIELD AS A PLANE-WAVE COMPOSITION
In this section, we shall first define formally what is meant by a plane-wave
composition, next discuss the interactionof sound and light on the basis of
wave vectordiagrams,
and finally formulate a heuristic plane-wave
interaction theory, using some results obtained previously to quantify the
formalism.
3.3.1 The AngularPlane-WaveSpectrum
In the region morethan a few wavelengths away froma real source (suchas
a transducer), any field satisfying the wave equation may be decomposed
into plane waves [l]. In our treatment, we assume that the sound fields travel
nominally in the + X direction and the light fields in the + Z direction, i.e.,
K,>O and k,>O; in other words, we ignore reflected waves.
As for the sound field, that is just an experimental condition; as for the
light field, it is equivalent to assuming that the gradient of &I is very small
The Heuristic
67
and that, in any case,. there exists no phase-synchronous interaction between
incident light and reflected light. This is, of course, no longer true for large
Bragg angles, butwe do not consider such cases as yet. At any rate, with the
conditions mentionedabove and assuming paraxial propagation, an
arbitrary light field cross section along the line perpendicular
to the Z axis at
the point zmay be written as a spectrum
l?($; z) of plane waves.If we use the
origin as a phase reference,
z)exp(-jkx sin$- jkz cos$)d(k$/2lt)
E(x, z)= pi($;
(3.133)
where E(x,z) isto be thought of as a cross section at z parallel
to x.Note that
the amplitude spectrum of the plane waves, E($; z), is assumed to be a
function of z. This is because generally such a spectrum refers
to a'particular
scattered order, sayEn($;z) for the nth order, and continuously evolves on its
z) is a
transit through the interaction region. Inthat regard, &C$;
generalization of the z-dependent plane-wave amplitude E&) used before.
In the unperturbed medium, l?($; z) is constant and can be written as E($).
For the incident light, for instance,Et($; z)=&$). It should be stressed that
this independence of z only applies if the angular spectrum is defined with
the origin as a phase reference, as we do here. It is quite common in the
literature, however,to use a local phase reference forl?($;z)i.e., the point(0,
z) itself. This is, for instance, the convention followed in [l]. In that case,
(3.133) is simpler in that the exp(-jkz cos $) factor disappears. However,
that same factor now reappears as a propagator for the spectrum in the
unperturbed medium. We shall in what follows adhere to the convention of
using the origin as a phase reference. The plane-wave spectrumso defined is
sometimes called the virtual plane-wave spectrum. More details of spectral
formalisms will be found in Sec.8.6.
Using the paraxial assumptionand rearranging terms,we may write
(::)
E(x, z)= r k ( $ ;z) exp(-jkz cos$) exp(-jkx$) d -
. (3.134)
It is clear that (3.134) represents a Fourier transform between E(x, z) and
l?($; z) exp(-jkz cos $), with transform variables x and k$/2lt. The latter
quantity is related to the more formally used spatial frequencyf, by
(3.135)
so that (3.134) may also be written in symbolic form
Chapter 3
68
where 3 denotes the Fourier transform operator. The notation wAfxwe
shall often leave out for brevity. With the paraxial assumption,we may also
write
[
-
exp(-jkz cos) I exp -jkz
(3.137)
[l-2@*)1
It should be noted that, strictly speaking, the integral limits in (3.133)
should be -l/A and +l/& as this is imposed by sin $<l; it is, however,
assumed that E($; z) itself is restricted to much smaller spatial frequencies,a
fact which is already implicit in the paraxial propagation assumption.
The plane-wave angular spectrum l?($; z) can be calculated from (3.134)
by using the inverse transform
l?($;z) = exp(jkz cos$)jmE(x, z)exp(+jkx$)dx
(3.138)
S
or symbolically
E($;
=exp(jkz
z) cos
$)F~[E(x,z)],
fX+A
(3.139)
where 9 - l denotes the inverse Fourier transform operator.
For the sound field we have similar expressions. In this case, however, the
plane-wave spectrum Sufi does not change with the distance traveled. With
the origin again as the phase reference
-
S(x, z) = r s ( y ) exp(-jKz sin y - jlvx cos y ) d
where S(x, z) is to be thought of as a cross section through x parallel to z.
Using the paraxial approximation and rearranging terms,
we may write
(F):
xexp(-jkiy) d
(3.141)
It is seen that S(x, z) and 11'(fiexp(-jlvx cos fi form a Fourier transform
pair with zand K742a as the transform variables.The spatial frequency F, is
69
The Heuristic Approach
defined by
-="=
2a A
(3.142)
F,
Equation (3.141) may be written symbolically
(3.143)
The inverse relation is
&y) = exp(jKx cos y
rS(x,
J-
z) exp(+jfiy) dz
= exp(jKx cos y)s"[S(x, z)], F, 4
y/A
(3.144)
As we have stressed before, it is useful to define virtual (back-projected)
optical fields JP(x; z) at the origin along the X axis. The "z" in E(")(x; z) is
to be thought of as a parameter that indicates the location of the backprojected cross section E(x, z). Another way of saying this is that such
virtual fieldswhen propagated (forward-projected) in the unperturbed
medium define actually existing fields E(x, z) at z. Now, E(x, z) is also
defined by (3.133) that contains the forward projection [term with exp(-jkz
cos 4)] of the angular spectrumE(@;z). It is, therefore, easyto show that the
following relationships exist.
E("(x; z)= r k ( 4 ; z ) exp(-jkxq) d
a
(3.145)
The concept of virtual fields is useful only for those fields, like &(x,n#i,
z),
whose plane-wave spectrum is continuously modified by passagethrough the
perturbed medium. For fields like Edx, z) that are connected with the
unperturbed medium and whose angular spectrum is therefore independent
of z, all back-projected fields are identicalby definition E?"'(x;z)=Ei(x, 0).
Chapter 3
70
3.3.2 The Wave Vector Diagram
As already mentioned in Chapter 2, the basiclight-sound interaction
mechanism can be illustrated by means of wave vector diagrams. These
diagrams refer quantum mechanically to the conservation of momentum in
photon-phonon collisions and classically to phase-synchronous interaction.
Figures 3.12(a) and 3.12(b) show the basic diagrams (in two dimensions) for
up- and downshifted interaction. The. relevant
k vectors belongto waves that
differ in frequency by a very small relative amount Qlo and hence are
essentially of equal length. Thus, the wave vector triangle is isosceles, and
the Bragg angle condition follows immediately:
K
@B
=%
for @B((l
(3.147)
a
Figure 3.12 (a) Wave vector diagram for upshifted Bragginteraction.
for downshifted interaction.
(b) Same
71
me Heuristic Approach
Because only one plane wave of sound is shown, the diagrams of Fig.
3.12
refer to the pure Bragg regime where no plane waves of sound in different
directions are available to cause rescattering into higher orders. Whether or
not this condition in fact pertains experimentally depends on the width of
the angular spectrum of sound and the strength of the interaction. Figure
3.13, for instance, shows a very wide spectrum of sound waves (a) that, at
high sound levels, causes considerable rescattering (b). Thus, K2 scatters kl
into k2, K3 scatters k2 into k3, etc. Note that the scattered orders are
separated by 2 @as~was found before.
For a sound column centered at L/2 and of length L, the angular planewave spectrum can readily be calculated from
(3.144) with x=O.
(3.148)
and is seen to be of angular width =NL. As said before, rescattered orders
are separated by ~ @ B = W AHence
.
the potential to generate many orders
requires that N L S N L . This leads us back again to the Raman-Nath
condition Q . 1 , which we have encountered before. In other words, from a
0
k-3
a
b
Figure 3.13 (a) Multiple-order generation by rescattering through (b) wide
angular spectrum of sound (From Ref. 14.)
72
Chapter 3
plane-wave interaction point of view, the potential occurrence of many
orders in a thin sound column (Raman-Nath regime) is due to the fact
that enough different plane wavesof soundare available to cause
rescattering.
Now, Fig. 3.13 should not be interpreted too literally: rescattering is
indeed involved, but the process is not simply one in which, say, first
E1 is
generated by means ofKI, then E2 by means of K2, etc. Such a picture would
L,
be equivalentto a cascaded interaction of successive sound cells of width
each one acting on a particular order generatedby the preceding one. That
is obviously not the real physical situation: there is only one
interaction
region of length L. In reality, the situation is more complicated and, as we
will see later, can only be fully explained in
terms of scattering paths
(Feynman diagrams) that illustratewhat
happens inside that single
interaction region. Nevertheless, the diagram of Fig. 3.13 is a useful guide
to
understanding the physical mechanisms behind the phenomena of multipleorder generation.
3.3.3
Plane-WaveWeakInteractionFormalism
In weak interaction, rescattering doesnot occur and only the three orders0,
+ 1, and - 1 are involved. This is evident from the single incident plane-wave
case described in (3.32-3.34).Is it possibleto say anything of the strengthof
the interaction in the incident plane-wave spectrum case, i.e., to derived a
quantitative formalism? Here, wehave to proceed very cautiously, The
diagrams of Fig. 3.12 describe a
global interaction; hence, they tell us
nothing about whatgoes on inside the interaction region. We might,
however, at this point inour heuristic analysis be able
to at least describe the
outgoing plane-wave spectrum of the light (i.e., the light field at +m) in
terms of the incident spectrum and the angular plane-wave spectrum of the
sound.
As for the light, we shall therefore use the spectra EO($;
+W), ,!?I($; +m),
and E-I($; +W) as defined in Sec. 3.3.1. Such spectra could be deduced,by
means of (3.139), from measurements outside the interaction region,z=m.
at
To lighten the notation, we write these spectra as EO($),
El($), and I?-I($).
Theydefine, through (3.145 , virtual (i.e.,back-projected)lightfields
EV'(x; m), E'['(x; W), and E(!'](x; m), along the X axis at the origin. As
explained before, E(')(x;z) means "back-projected from z." For brevity, we
write Eb"'(x),&'(X), and E?'l(x), for the fields back-projected from infinity.
Note that the propagationof Ei only pertainsto the unperturbed mediumso
that Ei($; Z ) = E i ( @ ) and E!")(x, Z)=Ei(X,0) for all z. This was explained in
connection with (3.145) and (3.146).
The Heuristic Approach
73
The virtual phasor quantities corresponding
to the virtual fields are given
by
E'"0 (x;f)=EP(x)
(3.149)
Ef)(x;
(3.150)
f)=Ef)(x)exp(jkQt)
where k=l, -1.
The virtual fields, when propagated forward in the unperturbed medium,
will uniquely describe the situation to the right, at z=w, of the interaction
region. Thus, these fields describe the final results of the interaction.
Our
present formalism (i.e., the wave vectordiagram) cannot describe what really
goes on inside the interaction region. Outgoing plane-wave spectra or,
alternatively, virtual fields are all we can calculate and all we really need at
present.
In what follows, we shall assume weak interaction only, so that the
0)
incident light isnot depleted andhence~o($)=~i($),ES"'(x)=El')(x)=Eo(x,
=Ei(X, 0). Now, in accordance with the
wave vector diagramof Fig. 3.12(a),
it seems reasonableto try a first-order interaction description of the form
(3.151)
where CI is an interaction constant. Equation (3.151)isvisualized
symbolicallyinFig.3.14(a).
Inorder to find Cl, letusanalyze
the
configuration of Fig. 3.11 for which we already know the weak interaction
result from (3.132). The obliquely incident field is given by (3.43)
as
&')(x)=Ei(x, O)=Ei
(3.152)
exp(-jk@ox)
Using this in (3.146)and remembering that &$), we obtain
(3.153)
so that
(3.154)
From (3.148) we find
(3.155)
Chapter 3
74
a
X
b
Figure 3.14 (a) Wave vectors used in weak upshifted plane wave interaction
formalism. (b) Same for downshifted interaction.
The Heuristic Approach
75
so that (3.151) becomes
or using the relations between$, qb, and $B implied by the delta function:
kl($1= CISLsin{ ( $ o +m&]
h
+$&l
-jK(@o
2
(3.157)
($-$;2$d]
Substituting (3.157) into (3.145), we find that (3.157) defines the following
virtual field at the origin:
x exP[-jk(@Oi-2$B)xl
We recognize (3.158) as a plane wave propagating at an angle (qb+2$~)and
amplitude given by the x-independent part of (3.158). This amplitude is to
be equated withE@) in (3.132). We then find
c,=--jkC
4
(3.159)
so that (3.151) may be written
For the downshifted case [Fig.3.14(b)], we find similarly
8-,($))=-0.25jkC,!?(-$-$~)&$+2$~)
(3.161)
As will be shown later, the heuristically obtained relations (3.160) and
(3.161) are in perfect agreement with those derived formally from the wave
equation. They enable us to calculate the weak interaction of arbitrary
sound and light fields.
Chapter 3
76
Some interesting properties and applications follow directly from (3.160)
and (3.161). We will discuss three of these here, taking (3.160) as our
standard equation.
Invariance of Oberved DiffHcted Light Intensity Patternto an X
Displacement of the Incident Light Beam
Let Ed$) correspond to a finite light beam of some sort, centered at the
origin, with its electric field along the X axis given by Ei(x, 0). This will
result in a diffracted spectrum ,??I($) whose corresponding virtual field
)
the Xaxis can be calculated from (3.145)
(beam) E ~ ) ( xalong
Now, let us move the center of incident beam up to X = X O . Its angular
spectrum is then given by
(3.162)
If we substitute this into (3.160), it is readily seen that the angular spectrum
of the diffracted beam is given by
where we used the relation $B=A/~A..
With (3.145) we find that the virtual
diffracted beam itself is
now given by
E(')
I (x-XO) exp(-jfio)
(3.164)
i.e., it is displaced by the same amount as the incident beam without
undergoing a change in shape. The only effect is
a phase shift by -&o
as a
result of the incidentbeam intersecting the sound field at a different
location.
Thus, from an experimental point ofview, the appearance (i.e., the
intensity pattern) of the diffracted beam in weak interaction is invariant
to a
displacement of the incident beam along the
z axis.
Measurement of the Sound Field Radiation Pattern
Let E,($) represent a plane wave incident at an angle $0.Its field along the X
axis is given by
(3.165)
E,(x,
-jk&)
0)=E/ exp(
The Heuristic Approach
77
With (3.139) we find that
(3.166)
With (3.160) we find that the diffracted planewave is represented by
(3.167)
Hence, it propagates in the direction & + 2 4 ~ ,as is to be expected. By
measuring its intensity while varying &, the sound field angular power
spectrum can be plotted, as was first demonstrated by Cohen and Gordon
v21
A more detailed analysis would, of course, take into account that the
probing field is a beam, not a planewave. From (3.160) it follows readilythat
this makes little differenceas long as the angular width of the probing beam
is much smallerthan thatof the sound field.
Bragg Diffraction Imaging
Let &e) have a very large angular width so that, relative to q$),we may
consider it to be a constant. It follows from (3.160) that, apart from a fixed
angular displacement and an inversion, the diffracted angular spectrum is a
replica of the angular spectrum. Hence, by performing an optical Fourier
transform, it oughtto be possibleto obtain a pictureof the sound field. This
method, called Bragg diffraction imaging, was first demonstrated
by Korpel
[23]. Detailed descriptions may be found in Refs. 14 and 24-27. We will
return to it in Sec. 6.7.
3.3.4 Strong Bragg Diffraction of a Broad Light Beam
In this two-dimensional case, a light beam of arbitrary cross section is
incident at the Bragg angle on a thick sound column of width L (see
Fig. 3.15). For weak interaction this case can, of course,be treated
according to Sec, 3.3.3. For strong interaction we assume that the angular
spectrum of the incident light is sufficiently narrow so that each incident
plane wave in E,(@ is subjectedto near-Bragg angle diffractionand gives rise
to a single diffracted wave E1(4+24~).Conversely, each plane wave E I ( ~is)
generated from a single corresponding plane wave Ei(4-24B). The latter
angleisequivalent
to the incidentangle & usedin the treatment of
Chapter 3
78
I
L L -
Figure 3.15 Interaction of broad light beam with rectangular sound column.
near-Bragg anglediffractioninSec.3.2.3.According
to (3.1 ll), the
quantity A$ then equals &+$B=$-$B.
Pursuing this reasoning, eqs. (3.122)
and (3.123) apply to our present problem if E1 is replaced by El($), EO by
&($-2$B), and A$ by $-$E. We find for the plus one order atthe exit of the
sound cell
79
The Heuristic
where l ? ~ ( @ = l ? ~ ( &w)=l?~(tj, L), because the perturbed medium ends at
z=L. A rigorous treatment will be given in Sec. 4.2.4. A practical example
for a Gaussian beam is to be found in Sec. 6.2. Note that for vanishingly
small v, (3.168) may be written as
- 0.25jkCSL
sindK(qj-@,)L/2
mGzE
(3.169)
in accordance with (3.160).and (3.155).
The method outlined here was proposed independently by Magdich and
Molchanov [28] and by Chu and Tamir [29]. A similar method for the near
field has been used by Chu and Tamir [30], and was later extended by Chu,
Kong, and Tamir to the far field [31]. The selective angular depletion of the
incident plane-wave spectrum, as evident from (3.168), has interesting
consequences,such as beam shapedistortionand
less thanoptimal
diffraction. We shall discuss these effects briefly in Sec. 6.2.
3.3.5
Diffraction by a Profiled Sound Column
The case ofa two-dimensional profiled sound column subject to plane-wave
incidence in the Raman-Nath region has been solved approximately by
Leroy and Claeys [32] through a perturbation expansion, using a modified
Raman-Nath equation. For a Gaussian sound beam, they find a negligible
difference with a rectangular profile as long as the cumulative phase shifts
induced by the acoustic fields are identicaland the width of the Gaussian is
comparable to the width of the rectangular profile. This is intuitively
plausible in the context of the SURA method, discussed in Sec.3.1.
Mathematically, the case of a profiled sound column can be treated
readily with Van Cittert’s method, discussed in Sec.3.2. A glance at Fig. 3.6
makesit clear that successive gratings do not have to have the same
amplitude or phase. Variations of these parameters with z are simply taken
into account by replacing S with S(z) in (3.77). Note, however, that this does
not mean that an arbitrary sound field can be treated, because the method
cannot deal with the case S=S(x, z). Only interaction with profiled (i.e.
nonspreading) sound fields can be analyzed thisway. We will treat the case
of an amplitude-profiled soundfield first, because it has a general solution.
Starting from (3.77), invoking phase synchronism for all orders, and for
Chapter 3
80
simplicity settingS(z)=IlS(z)l and 0 0 , we find
dEn = 0.25kC1S(z)l(En_,- En+1)
dz
(3.170)
Remembering that k~S(z)~/4=kylArz(z)1/2,
we introduce an accumulated
phase shift
V(z) = 0.5f kC]S(z)ldz
(3.171)
Substituting (3.171) into (3.170), we find
dEn - 0.5(En-,- En+l)
"
dF
(3.172)
which, beingthe recursion formula for the Bessel functions, gives
E n =Edn( f)
(3.173)
It is clear that for a rectangular profileV(L)=v.
The method outlined above
was used by Pieper, Korpel,
and Hereman [33]
in the investigation of an acousticallyapodizedBraggdiffraction
configuration. In that study, it was shown numericallythat the method gave
correct results if the Bragg diffraction criteria were satisfied. It was also
shown that Bragg behavior, defined as the complete periodic or quasiperiodic exchange of energy between two orders, was still maintained for
large v, even when one of the Bragg criteria ( Q / v % l ) was violated. It is
thought that this is due to suppression of the acoustic side lobes through
apodization. At the time of this book's writing, no satisfactory analytic
theory was available for this region, either for apodized
or rectangular sound
fields.
Foraphase-profiled
sound fieldthereexists
no general solution.
Sharangovich C341 has given an explicit solution for interaction of a plane
wave of light with a cylindrical wave front of sound in an anisotropic
medium. We shall here give the derivation in simplified form [35], ignoring
anisotropy.
A converging sound fieldis shown in Fig. 3.16. In the interaction region,
defined by a relatively narrow beam of light, the radiusR0 of the wave front
is assumed to be constant, and the soundfield is defined as
81
The Heuristic Approach
Figure 3.16 Interaction configuration for curved sound wave front.
exp(- j K x ) , 0 Iz IL
(3.174)
where \~rdenotes the relative phase at the edges of the sound field.
Assuming Bragg interaction, we now use (3.101) and (3.102) with S=S(z),
as explained before:
(3.175)
(3.176)
where a= KCl4.
The above two equations can be manipulated to give
Chapter 3
82
(3.177)
A new variable is now introduced:
q=-jx(z-T)
K
L
2
(3.178)
so that (3.177) becomes
(3.179)
Equation (1.179) is thestandardform
of theequation defining the
degenerate hypergeometric function [36] (also called Kummer's confluent
hypergeometric function[37]) given by
d2y
JY
x-+(q-x)"py=o
C1.Y
dx
(3.180)
with x=h, g= 112, p=a2Sn21(2jK/Ro).
Equation (3.180) has the solution
where Q, denotes the degenerate hypergeometric function.In the present caseq= 112 so that x1-q+6q.
Now, according to (3.178) 6q has two roots, which, as it turns out, both
have to be used [35]:
83
The Heuristic Approach
0
4
a
12
16
Raman Nath parameter v
Figure 3.17 Degenerate hypergeometric solution for interaction withcurved
sound wave front, and comparison with eikonal-Feynman diagram method. (From
Ref. 35).
A similar treatmentmay be applied to El.
A and B may now be found from the boundary conditions Eo(O)= 1 and
El(O)=O. The resulting expressions are extremely cumbersome-to the point
where it is perhaps easier to numerically solve the differential equation,
rather than use tabulated values ofthe degenerate hypergeometric function.
Figure 3.17 shows the power in the first order
at the output of the cell as a
function of the Raman-Nath parameter v [35]. The edge phase is taken
to be
I,U=~TC. Comparison is made with the approximate solution by the
eikonal-Feynman diagram technique, which will be discussed in Sec.
4.9.
REFERENCES
1. Goodman, J. W., Introduction to Fourier Optics, McGraw-Hill, New York
(1 968).
2. See, for instance,S. Haykin. Communication Systems, Wiley, New York (1983).
3. Olver, F. W. J., "Bessel Functions of Integer Order," in Handhook of
Mathematical Functions (M. Abramowitz and I. A.Stegun, eds.),Dover, New
York, p. 721 (1965).
4. See, for instance,D. Halliday and R. Resnick, Fundarnentuls qfPhysics, Wiley,
New York ( 1981 ).
5. Raman, C. V., and Nath, N. S. N., Proc. Indian Acad. Sci., 2,406 (1935).
6. Klein, W. R., and Cook, B. D. IEEE Trans., SW-14, 123 (1967).
7. Adler, R.,IEEE Spectrum. 4.42 (1967).
8. Horn. M., and Wolf, E., Principles Of Optics, Pergamon. New York (1965).
9. Lucas, R., and Riquard, J. Phys. Rud., 3,464 (1932).
P..
Chapter 3
84
10. Rytov, S. Actual. Sci. Ind., 613, (1938).
11. Berry, M. V., The Diffraction of Light by Ultrasound,Academic Press, New York
(1966).
12. Defebvre, A., Pouliquen, J., and SCgard, Comptes Rendues Acad. Sc. Paris. B,
265, 1(1967).
13. Van Cittert, P. H., Physica, 4, 590 (1937).
14. Korpel, A. “Acousto-Optics,” in Applied Solid State Science, Vol. 3 (R. Wolfe,
ed.), Academic Press, New York, p.71(1972).
15. Raman, C. V., and Nath, N. S. N., Proc. Indian Acad. Sci., 3,459 (1936).
16. Nath, N. S. N., Proc. Indian Acad. Sci., 4,222 (1937).
17. Klein, W. R., and Cook, B. D., IEEE Trans., SU-14, 123 (1967).
18. Yariv, A. Introduction to Optical Electronics, Holt, Rinehart and Winston, New
York (1971).
19. Gaylord, T.K. and Moharam, M. G., Appl. Opt., 20,3271 (1981).
20. Solymar, L., and Cooke, D. J., Volume Holography and Volume Gratings,
Academic Press, New York,(1981).
21. Hariharan, P., Optical Holography,Cambridge University Press, New York
22.
23.
24.
25.
26.
27.
(1984).
Cohen, M. G. and Gordon, E. I., Bell System Tech. J., 44,693 (1965).
Korpel, A., App. Phys. Lett., 9,425 (1966).
Korpel, A., IEEE Trans., SU-15, 153 (1968).
Tsai, C. S. and Hance, H. V., .lAc. Soc. Am., 42,1345 (1967).
Hance, H. V., Parks, J.K., and Tsai, C. S., .lAppl. Phys., 38,1981 (1967).
Wade, G., Landry, C. J.and de Souza, A. A., “Acoustical Transparencies for
Optical Imaging and Ultrasonic Diffraction,” inAcoustical Holography,Vol. 1
(A. F. Metherell, H. M. A. El-Sum, and L. Larmore, eds.), Plenum, New York,
p. 159 (1969).
28. Magdich, L. N. and Molchanov, V. Y., Opt. Spectrosc, 42,299 (1977).
29. Chu, R. S. and Kong, J. A., .lOpt. Soc. Am., 70, 1(1980).
30. Chu, R. S., and Tamir, T., .lOpt. Soc. Am., 66, 220 (1976).
31. Chu, R. S., Kong, J.A., andTamir, T., .lOpt. Soc. Am., 67, 1555 (1977).
32. Leroy, O., and Claeys, J. M., Acustica, 55,22 (1984).
Opt. Soc. Am., 3, 1608 (1986).
33. Pieper, R., Korpel, A., and Hereman, W., .l
34. Sharangovich, S. N., Sox Phys. Tech. Phys., 36,61 (1991).
35. Yuh-Ming Chen, Acousto-Optic Interactionin Arbitrary Sound Fields,Ph.D.
Thesis, The University of Iowa, 1994.
4
The Formal Approach
4.1
INTRODUCTION
This chapter isorganizedinmore
or lesschronological
order of
development. First we discuss the conventional theories applying
to a sound
column: the coupled mode analysis and the normal mode analysis. The
former uses the eigenmodes of the unperturbed medium, i.e. plane waves
that become coupled by the action of the sound. This analysis ultimately
leads to the Raman-Nathequations.
The second approach uses the
eigenmodes of the perturbed medium, which turn out to be described by
Mathieu functions.
Next we developgeneralizedRaman-Nath
equations that apply to
arbitrary light and sound fields in two or three dimensions. Formal integral
solutions forweak scattering aregiven and then recast using the plane-wave
spectra of the participating
fields.
A strong plane-wave interaction theory of arbitrary fields, restricted to
two dimensions, is dealt with next, and leads to the concept of Bragg lines
along which order transitions are mediated. Feynman diagram techniques
then provide an explicit solution in the form of path integrals.
The eikonal theory of acousto-optics leads to ray tracing techniques and
ray amplitude calculations. By combing it with Feynman diagram methods,
85
Chapter 4
86
it may be applied to strong interaction. As an example, we treat interaction
with curved wavefrontsof sound.
Finally, we discuss a vectorial treatment of acousto-optics based on
Maxwell's equations.
4.2
COUPLED MODE ANALYSIS
We shall, in what follows, still assume that the interaction can be described
by scalar quantities; later on, wewill investigate the validity of this
assumption in more detail.
We start from the wave equation for the electricfield, e, in two
dimensions, X and Z ,
where uv and cVare the permeability and permittivity of free space, and p is
the electric polarization.
We assume that the polarization is linearly related
to the electric field bya
sound-induced time-varying electric susceptibility
where XI is real and may be negative. Hence,
The relative dielectric constant is given
by
El0
=(l+xo)=d
(4.4)
so that
&)*=[l +x(t)]=[I +XO+XI cos(Q-Kx+$)
(4.5)
Assuming that IX,I<l, we may write
where the + sign corresponds to C ' > O and the - sign to C ' < O (see Eq. 3.10).
The FormalApproach
87
Substituting (4.3) into (4.1), we find with (4.4) and (4.7)
d2e d2e
+-ax2 d z 2
pv&,nid2e
dt
It is now assumedthat @+Q, so that (4.8) may be written
d 2 e pv&,no2d2e
d2e +-ax? dz2
dt
= + p V & , n ~ [ 2 n ~ ~ ~ A n ~ c o s ( i 2 t -d2e
K x + @ ) ] ~ - - ; (4.9)
&Writing the electric fieldas a discrete planewave composition
e=C0.5En(z)exp[-jkzcos@,,-jkxsin
@n+j(o+nQ)t]+CL
(4.10)
assuming that the variation of En(z) is relatively slow
(4.1 l )
and substituting (4.10) into (4.9), we find upon equating equal frequency
, invoking the small angle
components, using the relation c , , = @ & ~ ) - ~ . ~and
assumption
@n+1=@n+2@~
(4.12a)
that, with (3.10) and (3.76a).
dEn
-=
dz
-0.25 jkCSE,,-, exp[-jkz(cos@,,,_I -cos@,,)]
- 0.25jkCS * E,,+Iexp[-jkz(cos@,,+, -cos@,,)]
(4.13)
Equation (4.13) is the plane wave form of the Raman-Math equations,
derived earlier (3.77) by the heuristic CTGA method. As already remarked,
[l].
it is usually written in the form (3.82) through the transformation (3.80)
88
4.3
Chapter 4
NORMAL MODE ANALYSIS
We shall for clarity limit ourselves to the case of perpendicular incidence
and follow the original treatment by Brillouin [2], adapted to a traveling
sound wave.We start out by assuming that (4.9) is satisfied by a discrete
composition of normal modes. These modes are nonplane waves e, that
travel in the 2 direction with propagation constants k,=a,k, where the an
are as yet undetermined constants. The wave cross sections or profilesf,
must be periodic functions of the total argument, Qt-Kx+& of the sound
field, Hence,
The constant a,,should not be confused with the phase a of a standing
sound wave, used earlier in(3.14b).
Upon denoting
Qt-K~+$=27
(4.15)
(4.9) becomes (if we choosethe positive sign for simplicity)
(4.16)
Substituting (4.14) into (4.16),using(4.15),
find
and assumingthat
1,we
(4.17)
where the mode numbern has been omitted.
Equation (4.17) may bewritten in the canonical form
[3]
(4.18)
where
(4.19)
The Formal Approach
89
(4.20)
The solutions of(4.18) are the so-called Mathieu functions. Those
relevant toour problem are periodic for a countably infinite set of
characteristic q-dependent valuesa= a&) that yield even periodic function
f,=ce,(q,q), n=O, 1, 2, 3, . . ., and u=b,(q) that yield odd periodic solutions
f,=se,(q,q), n=1, 2, 3, . . . The period of the functions equals T or 2n
depending on whether n is even or odd. For physical reasons, the functions
to be used in the present application must have the same symmetry and
periodicity (z)as cos 277; hence, only the solutions cezn(q,q) are suitable. For
large q, the ce%(q,q) look very much like distorted harmonicsof cos(277) [4],
as shown inFig. 4.1 for ce4 with q= 10. For smallq, it may be shown [3]that
ce(q,q)-D2,cos(2q),
where
D2,, ais
normalization constant. The
characteristic values U, determine the a,, through (4.19), and these in turn
determine the propagationconstants k,=Gk of the correspondingwaves.
Because the Mathieu functions are orthogonal and normalized it should
be possible to find the coefficients C, appropriate to a particular boundary
condition [e.g., e(x, 0, 0)= constant for normal incidence] using standard
techniques. In practice, however,this is a cumbersome procedure because the
Mathieu functions are rather intractable
and appear to have very few “nice”
analytic properties. For one thing,an analytic relation between a and q does
not exist; one has to use numerical calculations or existing tables to find a
(and hence a) from q. A partial plot of a vs. q for some cezr solutions is
shown in Fig. 4.2.
After the a,,s and cns are found, the normal modes with profilesce%(q,q)
-1.2 I
0
I
77
I
277
rl
Figure 4.1 Typicalnormalmodecrosssection
for q=10. Theinterval 0-27r
corresponds to one sound wavelength. (Adaptedfrom Ref. 4.)
Chapter 4
90
a
4
Figure 4.2 Relationbetweenthecharacteristicnumber
Raman-Nath parameter q. (Adapted from Ref. 4.)
“a” andnormalized
must be propagated, each with its own.phase velocity a , d k to the point
z=L. There, they are to be summed in order to find the total field at the
sound column boundary. The radiated orders (angular spectrum) for z>L
can then be calculatedby an inverse Fourier transformas in (3.129).
The equivalence of the coupled mode theory
and the normal mode theory
was first shown by Berry [5] and by Mertens and Kuliasko [6] using a
methoddevelopedin
collaboration withLeroy
[7]. More extensive
investigations, including the case of oblique incidence, were carried
out later
by the Plancke-Schuyten, Mertens, and Leroy group [8-lo]. Hereman [l l]
has given a detailedand up-to-date treatmentof the normal mode approach,
its solutions, and its relation to the coupled mode technique. He has also
analyzed more rigorously the case of oblique incidence, which is much more
complicated and requires the use of Mathieu functions of fractional order.
A
thorough discussion of Mathieu functions as such may be found in Refs. 12
and 13.
From an engineering/physics point ofview, the normal mode ‘approach is
difficult to use and offers little opportunity for gaining physical insight.
Consequently, we will not further pursue it in this book.
Before proceeding with the next topic, it should be pointed
out that the
normal mode theory (modal theory) has also been investigated from the
point of view of microwave concepts and analogies. Thus, Bragg diffraction
conditions are typically illustrated by the intersection of modal dispersion
91
The Formal Approach
curves (U-p diagrams), and Bragg interaction devices are often compared
with directional couplers. This treatment, although stillrestricted to
rectangular sound columns (usually embedded in different mediaon either
side)lendsitselfvery
well to largeBraggangleconfigurations.
It is
excellently suited for readers with a microwave background because of the
use of already familiar concepts.
We, however, will not pursue itin this book
and consequently not list all the pertinent investigations and researchers in
will nonetheless be found
this area.A good description plus many references
in the articleby Chu and Tamir [14].
4.4
THEGENERALIZEDRAMAN-NATHEQUATIONS
We start from (4.1), but first extent it to three dimensions for greater
generality
d2p
V 2e - p , . ~d2e
, , = p,,dt’
dt -
(4.21)
where
v
2
d 2 d2 d2
=-+-+d.u2
a
2
dZ2
(4.22)
Generalizing the treatment of Sec. 4.1, we write
(4.23)
where r is a three-dimensional position vector. Also
(4.24)
and
(4.25)
so that, assuming x19 1, we find
(4.26)
where, according to the generalizationof (3.6),
92
Chapter 4
(4.27)
&(r, t)=C‘s(r, t )
Substituting (4.23), (4.24), (4.26),
and (4.27) into (4.21), we find
d2e
d2e
V:e - p y & , n ~ = 2pv~,no~’s(r,
t)at2
(4.28)
where we have assumed, as before, that the temporal variationof the sound
field is slow comparedto that of the light field.
In two dimensions, (4.28) becomes
d2e
d2e
V:e - b&,n,2 -= 2 p v ~ , n o ~ ’ s t( )p-,
dt2
(4.29)
where
and p is a two-dimensional position vector in the
X-Z plane.
Just as we derived the Raman-Nath equations from(4.9), we now want to
derive more general relations from (4.28) and (4.29), referring to more
general sound fields. Consequently, wenow write the electric field as a
composition of ordersn that, unlike before, are not necessarily planewaves
e(r, t)=COSE,(r)expp(o+nQ)t]+c.c.
(4.30)
For the sound field,we write
s(r, t)=0.5S(r)exp(@t)+c.c.
(4.3 1)
If we substitute (4.30) and (4.31) into (4.28), it follows readily that
(4.32)
V2E,(r)+k2E,(r)+O.5k2C~(r)E,-~(r)+O.5k2CS*(r)E,~1(r)=O
The two-dimensional versionof (4.32) is
V~E’En(p)+k2E,(p)+O.5k2CS(p)En-~(p)+O.5k2CS*(p)E~+~(p)=O(4.33)
We shall call (4.32) and (4.33) generalizedRaman-Nath equations
because if, in (4.33), E , ( p ) is assumed to be a plane-wave propagating
nominally in the 2 direction, i.e.,
The Formal Approach
Eh)*E,(z)exp(-jkx
93
sin@,-jkz cos&
(4.34)
and
(4.35)
S(p)=Sexp(
then (4.12) and (4.13) are retrieved as a special case, provided a slow spatial
variation of En(z) is again assumed.
4.5
WEAK SCAlTERlNG ANALYSIS
If we ignore rescattering, i.e., we assume that only the incident light causes
scattering, while remaining itself unaffected, i.e., Eo(r)=Ei(r), En(r)=O for
Inl> 1, then it follows from(4.32) that the + 1 order satisfies the equation
V2E1(r)+k2EI(r)=-0.5k2CS(r)E,{r)
(4.36)
which, if we assume k r k , has the formal solution[l 5,161
E,(r)= -
S(r’)Ei(r’)dx’dy’dz’
(4.37)
where
The configuration relevantto (4.37) is shown in Fig.4.3.
For the - 1 order, we find a similar solution:
In most cases, we are interested in calculating the field far from the
interaction region. Then,Irls-r‘l and [l61
(4.39)
where ur is a unit vector in the direction
r.
Chapter 4
94
X
OBSERVATION POINT
r
SCATTER ING POINT
Z
Y
Figure 4.3
integral.
Configurationandnotations used inthree-dimensionalscattering
Hence, in both (4.37) and (4.38).
(4.40)
Using (4.40)in (4.37). we find
k‘C exp(-jkr)
r
E,(r)= 8K
-S
I-
jjexp(~kr’.u,)s(r’)E,(r’)dx’d~’d~’
(4.41)
for ~ r ~ ~For
~ the
r ‘ ~- 1. order. we obtain a similar expression, with S being
replaced by S*.
In two dimensions, X and Z , (4.36) becomes
vfE,(P)+k’E,(P)=-0.5k’CS(P)Ei(p)
the standard solution of which is given by [ 151
(4.42)
The Formal Appmach
95
where
and HOis the Hankel function of the zeroth order and thefirst kind [17,18].
The configuration relevant to (4.43)is shown in Fig.4.4.
For klp-p’l%l, i.e.. many wavelengths away from the induced cources in
the interaction region, we may write
Far from the interaction region p%,p‘, applying (4.39)to two dimensions, we
find with (4.43)and (4.44)
For E- ](p),we find a similar expression withS* instead of S .
The weak scattering solutions derived above are not as easy t o apply as
may appear at first glance. This is because both S and E: are propapting
fields and can only be written down explicitly in very few cases. One such
case is a Gaussian beam [19]. but even here the resulting expression is too
complicated for convenientuse.
Gordon [20] has calculated threedimensional scattering between a Gaussian light beam and a rectangular
sound beam, and McMahon [21] determined the interactions hetween
X
OBSERVATION POINT
Figure 4.4 Configuration and notations in two-dimensional scattering intcgraI.
96
Chapter 4
rectangular beamsand Gaussian beams. The difficulties of evaluating (4.41)
and (4.45) are to a certain extent artificial. Both S and E satisfy the same
kindof
wave equation intwo
or three dimensions.Hence,given
unidirectional propagation, if S and E are known across one plane (line),
they are in principle known throughout the entire volume (plane) [22]. Why
then should one have to use a volume (surface) integral in the scattering
calculation as if S and E were arbitrary distributions? The answer is that one
need not do so. As we have already shown heuristically and will now prove
formally, for weak interaction in two dimensions the angular plane-wave
spectrum of the scattered fieldcan be simply expressed by the product of the
angular spectra of incident light and sound. Because each of these spectra
may be found from the field distribution across a line, no surface integrals
need to be used in this approach and, in two dimensions at least, the
problem is simplified considerably.
In three dimensions the situation is more complicatedbecause one
incident wave vector of light can interact withan entire coneof sound-wave
vectors, resulting in a cone of diffracted light wave vectors (this may be
easily seen byrotating a typicalwave vector diagram about the incident light
wave vector). Although an explicit formulation is available [22], it is mainly
conceptual andnot easy to apply. We will return to the full threedimensional problem in Sec.8.5.
4.6 WEAK PLANE-WAVE INTERACTION ANALYSIS
We limit ourselvesto two dimensions, Xand 2,and start from (4.45) that is
strictly valid forp m . The left-hand sidewe write as an angular plane-wave
spectrum of waves propagating nominally in theZ direction, to the right of
the interaction region
(4.46)
If we use stationary phase methods[[23] and Appendix B], it may be readily
shown from(4.46) that, for p=m,
(4.47)
where is the angle of the
wave normal to the point p.
97
The Formal Approach
Writing
S@’)=J$(y)exp(-jKz’siny-
jlYx’cosy)d
($)exp(-jkx’sin$- jkz’cos4)d
pr-uP=x’sin 4,,+z‘ cos $,
(4.48)
(4.49)
(4.50)
the integral in(4.45) becomes
x exp[jx‘(k sin - K cosy - k sin $)]dx‘
x exp[jz’( k cos q5,, - k cos 4 - K sin y)]dz’
(4.51)
With the paraxial approximation,(4.5 1) may be written:
Making use of the relation between angles implied by the first delta
function we can write the second one as8(-@,,/A+O.5A/A2-y/A) so that the
total integral becomes
(4.52)
Chapter 4
98
Substituting (4.52)and (4.47) into (4.45), we find
(4.53)
Using the same method forE- I gives
~-1(@)=-0.25jkC~(-@-@~)~~@+2@~)
(4.55)
Comparison of(4.54),(4.55)with(3.160),(3.161)shows
them to be
identical, thus confirming the heuristic reasoning used in Sec. 3.3.3.
4.7
STRONGPLANE-WAVEINTERACTIONANALYSIS
Following Korpel [24], we will start from the two-dimensional generalized
Raman-Nath equations (4.33) and write the light and sound fields inside the
active medium as plane-wave compositions. For the sound field, we use
(4.48); for the incident light field, we use (4.49), but the light fields generated
inside the medium, i.e., &(p), we describe by z-dependent angular spectra
(9
- jkz cos 4) d -
(4.56)
Note that, as explained in Sec. 3.1.1, the E,,(@) are a generalization of the
plane-wave amplitudes En(:) used in Ch. 3and Sec. 4.1.
Substituting (4.56)into (4.33) and assuming that
(4.57)
we find
The FormalApproach
-[2jkcos@[
99
'En~'z)]exp(-jkxsin@- jkzcos@)d
jscn$-,(4; z)
+ 0.5k2Cj
xexp(-jkxsin@- jkzcos@-jKzsiny-jficosy)
~d[$d(:)+OSk~Cljs'(y),!?~+~(@;~)
4
xexp(-jkxsinq- jkzcos@+ jKzsiny+jficosy)
(4.58)
xd(3d(--)=O
@
Y
We now multiply both sides of (4.58)
by exp(jknxx), whereknx=kSin$, and@n
is a specific angle in the spectrum of E n . Subsequently, we integrate from
x= -m to x=+m.Setting cos @=
1 in the amplitude factor, we find for the
first integral in (4.58).
(4.59)
The second integral becomes,to a second order inf in the phase term,
where we neglect a term( K / k ) f in the argument of&I
and define
The third integral becomes, similarly,
(4.62)
Chapter 4
100
where
(4.63)
and the same approximations
have been made as before.
Combining (4.59), (4.60), and (4.62), we find
x exp(- j& sin y - jkz cos
- j&qn-,Y2
2
x jkzcos@,,)d(;)
x exp(jfi sin y - jkz cos
X)
+ j&4,,+Iy2
2 -
(4.64)
x jkz cos $,,)d(
With (4.61) and (4.63), it may be readily shownthat
k COS 4n-k
COS
Qin-1~-K(Qin-4~)
(4.65)
k COS 4 n - k
COS Qin+lS(Qin+4~)
(4.66)
Substituting (4.65) and (4.66) into (4.641, We find
I
dZ
= -0.25 jkCk,,-l ($,, - 24.9; z)
(4.67)
The
Approach
101
where we have written siny foryin order to facilitate the interpretation and
have left out phase contributions -jk$Ef/2 in the terms on the right-hand
side of (4.64).
If we now introduce coordinates
%+l
= z($n
+$B)
(4.68b)
then it is seen readily, with(4.48), that (4.67) may be written
(4.69)
where we have replaced qj,, by $.
An alternative derivation of (4.69), based on graphical constructions, is
given in Ref. 24. Note that the validity of (4.69) is subject to the condition
for the relevant
that the neglected phase term be small, i.e., Kz,,,,,$~~y~/241
range of 7
Equation (4.69) is the basic equation describing strong, small Bragg angle
interaction between an arbitrary light field and an arbitrary sound field. It
has a very simple and obvious physical significance that we shall now
discuss.
First, we note that the angular spectrumof each order is directly coupled
to that of the neighboring orders only. Second, each plane wave of one
particular orderis coupled to two specific plane waves in neighboring orders,
i.e., those that are moving in directions different by &2$E. All this is to be
expected from thewave vector diagrams discussed before.
The characterof the coupling is best interpretedby means of the diagram
shown in Fig. 4.5. In this diagram, each plane wave (i.e., one from the
spectrum of each order)is representedby a solid line. The sound fields along
the dashed bisectors (called the Bragg lines) are given by S(xL1, z) and
S(X;+I, 2). Thus, the interpretationof (4.69) is that in each pointz, there are
two contributions to l?”. The first one is an upshifted contribution; it
originates from &-I and is caused by the sound amplitude at (&I,
z) on
the bisector between l?,, and &-I. Similarly, a downshifted contribution
arises from &=I, mediated through the conjugate of the sound amplitudeat
(x;+,, z) on the Bragg line between E,, and &,+I. This interpretation is
intuitively very appealing as the Bragg lines are at exactly the correct angles
for interaction of the light, if they are regarded as “pseudo” wavefronts
S(x2-I, z)and S(X;+I, z).
I02
Chapter 4
X
Figure 4.5 Diagram illustrating coupling between plane waves of adjacent orders
E,, through sound field along (dashed) Bragg lines.
To make the notation in(4.69)somewhatmore
compact, we shall
introduce the following notation for coupling factors: A quantity that
upshifts from the (n-1)th order to the nth order we shall call S L I ;one that
downshifts fromthe (n+ I)th order to the nth order we shall denote by S;+,.
Thus, with (4.69),
These quantities are also indicated in Fig. 4.5.The rule to remember is that
a minus superscript always refers to the conjugate of a sound amplitude.
The
Also, because of the physical interpretation,the quantities S; and S;+, refer
to the same Bragg line; hence,
S: = &+I)*. This may be checked from (4.70)
with (4.63).
With (4.70a)and (4.70b), (4.69) may be written in short-hand notation
(4.71)
where
a=0.25kC
(4.72)
The symbol “a” used here should not be confused with that used in the
characterization of Mathieu functions (4.19).
As sharp boundaries no longer exist inour arbitrary fields configuration,
the boundary conditions mustnow be written
E,($;-~)=O
for n z ~
(4.73a)
&(4;-c9=E@)
(4.73b)
For the discussion to follow, it is convenient to combine (4.71) and (4.73) in
the following integral form.
n#0
for
for n = 0
(4.74)
.
(4.75)
It is of interest to consider the case of a single plane waveof light
E,(z)exp( -jkx sin @,,-jkz cos A I ) in each spectrum of order n, proceeding at
an angle inside the interaction medium. In that case, it is readily shown,
by using (3.138),that
(4.76)
For the terms in the right-hand side of (4.69).
we then find
-
E,+, (4 - 24H; =) =
E,,”,(
m 4- 24H -$,,-l
1
(4.77a)
9)
104
Chapter 4
(4.77b)
Substituting (4.76) and (4.77) into (4.69), we find by equating arguments of
delta functions the usual relations (4.12) between
&S and also
dEn - -jaE,,-,S,+_,- jaE,+,S;+,
dz
"
(4.78)
[It will be clear that (4.78), including boundary conditions, may be written in
integral form.It then becomes similarto (4.74), without the tildes.]
Not surprising, (4.78) is similar in form
to (4.74), because it basically expresses the same individual plane wave interaction. Nevertheless, it is
important to keep in mind the difference in formalism
as expressed by (4.76)
that relates a plane-wave spectral density to a corresponding plane-wave
amplitude.,
One particular case of (4.78) we have already considered often is the
artificial one in which the sound field is a rectangular column extending
from z=O to z=L. In that case,
-j&) S(x,z)=S exp(
from which it follows readily, with (4.70)
and (4.63), that
Substituting (4.80a,b) into (4.78), we find
(4.81)
With (4.65), (4.66), and (4.72) this may be written
as
In (4.82) we recognize our version (3.77) of the Raman-Nath equations that
thus appear to be a special case of the presently discussed more general
theory, as they shouldbe.
The Formal Approach
105
Before proceeding with the explicit solution, we shall, as
an example of
the application of (4.71) and (4.73), treat the case of upshifted Bragg
diffraction by a rectangular sound column. The heuristic version
of this was
discussed in Sec. 3.3.2. In agreement with that discussion, we assume that
only G(@)
and El(@)are coupled. Accordingto (4.71) and (4.72), we write
a@)
e@,,.).
is short for
where
The Bragg lines corresponding to the coupling factors S$ and S- are
showninFig.
4.6. Theydefine
the sound fields S[z($-@),z] and
S*[Z(@+@B),Z),
as indicated in the figure. With the sound field given by
(4.79), we find readilythat
S,' = Sexp[-jKx(@- @ B ) ]
(4.85a)
IX
X
-
Z
2
a
Figure 4.6 (a) Bragglinepertaining to upshiftedscatteringinto
Similarfordownshiftedscatteringinto
(4).
81 (4). (b)
106
Chapter 4
Substituting (4.85) into (4.83), (4.84), and in the second resulting equation
replacing 4 by 4 - 2 4 ~on both sides, we find
a ~ * ( 4 ) / a z = - 0 . 2 5 j k ~ ~ ~ ( 4 - 2 4 B ~ ) e x p [ - ~ ~ ~ ( 4 - 2 ~ B ) 1 (4.86)
Comparing (4.86) and(4.87) with (3.1 13)and (3.1 14), we find that they are
identical if the substitutions are madethat were discussed in Sec. 3.2.2.The
final result is then identical
to the one found in the heuristic treatment,
given
by (3.168).
FEYNMAN DIAGRAM PATHINTEGRAL METHOD
4.8
We will now proceed to derive the exact solutionof (4.74) and (4.75) by the
diagrammatic methods developedby Korpel and Poon[25]. The basic idea is
to reduce the recursive relation (4.74) by repeated applications until one of
the terms on the right-hand side is reduced to contain l%. An example will
make this procedure clearer; it is symbolically expressed
by Fig. 4.7. Suppose
we are interested in&. According to (4.74). this quantity is contributed
to by
both &
, and l&. Let us arbitrarily choose theE 4 contribution (solid line) and
leave l& (dashed line) until later.Now, E4 is contributed to by & and l&.In
the example shown, the l% contribution is chosen and the Es contributions
left until later. In the next step, we arbitrarily choose &, etc. This process
7
6
5
0
-1
-2
-3
No. of Steps
Figure 4.7 Typical Feynmandiagrams illustrating various routes of scattering
from the zeroth order into the fifth order. (From Ref. 25.)
107
The Formal Approach
cowinues (dotted line) until we reach l%. Now, according to (4.73,there are
three contributions to choosefrom. We select the knownone, &=kinc
(arrow), leaving the other two for later. It will now be clear that the dotted
line from point A on level zero to the final destination on level 5 is a valid
contribution, one of infinitely many. Some
other paths starting at B, C, and D
are shown as examples of alternative routes.
It will be clear that the shortest possible path has exactly five steps,
moving along a diagonal line from0 through 1,2, 3 and 4 to 5. The longest
paths consist of infinitely many steps
and wander all overthe grid of Fig. 4.7
(perhaps crossing level5 many times) before terminatingon level 5.
Such a procedure, which consists of enumerating all different pathways
from the initial to the final state, and summing their probabilities (sum over
histories) was first developedby Feynman [26]within the context of his own
unique development of quantum mechanics. Diagrams, similar to those
shown here, that symbolize the various possibilities are called Feynman
diagrams. A good introduction to such diagrams and their applications may
be found in Refs. 27 and 28.
It may be readily shown that the contribution to along the dotted path
in Fig. 4.7 is given by
(4.88)
JQ
J-
where m is the number of steps(13 in this example).
In cases where single plane waves are involved [see (4.7811, it will be clear
that (4.83) represents one of the contributions to E,,, provided 6.is replaced
by Et. Equation (4.88) represents a typical path integral: the exact solution
consists of summingan infinity of such integrals.In symbolic terms.
kn(qkz ) = x ( a l 1 path integrals)
(4.89)
Written as in (4.89). the solution looks deceptively simple; in practice, of
course, the path integrals can only rarely be evaluated analytically. The
beauty of Feynman's technique lies rather the
in fact that, frequently, certain
path integrals may be ignored because physical institution tells us that their
contribution is negligible. An acousto-optic example of this may be found in
Ref 29, one in a completely different field (the impulse response of acoustic
Chapter 4
108
transducers in Refs. 30 and 31. Syms has applied path integrals to optical
problems involving chain matrices [32].
We shallnowapply
the path integral method to upshiftedBragg
diffraction by a rectangular sound column of a single plane wave of light,
incident at the negative Bragg angle. Ifwe use (4.79), it may be shown
readily with (4.80)that
Si-l = Sexp{-jKz[$o
(4.90a)
-l)$,]}
+ (2n
(4.90b)
S;, =S*exp{jKz[$o+(2n+1)$,]}
where we have used the relation
(4.90~)
4=&+2n$~
As &= -$B, we may write for (4.90a) and (4.90b)
[
S;-, = Sexp -jQ(n
-1)-
11
(4.91a)
(4.91b)
For ideal Bragg diffraction, Q=m, and consequently any path integral
containing Q in an exponential term vanishes because of infinitely rapid
oscillations along the integration path. From (4.91), it is then clear that the
only effective coupling factors are those
that connect levels 0 and + l , i.e.,
S,' = S
(4.92a)
S; = S *
(4.92b)
This situation is illustrated graphically in Figs. 4.8(a)
and 4.8(b), which show
some of the infinitely many contributions to the levels l and 0, respectively.
(Note that all paths only
involve these two levels.)
For simplicity, takingS to be real, we find readily that
E, ( L )=
2
r'
(- j u ) 21f!S dzZ1 S dz2r-l.
I=O
0
0
I,"SEidzl
(4.93)
109
n e Formal Approach
'
0
'm
a
b
Figure 4.8 (a) Five-step scattering path from zeroth to first order.
path from zeroth order back into itself.
(b) Six-step
where we have usedthe relation
(4.94)
Similarly, we obtain for the + 1 order
In (4.93) and (4.95), we recognize the well-known expressions for Bragg
diffraction interaction, thus confirming the Feynman diagram technique.
The caseof Raman-Nath diffractionmay beconfirmed in a similar way 1251.
4.9
EIKONAL THEORY OF BRAGG DIFFRACTION
The goal of the eikonal theory is to justify intuitive ray tracing techniques
and to quantify themby calculating the amplitude of the diffracted ray.
l10
Chapter 4
At first glance, it seems rather strange that a ray tracing theory could be
developed within the context of acousto-optic c/$fractiun, but we may
convinceourselvesof that by the Gedanken experiment illustrated in
Fig. 4.9 (borrowed from Ref. 33). The drawing shows
a typical upshifted
Bragg diffraction experiment, using sound and light beams of finite width.
As we have remarked often before, such a picture is essentially unrealistic
because of the diffraction of both sound and light that makes both beams
diverge. This latter effect, however, disappears whenever k 0 , h-0 (i.e.,
k=m, K=m). The width of the beams may then be made infinitesimally
small, in which case we call them rays. In the limit of infinitesimally small
wavelengths, we may thus replace the upshifted beam interaction of Fig. 4.9
by the ray interaction of Fig. 4.10(a), with Fig. 4.10(b) showing the case for
downshifted Bragg interaction.
The next step consists of applying the ray tracing method to local
interaction situations, e.g, the interaction ofan incoming ray with a curved
wavefrontof sound. Figure 4.11shows a typical and self-explanatory
example. It is taken from an article by Pieper and Korpel [34] dealing with
strung local interaction through a combination of eikonal theory and
Feynman diagram techniques. Inthe present chapter,we will limit ourselves,
however, to weak interaction.
X
Figure 4.9 Gedanken experiment illustrating development of ray-tracing method.
(From Ref. 33.)
111
The Formal Approach
INCIDENT RAY
I
DIFFRACTED RAY
SOUND RAY
SOUND RAY
a
b
Figure 4.10 Ray-tracingdiagramsfor(a)generation
generation of downshifted ray. (From Ref.33.)
of upshifted rayand (b)
Figure 4.11 Localdownshiftedinteractionwithacurvedwavefront
(From Ref. 34.)
of sound.
Chapter 4
112
Following Korpel [22,33], we start with the generalized Raman-Nath
equations [4.32] for EOand El, and the weak interaction assumption that
Eo(r)=Ei(r)
V2El(r)+k2E1(r)=-0.5k2CS(r)Et(r)
(4.96)
Using the eikonal functions Y(r) [35], which have the significance that
Y(r)=constant represents a geometrical wavefront,we write
E~(r)=IE~(r)lexp[-jkUl,(r)]
(4.97a)
Substituting (4.97) into (4.96), we find
(4.98)
where
A(r)=k~(r)+K~(r)-kYl(r)
(4.99)
It should be noted that the incident sound and light fields S(r) and E@)
satisfy an equation identical in form to (4.98) with the driving term on the
right-hand side missing. If we follow the usual geometrical optics treatment
[35],valid for k, K = w , the coefficients of (l/k)O, (Ilk)', (1/k)2, and ( U r n o ,
(Urn1,(1/Q2 are separatelyset equal to zero. For the (l/k)O, ( l / K ) O
coefficients, this results in the eikonal equations for the homogeneous
medium:
IVY,{r)l= 1
(4.1OOa)
IVYs(r)l= 1
(4.1OOb)
which signify that the rays of these fields propagate in straight lines [35] in
the direction of the unit vector
s=w
(4.1Ooc)
The Formal Approach
113
[Note that in the conventional treatment, the right-hand side of (4.100a)
equals “no” rather than unity. In our analysis, this is already taken into
account by the definition (4.97b), in which k refers to the propagation
constant inside the medium. Also note that the s notation for ray directions
and lengths should not be confused with the sound field s ( ~ , t ) nor
,
with the
subscript “S” that refers to the sound.]
Setting the coefficient of (Ilk) equal to zero eventually leads to the
transport equation for the field strength along the rays. In the case of Ei, it
can then be shown readily [35] that
le
IEi(s)l=(Ei(s,)Iexp -0.5
V 2 Y i ( s ds
)
1
(4.101)
where S is the distance measured along the ray from an arbitrary point so.
Equation (4.101) essentially expresses conservation of energy ainray pencil.
Returning now to (4.98) and following the same approach,we first obtain
IVYl(r)l=l
(4.102)
expressing the fact that the diffracted rays also propagate in straight lines.
To find the transport equation for lE11,we again select the coefficient of
(llk), but this time equate to
it the driving term ratherthan to zero (as in the
case of Ei):
IEl(r)lVzYl(r)+2W~(r).VIE,(r)l
= -OSjkqS(r)lE,{r)(exp[-jA(r)]
(4.103)
Using the distances along the diffracted ray ratherthan the position vector
r, we write
so that (4.103) becomes
IEl(s)lV2Y(s)+2alEl(s)llas
= -0.5jkqs(s)llEl(s)lexp[-jAOl
(4.105)
As pointed out before, without the right-hand driving term the solution
would be of the form (4.101).In our case, we shall try a variation
[ e
IEl(s)l = IE,’(s)lexp -0.5
1
V2Yl(s)ds
(4.106)
114
Chapter 4
where the coefficient of the exponential term isno longer a constant, and so
is an as yet undetermined pointon the diffracted ray.
Substituting (4.106) into (4.105), we find
(4.107)
from .which
IE;(s)l= ”0.25j k C r IS(s)llE,(s)lexp[-jA(s)]
sb
1
(4.108)
V2yli(s’)ds’ ds+E,(sb)
where Sb is another point on theray. A logical choice is
(4.109)
so that (4.108) becomes
IE;(s)I = -0.25jkCS’a~~(s)lJE,cs>lexP[-iAcs,l
[d
x exp 0.5 V2yli(s’)
I
d
r
‘ ds
(4.110)
From (4.99) it will be clear that, when k, Kern, the term exp[-jA(s)] will
oscillate infinitely- rapidly. Mathematically, therefore, (4.1 10) may, in the
limit, be solved exactly with the method of stationary phase [13, 231. The
stationary phase pointA may be found by solving
(4.1 11)
which may be written
Vyl1(s)*VA(s)=O
(4.112)
With (4.99) and (4. 1 0 0 ~we
) ~ find for (4.112)
s1-(Kss+ksi)=k
(4.1 13)
where ss, Si, and SI are unit vectors in the direction of the sound ray, the
incident light ray, and the diffracted lightray.
115
The Formal Approach
It may be shown [22,23] that the only solution to (4.1 13) satisfying the
“one ray through one point” conditionis given by
In (4.1 14) we recognize a local wave vector triangle, as is obvious from the
construction in Fig. 4.12. Thus, (4.114) defines a locus of ray-crossing
interaction points and hence proves the validity of the heuristic method
illustrated in Fig.4.10. We will return to the actual use of the method later.
It next remains to calculate the amplitudeof the diffracted ray. Let us, for
simplicity, equate the pointsa on the diffracted ray with the stationary phase
point A shown in Fig. 4.12. According to the method of stationary phase
[23], the solutionof (4.1 10) is then given by
IE’l(s)l=O
(4.1 15)
for scs,
and
(E;(s)(= -0.25 jkC(s(sa)~~Ei(sa)lexp[-j,4(sa)-$]
(4,116)
for s>sa and (C32Al~s2)sa>0.
For )E’ ](S)] to be a positive real quantity, itis necessary that A&)= -3d4
if C>O and + d 4 if CcO. The phase of E‘] then follows directly from(4.99).
Hence, at the interaction point A , the amplitude of the generated diffracted
ray is given by
(4.1 17)
after which the amplitude varies along the
ray according to (4.106).
If, at the stationary phase point, ~3~A(s)lC3s~<O,
it may readily be shown
that
”z
A($@)=-
4
3?r
4
i f C > O or - i f C C 0
(4,118)
116
Chapter 4
Figure 4.12 Wave vector triangle diagram in localinteraction. (From Ref 33.)
and
The case of downshifted interaction maybe treated similarly, the only
difference being that ys(r)in (4.99) is to be replaced by -ys(r)with a
subsequent change of(4.1 14)to read
-Kss+ks*=ks-1
(4.120)
as is to be expected.
We started out with the theory sketched from above (4.32) that applies to
three dimensions. It is clear that by starting from (4.33), entirely analogous
results would have been obtained in two dimensions, through replacing
r by p.
It is of interest to briefly consider an actual application of the acoustooptical eikonal theory. Figure 4.13 shows a two-dimensional configuration
in which a cylindrical wave of sound interacts with a ray of light at the
stationary phase point A . It is assumed that the Bragg angles 4~ are very
small so that to all intents and purposes the S axis (the S here not to be
confused with the complex sound amplitude) coincides with the
X axis, i.e.,
a/aEa/ax.
117
The Formal Approach
Figure 4.13 Two-dimensional upshifted interaction diagram used to calculate
amplitude of ray diffracted by curved wavefront ofsound.
According to (4.110), we must now calculate a2A/ax2.From (4.93) it is
clear that, apart from multiplicative factors, A(p) contains two plane
wavefronts Yl(p), Yz{p)and one curved wavefront Ys(p).For the first two,
the curvature a2Y/ax2is evidently zero. The sound eikonal function is given
bY
(4.121)
Ys(P)=P
and, in the neighborhood of A , may be written
0 . 5 ~ ~
R
Ys(x)
= ( R 2+ x ’ ) ~ . ’ R + -
(4.122)
Hence,
(4.123)
Chapter 4
118
With (4.123), (4.99), and (4.117), we find readily the diffracted light
amplitude at A
If we compare (4.124) with the weak interaction version( v a 0 ) of (3.1 lo), it
is easily seen, with v=kCLIq2, that we may define an effective interaction
length LI
It will be intuitively clear that the eikonal theory becomes less accurateif
there are significant variations in sound amplitude over the length LI, for
instance, if the curved sound wavefront is truncated
to a length smallerthan
LI. More details may be found in Ref. 34. Note that in the pure geometric
optics,
limit
ha0 and
the
effective interaction length
becomes
infinitesimally small, so that we may now speak of interaction points.
=m.
4.10
STRONG INTERACTION WITH CURVED SOUND
WAVEFRONTS
Having at our disposal a Feynman diagram theory for strong interaction
(with path integrals that are difficult to evaluate) and an eikonal theory of
weak interaction, it is reasonable to ask whether these two approaches could
be combined into an eikonal theory for strong interaction. The first step
toward this goal was taken by Pieper and Korpel [34], who assumed single
Bragg interaction with a curved sound wavefront similar
to Fig. 4.13, but for
downshifted interaction. Their general configuration is shown in Fig. 4.14.
Downshifted interaction isinvolvedwith
the point shown the only
interaction point on the wavefront. The different Feynman diagrams leading
from the zeroth order back to itself or to the - 1 order are shown in
Fig. 4.15. The sound is modeledas
S(x, z)= S(z) = So exp
(4.125)
where L is the length of the interaction region
(z=O to z=L), and
{=(.-g)
(4.126)
119
The Formal Approach
Figure 4.14 Ray interaction with curved sound wavefront. (From Ref
34.)
0
1
, I
- 1
I
I
I
I
I
I
I
I
2
3
21
22
23
L
Figure 4.15 Feynman diagrams for generation of the -1 order. (From Ref. 34.)
Chapter 4
120
It is assumed that the incident light beam is narrow
and the incident angle
small, so that within the interaction region the curvature of the sound
wavefront is substantially constant. In other words, the sound field is a
phase-profiled column. The coupling factors S21 and S6 that mediate the
transition from orders -1 to 0 and 0 to -1 (see Secs. 4.6 and 4.7) are given
bY
S2 1 =Soexp(+&)
(4.127)
Sij=Soexp(-&)
(4.128)
where
(4.129)
Let us now lookat a typical Feynman diagrampath integral, for example,
the one labeled3 on Fig. 4.15. With the notation of (4.88), we may write for
the path integral
4
(4.130)
J
x exp[-jy(z)]dz dz, dz2
0
We now apply the stationary phase method (Appendix B) to the first
integral (in z) of (4.130). With (4.128) it is easily found that the stationary
phase point z2 is given by
(4.131)
As can be seen from Fig. 4.14, this represents exactly the local interaction
point arrived at by ray tracing. The value of the first integral
may be readily
found by applying the stationary phase method with
(4.132)
121
The Formal Approach
where U(ZI-Z,) is a unit step function, indicating that the stationary phase
point must be within the integration interval (z1>zs) for the integral not to
vanish.
Now, if (4.132) is substituted into (4.130) and the integral over ZI is
evaluated on a stationary phase basis, we'll find the same stationary phase
point as before, and
pexP[+iv(Zl)l~exP[-iv(z)ldz
0
d.1 =
im ~ ( O ) ~ (- zG ,>
0
1
=-A&U(z,-z,)
(4.134)
2
where we have used the property U(O)=1/2. Proceeding in this manner it is
easily shown from(4.130) that
l
x exp[-jyl(z)]dz dz, dz2
(4.135)
0
By summing over all odd-numbered paths 1, 3, 5, etc., in Fig. 4.15, we find
finally
so that
where
(4.137)
and v=2aS0L is the Raman-Nath parameter.
122
Chapter 4
In the same waywe can evaluate the sum of the integrals involving all
even numbered paths to obtain
(4.138)
Note that IE-1I2+Eo2=1. Note also that the phase of the (conjugate)
sound fieldispreservedin
the diffracted light through thefactor
exp[-jw(zs)]. For b e 1 (using order+ 1rather than - 1and setting z,=O as in
Fig. 4.13) we again find the expression for lE-11 (4.124) of the previous
section.
In Ref. 36 the concept of local interaction is heuristically extended to
multiple interaction points. A typical configuration is shown in Fig. 4.16.
Interaction points A, B, C, and D are found by simpleray tracing
techniques, discussed before. The converging sound field is modeled as a
phase-profiled beam with constant phase curvature
(4.139)
i
Figure 4.16 Multiple interactionpointsoncurvedsound
Ref. 36.)
wavefronts. (From
123
The Formal Approach
where y = K h is the edge phase as shown in Fig. 4.16. As with a flat sound
wavefront, a Q factor is defined as
Q = -K
~ L
(4.140)
k
It will be seen from Fig. 4.16 that the interaction points are separatedby
2Rqb. The number of potential interaction points is thus given by L / ( ~ R ~ B ,
which may be shown to equal 8lyl/Q. The number of actual interaction
points and their locations depend on the angle
of incidence. If, for example,
the incident ray is rotated clockwise, then the interaction points are as well,
as shown in Fig. 4.17.
It is assumed in Ref36 that the interaction points are independent
if their
separation 2Rqb is substantially greater than the effective local interaction
lengthThismaybeexpressedbythe
condition that the separation
divided by the interaction lengthis much greaterthan 1:
(4.141)
It is further assumed that the interaction at independent points is
determined by a diffraction efficiency 7 based on (4.136):
Ix
I
I
I
I
\\\
sound
Figure 4.17 As Fig. 4.16, but with oblique incidence of the light.(From Ref. 36.)
Chapter 4
124
(4.142)
The heuristic conjectures of Ref. 35 were confirmed by simulation with an
effective numerical algorithm to be discussed in Sec.5.8.
Figure 4.1 8 shows a simulation for a configuration with Q=50, ty=50,
v=27r, and qhinc=-4@B. There are 8 potential interaction points-the
interaction length L=16Ro@~-and the spacing between them equals the
in (4.141)]. In spite of this the
effective interaction length [i.e., Q/4*=1
interaction points appearto be largely independent, as the measured powers
in the different orders are in close agreement with (4.141). Note that the
“growth length” of each order roughly equals their separation, and hence
equals the effective interaction length. In Fig. 4.19 the same parameters
apply, but the incident ray is rotated counterclockwise by 8 @ ~The
. shift of
the interaction points in Fig.4.19 is found to have the same value (one-half
as conjectured.
the interaction lengthof 16@~)
References 37 and 38 give a rigorous treatment of multiple plane-wave
interaction and prove that the geometrically determined points coincide with
the stationary phase points in the corresponding path integrals. It is also
shown that certain Feynman diagrams are forbidden, namely those that
would make rays go backward through interaction points already passed. A
0.20
0.18
L
m
C
0)
.-c
0.16
0.14
0.12
0.10
0.00
0.06
0.04
0.02
0.00
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.0 0.9 1.0 1.1 1.2
Normalized interaction length
Figure 4.18 Simulationresults for configuration of Fig. 4.17 with 4nc=-44B.
(From Ref. 36.)
125
The Formal Approach
Normalized
interaction
length
Figure 4.19 As Fig. 4.18, but with & = + 4 4 ~ . (From Ref. 36.)
typical allowed Feynman diagram representing progressive scattering into
the fourth order is shown in Fig. 4.20. Reference 37 also gives some more
comparisons between the eikonal-Feynman theory .and the numerical
algorithm of Sec. 5.8. Two of these, applying to the configuration of Fig.
4.17, are shown in Figs.4.21 and 4.22.
In summary, the central feature of the eikonal-Feynman theory is the
multiple interaction at local interaction points. It would appear that
extension to the general case should be relatively straightforward in those
cases where phase curvature can be defined and the interaction points are
separated by at least one effective local interaction length. A necessary
condition is, of course, always that the total interaction length L be smaller
(This condition isobviously
than the local interaction length
flagrantly violated for flat wavefronts of sound, as in a sound column; the
eikonal theory cannot be used in such cases.)
mo.
4.11 VECTOR ANALYSIS
So far we have tacitly assumed that our equations could be writtenin scalar
form. In most treatises, this has conventionally been the case, although some
important historical exceptions exist [39,40]. The aim of the present section
is to derive the theory of acousto-optic interaction by starting from
Maxwell’s equations, and then discuss under which condition the scalar
Chapter 4
126
0
W
k steps
all at z0
j steps
Figure 4.20 Typical allowed Feynman diagram for progressive scattering into the
fourth order. (From Ref. 38.)
1.o
7
0.8
-
0.6
5
g
0.4
..$a.
-
0.2
2
0.0
0
z
0
(1
10
20
30
Raman Nath parameter v
Figure 4.21 Powerin - 1 orderatthe exit of theconfiguration of Fig. 4.18.
(From Ref. 38.)
The Formal Approach
127
1.o
0
0.8
-g
0.6
al
L
0
L
W
2Q
-W
m
2
0.4
0.2
0.0
0
10
20
30
Raman Nath parameterv
Figure 4.22 Power in order 0 at the exit of configuration of Fig. 4.19. From Ref.
38.)
formulation used so far is correct. Specific polarization phenomena having
to do with the anisotropy of the elasto-optic effect will be discussed later.
Here, we will still adhere to the simple assumption that the change in the
electrical permittivity is a scalar function
of the strain (condensation),i.e., it
is supposed to be independent of the polarization of the electric field, and
causes no cross-coupling of polarization states. Consequently,we shall write
for the time-varying permittivity
E(r, t)=a+&’(r, t)
(4.143)
E‘@, t)=EoCs(r, t )
(4.144)
where
so that
(4.145)
where the subscript r refers to the relative dielectric constant.Hence,
&(r, t)=0.5noCs(r, t )
(4.146)
a relation we have used before in (3.6) and (3.76b).
We shall next assume that the incident field (as it would exist in the
Chapter 4
128
absence of any sound field) is source-free in the interaction region
and hence
satisfies the Maxwell equations
Vxhi(r,t)=Eo-
dt
0
Ai(r,
(4.147)
(4.148)
Vqei(r, t )=0
(4.149)
V-hdr, t)=O
(4.150)
where we have assumedthat b = p V .
When the soundfield is present, we may think of thetotal field as the sum
of the incident field plus a scattered field
e(r, t)=ei(r, t)+e'(r, t )
(4.151)
h(r, t)=hi(r, t)+h'(r,
(4.152)
t)
The total field must satisfy the relations
d
V x h(r, t ) = -[&(r, t)e(r, t ) ]
(4.153)
Vxe(r, t)= -pdh(r, t)/at
(4.154)
V*[E(r,t)e(r, t)]=O
(4.155)
V-h(r, t)=O
(4.156)
dt
It is relatively simpleto derive the following relations for the scattered field:
w r yt )
V x h'(r, t ) = -
(4.157)
w r y2)
V x e'(r, t ) = -pLy
-
(4.158)
V.d'(r, t)=O
(4.159)
dt
r3r
The Formal Approach
129
V.h’(r, t)=O
(4.160)
W , t)=Eoe‘(r, t)+p’(r, t )
(4.161)
p’@, ?)=E’@, t)[eXr, ?)+e’@, tll
(4.162)
where
It is thus clear that the essential part of the scattered field is caused by an
electric polarization p’ that, in turn, is induced by the time-varying
permittivity d(r, t ) acted upon by the total field.
To simplify matters, we introduce the Hertz vectorn’(r, t ) of the scattered
field, in terms of which bothh‘(r, t ) and e‘(r, t ) can be expressed
&’(h t )
h’(r, t ) = E ~ x
V-
(4.163)
at
e’(r, t ) = V[V d(r, t ) ]- &EO
d2d(r, t )
In order to satisfy (4.157)-(4.160), T‘(r,
V2d(r, t)] -
t ) must
d2d(r, t ) =--p’(r, t )
dt2
(4.164)
at
then satisfy [22,40]
(4.165)
EO
which has the explicit solution
(4.166)
where R=lr-r‘l, l/c=d(p~,dV is a volume element of r‘ space, and the
integration extends over the entire interaction region. Substituting (4.162)
and (4.164) into (4.166), we find readily
130
Chapter 4
Equation (4.167) is an integro-differential equation fore’ and was first used
by Bhatia and Noble in their research [35,42]. A weak interaction
approximation correspondsto neglecting the term withe‘ in the right-hand
side of (4.167). The fact that e’ and ei need not necessarily have the same
polarization is due to the VV. operator. Let us investigate this more closely.
In the two-dimensional configurationswe have most often used, nothing
changes in the Y direction; hence, a/ay=O. If now the polarization of the
incident lightis in the same direction, then
(4.168)
Thus, the VV. operator drops out of the equation,and, according to (4.167),
the scattered light will have the same polarization as the incident light,
namely in theY direction, perpendicularto the plane of interaction.
In a general three-dimensional configuration,
we compare the terms
d2(&’ei)
W
O
(4.169)
z
= k2E‘ei
- pvEOo2&‘ei
dt
(where we have neglected the temporal variation of
with
E‘
relative to that of e;)
V(V*E’~~)=(V~E’)~;+~(V.~~)(VE’)+E‘V(V*~~)
(4.170)
The first term is,at most, of order K%jed and the second one always smaller
than kKEle;l. As it is assumed that K e k , both terms are small relativeto the
a2/at2 term in (4.167) and may be neglected. That leaves us with the third
term, i.e., E’V(V-ei).
We have used the paraxial assumption that the light nominally propagates
in the 2 direction. Because ei must be perpendicular to this direction, V*ei
is-at most-of the order ksinqlej, where $J is the maximum angle a plane
wave in the spectrum ei makes with the 2 axis. Hence, E’V(V*ei)is-at
most-of the order k2sin$Jeil.The paraxial approximation assumes that
sinqM1; hence this term also may be neglected. The same reasoning applies
to e’.
In summary then, for our assumptions of small Bragg angleand paraxial
propagation, theoperator VV. in (4.167) may be neglected irrespectiveof the
incident polarization, and (4.167) reverts to a scalar equation. In the first
Born approximation, we may write
The Formal Approach
131
where we have ignored the time variation
of E' relative to that of e;. Relating
E' to S through (4.145)"using the conventional phasor notation for e', ei,
and S as exemplifiedby(4.30) and (4.31),replacing R by lr-r'l, and
assuming that !2<u-we retrieve (4.37) and (4.38) as is to be expected.
Returning now to (4.165) and adding to it the equation for the Hertz
vector of the source-free incident field
(4.172)
we obtain for thetotal Hertz vector
T(r, t ) =nf{r, t ) + a ' ( c t )
(4.173)
(4.174)
The total field e(r, t ) follows from n(r, t ) in the usualway:
d2n(r,t )
e(r, t ) = V[V x(r, t)]- pveodt
(4.175)
Substituting (4.162) into (4.166), we find with (4.175)
(4.176)
Using the same arguments as before regarding polarization, we may again
neglect the operator VV., both in (4.175) and (4.176), so that, using (4.143)
and differentiating (4.176) twice with respectto time, we obtain
(4.177)
132
Chapter 4
where e(r, t ) now refers to anarbitrary polarization component.Upon using
(4.144), (4.145), and (4.27), it is readily seenthat (4.177) reverts to (4.28).
Insummary then, wehaveseen
thatour previouslyused
scalar
formulation is correct in all respects, providedthat either (1) we have a two
dimensional X - 2 configuration with Y polarization, or (2) that K&, the
sound propagates paraxially in X, and the light in 2.As a historical
footnote, it should be mentioned that the relativeinsensitivity
to
polarization was first pointed out by Brillouin [39].
It should once more be stressed that all of the above, strictly speaking,
only apply when there exists a simple scalar isotropic relationship between
the change in permittivity and the strain. This is hardly ever the case in
practice; even certain liquids exhibitan anisotropy due to localized velocity
gradients. Nevertheless, by suitable choice of crystal orientation and light
polarization, most of the theory developed so far can be applied. By the
same token, contrary configurations may give rise to interesting and useful
polarization effects. We will discuss some of these later.
REFERENCES
1. Klein, W. R.,and Cook, B. D., IEEE Trans., SU-14, 123 (1967).
2. Brillouin, L., Actual. Sci. Znd., 59 (1933).
3. Blanch, G., “Mathieu Functions,” in Handbook of Mathematical Functions (M.
Abramowitz and I.A. Stegun, eds.), Dover, New York, p. 721 (1965).
4. Jahnke-Emde, Tables of Higher Functions, B.G. Teubner, Leipzig (1952).
5. Berry, M. V., The Diffraction of Light by Ultrasound,Academic Press, New York
(1966).
6. Mertens, R.,and Kuliasko, F., Proc. Ind. Acad Sci. A, 67, 303 (1968).
7. Kuliasko, F., Mertens, R., and Leroy, O., Proc Ind. Acad Sci. A, 67, 295 (1968).
8. Plancke-Schuyten, G., Mertens, R., and Leroy, O., Physica, 61,299 (1972).
9. Plancke-Schuyten, G., and Mertens, R.,Physica, 62,600 (1972).
10. Plancke-Schuyten, G., and Mertens, R.,Physica, 66,484 (1973).
11. Hereman, W,, Academiae Analecta (Belgium), 48 (1986).
12. Arscott, F. M., Periodic Diffirential Equations, Pergamon Press, Oxford (1964).
13. McLachlan, N. W., Theory and Applicationsof Matthieu Functions, Dover, New
York (1964).
14. Chu, R.S., and Tamir, T.,IEEE Trans., MTT-18,486 (1970).
15. Papoulis, A., Systems and Transforms with Applications in Optics, McGraw-Hill,
New York (1968).
16. Kyrala, A., Theoretical Physics,W. B. Saunders, Philadelphia (1967).
W. J., “BesselFunctions of Integer-Order,”in
Handbook of
17.Olver,F.
Mathematical Functions (M. Abramowitz and I.
A. Stegun, eds.), Dover, New
York, p. 355 (1965).
The Formal Approach
133
18. Spiegel, M. R., Mathematical Handbook.Schaum’s Outline Series, McGraw-Hill,
New York (1968).
19. Yariv, A., Optical Electronics, 3d Ed., Holt, Rinehart and Winston, New York
(1985).
20. Gordon, E. I., Proc IEEE, 54, 1391 (1966).
21. McMahon, D. H., IEEE Trans., SU-16,41 (1969).
22. Korpel, A., ‘Acousto-Optics,” in Applied Solid State Science, Vol. 3 (R. Wolfe,
ed.), Academic Press, New York,
p. 7 1 (1972).
23. Erdelyi, A., Asymptomatic Expansions, Dover, New York (1956).
24. Korpel, A., J Opt. SOCAm., 69,678 (1979).
25. Korpel, A., and Poon, T.C., J Opt. Soc. Am., 70, 8 17 (1980).
26. Feynman, R. P., Rev. Mod. Phys., 20,367 (1948).
27. Mattuck, R. D., A Guide to Feynman Diagrams in the Many Body Problem,
McGraw-Hill, New York (1967).
28. Schulman, L. S., Techniques and Applications of Path Integration, Wiley, New
York (1981).
29. Poon, T. C., and Korpel, A., J: Opt. Soc. Am., 71, 1202 (1981).
30. Banah, A. H., Korpel, A., and Vogel, R. F., Proc. IEEE Ultrason Symp., p. 444
(1981).
31. Banah, A. H., Korpel, A., and Vogel, R. F., .l
A. Soc. Am., 73,677 (1983).
32. Syms, R. R. A., Appl. Opt., 25,4402 (1986).
33. Korpel, A., “Eikonal Theory of Bragg Diffraction Imaging,” in Acoustical
Holography, Vol. 2 (A. F. Metherell and L. Larmore, eda), Plenum Press, New
York .(1970).
.
34. Pieper, R. and Korpel, A., J: Opt. SOCAm., 75, 1435 (1985).
35. Born, M., and Wolf, E., Principles of Optics, Pergamon Press, New York (1965).
36. Korpel, A., Opt. Eng., 31,2083 (1992).
37. Chen, Y-M., Acousto-optic interaction in arbitrary soundjields, Ph.D. Thesis,
University of Iowa, December 1994.
38. Chen, Y-M., and Korpel, A., .lOpt. SOCAm., 12 (A): 541 (1995).
39. Brillouin, L., Ann. Phys. (Paris), 17,88 (1922).
40. Wagner, E. H., Z. Phys., 141,604,622 (1955); also, 142,249,412 (1955).
41. Stratton, J. A., Electromagnetic Theory, McGraw-Hill, New York (1941).
42. Bhatia, A. B., and Noble, W. J., Proc Roy. SOCA, 222,356 (1953).
This Page Intentionally Left Blank
5
The Numerical Approach
Historically, numerical solutions first emerged for the rectangular sound
column in the region where neither Raman-Nath diffraction nor Bragg
diffraction applies. Most techniques solve a truncated system of RamanNath equations written in a suitable normalized form. We will show this
truncation first and then take a quick look at the subsequent numerical
integration.
If less than six orders are involved, exact solutions are possible,
and this is
also the case for two orders with incidence
at twice the Bragg angle. Higher,
multiple Bragg angle incidence is solved numerically.
A more physics-oriented approach uses successive diffraction along the
lines of Van Cittert’s analysis. In the cascaded Bragg diffraction method,
the interaction region is divided in successive Bragg diffraction modules.
This technique is especially advantageous for phased array configurations.
The split-step method, arguably the most powerful one, uses a modified
beam propagation algorithm and may be applied to arbitrary fields. The
Fourier transform technique is based on a systems approach to the problem
with suitably defined transfer functions. Finally, the Monte Carlo
simulation dependson a quantum-mechanicalinterpretation of the
interaction process.
135
Chapter 5
136
5.1 TRUNCATION OF THE RAMAN-NATH EQUATIONS
Writing
D = -d
dz
the general equation in the Raman-Nath formulation is given
by [see (3.82)]
It should be remembered that (5.3) specificallyrefers to a sound field
Iqsin(Qt-Ki), i.e., S=-~lqor 4=-7d2 [see (3.4)].
If now the truncation is from the
Lth order up to the Mth order with
M30,
LSO,
L+"
N=(M-L+l)
(5.4)
then the truncated systemstarts with the top equation
This is followed by (M-L-1) equations of the type (5.3), after which the
system is truncated with the
bottom equation
The total numberofequations,
N , equals (M-L+1).
conditions for the set(5.3-5.6) are
v401= VncGkO
The boundary
(5.7)
where Sk0=0 if k&, and GM= 1 if k=O.
The equations one finds in the literature are essentially of the type
(5.3-5.6). They may (confusingly) however, vary in detail because of the
diverse conventions and normalizations mentioned before [see remarks
following (3.82)] and also because of the choice of
The Numerical Approach
137
An important property of the truncated set is that
M
Cykty;= constant = wincwi'.c
k=L
indicating the somewhat surprising fact ofpower conservation in an
incomplete system. Equation (5.8) may be proven readily by multiplying
with v k each equation characterizedby (D+jgk)yk, adding to it its complex
conjugate, and finally adding all equations. This leads to c D ( v k v k ) = o ,
k
from which (5.8) follows.
5.2
NUMERICALINTEGRATION
The truncated system may be integrated numerically by letting dz+Az,
dvk+Ayk, etc. This was first done by Gill[l], who also incorporated higher
Taylor corrections inhis program to ensure stability. A more recent
numerical integration without higher-order corrections has been presented
by Klein and Cook [2].They state that for sufficient accuracy the number
of
steps P= L/& in thez domain must satisfy
P> lO0v
(5.9)
and also
(5.10)
where nmaxdenotes the highest order containing any significant amount of
light. In practice, accordingto Klein and Cook, In,l equals approximately2v
for small Q and decreases with increasing Q.Some of the numerical results
from Ref. 2 follow.
Figure 5.1 shows nonideal Raman-Nath diffraction for perpendicular
incidence ( Q = 1, QvS6); Fig. 5.2 illustrates nonideal Bragg diffraction
(Q=5, Qh31.2). Note that in Fig. 5.1 the true zeros, characteristic of the
Bessel function solution for the idealcase, disappear for large v. In Fig. 5.2
the periodic power exchange between the two main orders is no longer
complete, indicating the presenceof additional orders.
Numerical investigation may, of course, also be used
for the generalized
equations (4.78). They have, infact, been so treated by Pieper and Korpel[3]
in the contextof strong interactionwith a converging sound beam.
138
Chapter 5
1.o
0.8
0.6
0.4
0.2
0
0
1
2
3
4
5
6
V
Figure 5.1 NonidealRaman-Nathdiffraction (Q=I, QvC6)forperpendicular
incidence. (Adapted from Ref.2.) 0 1967 IEEE
h
139
The Numerical
5.3
EXACT SOLUTIONS
If the truncated set(5.3-5.6) contains fewer than five independent equations,
exact solutions may be found. This is readily seen from the characteristic
equation of the system obtainedby setting yk= Ckexp(pz) and solving forp .
The resulting equation is, at most, of the fourth degree and hence may be
solved in terms of radicals[4].
The simplest exampleof such a set is that for which M = l , L=O or M=O,
L= -1. This set has two equations, the solutions of which, for h=&$B,
correspond to up- or downshifted Bragg interaction, respectively [(3.103),
(3.104);(3.107),(3.108); $s=-7d2]. Ifdeviatesslightly
from the exact
Bragg angle, the solution is given
by (3.168).
An exact solution for downshifted Bragg interaction,taking into account
orders -2, - 1, 0, + 1, has been given by Jaaskelainen in a study of thick
transmission gratings [5]. Figure 5.3 (solid line) shows the solution so
obtained in the nonideal Bragg case [Q=& Q/vSl]. Note the considerable
deviation from ideal Bragg behavior (dashed line). The open circles data
are
points calculated by the present author using Klein and Cook’s numerical
integration scheme [appliedto (4.78)] with 1000 Az slices and 17 orders.
1.0
-
0.8
-
0.6
-
0.4
-
0.2
-
1, /I;
V
Figure 5.3 First-order diffraction in the nonideal Bragg region @=S, Q/v>O.5,
solid line) compared with the ideal case (dashed line). The open circles are data
17 orders
points calculated by the present author using numerical integration with
present. (Adapted from Ref.5.)
140
Chapter 5
A calculation similar to the one above was performed by Blomme and
Leroy [6] for incidence near the Bragg angle. In previous investigations, these
same authorshad treated oblique incidence for four orders [7], and
perpendicular incidence for five [8] and seven orders [g]. In the latter two
cases, the number of independent equations turns out to be three and four,
respectively, due to symmetry conditions pertainingto normal incidence, i.e.,
y-n=(- 1)nVn.
5.4
MULTIPLE BRAGG INCIDENCE
It has been found that appreciable power transfer between two orders is
possible not only for Braggangleincidence, but also for incidence at
multiple Bragg angles. If the light impinges at an angle of'm+B, then a
strong interaction appears to exist between orders 0 and -m, or 0 and +m,
respectively. Naturally, the intermediate orders must mediate this power
transfer; a direct m-phonon interaction would require the mth harmonic of
the sound frequencyto be somehow generated, a process that is excluded in
the experiments and numerical simulations showing this effect.
Consequently, a satisfactory physical interpretationof the phenomenon is as
yet lacking, although various exact solutions have been obtained. Alferness
[lo] analyzed the diffraction efficiency of thick holograms operating in the
second-order Bragg regime and concluded that 100% diffraction efficiency
was possible. The analogous case for acousto-optics was treated by Poon
and Korpel [ll], whousedFeynman
diagram methods and took into
account the orders 0, 1, and 2 (i.e., M=2, L=O). For incidence at the angle
y 0 = - 2 @ ~their
,
results were in agreement with the analytic solution of the
truncated set obtained by Alferness. The results may be summarized as
follows:
(5.1 1)
{
K)
I2=0.25 ( c o s ~ - c o s ~ +
) 2 - sinx-sin<
11
(5.12)
(5.13)
where
(5.14)
h
141
The Numerical
5'7 Q
(5.15)
Complete transfer of power into the second order is only possible for values
of
Q=4mz, (5.16)
m is an integer
]
0.5
v=2z( 4n2- m2
formodd
(5.17a)
(5.17b)
where n is a positive integer.
It is interesting to note, inpassing, that the requiredvalues of Q
correspond to the incident ray's intersecting exactlym wavelengths of sound.
The exact expressions (see [ll]) indicate that for other values of Q not
exactly equal to 4mz, an appreciable amount ofenergymaystillbe
transferred. Figure 5.4shows theoretical (solid line) and experimental
(circles) maximum diffraction efficiencies for the acousto-optic case [l l].
Figure 5.5 illustrates diffraction efficiency as a function of v for Q=14. In
both cases, there is a reasonable quantitative agreement between theoryand
experiment. It is intriguing that, as follows from Alferness' data, the + l
order, although nonexistent at the sound cell exit, reaches an intensity of
about 25% inside the cell.
Benlarbi and Solymar [l21 have givenan interesting analysisof still higher
Bragg angle incidence. They use a truncated system with M=O and L= -n.
Making somewhat ad hoc assumptions about slow spatial variations in yo
and W-,,,
they arrive at two first-order equations coupling these quantities.
In our terms, their solution is given
by
zo=cos2(o.5~q~v)
I-,=~in~(OS)qlv)
(5.19)
where
(5.20)
Chapter 5
142
0.2
0.4
14
12
0
I
1
10
1
1
1
1
1
1
16
'
Q
1
18
1
1
20
1
1
22
1
1
24
Figure 5.4 Maximum diffraction efficiency of second-order interaction as a
function of Q. The solid line indicates the theoretical values, the open circles are
experimental data points. (Adapted from Ref. 11.) 0 1981 IEEE.
0.6
t
t
l
/'
'L
0
0
V
Figure 5.5 Diffraction efficiency of second-order interaction as a function of v
for Q=14. The solid line indicates the theoretical values, the open circles are
experimental data points. (Adapted fromRef. 11.) 0 1981 IEEE.
The Numerical Approach
143
For n= 1, we do indeed find back the conventional Bragg diffraction
behavior. For n=2, the expressions do not appear to agree with those of
Alferness [lo] or Poon and Korpel [l l], although the position of the first
maximum is almost the same. It is claimed that the findings do agree with
the results of Chu and Tamir [13], the latter results being derived in quite
a
different manner. Also, the present author’s numerical simulations appear
to
agree fairlywell with the theory.
5.5
THE NOA METHOD
The process of solving the set of equations
(5.3-5.6) has been formalized by
Hereman, Mertens, and Ottoy for normal light incidence [l41 and oblique
incidence [15]. They gave it the nameNOA method for nth order
approximation, indicating the fact that their set is always truncated by
orders -N and N, i.e., M= N, L= -N.
For perpendicular incidence, the system is simplified because of the
symmetry conditions mentioned before, i.e., ty,=(-l)nty-,. The essence of
the method consists ofassuming
solutions of the kind C,exp(jsz).
Substituting this into the set (5.3-5.6) will lead to N+ 1 simultaneous linear
algebraic equations forC,. For this systemto have a nontrivial solution,it is
necessary that
+
Det(M-sr)=O
(5.21)
where M is a matrix formed from the coefficients of the algebraic equation
set, and I is the unit matrix. Expression(5.21) leads to an equation of degree
( N + l ) in S, the roots (eigenvalues) of which are found numerically. The
complete solution then followsin the standard wayby
finding the
eigenvectors corresponding to the eigenvalues [l61 and imposing the usual
boundary conditions. A variant of the technique [l41 usesHeaviside’s
operational method [l71 and is claimed to be 25% faster in regard to
computer time.
Figure 5.6 compares NOA (N=7) results for Q=1.26 (solid line) with
experimental results (circles) obtained by Klein and Hiedemann [l 81, and
with ideal Raman-Nath behavior (dashed line). It will be seen that there is
excellent agreement.
The NOA technique is relatedto a method developed earlier by Leroyand
Claeys [19], who used Laplace transforms in combination with a matrix
theory to arrive at an expansion in Bessel functions. Laplace transforms
have also been used by Poon and Korpel [20] for the calculation of path
144
Chapter 5
0
2
4
v
(i
8
10
Figure. 5.6 NOAzerothorderpredictions far Q'1.26 andnormalincidence
(solid line), compared with experiment (circles) and ideal Raman-Nath behavior
(dashed line). (Adapted from Ref14.)
integrals in the Feynman diagram approach, The m t b d ofchoice,
however, appears to be the NOA eigenvalue technique or its operational
variant.
5.6 SUCCESSIVE DIFFRACTION
This method, developed by Hargrove [21], is a filiitc increment version of
Van Cittert's cascaded thin grating analysis [22] discwsed in Sec, 3.2. The
increments Az are chosen small enough that for each thin grating, the
Raman-Nath Bessel function solution applies. The orders generklted by a
particular grating are rediffracted by the next one, A disadvantage of the
method is that Bessel functions have to be handled by the computer.
Nevertheless, the technique appears to give results in good agreement with
experiment. Concerning these comparisons, Hargrove makes the interesting
observation that usually only agreement in order intensity is looked for, not
in relative phase. The phases are, however, veryimportant if one is interebted
145
The Numericdl Approach
in the total light Aeld at the exit of the sound cell and, in fact, may well
determine its overall shape. Hargrove gives
an example in which, forQ=0.31
and u=4, the calculated intensities agreeto within 1% with those obtained in
the pureRaman-Nathcase.
There are,however,marked deviations in
relative phase. The effect of this becomes clear when all the orders are put
together at the exitof the sound cell to give the overall intensity distribution
IT, as shown in Fig. 5.7. It is obvious that the fieldis not just phasecorrugated as it would be (by definition) for the ideal Raman-Nath case. In
fact, considerable intensity modulation is seen to occur. Notice that the
highest intensity peak occurs at x=11/4, i.e., at the point where the sin(&)
sound field is a maximum and the rays are focussedby refraction.
The successive diffraction methodjust
discussedwasdeveloped
independently by Hance [23], who gives an interesting proof in which, in the
limit of Az+O, Q+O, the classical Raman-Nath solution is retrieved.
Pieper and Korpel [24] developed a successive diffraction method using
increments thick enough to satisfy Bragg conditions. In the next section we
will elaborate on this method.
x /A
Figure 5.7 Intensitymodulation of theemergingwavefrontfor
Q=0.31, v=4,
calculated by the successive diffraction method. (Adapted from Ref.
21 .)
Chapter 5
146
5.7
CASCADEDBRAGGDIFFRACTION
Following Ref. 24 we assume near Bragg diffraction into the -1 order. The
angle of incidenceqk, differs from the exact Bragg angle#B by an angle A$:
qk,O=$B+A@
(5.22)
We have treated a similar case for the+ l order in Sec. 3.2.3 In the present
case we find that
(5.23)
Substituting (5.22) into (3.77), and limiting ourselves to orders - 1 and 0, we
obtain the following coupled equations:
dE-l
= - j u s * Eo exp(jKiA4)
dz
(5.24)
dE0 = -jaSE-, exp(- jKzA4)
-
(5.25)
dz
It shauld be remembered that the phase reference in(5.24) and (5.25) is at
the beginning of the first sound cell. Because
we are dealing with a cascading
of sound cells, it is more convenientto use a sliding phase reference pointz
for the solutions, so that at the input of the next cell we automatically have
the correct phase reference.In Ref. 24 it is shownthat the coupled equations
for the ithcell then become
dE-li
A4 E-li
- -jusi * Eo, - j K dzi
2
"
(5.26)
(5.27)
where the point Zi=O denotes the left (input) edge of the ith cell. Solving
(5.26) and (5.27) subject to the appropriate boundary conditions, it may be
shown [24] that the ith cell is characterized by a transmission matrixTi:
(5.28)
The Numerical Approach
147
where
rill=cos(yiLi)+j-sin(yiLi)
(5.29)
si
riI2
= -ja-sin(yiLi)
(5.30)
2Yi
Yi
S; *
ri21
= -ja-sin(yiLi)
(5.31)
Yi
riZ2
= cos(yiLi)-j-sin(yiLi)
KA@
2Y i
(5.32)
where
(5.33)
The authors apply their matrix method to a beam steering configuration
where the sound fieldsSi in each cell form a phased array. Figure 5.8 shows
the frequency behaviour of a four-cell beam steering array (solid line) and
compares it with a simulation using a multiple-order numerical technique
(dashed line). The Q of each of the cells equals 2n at the center frequency,
and the Bragg angle condition is exactly satisfied at that frequency (F=l).
Figure 5.9 shows the behaviour of the same device withan angular offset of
0.1 Bragg angle at the center frequency.
The accuracy of the method is foundto increase with the number of cells
used.
5.8
THECARRIERLESSSPLIT-STEP METHOD
This method is a modified form of the beam propagation or split-step type
Fourier transform technique [25,26]. In the latter method the medium is
divided into many thin slices Az, and the field is propagated twice, in a
space-marching fashion, through each slice.
In the first step, the mediumisconsidered to be homogeneous and
diffracting. Consequently this step involves propagating the local plane-wave
spectrum through the distance Az and calculating the propagated field from
the propagated plane-wave spectrum.
148
Chapter 5
F
Figure 5.8 Variation of diffracted intensities (orders0 and - 1) with normalized
frequency F for a four-cell beam steering phased array, as obtained by matrix
multiplication (solid line) and numerical integration(dashed line), (FromRef. 24.)
In the next stepthe slice is cansidered to be inhomogeneous and
nondiffracting, i.e., it is treated as a phase filter, The field emerging from this
second transit isthus obtained by multiplying the previous propagated field
by the phase function that represents the pertubation of refractive index.
The final field is taken to the next d
c
ie and the process repeated until the
entire interaction region has been traversed. A compact mathematical
treatment of this process may be foundin Ref. 27.
In two dimensions the technique may be described by the following
operator equation [28]:
where M is the phase filter function describing the inhomogeneity:
M=exp[-jk,&(x, z)Az]
(5.35)
149
The Numerical Approach
-
Figure 5.9 Variation of diffracted intensities (orders 0 and 1) with normalized
frequency F for a four-cell, beam steering phased anay, the
withincidence light offset
by 0.1 Bragg angle. M refers to matrix multiplication andD to numerical integration.
(From Ref. 24.)
and &I represents the pertubation of the refractive index. The multiplier
denotes the so-called plane-wave propagator and may be written as
H = exp(-jk,Az)
H
(5.36)
where
(5.37)
for paraxial propagation. The extension to three dimensions is straightforward.
150
Chapter 5
The algorithm just described has been applied to the propagation of a
beam in a grating [29]and could, of course, be used in the case where the
grating is a sound field.
A simplification of the method for sound fields has been described by
Venze, Korpel, and Mehrl [28].They apply a priori knowledgeof the
properties of the interaction in the individual slices
to speed the execution of
the algorithm. It is known that, to a first order, in a thin phase grating the
light is split into three orders that travel the small distance Az to the next
slice. The authors now use the well-known fact that the three orders are
separated in direction by twice the Bragg angle. This makes it possible to
ignore the spatial carrier of the sound and concentrate on the envelope
(profile) ofthe sound only.
Because of the greater generality of the method, we shall discuss it in
28.
some detail, following the (two-dimensional) treatment outlined in Ref.
The pertubation 6n of (5.35)is, in the case of sound-induced refractive
index variations, a function of time and space:
&(x, z, t)=C’s(x, z, t)
(5.38)
where C ‘ = - O . ~ ~ ~ O ~ = O . ~as~defined
O C , before, and s(x, z, t ) is the (real)
sound amplitude.
S(X,z, t)=O.SSe(x, z)exp( - j f i ) exp(jf2t)
+0.5Se*(~,Z) exp(jfi) exp(-jQt)
(5.39)
where S e is the complex envelope ofa sound field propagating nominally in
the X direction.
A snapshot of the sound field is usedat t=O (other times may be chosen;
We’ll return to that issue later)so that, substituting (5.38)into (5.39),we find
&(x,z)=OSC’S,(x, z) exp(-jfi)+OSC’S,*(x, z)exp(jfi)
(5.40)
The expression forM may be approximated by
(5.41)
M=l -jk,&(x, z)dz
if we make the assumptionthat Ik,Sn(x, z)dzle 1.
Substituting (5.40) into (5.41), we find for the first operation ME(x, Z) in
(5.34)
ME(x, z)=E(x, z)[l -0.SjkvC’AzSe(~, Z) exp(-jfi)
-O.Sjk,C‘&S,*(x,
z) exp(jfi)]
(5.42)
151
The Numerical
The next operation to be performed in the split-step method to
is take the
Fourier transform of (5.42). Use is made of the following property of the
Fourier transform:
.r-'[g(x) exp(-jKi)]=.r"[g(x)]
with
k,+k,-K
(5.43)
We shall denote this by . ~ - , r - ~ [ g ( xwhere
)]
.?" is a "shift operator" that
replaces k, by k,-K. Similarly, the shift operator replaces k, by k,+K.
Using (5.43) in the evaluationof the next step ,F-'ME(x, z) in (5.34), we
find from (5.42)
.S/?+
Y - ~ M E ( Xz)=.F-'[E(x,
,
z)~-o.~~~"~c'.v-.~-'[E(x,
z)Se(x, z)]
- 0 . 5 j k , ~ C r . v + . r - ' [ E ( xz)Se*(x,
,
z)] (5.44)
Refemng back to (5.34), the split-step method may nowbe completed by
multiplying the propagation factor H and then performing the forward
Fourier transformon the result.
The method is readily implemented on a computer. A flowchart of the
main propagationloop of the program is shown in Fig.
5.10.
next AZ
4
Figure 5.10 Main propagation loop o f the carrierless algorithm. (From Ref'. 28)
152
Chapter 5
Figure 5.1 1 shows the simulated evolutionof a Gaussian beam incidentat
the negative Bragg angle ona sound column of width z=L. The maximum
sound amplitude corresponds to a Raman-Nath parameter v=a and the
Klein-Cook parameter Q equals 13.1.
Note the depletion of the center of the zeroth order at the center of the
sound cell, asto be expected at v=z.
Figure 5.12 shows data points at the exit of the sound cell and compares
them with the solid line that is the result of an analytical calculation (see
Sec. 3.3.4 and its applicationto a Gaussian beam in Sec.6.2).
The authors of Ref. 28 also performed physical experiments
in addition to
simulation and numerical calculation. The results show very good mutual
agreement.
It should be pointedout that, although three orders are
used in each small
step Az of the algorithm of Fig. 5.10, the final result will show all orders
generated. This is demonstrated in Ref. 30 which treats interaction with a
cylindrical sound wavefront. Figure5.13 is taken fromRef. 30 and shows the
Figure 5.11 Simulation plot of the intensity of the angular spectrum of the total
28.)
field at various positions along the interaction length. (From Ref.
The Numerical Approach
153
Figure 5.12 Analytical and simulation plots of the angular spectrum intensityof
the total field at the exitof the sound cell, whereV=T, Q=13.1. (From Ref.28.)
0.50
0.45
0.40
0.35
0.30
0.25
0.20
0.15
0.10
0.05
0.00
0.0 0.1 0.2 0.3 0.4 0.50.6 0.7 0.8 0.9 1.0 1.1 1.2
Normalized
interaction
length
Figure 5.13 Simulatedevolution o f threeorders of diffractedlightinthe
interaction with a converging sound wave. (From Ref30.)
Chapter 5
154
evolution of three diffracted orders in the strong interaction of a beam of
light with a converging sound field. Note that the orders appear to be
generated at specific positions within the interaction regions, in the order
-1, + l , and +2.
In summary, the carrierless split-step method is, when extended to three
dimensions, a very powerful simulation technique for strong acousto-optic
interaction of arbitrary fields. If the sound field is known only at the
transducer surface, the conventional split-step method may be applied to
calculate the sound S(x,z) necessary for the carrierless algorithm.
Finally, a word about the time dependence of &(x, z, t ) , which was
eliminated in the foregoing by setting t=O. Other values may be chosen to
show the (periodic) evolution in time of the diffracted field. In general, with
well-separated orders, this should not show anything new-other than that
the orders are shifted in frequency. Where orders overlap, the beat notes
between the order frequencies should show up, but this has not yet been
demonstrated.
5.9 THE FOURIER TRANSFORM APPROACH
This technique, developedby Tarn and Banerjee [31-331, uses the notion of
a slowly varying transverse profile (envelope) of the light field. Thus the
phasor of the rnth order optical field at any point in the sound cell is
represented by
&(x, z)= ye,m(x, z) exp(-jkz
COS
Qim-jkx sin 6 )
(5.45)
where Qim denotes the nominal direction of the mth order, and
itsslowly
varying profile (envelope).The incident light atz=O is describedby
and
sin
= sin &,c
K
+k
(5.47)
Now, rather than working directly with the profile of the light orders, the
authors introduce the local spectrum
of these profiles:
(5.48)
h
155
The Numerical
ea
w e ,m
=.
~
~
e
1,=mJye.m(x, z)exp(-jk,x)dx
(5.49)
+
It is now assumed that both and iP change only slowly withz and that the
sound field in the interaction regionmay be written as
exp(-jlux) S(X,z)=Se(Z)
(5.50)
In other words, the sound profile &(z) does not vary significantly with x
inside the interaction region; a profiled sound beam is used as a model. In
most cases this isa satisfactory model.
A final assumption is that the configuration is such that Bragg operation
can be assumed, i.e., only two orders (0 and - 1 in this case) interact.
With the above assumptions, the authors find, upon substituting the
various expressions in the waveequation
(5.51)
(5.52)
The boundary conditions pertainingto (5.51) and (5.52) are
Ye,o(kx, O)=yinc(kx)
(5.53)
ye,-l(kx,o)=O
(5.54)
In (5.51) and (5.52) the first terms on the right-hand side express the
propagation of Ye,o and Y~,-Iin the absence of the sound. Thus, these terms
describe the effect of diffraction. The second terms describe the interaction
with the sound.
In Ref. 33 the authors apply their formalism to interaction of a Gaussian
light beam with waist W O with a diverging Gaussian sound beam, which in
the interaction regionis approximated by
Chapter 5
156
where W denotes the waist of the sound beam, and R its radius of phase
curvature.
Substituting (5.55) into (5.51) and (5.52), the authors then solve the
equations numerically by the Runge-Kutta method. The fixed parameters
are w0=2h, W = q Q=8,f,,,,d=24 MHz, h=0.128 mm and A=0.632 p.
Figure 5.14 shows (a) the profile
of the zeroth orderand (b) that of the - 1
order as a function of a (=v, the Raman-Nath parameter) and Xlwo at the
exit of the sound cell, for a radius of curvature
R=40 m.
Figure 5.15 shows a similar case withR=4 m, and Fig. 5.16 with R= 1 m.
Note how with R=40 m, the behavior with v is very close to that of the
conventional case. Maxima of the - 1 order appear at v=lt and 3n, minima
at 2n and 4n. For the smaller radii of curvature R=4 m and R = l m, the
periodic behavior has disappeared, and the first maxima in the - 1 order
appears for increasingly larger values
of v.
5.10
MONTE CARLO SIMULATION
A s pointed out in Sec. 4.8, two-dimensional strong acousto-optic interaction
of arbitrary sound and light fields is, in principle, solvable by the use of
Feynman diagrams and path integrals. Figure 5.17, taken from Ref. 34,
shows the general configuration with the light propagating nominally in the
2 direction and the sound in the X direction. The arrows labeled n- 1, n,
n+l denote the direction of scattered, upshifted orders generated from a
particular plane wave in the plane-wave spectrum of the incident light. The
dotted lines, bisecting the angles between consecutive orders, are called
Bragg lines. As explained in Sec. 4.7, they play an important role in the
scattering process in that the sound field along their length mediates the
interaction between neighboring orders. The formalism expressing this is
given by the recurrence relations (4.74) and (4.75), where, with reference to
Fig. 5.17, we have
S+,- 1=S(&-
(5.56)
1,z)
S-n+l=S*(Xfl+l,
z)
(5.57)
The constant a is a material constant:
4
1
1
a=-kC=--kn
4
2p
(5.58)
The Numerical Approach
157
Figure 5.14 Profile of the (a) zeroth order and (b) -1order as a function of a
(=v, the Raman-Nath parameter) andXIWOat the exitof the sound cell, for a radius
of curvature R=40 m. Other parameters are wo=2A, W = m , Q=8, &und =24 MHz,
A=0.128 mm, and il=0.632 p m . (From Ref. 33.)
158
Chapter 5
0.00
<
2.09
Figure 5.15 As Fig. 5.14, but with R=4 m.(From Ref. 33.)
The Numerical Approach
159
2.99
Figure 5.16 As Fig. 5.14, but with R = l m.(From Ref. 33.)
160
Chapter 5
Figure 5.17 General interaction configuration with Bragg lines. (From Ref. 34.)
As discussed in Sec.4.7 by repeated successive application of
(4.74),E, can
be expressed as an infinite sum of path integrals, the paths themselves being
illustrated by Feynman diagrams. One way of depicting these diagrams is
shown in Fig. 4.7, but a way more suitedto the present treatment is shown in
Fig. 5.18. The scattering is from the incident light
to the fourth order through
the intermediate orders indicated. The path amplitude for this path isbygiven
&(k) = (-jaI6 jS:dz5 ]S:dz,
(5.59)
Figure 5.18 TypicalFeynmandiagramforscatteringintotheplusfourthorder.
(From Ref. 34.)
h
The Numerical
161
where k is an ordering label for classifying paths leading from B;,, to &.
The total scattering amplitude is given by a summation over the infinity of
such paths:
k4= 2k4(k)
(5.60)
k=l
In the Monte Carlo method to be discussed, the evaluation of path
integrals such as(5.59) has its basis in the interpretation of factors-jaSo+,
-ja&+, -jaS2+, etc., as essential parts of quantum-mechanical probability
amplitude densities. Thus, in eq. (5.59), the probability amplitude of a
transition from order 0 to 1 during dz isgivenby
- j a S t d z , and the
probability amplitude ofa photon existing in order1 and ZI is given by
-j a
]S:dz
(5.61)
where Einc is normalized to unity for simplicity.
The probability amplitude of a photon making a transition from 1 to 2
between 21 and zl+dz is -jaS:dz provided there is a photon available in 1.
But the probability of the latter is given by expression (5.61). Hence,
the
overall probability amplitude of a transition between ZI and a + d z is
expressed by
(5.62)
Continuing the argumentin the same fashion, we arrive at (5.59), i.e., the
probability amplitude of a photon existing in level 4 at z. The authors of
Ref. 34 add the following noteto their discussion:
Naturally this picture must not be taken to seriously. At best it is a
metaphor for the quantum mechanicalbehavior of the photon, a
behavior that cannot be described by any classical picture. (For one
thing, the probability amplitudes from0 to 4 must be added vectorially,
as if the photon traversed all paths simultaneously.) However, when used
with caution, the model isof considerable valueinguiding
the
probabilistic simulation.
A further discussion of this subject may be found in Ref. 35.
In the simulation the authors divide the interaction region in N slices, as
shown in Fig. 5.19(a). At any discrete point 1,2, 3, . . . N,. the photon may
162
Chapter 5
LIN
1
3
2
4
N
L
4
W
a
I
:+p
m
0
l
b
1
""""""""".._______.___.___..
0
m
1
m
2
C
1
0
m
1
m
m
2
3
d
Figure 5.19 (a) Transition points. (b) Transition from order 0 to 1 at m1 in pure
Bragg diffraction. (c) Transition backto 0 at mz.(d) Final transitionto order 1 at mg.
(From Ref. 34.)
make a transition from the level it is in . Figures 5.19(b, c, and d) show
typical transitions for exact Bragg diffraction in a sound column where all
S,,+ and S,- have the same value of SOand there are only two levels. Hence,
the probability of transition p = -jSodz= -jSo(WN). The authors prove that
for smallp (i.e., N95)the final resultsof such a modelare in agreement with
the known laws for Bragg diffraction.
In the actual simulation a difficulty arises: The quantum-mechanical
probability of a transition equals p , and of no transition equals unity.
Obviously 1+IpI2> 1. This does not lead to difficulties in the quantummechanical model, because the
total probability doesnot have to equal unity
until all parts are summed vectorial&. However, in a simulation with classical
h
163
7'he Numerical
computers, if the probability of a transition is IpI, then the probability of no
transition is automatically l-bl. The authors get around this difficulty as
shown in Fig. 5.20, illustrating how the transition probabilities are handled
in each transition pointof Fig. 5.18.
The numbers in brackets indicate probabilities, while the numbers in
parentheses indicate weighting factors. Thus, a nontransitioning photon
withunavoidable(classical) transition probability of 1-bl isgiven a
weighting factor of (l-bl)" to compensate for that. In a transition the
photon is "multiplied" by -j to account for the imaginary probability
amplitude. Whether a photon makes a transition at m is determined by a
uniform random number generator, that generates a number between zero
and 1. This number is compared with the value of in (5.58),
and if it is
found to be smaller, the photon changes its level number (label0 or 1) while
being multiplied by -j. In the other case, the photon keeps its level number
and is multiplied by (1-bl)-*.The process is repeated at the next slice, and
after N steps the photon is stored in the appropriate bin (0 or 1) with its
.""""."""".""
n
a
b
Figure 5.20 Simulation of quantum-mechanical
transition
probability
amplitudesbyclassicalcomputersforBragg
diffraction. The bracketsindicate
classical probabilities, the parentheses indicate weighting factors. (a) up-transition,
(b) down-transition. (From Ref.34.)
Chapter 5
0 .o
1 .o
NORMALIZED SOUND AMPLITUDE
Figure 5.21 Bragg diffraction simulation with 10,000 photons and 100 divisions.
(From Ref. 34.)
1.0
-
z
g
Lu
+
ORDER0
I
ORDER1
-PREDICTED
0
---PREDICTED l
0.8-
c
z
c
5
0.6-
A
2“r
0.4
-
0.2
-
I
U
9
II
i
c
0.0. d
0 .o
d
4
l .o
NORMALIZED SOUND AMPLITUDE
Figure 5.22 Near-Braggsimulationwith A @ B = M ~ L10,000
,
photons,and 1000
divisions. (From Ref.34.)
The Numerical Approach
165
a
b
.
C
Figure 5.23 Raman-Nathsimulationfor
M=9, 10,000 photons, and 1000
divisions. Orders0 and 1 are shown. (From Ref. 34.)
proper phase and weighting factor. After this the next photon is processed.
In the final step, weightedphotons in each bin are added vectorially, squared
(to calculate the power), and their number normalized to the square of the
total number of photons used in the simulation. A typical result for 10,000
photons and 100 steps is shown in Fig. 5.21. The dashed and solid lines are
the values predicted from conventional Bragg diffraction theory. The
agreement is very close,but the results begin to deviate if the Raman-Nath
parameters v774 corresponding to a normalized sound amplitude of 1. It is
found that the number of photons used in the simulation is more critical
than the number of steps.
In the example discussed above,the probability densityp does not depend
on the transition point m. The authors of Ref 34 also show a simulation
for near-Bragg diffraction, where this is no longer the case (see Sec. 3.2.3).
Figure 5.22 shows the result foran offset angle ofM4L.
166
Chapter 5
1.0 r+,
0
g
-
S
8
z
0.4
0.4-
0.2
0.2U
'
0.0
0.0
.*,
*
9
9
.d
1 .O
NORMALIZEDSOUND AMPLITUDE
Figure 5.24 Raman-Nath simulation for M=9, 10,000 photons, and 1000
divisions. Orders 2 through 4 are shown. (From Ref. 34.)
r
1
+
n
+
a.
Figure 5.25 Raman-Nath simulation with parameters that are the same as Fig.
24, but orders 2,3, and 4 are shown.
The Numerical Approach
167
For the general case, more than two orders must be considered. Figure
5.23 shows the transition probabilities and weighting factors in the case of
Raman-Nath diffraction with the orders ranging from -M to +M. When
the highest orders are reached only down- (or up-) transitions are possible.
Figure 5.24 shows simulation results for orders 0 and 1 , while Figure 5.25
shows orders 2 through 4.
In summary, this simulation method appears to be versatile and accurate,
although perhaps not as convenient to use as the carrierless algorithm. In
contrast to the latter, it is notsuitable for three-dimensional simulation.
REFERENCES
1. Gill, S. P., Office of Naval Res., Contract Nom-1866 (24) NR-384-903,Tech.
Memor., 58 (1 964).
2. Klein, W. R., and Cook, B. D., IEEE Trans., SU-14, 123 (1967).
3. Pieper, R., and Korpel, A., .lOpt. Soc. Am., 2, 1435 (1985).
4. Spiegel, M. R., Mathematical Handbook., Schaum’s Outline Series, McGrawHill, New York(1968).
5. Jaskelainen, T., Nuovo Cimento,26,439 (1979).
6. Blomme, E., and Leroy, O., Acustica, 59, 182 (1986).
7. Blomme, E., and Leroy, O., Acustica, 58,4 (1985).
Ac Soc India, 11, 1 (1983).
8. Blomme, E.,and Leroy, O.,..l
9. Blomme, E., and Leroy, O., Acustica, 57, 170 (1985).
10. Alferness, R., .lOpt. Soc. Am, 66, 353 (1976).
11. Poon, T. C., and Korpel, A., Proc 1981 Ultrason. Symp., 751 (1981).
12. Benlarbi, B., and Solymar, L., Int. .lElect, 48,361 (1980).
13. Chu, R. S., and Tamir, T., IEEE Trans., MTT-18,486 (1970).
14. Mertens, R., Hereman, W., and Ottoy, J. P., Proc Ultrason Int., 85,422 (1985).
15. Mertens, R., Hereman, W., and Ottoy, J. P., “The Raman-Nath Equations
Revisited 11. Oblique Incidence ofthe Light-Bragg Reflection,” in Proc. Symp.
Ultrason. Int., London (1987).
16. Franklin, J. N., Matrix Theory, Prentice-Hall, New York (1968).
17. Jeffreys, H., and Jeffreys, B., Methods of Mathematical Physics, 3d ed.,
Cambridge University Press, Cambridge, Chs.
7 and 8 (1966).
18. Klein, W. R., and Hiedemann, E. A., Physica, 29,981 (1963).
19. Leroy, O., and Claeys, J. M., Wave Motion, 6, 33 (1984).
20. Poon, T. C., and Korpel, A., Optics Lett., 6, 546 (1981).
21. Hargrove, L. E., J Ac Soc. Am., 34,1547 (1962).
22. Van Cittert, P.H., Physica, 4, 590 (1937).
23. Hance, H. V., “Light Diffraction by Ultrasonics Waves as a Multiple Scattering
Process,” Tech. Rept. Physics, 6-74-64-35, Lockheed Missile and Space Co.,
Sunnyvale, Calif, (July 1964).
24. Pieper, R.J., and Korpel, A., Appl. Opt., 22,4073 (1983).
25. Hardin, R., and Tappert, F., SZAM Rev., 15,423 (1973).
168
Chapter 5
26. Korpel, A., Lonngren, K. E., Banerjee, P. P., Sim, H. K., and Chatterjee, M. R.,
.lOpt. Soc. Am., 3 (B), 885 (1986).
27. Korpel, A., Lin, H. H., and Mehrl, D. J., J: Opt. Soc Am., 6 (A), 630-635.
28. Venzke, C., Korpel, A., and Mehrl, D., App. Opt., 31, 656 (1992).
29. Yevick, D., and Thylen, L., .lOpt., Soc, Am., 72, 1084 (1982).
30. Korpel, A., Opt. Eng., 31,2083-2088 (1992).
31. Banerjee, P. P,and Tam, C. W., Acusticu, 74, 181(1991).
32. Tarn, C. W., and Banerjee, P. P., Opt. Comm., M,481 (1991).
33. Tam, C. W., Banerjee, P. P,and Korpel, A., Opt. Comm., 104,141 (1993).
34. Korpel, A., and Bridge, W., J: Opt. Soc Am., 7 (A), 1505 (1990).
35. Korpel, A., Appl. Opt.,26, 1582 (1987).
6
Selected Applications
In this chapter wewill discuss the application of the theory to many
examples, chosen mainly for their suitability in this context, and not in the
first place for their relevance to current technology. Examples of the latter
kindcanbe
found inmanyof
the available bookson acousto-optic
applications (e.g.,Ref. 1). Aneffort has beenmade to analyzeeach
application fromtwo different pointsof view, for instance from the points of
view of modulation detection and sideband heterodyning in modulators,
plane-wave availability, and phase asynchronism in beam deflectors, etc
The last section is devoted to signal processing, a subject which has
become very popular recently. The reader will find the basic principles of
image plane processing and frequency plane processing reviewed in detail.
Some common architectures are discussed briefly.
6.1 WEAK INTERACTION OF GAUSSIAN BEAMS
In this section we shall investigate the two-dimensional interaction of an
incident Gaussian light beam with a Gaussian sound beam. The relevant
configuration is shown in Fig. 6.1. For simplicity, we shall locate the waists
of both beams symmetrically about the origin. To within the paraxial
approximation, the fieldof the incident beam alongX is given by
169
170
Chapter 6
X
I
Z
Figure 6.1 Interaction geometryof Gaussian light and sound beams.
E i ( x , 0) = Eiexp(-jk&x) exp
($-)
where W i is the beam radiusto the lle amplitude point.
The sound beam is given
by
(3
S(0, z) = S exp
For our calculation, we shall use the plane-wave weak interaction formalism
as expressed by (4.54). To that purpose, wevmust first calculate the angular
plane-wave spectra g(@)and f(<.n,
according to (3.138) and (3.144). After
some algebra, it follows readily tHat
171
Selected Applications
With (4.54) we find for incidenceat the appropriateBragg angle h=-$B
k,(4) = -0.25jkCmswiSEiexp
-(K’w,’
+k2w:)(4-eB)*
4
Comparison of (6.5) with (6.3) shows that outside the interaction region,
a Gaussian beam propagating at an angle @B and having a
waist W Igiven by
El($) represents
It will be clear from (6.5) that if kwi*KWs (i.e., the diffraction spread of the
light beam is much less than that of the sound beam), then W l s W i . This is
the usual case in beam deflectors, etc.,and approximates a configuration in
which the incident beam is a plane
wave, as usually assumed.
If, on the other hand, kwi4KWs (i.e., the incident light may now better be
described as a converging or diverging beam), then W I=Kw,lk. Thus the
emerging beam is a demagnified (xlk) replica of the sound beam. This is a
typical case of Bragg diffraction imaging [2], already discussed in more
general terms in Sec.3.3.3 and tobe analyzed in detail in Sec.
6.9.
Let us next calculate the power the
in diffracted beamand how it depends
on W, and Wi. It may be readily shownthat
First, consider the case where the width of the sound beam isWe
fixed.
write
(6.7) as
It is clear that an asymptotic maximum will be reached when w r ) q i.e.,
when the incident light beam resembles a plane wave. Apparently,
the best
strategy is to concentrate all the incident power into one plane wave
interacting with the strongest plane waveof sound in the spectrum.
However, not much change occurs as long as KWJwi<k, i.e., as long as the
diffraction spread of the light beam is less
than the diffraction spread of the
sound beam. When the former substantially exceeds the latter, many plane
waves of light are not interacted with, and the scattered power decreases
172
Chapter 6
rapidly, in proportion to the ratio of sound diffraction spread to light
diffraction spread.
If the width ofthe light beam is fixed, similar considerations can be shown
to apply. The best strategy is to concentrate all the sound power into one
plane wave interacting with the strongest plane wave of light, but, again,
there is not much change until the sound beam is made so small that a
substantial number of plane waves of sound do not find any light waves to
interact with.
The above analysis can be repeated for the more practical case of a
rectangular sound beam. The calculation is somewhat more involved, but
the results are basically the same. The general effect of beam width on
scattered power for this case was analyzed by
Gordon [3], who used a threedimensional volume scattering formalism.
6.2
STRONG BRAGG DIFFRACTION OF A GNJSSlAN LIGHT
BEAM BY A SOUND COLUMN
The general configuration is as shown in Fig. 3.15, and the general theory
has been described in Sec.3.3.4 and 4.6 for the arbitrary 6mss section of the
incident light beam. We will here analyze the specific case of a Gaussian
incident beam as specified by (6.1), with the angular plane-wave spectrum
given by (6.3), with &=-@B. Assuming, for convenience, that S = J ~ Swe
~,
find upon substituting (6.3) into (3.168) and normalizing relative to
I&-@B)I
Note that &-@B)
represents the maximum plane-wave amplitude (density)
in the angular spectrum. For@=@B and v = z , it follows that &@B)=&{-+B),
i.e., the center incident plane wave has been completely diffracted. This
obviously must leave a hole in the plane-wave spectrum of the zeroth-order
diffracted light.
The phenomenon is shown in Fig. 6.2 that depicts, for the zeroth order,
the results of a numerical calculation performed by Chu, Kong, and Tamir
[4]. Figure 6.3 shows the corresponding - 1 order. Beam profiles other than
Gaussian also have been analyzed. Results may be found in
Ref. 5.
173
Selected Applications
I
-.l2
-.08
-.04
0
.04
.08
.l 2
Figure 6.2 Angular plane-wave spectrum of the zeroth order beam forv=z(solid
4.)
line) and v = d 2 (dashed line). (Adapted from Ref.
Apart from beam distortion, the process of selective diffraction also
causes decreased diffraction efficiency for the entire beam. This effect has
been calculated by Magdich and Molchanov [q.The decrease in diffraction
efficiency depends strongly on the parameter
which denotes the ratio of angular widths of incident light and sound.
Figure 6.4 shows the relative power in the diffracted + l order beam as a
function of electrical power appliedto the acoustic transducer.It is seenthat
for all values of a, the maximum diffraction efficiency occurs at the same
power level. This level should, according to (6.9), correspond to v=z. For
increasing angular spread in the incident light, the maximum diffraction
efficieficy decreases, as more and more plane waves of light are diffracted
kds efficiently. Note that Fig. 6.4. shows reasonable agreement between
€heoryand experinient.
Chapter 6
174
t
4
-.08
-12
-.04
I
0
.04
.OB
4
.l2
Figure 6.3 Angularplane-wavespectrum of the -1 orderbeamfor
4.)
line) and v = d 2 (dashed line). (Adapted from Ref.
0
3
6
9
P (Watts)
12
V=K
(solid
15
Figure 6.4 Relativepowerinfirst-orderdiffractedbeamasafunction
6.)
electrical power applied to the transducer. (From Ref.
of
Selected Applications
6.3
175
BANDWIDTH AND RESOLUTION OF LIGHT DEFLECTOR
The conventional configuration used for analysis is shown in Fig. 6.5. An
infinite plane wave of light with amplitude Ei is incident at -@B,, and a + 1
order beam with amplitude El, is generated at +@B,. The subscript “C”
refers to the center frequency F,. By changing the frequency of operation,
the diffracted beam, (now called El) is made to change its direction to
2@bb-@Bc.
The angle of incidence is fixed, and the Bragg angle condition is
satisfied for each frequency by
an appropriate planewave at F -(@B-@B,) in
the angular spectrumof the sound.
For weak interaction, the process may be analyzedby the use of (3.160).
For @o=-@B, we find, using (3.128)to calculate Bi(@),
(6.11)
where q3~=K/2k=Q/(2kV)is the independent variable, proportional to Q.
The diffracted light is obviously a plane wave propagating in the expected
Figure 6.5 Beam deflector showing plane-wave Ei incident at the correct Bragg
angle @B, for center frequency operation. The diffracted wave El is shown at an
arbitrary operating frequencyF, for which the appropriate Bragg angle equals@B.
176
Chapter 6
I
(6.12)
Denoting
M=
K- Kc
(6.13)
we find that the zerosof E1 occur at
(6.14)
corresponding to the case where the interacting planewaves of sound fall on
the first (positive and negative) zeros of the angular plane-wave spectrum.
For largeL, the solutionof (6.14) is approximately given by
(6.15)
Defining the bandwidth B as one-half of the frequency difference between
zeros, we find from (6.15)
(6.16)
A different argument, appropriate for the sound column picture, is based
on Fig. 6.6. The column is considered to approximate a plane wave whose
interaction is represented by the wave vector diagram kt, kl, K,. The end
points of all diffractedwave vectors must lieon the circle throughA and B,
with 0 as the center. If the sound frequency of the column is raised from
f,
tof, the lengthof the soundwave vector increases fromKc to K,its direction
remaining the same, because it represents the entire sound column in this
model. Ideally, the new wave vector trianglek;,kl, K should be closed,but,
as shown, there existsa deficiency Ak. This deficiency leads to a cumulative
phase mismatch LAk in the interaction,and as a result thetotal contribution
from E to E1 vanishes when
LAk= r 2 a
I!
Selected Applications
177
\
/hk
0
Figure 6.6 Wave vector diagram showing phase match at center frequency and
phase mismatch Ak at frequency F,corresponding to propagation constantK.
It may be shown readily from Fig.
6.6 that
(6.18)
Substituting (6.170) into (6.18), we find forthe first zeros
(6.19)
Note that this is identical to (6.15); hence, the phase mismatch reasoning
leads to the same bandwidth prediction as the angular plane-wave spectrum
argument.
From a point of view of efficiency, it is important to make L,and hence
Q, large. As shown by (6.16), this will adversely affect the bandwidth. A way
around this difficulty has been found by having the sound beam track the
required Bragg angle when the frequency changes.
178
Chapter 6
The principle of such a beam-steering phased array is illustrated in Fig.
6.7. The stair-step device has four individual transducers [7,8] alternatively
energized in opposite phase, as indicated by the + and - signs. The height
of the stair-steps equals one-halfof the wavelength & at the center
frequency&. Thus, when the device is operatedat the center frequency, asin
Fig. 6.7(a), the effective wavefrontp propagates vertically upward.The light
is incident at the center frequency Bragg angle $B= so as to cause optimal
generation of the +1 order. If nowthe frequency is reduced, then the required
Figure 6.7 (a) Stair-step'phased array operating at the center frequency fc. (b)
Same phased array operating at a slightly lower frequency. (c) Direction of the
incident light relative to rotated wavefront p .
Selected Applications
179
Bragg angle is smaller. This is automatically brought about by the effective
wavefront rotating clockwiseby an angle @l, as indicated in Figs. 6.7 (b and
c). It is readily seen from the figures
that the angleof rotation is given by
(6.20)
As seen from Fig. 6.7 (c), the angle of incidence now equals @ B ~ - @ ~The
.
required angleof incidence, of course, equals+B, so that the angle of error
@e
is given by
The spacing between the transducers is now chosenso that dc$eldA=O at
A=&. With (6.20)and using the conventional definition for
the Bragg angle,
we may write
(6.22)
from which it follows readilythat
(6.23)
Thus tomake this zero, the design requires
that
(6.24)
It is interesting to calculate the weak interaction behavior of the phased
array and compare it with the conventional sound cell. This is very simple,
because we have done most of the work already in connection with (6.11)
and (6.12). In fact, a little reflection will showthat we only have to change
the argument of the sinc function in (6.12) (representative of the angular
spectrum of the sound field) in the following
way: @Bc-@B+@Bc-@t-+B=@e.
With (6.22), (6.24),and (6.12), we then find readily
(6.25)
Chapter 6
180
If again we define the bandwidthas one-half the difference between zeros
of the sinc function,we find
(6.26)
We should compare (6.26) with (6.16). If, for example, Q=16n, then the
fractional bandwidth for the conventional deflector equals
25%, that of the
phased array deflector%YO.
The first phased array deflector was used in a laser television display
system developed at the Zenith Radio Corporation in 1966 [9].Figure 6.8
shows the transducer array used in that system. Figure 6.9 shows plots of
diffracted light intensityvs. cell orientation for three distinct frequencies: (a)
refers to the phased array of Fig. 6.8, (b) to a comparable but conventional
transducer. Finally, Fig. 6.10 shows the frequency response of the phased
array at low power input.
More details about phased arrays may be found in books or articles
devoted to applications (see, e.g., Ref. 10).
The resolution of the deflector used as a scanner can be expressed as the
number N of resolvable angles within the scan.In air the diffraction angle&
of the light beamof width L can be .expressed
&J=-&,c
L
Figure 6.8 Phasedarraytransducerusedin
(From Ref. 9.)
(6.27)
1966 Zenith laser display system.
181
Selected Applications
t
YIUIRAMANS
YILLIRAMUS
Figure 6.9 Plot of diffracted light intensity vs. cell orientation for (a) the phased
9.)
array of Fig. 6.8 and(b) a comparable but conventional transducer. (From Ref.
Figure 6.10 Frequencyresponse of thephasedarraylightdeflector
(From Ref. 9.)
of Fig. 6.8.
182
’
Chapter 6
and the total scan angle is given by
(6.28)
Hence;
(6.29)
where ‘t is the transit time of the sound cell. The expression (6.29) was first
demonstrated experimentallyby Korpel et al. [l l].
6.4
RESOLUTION OF SPECTRUM ANALYZER
In a light deflector operating in the scanning mode, the transducer is
sequentially energized by adescending or ascendingseries of sound
frequencies. That same device may be used as a spectrum analyzer if the
frequencies are applied in parallel. This is shown schematically in Fig. 6.11.
A wide beam (width L) of coherent light, represented schematically by the
ray marked “a”, is incident upon a Bragg cell. Frequencies f l y f2, . . . f,
generate upshifted rays that are focused by the lens with focal length F in
points XI,x2 . . .),x of back focal plane X. (Refraction of the rays on the
surfaces of the sound cell is ignored in the drawing.) The brightness
distribution I(x) in that plane is then proportional to the power spectrum
Pv) of the electronic signal according
to the following rule:
I
1-1
L I
1-1
l
1-1
A
/
I I=I
I 1-1
’y f ,
f*
...L
II tI
\I
v
x-
II
I
II
Figure 6.11 Operation of spectrum analyzer. Power at sound frequenciesfi,
. . .. ,fm is displayed as optical brightnessin points XI, x2, . . .. ,Xm.
ji,
183
Selected
I(x)
a
P( f ) withx = F- f
v,
(6.30)
Now, the brightness distribution around a particular position x, is of
course not a point, as indicated in the ray drawing. Instead
it consists of the
diffraction pattern caused by the entrance pupil of the lens. If this is
determined by the length L of the sound cell, thenit may be written as
2
I,(x -x,)
a
sinc
(6.31)
where h, denotes the free space wavelength.
Let us denote the width
of this pattern in the conventional
way as half the
distance between the first zeros of the sinc function:
(6.32)
This width determines an effective resolution Af of the sound cell. From
(6.30) we find that
a
v,
h=F-Af
(6.33)
Comparing (6.32) and (6.33), we find finally
(6.34)
This is a very plausible result, because it seems reasonable
that the frequency
resolution should be inversely proportional to the transit time (observation
time) of the sound.
It should be remarked that the light distribution in theX plane does not
just represent the power spectrum of the input signal; if we concentrate on
the optical amplitude, then this distribution is, in a profound sense, the
frequency spectrum (Fourier transform), because not onlycomplex
amplitude, but also frequency itself (through upshifting) is conserved. We
will return to that aspect in Section6.13 on optical processing.
The resolution of the spectrum analyzer may also be expressed as the
number N of resolvable frequencies within the bandwidthB. With (6.34) we
find readily that
1S4
Chapter 6
N=Bz
(6.35)
We find that this expression is identicalto (6.29), which gives the number of
resolvable points of the deflector. This should not be a surprise, because, as
stated before, the only difference between the two devices is the serial vs.
parallel frequency input.
6.5
BANDWIDTHOFMODULATOR
The essential configuration of a Bragg modulator is shown in Fig. 6.12.The
sinusoidal pattern is a symbolic representationof a carrier with wavelength
A, modulated by a sinusoidal signal of wavelength h.
The AM modulated
carrier may be written as
X
Figure 6.12 A typicalBraggmodulatorconfiguration.Thesinosoidalpattern
symbolizes the modulated acoustic signal.
185
Selected
$(x,t ) = q l +m ~~~(Qrnt-Kmx)]
cos(Qt-Kk)
(6.36)
where S is real, m is the modulation index, Q, the modulation frequency,
and K,,,=2lclh, the propagation constantof the envelope.
As the signal moves through the incident beam (amplitude Ei, angle of
incidence 4 ~ ~ = x / 2 kthe
) , + l order diffracted beam is modulated in power.
When a,,,increases (hence, A m decreases), a larger part of the modulation
cycle with appear within the width D of the incident beam. It appears
intuitively plausible that the modulation index of E1 will then decrease to
reach a zero whenD=&.
We will analyze the modulator response in two different ways that are
often thought to be incompatible and todescribe different effects.In the first
method, we integrate the locally diffracted power in order
to calculate
(6.37)
As before, we assume weak interaction for convenience, although the
analysis may equally well be carried
out for strong interaction. From
(3.1 lo),
we find for smallv
I,
IiV2
=4
(6.38)
We are here concerned with a modulated beam; hence,
we write
v=v(x, t)= Vo[l +m cos(Q,t-K,x)]
(6.39)
where
v,=-kCLS
2
(6.40)
The total integrated power inthe diffracted beam is given
by
S(?)= 0.25r
Dl2
-D/2
Iiv(x,t)’ dx
(6.41)
Assuming m<<1, we find
(6.42)
Chapter 6
186
where
(6.43)
Pi=IiD
and
(-
m,=2msinc
k D )
As Km=2dh,=QmlV, it is clear that the effective modulation index m1
decreases with increasing modulation frequency. The first zero will be
reached when D = & , , as may be verified from ( 6 4 , and is in accordance
with physical intuition.
The above analysis is essentially one of localEy sampling the diffracted
power. It is justified as long
as the sampling ray bundlesare not so small that
they overlapby diffraction inthe interaction region.The condition for this is
similar to the one we have encountered in using theSURA method (see Sec.
3.1.3), but this time with respect to the modulation wavelength h, rather
than.the carrier wavelength.Replacing A by h,in (3.57), we find ultimately
(6.45)
As an example, consider a sound cell operating at 50 MHz with Q=lO.
The local sampling analysisis then valid for(Q,JQc)2*0.1, say, Fm<5 MHz.
For higher modulation frequencies, the analysis must proceed
theon
basis of
angular plane-wave spectra and becomes increasingly complicated, even
more so as the nature of the optical detector has to be taken into account.
The latter point will become clearer from the alternative analysis to be
presented next.
The essence of the alternative method is shown in Fig. 6.13. Instead of
one diffracted beam, three are shown, this time with amplitudes El+, EIO,
and El-. The center beam correspondsto diffraction by the carrier and has
a frequency @+Q, where o is the light frequency. The two other beams
correspond to diffraction by the sidebands at QkQ, of the modulated
signal. They have frequencies ofo+Q+Qm for El+, and O+Q-Qm for El-.
Our analysis now proceeds as follows. Imagine a photodetector positioned
perpendicularly to the El0 beam so as to intercept all three diffracted
beams.
Let us ignore, for the time being, the fact that the beams, propagating at
slightly different angles, do not overlap exactly. The detector is assumedto
deliver a current proportionalto the total integrated power. Leavingout the
'
187
Selected Applications
X
Figure 6.13 Interpretation o f Bragg modulation in terms of sideband-carrier
mixing.
(identical) z dependence of the three beams and the common frequency shift
Q, we find
i(t) = S’D121E,o
+E,+exp(- jkA@x+ jQ,t
-Dl2
2
+E,- exp(+jkAqx - jQmt)l dx
where A+ is the angle between the beams
A @ = -K
m
k
as followsdirectly from Bragg angle consideration [see (6.3)].
(6.46)
188
Chapter 6
The total sound signal (6.36) may be written as
S(X,t)=S COS ( Q t - f i ) + 0 . 5 m S cos[(Q+Qm)t
-(K+K,)x]+OSmS co~[(Q-Qm)t-(K-Km)x]
(6.48)
The last two terms in (6.48) represent the sidebands that diffract El+ and
E l - . Treating each term (sound field) in (6.48) separately and assuming
weak interaction, we find with (3.103) and (6.39)
EIO=
E1+=E1-=-0.25jmViE~
Substituting (6.49) and (6.50) into (6.46), we find
[
i(t) = 0 . 2 5 q b 2 1 + 2m sin .(kY)]
(6.51)
With (6.47) we find that (6.51) is identical with (6.42). Thus, the second
method gives the same results as the first one, as must be implicit in the
mathematics. Rather more important than the formal equivalence, though,
is the interpretation of the new method. To see that, we write (6.46) in the
following way:
i(t)=io+ir(t)
(6.52)
io= EoEi dx
I
(6.53)
il(t)=Re[Ip
(6.54)
where the phasor I*, representing the ac current from the photo detector, is
given by
I
I
Ip= 2 E,',E,+exp(jkA4x) dx + 2 EloE;- exp(-jkA4x) dx
(6.55)
Each of the RH terms of (6.55) represents the mixing of two light fields at
frequencies differing by a m . These fields are, of course, the carrier beam
( E I o )with the upper sideband ( E l + ) beam, and the carrier beam with the
lower sideband (E1 -) beam. The mixing beams are inclinedat an angle A+,
Selected
189
and, as a result, the mixing efficiency is reduced until, at A+=AID, the beat
frequency disappears.
The importance of the above interpretation is that it makes it clear that
the detector output may vanish while the diffracted light connected with the
modulation process is still present in the total beam. The information may, in
fact, berecovered again by narrowing the photodetector aperture, or
covering it with a periodic transmission maskof the form(1+cos &x). The
stroboscopic effect of such a mask on the running (intensity) interference
pattern 1+2m cos(Qmt-Kmx), brought about by the interfering beams,
results in an output current at frequency Q,. Thus, in order to discuss
modulation efficiencyand bandwidths, the detector should be specified.
A detector-independent bandwidth also existsand comes into play when
the bandwidth of the device as a defector is exceeded and the diffracted
beams El+ and El- can no longer be generated. The criterion for that has
been derived in Sec. 6.3.
At the beginning of our discussion, we had tacitly ignored beam
spreading and separation. That this may be done is a consequence of a
remarkably simple theorem that is, however, so important to the application
of signal-processing sound cellsthat we devote the next sectionto it.
Finally, it will be clear from our analysis that the modulation bandwidth
may be increased by decreasing the widthD of the incident beam. However,
if this is pushed to the point where the angular spread of the light beam
begins to exceed that of the sound beam, diffraction efficiency begins to
decrease, as was shown inSec. 6.1.
6.6 THE QUASI THEOREM
The quasi theorem gets its name from
the fact that it usually, butnot always,
applies. It is best discussed by using a concrete example, e.g., the mixing of a
carrier at o with upper and lower sidebands at o+Qm(the common
frequency shift Q brought about by a sound carrier, as in Sec. 6.5, has been
absorbed in 0). The relevant configuration is shown in Fig.6.14. An infinite
extent one-dimensional photodetector is located at z1 and collects all three
beams. Subsequent heterodyning results in a time-varying output il(t) that
contains frequency componentsat Qm and 251,. The first component comes
about by the mixingofeach
sideband with the carrier. The second
component results from the mutual mixing
of the two sidebands.It was not
No such
considered in Sec.6.5 because of the assumption IE1-1, IEI+I~IEIo>.
restrictions are necessary here.
We are interested in what happens whenwe move the photodetector to a
new position on the Z axis, say 22. At first glance, it must appear that il(t)
190
Chapter 6
X
Figure 6.14 Heterodyning of three beams as an illustration of the quasi theorem.
will decrease, because two beams no longer overlap. That would be the
wrong conclusion though; the beams do, in fact, overlap morethan shown,
having spread by diffraction. The first part of the quasi theorem (and his
part is actually not quasi but strictly valid) says
But for a time delay,il(t) is independent of the position of the (infinite)
photodetector.
The physical reason for this is that a photodetector basically records the
arrival sequence of photons.This sequence cannot be changed, only delayed,
by moving the photon counter, as long asall photons are collected. (Hence,
wehave the theoretical requirement for an infinite photodetector.) A
mathematical proof plus some remarkable consequences of this may be
found in [12].
The secondpart of the quasi theorem says
h(t) also does not change if transparent elementsare put in the path of
the light beams, provided (1) that each element covers all the relevant
fields completelyand (2) that the elements arenot highly dispersive.
It is proviso (2) that accounts for the “quasi” nature of the theorem. A
dispersive element may cause temporal bunchingor debunching of photons,
Selected Applications
191
thus changing the character of il(t). For instance, a properly designed and
positioned Fabry-Perot interferometer may suppress the two sidebands in
transmission and hence make i l ( t ) disappear. A less obvious way to defeat
the quasi theorem is given by the following example. Imagine three parallel
beams, with El+ and El-, at some reference time, in phase withEOand also
assume that IEI+I,IE1-IeIEol. The total beamisobviously amplitudemodulated, and, consequently, a sinusoidal component i l ( t ) will be of the
form il cos Qmt, if we neglect 2Qm terms by our previous assumption. If now
a highly dispersive element is introduced that phase shifts E1+ and El- by
90"relative to EO,then the amplitude modulation changes to phase
modulation and il(t) vanishes. Obviously, this requires some nontrivial and
dedicated experimental effort, and hence in almost all other experiments
proviso (2) is satisfied,and, subject to proviso (l), the theorem applies.
Let us test this on the configuration shown in Fig. 6.14. In the first
experiment, we put the photodetector at z1=0 in the plane of the aperture.
The beams are then well-defined, and, following the same reasoning as in
Sec. 6.5, we may write
il(t)=i+(t)+i-(t)
]
i+(t)=Re[I,+
)]
i-(t)=Re[I,-
E:
I,,+= 2
(6.56)
E;,E,+ exp(jkA@x)dx
(6.59)
E,,E,' exp(jkA@x)dx
(6.60)
Dl2
Ip- = 2
ID12
It is readily shownthat
I, = 2DE,',EI+ sinc (D?)
(6.61)
Ip = 2DE,,E; sinc (D?)
(6.62)
In the second experiment, we put the photodetector in the back focal
plane of a lens of focal length, f, which accepts all three beams. The
configuration is shown in Fig. 6.15. For convenience, the aperture is situated
192
Chapter 6
Figure 6.15 As in Fig. 6.14, but with lens inserted as Gedanken experiment.
in the front focal plane of the lens, but that is not necessary for the
argument.
In the back focal plane of the lens, three Fourier transforms of the
aperture D are formed, as shown [13]. The three amplitude distributions are
given by
F+=
F. =
[3) [3
1
[3) +
sine[(x fA#)$]
(6.63)
sin(%)
sinc[(x fA#)$]
F- =
(6.65)
The heterodyne currents are given
by
Ip+= 25 F;F+ dx
I
Ip = 2 FiF: dx
(6.66)
(6.67)
193
Selected
Carrying out the integration [14],we retrieve, after some algebra,
expressions (6.61)and (6.62), thus provingour point.
Note that, in the context of experiment1, the interpretation of (6.61) was
that with increasing A#, there is increasing phase cancellation across the
surface of the photo conductor. Eventually, this results in il(t) vanishing
when A#=A/D, i.e., when the spatial variation in the phase of the locally
generated current reaches2n.
In the context of experiment 2, the separation between the focused sinc
patterns increases, resulting in less overlap and hence a smaller heterodyne
current. Eventually, whenA#=A/D, the separationis such that the maximum
of one pattern fallson a zero of the neighboring one. The sinc functions are
now orthogonal (asin conventional sampling [13]), and again the heterodyne
current vanishes.
Finally, we should note a trivial but
important fact. If at the output of any
sound cell all diffracted and undiffracted beams are heterodyned together (if
desired after being processed by an optical system satisfying the quasi
theorem), then the resulting heterodyne current vanishes. This is
so because
a sound cell does not absorb photons and hence cannot generate temporal
variations in the overall photon flux. Of course, spatial variations in the
photon flux do occur, and these maybe transformed into temporal
variations by suitable amplitude masks. An interesting example will be
found in the Fresnel field signal processor described in Ref. 15.
6.7
OPTICAL PHASE AND AMPLITUDE MEASUREMENT
In optics we can only measure the time-varying flux of photons arrivingat
the detector, i.e., we always measure intensity in some formor another. This
is in contrast to other fields such as acousticsor the lower frequency region
in the electromagnetic spectrum, where it is possible to measure the field
quantities directly (strain, pressure, current, voltage, etc.). In optics
we must
resort to superimposing coherent fields to measure amplitude and phase.
For example, we may obtain phase information from the location
or shift of
interference fringes. If the fields areat slightly different frequencies, then in
the total power (which may be measured with a photodetector), there is a
time-varying component at the difference frequency that gives information
about the field amplitudes. Considerthe local output current caused by the
mixing of a reference fieldand a “signal” field on a photodetector placedin
some X-Y plane:
i(t, X, U)a 1
=1
I*
~ s+
Y ) e x p ( ~ s t+) Er (x,Y ) exp(jmrt)12
~ (X,
s
l2
IEr + 2 ~ e {(X,
~ y)Er
s (X, Y ) exp[j(ar - u s >t11
(6.68)
194
Chapter 6
where the subscripts ‘S’ and ‘r’ refer to signal and referencefields,
respectively. It isclear that the current componentat the difference
frequency carries information about the phase
and amplitude distributionof
the signal field relative to the reference field. The latter could be made
uniform to simplify matters.
Naturally, the time-varying current must be integrated over the entire
photodetector, and in that operation local information vanishes, unless the
photodetector is small or the reference field is limited in size, e.g., if it is a
focused spot. In that case the signal field may be sampled by moving that
spot around. This is exactly what is being done in the “phase
and amplitude
sampler” developed by Korpel and Whitman [16]. In that device a Bragg
diffraction sound cell plays an essential role, because it is used for both
scanning and frequency shifting of the reference beam. The experimental
set-up is shown in Fig.6.16.
The beam from a laser is split into two beams. The upper beam goes
through a beam-perturbing object (e.g., a transparency) that creates the
signal field incident on a largearea photodiode. The lower part of the beam
is fed into a Bragg light deflector, the upshifted order of which forms the
scanningreferencebeam.
This beamisfocusedbylens
L and semitransparent mirror M onto the photodiode. The frequency of the reference
!““”l
I.F. FILTER
ANDAMP.
SWEEP
FREQUENCY
GENERATOR
VIDEO DETECTOR
GENERATOR
T.V. DISPLAY
Figure 6.16 Experimentalset-upforphaseandamplitudescanning.
Ref. 16.)
(From
Selected
195
beam is upshifted by the time-varying sound frequency A(?).It is clear that
all the elements we have discussed are present in this configuration. The
current at &(t) is mixed with a local oscillator at fs(t)+f;., where f;. is an
intermediate frequency. The amplitude of this signal is displayed on a TV
monitor that isscannedinsynchronismwith
the reference spot. (The
relatively slow vertical scanning of the reference spot is accomplished by a
mechanical mirror.) For the sake of simplicity, no phase measurements
(requiring synchronous detection) were attempted in this experiment. It is
clear that with lensL focused on the plane
of the photodiode, the field
at that
plane will be displayed. What is not so obvious is that if L is focused onto
some other plane, the field inthat other plane will be displayed in spite of the
fact that it does not coincide with the plane of detection. This is a direct
consequence of the quasi theorem that we discussed in Sec. 6.6. It does not
matter where the photodetector is located as long as all the, light is collected.
Figure 6.17 makes this beautifully clear. It shows the field just behind the
beam-perturbing object, a Ronchi grating in this case. This was achieved
by
using forL a negative lens with itsvirtual focus at the plane of the grating. In
a second experiment, the lens L was removed altogether, i.e., the reference
beam came to a focus at infinity. As a result the far field of the signal beam
ought to be displayed. This is clearly the case, as shown in Fig. 6.18.
Figure 6.17 Display of optical field in the plane of a Ronchi grating. (From
Ref. 16.)
196
Chapter 6
Figure 6.18 Display of the far field of the light transmitted through
grating. (FromRef. 16.)
a Ronchi
The sensitivity of the method is surprising. For shot noise-limited
operation the minimum detectable power (for a signal-to-noise ratio of
unity) is given by
p& =-eBN
a
(6.69)
where e is the electronic charge, B the bandwidth of the system (6 MHz in
the experiment), N the number of resolvable spots (lo"),and a the detector
sensitivity (0.37 m).
For the experiment the calculated sensitivity was
0 . 2 6 ~ W. The measuredsensivitywas
1 . 5 ~ W, asthesystemwas
not shot noise limited.
It is clear that Bragg diffraction cells are important elements in any
amplitude or phase measurement system, not so much because they make
scanning possible, but rather because they are convenient frequency shifters.
In the example to follow, a Bragg cell is used in a non-scanning phase
detector to measure extremely small dynamic oscillations [ l l . Figure 6.19
shows the optical configurationand related electronic components.
A laser beam at frequency fo is directed through a Bragg cell where a
portion of the light (the zeroth order)is passed undeflectedto lens L, which
focuses it on a vibrating quartz wedge, activated by a signal generator at
frequency fm. This imposes upon the reflectedlightbeamaphase
modulation atfm of modulation index m=2kvu, where U is the amplitude of
197
Selected Applications
LENS
SIBGEM.
4
L
WATER FILLED
-Cf0
-
'0 'S
XI
RECORDER
-Y
DRIVE
M2
1
X.
Figure 6.19 Experimental set-up for the measurement of small vibrations. (From
Ref. 17.)
the normal surface displacement. The reflected beam passes once more
through the sound cell, where a portion of it is deflected and upshifted by
the sound wave of frequency& present in the cell. This part of the signal
beam, now at frequency fo+&then falls on a photodiode. It is there
heterodyned with a reference beam fo-f,. This reference beam is derived
from the original downshifted part of the laser beam at frequency fo-f,,
whichisfocusedby
L onto a stationary mirror M2 and sent back
undeflected through the sound cell. The similar treatment of reference and
signalbeamensures that theyoverlap at the photodiode in the same
direction and with the same wavefront curvature. The photodiode now
generates a difference signalat frequency 2&, phase modulatedat frequency
fm. Therefore, the diode output current contains sidebandsat 2fs&fm,which,
for the small modulation index involved, have amplitudes proportional to
m=2kvu. One of the sidebands is selected and amplified by a receiver and
detected with a square law detector. In order to achieve a high signal-tonoise ratio, the soundcell signal at& is amplitude modulatedat 1000 Hz to
obtain a 1000-Hzmodulated final signal at the output of the square law
detector. This is then further processed in a narrow band-tuned amplifier
and fed into the Y axis of an X-Y recorder. The X axis is synchronized with
the drivethat moves the quartz wedge through the focused beam.
Figure 6.20 shows the surface displacement squared vs. position on the
wedge atfm=8.5 MHz. The modeof vibration is localized (trapped) near the
point on the wedge where the thickness equals one-half the wavelength of
the sound wave. It was estimated that peak surface displacement in this
198
Chapter 6
Figure 6.20 Relativesurface displacement squared vs. position on the quartz
wedge atfm=8.5MHz. (From Ref. 17.)
experiment amounted to approximately 0.6 A. The laser power in both
reference and signal beam was0.15 mW. The predetection bandwidthB1 was
57 kHz and the post-detection bandwidth B2 was 15 Hz. The operationwas
limited by thermal noise. For shot noise-limited operation, the minimum
detectable surface displacement was calculated
to be
(6.70)
where B=%,
e is the electronic charge,a the photodiode sensitivity,and
PI the laser power. For the values used in the experiment this value was
calculated to be 2 . 6 ~ A.
Yet another technique for measuring phase was invented by L.Laub [18,
191. The method is illustrated in Fig. 6.21, which is taken from the patent
description. Referringto the figure, laser beam(12) is directed through a halfsilvered mirror (32) onto a Bragg cell (44).The cell is activated by a swept
frequency generated by ramp generator (40) in order to scan the diffracted
beams (16) and (18) across the (partially) reflecting topological object (26).
Beams 16 and 18 are generated by two closely spaced frequencies fs+f m ,
where& is a modulation frequency originating in the oscillator (40). The
frequencies of the beams arefo+fs+fm, where f o is the optical frequency. The
frequency f m is chosen such that the diffraction spots of focused beams 16
and 18, separated by Ax, have a large area of overlap. Beams 16 and 18 are
Selected Applications
199
Figure 6.21 Scanning phase proflometry set-up. (From Ref. 18.)
reflected toward the soundcell with a phase difference A9=2kAh,
of
where Ah
is the difference in height between the centers of the focused spots. Thus
A@=2k(dhldx)Ax,as shown in Fig.6.22. At the soundcell each reflected beam
is split into two other beams, such that after reflection off the half-silvered
mirror (32), the final field at the photodetector (36) contains frequencies
f$+2f,, f++f+2fm,
and f$+fs-2fm. The phase of the superheterodyne currents
at 2fm and 4fm contain the information about Aq5, i.e., about the topological
gradient dhldx. This information is extractedby a phasemeter (50) being fed
2fm as a reference signal through a multiplier (52). After integration by a
circuit (54), the gradient isdisplayed onan oscilloscope (58) whose
horizontal sweep is synchronized with the ramp generator (40). Vertical
scanning is accomplished by a translator (64), driven by a motor (62), and
synchronized with aDC ramp generator (66).
Figure 6.23 shows a drawing of an actual photograph of the oscilloscope
display. The object examined consisted of a smear of red blood cells on a
glass plate. The maximum height of the phase profile traces is less than
1 pm,the average width of the blood cells is approximately8 pm.
The Laub system is basically a common-path differential interferometer
and therefore relatively insensitive to vibration compared with the previous
system discussed (Fig. 6.19).
200
Chapter 6
Figure 6.22 Detail near foci of sampling beams. (From Ref. 18.)
Figure 6.23 Oscilloscope display of red blood cells obtained with the system of
Fig. 6.21. (From Ref. 18.)
6.8
BRAGG DIFFRACTIONINTERMODULATIONPRODUCTS
In a typical Bragg diffraction signal processor, such as
that shown in Fig.
6.24, and to be further discussed in Section 6.13, numerous frequencies are
present and may have to be processed individually. At weak interaction
levels, there is no difficulty with this, but with stronger interaction, multiple
scattering may occur, leadingto third-order intermodulation products. Thus,
a frequencyf2 may first upshift the light, followed by
afi downshift plusa h
20 1
Selected Applications
Mirror
Mirror
Loca I
scillator
Beam
L.O.
Optical
System
"
"
Figure 6.24 Prototype coherent acousto-optic signal processor. (From Ref.49.)
/
Figure 6.25 Wave vectordiagramillustratingthird-orderacousto-opticintermodulation in Bragg diffraction.
Chapter 6
202
upshift. As shown in Fig. 6.25, the resulting spurious diffracted light is
upshifted by a total of 2fi-fi. (An identical mechanism will also result in
the generation of a spurious component
at 25 -h.)
Exactly how serious this
effect is depends, of course, on the proximity of the two frequencies relative
to the Bragg diffraction bandwidth. To simplify matters, we shall assumefi
to be close t o h so that both frequencies induce on-angle Bragg diffraction.
The Feynman diagram discussed in Sec.
4.8 is a convenient tool for analysis
of this case. The diagram relevant to the present case is shown in Fig.6.26.
Because of our assumption, the coupling factors that connect the levelsasare,
in (4.92), S:=$, Si=Si with j = 1,2, depending on the frequencies involved.
With (4.95) we find that the plane-wave amplitude at level 1 is given by
With (4.94) we may write this
(6.72)
VI = v2= v, we have
For equal amplitude sound signals,
(6.73)
2f2-f,
f2
f2- f,
0 '
Figure 6.26 Feynmandiagramillustratingacousto-opticintermodulationin
Bragg diffraction.
Selected Applications
203
where
(6.74)
is thepower diffraction efficiency for$ orf2 by themselves. Thus, ifq=lo%,
the power in the third-order intermodulation product will be down by
1/3600, or about 35.5 db relative to the main signal.
It is of interest to note that in Raman-Nath diffraction two additional
paths exist, as shown in Fig. 6.27. Because again the coupling coefficient for
each transition equalsS or P,
we find readily that the total contribution is
three times that given by (6.73). Hence, intermodulation products are likely
to be more severe in Raman-Nath diffraction.
It will be clear that in the formalism used here, many more complicated
intermodulation scenarios-three-tone intermodulation, off-angle Bragg
diffraction with limited bandwidth, etc.-may be analyzed in a way that is
convenient and physically plausible. For comparison, a more conventional
treatment may be found in Ref.20.
Now, in accordance with our dual approach, let us analyze acousto-optic
+2
-2 f 2
2f2- f,
+l{
O{
-1
\
f,
f2-f:,
\
0
- f,
Figure 6.27 Feynmandiagramwithtreepaths,illustratingacousto-opticintermodulation in Raman-Nath diffraction.
Chapter 6
204
intermodulation from a different point of view. If two frequencies fi and fi
are simultaneously present, with equal amplitudes, then the total sound
signal in the Bragg cell may be written as
S(",
t)=Re[S exp@&t-jKIx)+S
exp(j&t-jK2x)]
(6.75)
which may bewritten, if we assume S to be real,
t
s(x,t)=2Scos (n,-al)Z-(K,-Kl)Z
"I
"I
(6.76)
xcos n,+n1"(K,+K1)5
[
2
t
Hence, the signalin the sound cellmaybeconsidered
a carrier at a
frequency (l22+*1)/2, 100% amplitude modulated by a modulation signalat
frequency (Q2-R1)/2. If the modulation frequency is small enough [see
(6.45)], then from (3.103) with &=O, z = L and using a time- and spacewe fhd
dependent v(x, ?)=v cos[(!&-Ql)t/2-(K~-K1)~/2],
[":)'I
El(", L, t) = -jEi sin
(6.77)
where we assume v=kCLS/2e 1
The first term in (6.77) represents contributions to the first order at both
f+ and fi. This may be seen readily from the first terms in the diffracted
light
"1
"I
-Kl)-
2
-jEi(f)exp[jot+/nlr+(Kz-Kl)-
) + e . .
2
(6.78)
Selected Applications
205
The K terms, which indicate the spatial modulation of the diffracted light,
automatically guarantee the correct directions for the orders at o+a1 and
w+a2.
The second term in (6.77) leads
to the other terms
(6.79)
If we concentrate on thepart with 3(!&-!21)t/2, it may be seen readilythat it
gives riseto anintermodulation productof the form
with a similar expression in 2Q1-Q2. The K term in (6.80), expressing
spurious spatial modulation of the diffracted light, again guarantees that
this particular contribution appears in the correct spurious order.
It is clear
that the amplitudeof (6.80) is identical
to that of (6.72), ifin thelatter we set
v1 = v2= v. We thus find perfect agreement with the earlier point of view:
intermodulation caused by the nonlinear [sin(v/2)] of a sound cell may
equally well be explained by multiple scattering. The former model, however,
is limited to small frequency differences [see (6.45)]; the latter model has no
such limitations, provided we use the proper form of the coupling factors
[(4.70a) and (4.70b)l.
Finally, in Raman-Nath diffraction, the first model would use a nonlinear
response of the kind
YVJ
J1(v)=--- +...
2v 2 2
(6.81)
instead of the Bragg diffraction response
(6.82)
206
Chapter 6
It follows that (6.81) indicates an intermodulation contribution (the cubic
term) three times higher than (6.82), in accordance with the predictions of
the multiple scattering model.
6.9BRAGGDIFFRACTIONIMAGING
In Chapter 2we introduced the concept of Bragg diffraction imaging [21]as
a natural consequence of the one-to-one correspondence between wave
vectors of light and sound, implicit in the wave vector diagram in two
dimensions. In the present section, we will investigate this notion in more
depth, both on the basis of plane-wave interaction and of ray tracing. We
will also perform a scattered amplitude eikonal calculation in detail as an
example of general procedure.
6.9.1Plane-WaveInteractionAnalysis
In Sec. 3.3.3 it was pointed out, in connection with (3.160), that if the
incident light field has large angular spread,the scattered light field‘s planewave composition mirrors that of the sound field. Hence, the scattered light
somehow carries an image of the sound field that presumably could be
retrieved by the proper optical processing. A typical experimental
configuration is shown in Fig. 6.28, where the incident light is convergent
and focused to a line perpendicular to the paper at the origin P outside the
(dotted) sound cell. The sound emanates from a line source
Q, located at ,
x
zq, to interact withthe light in the regionof the sound cell around the point
M . The object ofour analysis is to find out where Q is imaged, and how the
image quality is affected bythe angular widthof the incident wedge of light.
To simplify matters, we assume that the focused lightspot at P is Gaussian:
(6.83)
where EO is real for simplicity. The angular plane-wave spectrum with
reference pointP is, according to (3.139) with z=O, given by
A
or, with (6.83),
(6.84)
207
Selected Applications
Figure 6.28 Braggdiffractionimagingconfigurationshowingsoundsource
and incident light focused atP.
Q
(6.85)
The sound at Q we model by an idealized Huygens source in the line z=z,:
S(Q) = $$(X
- xq),
z = zq
(6.86)
The sound angular spectrum, with Q as areference point, is then given by
Taking P as a reference point we find (keeping the convention for y in mind;
see Fig. 3.1)
i(y, P)= i(y ) = iq
e x p ( i h q cosy + j a qsin y )
(6.88)
Chapter 6
208
The + l order scattered light spectrum (Pas reference point) may now be
determined from(3.160)
(6.89)
Now let us see if there exists
a point where all the plane waves of
l?*(+)are
in phase again.In other words, we look for a reference pointV such that
(6.90)
L
Now l?*(@,
V ) and l?*(@,
P) are related by a simple wavepropagator
E,(Q, V )= E,(Q, P ) exp(- jkx, sin Q - jkzy cosQ)
(6.91)
With (6.89) and (6.91), it follows from(6.90) that
f i q
cOS($~-@)+fi~
Sin(@B-Q)-kxv
sin
Q-kz,
cos Q=O
(6.92)
for all Q.
Equating cos Q and sin Q coefficients separately, we find after some
trigonometry
(6.93)
(6.94)
Equations (6.93) and (6.94) will be recognized as a clockwise rotation by
( d 2 - Q B ) of a vector (zq, xq)
into a vector (zv,x") plus a scalingby K/k=NA.
Hence, these are the imaging rules that project the sound point Q into its
image V. At the same time, the diffracted light going through Vpropagates
nominally in the direction 4=2@B, as indicated by the maximum of 81(Q,V)
in (6.90). The imaging rules are illustrated in Fig. 6.29. The point V is the
Selected Applications
209
X
I
x‘
Figure 6.29 Illustration of imaging rules for upshifted @+V)and downshifted
@+V‘) interaction.
image for the- 1 order. The calculation is entirely similar, again resulting in
a scaling by Klk,but this time a counterclockwise rotation by
d2-4~.
From (6.90) it is clear
that the imageof Q is degraded in the sense
that it is
no longer a delta function like the original sound source object. Rather, it
has the shapeof the focus of the incident light beam
that constitutes, in fact,
the impulse response of the system. This may be seen as follows. The light
field along the X‘ axis through the point V expressed as a function of
x’=x-xv is found by taking the Fourier transform of (6.90) (with Fourier
transform variables x’ and qYA)
E m =m 4 , v
1
S)
= -0.25jkC&E,, exp(
exp(- jkx’24,)
(6.95a)
Note that the profile is indeed identical
to that of the incident focus, whereas
the amplitudeis proportional to that of the Huygens sound source.The term
exp(-jkx’24~) expresses the fact that the scattered light propagates
nominally in the $=24B direction.
210
Chapter 6
For downshifted diffraction,one finds similarly
(6.95b)
where x " = x " x ~ .
A more general analysisof the Bragg diffraction imaging process may be
found in Ref. 22.
6.9.2 Eikonal Analysis
We will now try to find evidence of imaging by applying the ray-tracing
of
diagram of Fig. 4.10 for upshifted diffraction. Figure 6.30 shows a wedge
rays emanating from the Huygen source of sound at Q. The central ray
(solid line) intersects the central ray of the optical ray bundle, incident from
the left, in the interaction pointA, at the correct angled 2 + @for
~ upshifted
Figure 6.30 Ray-tracinganalysis of Braggdiffractionimaging. Q is the sound
source, the incident light is focused atP,the diffracted light atV. The circle C is the
locus o f interaction points.
211
Selected Applications
Bragg diffraction. A diffracted ray bundle leavesA with is central rayat the
correct diffraction angled2+@Bwith respect to the sound ray. We now look
for the locusof other interaction points such asA' and A". It is readily seen
that such points mustlie on acircle (C), as this guarantees constant
interaction angles. As for the diffracted rays, the angular invariance means
that they must all go through one point
(V),also located on the circle. Thus,
V represents the image ofQ.
The imaging rules may be readily derived from Fig. 6.30 as follows.
Considering AVAP and AVNP, we see that LV N P = L V A P = ~ @Hence,
B,
from AVNP we find that VP=2CN sin 2@B.If we consider APCQ, it is clear
that L P C Q = ~ L P A Q = ~ ( ~ ~ - Thus,
@ B ) .LCQP=LCPQ=@B. It follows that
QP=2CQ COS@B=~CN
COS@B. Hence, VPlQP=2 sin@B=Hk,which confirms
our earlier prediction about scaling. It is further seenreadily that
L V P Q + L V A Q = z . As L V A Q = d 2 + @ s , it follows that L V P Q = ~ ~ - @inB
accordance with results from the wave theory. For downshifted imaging, a
diagram similar to Fig.6.30maybe
constructed and the appropriate
imaging rules derived[23].
We have noted in Sec. 6.9.1 that the resolution in the sound image is
limited by the size of the focused spot of light.
This is also evident from Fig.
6.30. If, for instance, the incident lightis limited to a uniform wedge of apex
angle 2+, then thisis also true of the diffracted light. Thus, the resolution in
the demagnified image is of the order of (U2) sin 4, corresponding to an
effective resolution of(M2) sin @ in the sound of field.
Having demonstrated the usefulness of Bragg diffraction ray tracing, we
will next illustrate the other aspect of eikonal theory, i.e., the calculation of
the scattered amplitude. The starting point is (4.1 17)or (4.1 19). Let us first
calculate
"as,=!E' &(
+
-kd'Y,
"
as2
Kd'Y,
as2
kd'Y,
ds2
)l
(6.96)
A
in accordance with (4.99), with the eikonal functions defined by (4.97). The
corresponding cylindrical wavefronts are shown in Fig. 6.31, from whichit
follows readily that
%=-pi
(6.97a)
Ys=
(6.97b)
+PS
Y1= -p1
(6.97~)
1 are drawn from the respective
where the position vectors pi, ps,and p
212
Chapter 6
$1
X'
Figure 6.31 Diagram for eikonal calculationof Bragg-diffracted light.
sources R Q, and K By transforming eqs. (6.97) to a coordinate system with
origin at V and axesin the Z'-X' direction, it maybe found, with
d/ds=dl&', that
(6.98a)
(6.98b)
(6.98~)
,
, and pq represent the distance fromA to P and Q, respectively. The
where p
distance AV is denoted by p".From Fig.6.31 it is further seen by
considering AAPD, AAVD, and AAQD that
213
Selected
pp=2pccos a=2pc COS@-$E)
sin(a+$)=2pc
pq=2pc
sin p
(6.99a)
(6.99b)
where P=a+OB, and pc is the radiusof circle C.
Subtracting (6.99~)from (6.99a) and using sin $~=K/2k,we find
,
With (6.100), (6.98), and (6.96), it then follows that
$1
=Kpvcos2 $B
A
PPPl7
(6.101)
and IS(s,)l=lS(A)I. As we use an
Our next step is to find ~E,(sa)~=~E8(A)~
eikonal theory, i.e., k, K+-=, the fields may be found directly from the
angular plane-wave spectra through(4.47). With (6.85) we have
(6.102)
where is the angle of the ray through A . (Actually, &=O for the centerray
of Fig. 6.30, but we shall leave the expression as is.) With (6.87) it follows
that
Substituting (6.101) and (6.103) into (4.1 17), we finally amve at
(6.104)
Now, the same expression willbe found if westart from different interaction
points A', A", etc. We mustthenchange
to
f l u , etc.As $=$', the last
term in (6.104) may then be generalized to e~p(-2k~w~$',,~/4)
where "a"
stands for any interaction point. Ifwe compare (6.104) with (6.102), it will
be clearthat the former expression refers
to a Gaussian identicalto the waist
of the incident lightbut located at V. From the same comparison, it is clear
214
Chapter 6
that the modulus of the amplitude is given by 0.25kC&Eol, in complete
agreement with (6.95). The phase ofEI(A) may be found from the fact that
a = - 3 ~ / 4 f o r C>O [see (4.116)]. With (4.99), (6.97), and (6.100), it follows
that
-3n
-kpp + Kp, - kY: (A) = -kpy - kY1 (A) = 4
(6.105)
and, hence, the phase of &(A) is expressed by a factor proportional to
exp(ikp,), in agreement with (4.47), if we remember that EI is converging
rather than diverging.
In summary, we have shown that the eikonal theory not only provides a
convenient ray-tracing formalism, but also correctly predicts the amplitude
of the scattered ray. More information may be found in Ref. 24. A raytracing analysis of axial cross-section imaging (as frequently used in signal
processing) may be found in Ref. 25, whereas Refs. 26and 27 analyze signal
processing applications of transverse cross-section imaging, i.e., the type
discussedabove. Experiments demonstrating the spatial nature of the
imaging, as in a reconstructed hologram, are discussed in
Ref. 28.
6.9.3
Imagingin Three Dimensions
So far we have assumed that the imaging is two-dimensional, i.e., all "rays
are parallel to the 1-2 plane. On this assumption, any imaging in the Y
direction just consists of a stackingof two-dimensional images. If this were
true, then the resolution in the Y direction would be better than in the X
direction, because the angular aperture of the incident light would not be
relevant. Indeed, inspecting Fig. 6.32 we clearly see the difference in the
resolution, as the Fresnel pattern of the illuminating sound field is only
visible in the Y direction (horizontal lines).
Based on the above, the assumption of stacked imagescannot be too bad,
yet we knowthat not all rays can be horizontal: detail
of the soundfield in Y
must lead to diffracted sound rays in theY-Z plane.
In Ref. 29 the three-dimensional situation has been analyzedon the basis
of diffracted ray tracing. Figure 6.33 shows the situation for downshifted
imaging. The incident light is a cylinder beam focusedinto a line 00'. The
sound originates from the point source
S. In the X-Z plane we recognize the
familiar circle of interaction points for horizontal rays. Diffracted rays in
this plane cometo a focus inS-.
SA' and SB' are sound rays out of the X-Z plane. They interact with
incident rays CO' and D'O', giving riseto diffracted raysA'E' and B'F. To
Selected Applications
215
c
Figure 6.32 Transinosonifiedacousticimage of Silver Angel fish. Notethe
difference in resolution of the Fresnelfield background. (From Ref.31.)
a first approximation, for small angles 6, it may be shown that all such
diffracted rays intersect the lines S-F and E’F’. Hence, the imaging is
astigmatic, withS-F and E ’ F forming the verticaland horizontal linefocus,
respectively. The vertical focus is located at the same position as for twodimensional imaging; the horizontal focus for downshiftedimagingis
located downstream along the central diffracted ray
at a distance( N I )times
the distance SB from the source S to the incident light. Therefore the
“vertical” angular aperturesof the diffracted rays of light and sound are in
the same ratio as the wavelengths, i.e.,
BB’lSB-(NA)BB’lBF’.Hence, but for
aberrations not covered by the theory (large e),the vertical aspect (1:l) of
the sound field image suffers no degradation due to the limited angular
aperture of the incident light. Indeed
it has been foundthat resolution of the
order of one wavelength of sound may be achieved inthe Y direction [30].
Figure 6.34 shows the ray tracing for upshifted imaging. Note that the
horizontal line focusE’F is virtual in this case.
216
Figure 6.33
Ref. 29.)
Chapter 6
Ray tracing of three-dimensionaldownshiftedimaging.(From
F
Figure 6.34 Ray tracing of three-dimensional upshifted imaging. (From Ref.29.)
Selected Applications
217
In order to remove the astigmatism of Brsgg diffraction imaging, a
suitable lens system must be employedto merge the horizontal and vertical
line focus. At the sametime the severe anamorphism (the horizontal
magnification is %A, the vertical one unity) must be compensated for. The
simplest way to dothis is by a simple cylinder lensthat images the point Sonto the lineE'F, with the proper magnification of
NA.
An interesting question is what will happen if the illuminating beam has
the more practical shapeof a cone rather than a wedge. It will be seen that
the symmetry inherent in this configuration allows the rotation ofthe figure
in the diagram of Fig. 6.30 about the axis QI!Thus, rays that are out of the
plane form additional images Q on a circle through V with radius pf as
indicated [31]. For small angle approximationsit can be shownthat in sucha
system again two line foci occur[32].
References 32 and 33 describe and analyze byray
methodsan
experimental system with almost spherical illumination, i.e., the main
illumination lens is spherical and a weak cylinder lens mergesthe two foci.
The sound field originates from eight transducers spaced in the horizonal
( X ) direction. One of the inner transducers is defective. A wave-theory
analysis of this systemmay be found in Ref.34,
Figure 6.35 shows the brightness distribution (without the weak cylinder
lens) in some horizontal cross sections (X-?) near the vertical focus, while
Fig. 6.36 shows the distribution in some vertical crms sections (Y-2) near
7 mm beyond
vert. focus
-
Profiles are
Individually
Normalized
Figure 6.35 Horizontal cross sections near the vertical focus. (From Ref. 32)
Chapter 6
218
the horizontal focus. Note that in the center horizontal cross section the
resolution is sufficient to show up the defective transducer, but not enough
to clearly separate theother ones.
Figures 6.37 and 6.38 show horizontal and vertical cross sectionswith the
foci made to coincide. In this final experiment the spherical lens aperturek
horiz. locus
Profiles are
Individually
Normalized
15(mm)
: C Y
9 mm in front of
f
Figure 6.36 Vertical cross.sections near the horizontal focus. (From Ref 32.)
l.O?
I
0
5
10
;?
15
20
25
Position (mm)
30
35
Figure 6.37 Horizontal image distributionwith the foci made to coincide. (From
Ref. 32.)
Selected Applications
I .
0.0-1.'
0
.
219
>
5
10
15
20
Position (mm)
Figure 6.38 Verticalimagedistributionwiththefocimade
Ref. 32.)
to coincide. (From
increased by a factor of 2, and hence the horizontal resolution doubles.The
individual transducersnow show up clearly.
6.10
BRAGG DIFFRACTION SAMPLING
In the area of acoustics in general, there exists a need for small
nonmechanical probes with which to sample a sound field locally without
disturbing it. The question naturally arises whether acousto-optics could
satisfy this need. If the sound field isnot too wide ( e e l ) , one may think of
probing it with a parallel light beam and recording the power in a single
diffracted order. According to (3.173), what is measured is then the sound
field integrated over the interaction
path, rather than its valueat one specific
point. Moreover, the phase of the sound can never be retrieved from a power
measurement. The phaseretrievalproblem
might beovercome
by
heterodyne detection, perhaps along the lines indicated Ref
in 15. As for the
integrating effect, could that be avoided by not using a parallel beam, but
instead focusing the incident light? Would this not create something like a
narrow parallel beam of finitelength in the focalregion? It is not
immediately clear how to answer this question, but the notion is worth
following up.
Now, in Bragg diffraction imagingwe have all the elements necessary for
a
Gedanken experiment. With referenceto Fig. 6.39, we see in the focal plane
220
Chapter 6
figure 6.39 Heterodyne model of sound probing.
of the incident wedge of light the undiffracted light focused to a sinc type
pattern at P,while at V1 and V2 are images of sound sources at QIand Q2.
The latter are up-and downshifted, respectively, bythe sound frequency Q.
If somehow P could be made to coincide with V1 on a photodetector, the
resulting RF current wouldpresumably carry information about the
amplitude and phase of the sound at Ql.Also, because of the narrow local
oscillator spot at VI, the effective resolution would, as in imaging, again be
determined by the angular apertureof the incident light, i.e., be equal
to N2
sin q5 for the focused light beam.This scheme appears promising but forthe
required coincidence of P and V I . Some further reflectionwillreveal,
however, that we already have overlapping spots, namely Pinitself wherethe
imagesof the sound field at P overlap with each other and with the
undiffracted light. Both images are offset in frequency from the zeroth order
Selected
22 I
by the same amount; presumably, all
we have to dois just heterodyne P with
the now coinciding VI and V2, i.e., find the RF component in the current of
out in
a photodetector collectingall the light. Unfortunately, as was pointed
Sec. 6.6, this will not work. A sound field does not absorb photons,hence, a
modulation of all the light passing through it cannot occur. In a more
restricted context: the signals at VI and VZ are, according to (6.95), in
quadrature to the carrier,thus effectively constituting a phase modulation
of
the undiffracted light. The result isthat the heterodyne current from VI and
P exactly cancelsthat from V2 and P.
Is there a way to frustrate this cancellation? It turns out that there exist
two ways to do this. The first method makes use of fact
the that the nominal
directions of light propagation from VI and V2 differ by 4@3. This, in turn,
means that in the far field the patterns do not overlap precisely. Although
the total current will still be zero,it is now possibleto use a selective mask in
front of the photodiode to suppress one of the cancelling heterodyne
currents, thereby frustrating the cancellation. A knife edge or a graded
neutral density filter will satisfy the requirements. Another way to look at
this is illustrated in Fig. 6.40. This shows our initial picture of a narrow
(width=w) parallel light beam of finite length l that makes up the focal
region of the incident light. The soundwave propagating through this beam
will make it deflect periodically,as indicated by the thinlines. A knife edgeK
placed in front of a photodetector PD will periodically obstruct the light
beam moreor less, thus causing an RF component in theoutput current i(t).
Both the heterodyning and the periodic deflection model are equivalent
and have been used with much success in the acousto-optic monitoring of
surface acousticwaves. We will return to that topic shortly.
A completely different way of detection is to rely on polarization effects.
Under certain circumstances(to be detailed later), the scattered light has a
polarization component perpendicular to that of the incident light. By the
use of a quarter-wave plate, the phaseof the diffracted orders is changedby
d2, and by means of an analyzer, a common component is selected for
heterodyning. The basic principle is identical to that used in the Zernike
phase contrast microscope [ 131 or in the electronic detection of phasemodulated signals [35]. A typical configuration is shown in Fig. 6.41. Note
that the incident beam forms a cone rather than a wedge of light. In Sec.
6.9.3 we have argued that such a cone would cause the diffracted images to
be ring-shaped. However, the diameter of the ring for images of P itself is
zero; hence, there isno deleterious effect forour particular application.
To analyze the complete device shown in Fig. 6.41, we first assume ideal
operation of the quarter-wave plate plus analyzer section, i.e., we assume
that the phase of the diffracted light has been shifted by 90" somehow and
that all the light is interceptedby a photodetector. In the figure, the latter
is
222
Chapter 6
PD
L
i (t)
Figure 6.40 Periodic deflection model of sound probing.
Quarter-wave
S(P1
Sound
X‘
Figure 6.41 Actual sound probing configuration. (From Ref. 36.)
placed in the X-Z plane. According to what was discussed before in Secs. 6.4
and 6.5, the relevant RF current is then given by
I,, II{E:(x,z)[jEl(x,z)]+E,(x,z)[-jE-,(x,
z)]*}dxdz
(6.106)
223
Selected
where the js refer to the 90" phase shift imparted by the quarter-wave plate.
Now, E1 may be calculated in the firstBorn approximation from the volume
scattering integral (4.41), and E-I may be obtained in a similar way. Full
details of this calculation may be found in Ref. 36. The final result is
that I,,
may be expressedas follows
(6.107)
where the integration isover the entire interactionvolume.
Expression (6.107) shows that the sound field is indeed being sampled in
amplitude and phase by a three-dimensional sampling function, i.e.,IEi(r)l2.
With reference to Fig. 6.40, the effective resolution is given by 1'/2, i.e., that
distance away from focus where the width W' of the incident cone equals
about one wavelength of sound. Phase cancellation in S(r) will then cause
the contributionto the integral in (6.107)to vanish at that point. It is easyto
see that by this criterion the resolution is of the order of M2 sin@,in
agreement with earlier predictions.
In the periodically deflected beam model, the same distance 1'12 sets a
limit to net effective deflection,and, hence, the same approximate resolution
is predicted. However, a precise analysis on the basis of this simple model is
impossible. It cannot treat regions far out of focus whereW'> N2.
In Fig. 2.8 we have already shown a plot of a sound field cross section
obtained with the method of Fig. 6.41. A two-dimensional image is shown
in Fig. 6.42.
6.11
SCHLIEREN IMAGING
Schlieren imaging makes possiblethe visualization of inhomogeneities inan
otherwise uniform medium. Typical examples areair bubbles and striations
(Schlieren in German) in glass and density variations of the air in a wind
tunnel. A typical schlieren set-up is shown in Fig. 6.43.
Referring to the figure, a parallel bundle of light, exemplified by raysc, d,
and e, is incident from the left on the glass block U.By means of lenses L1
and L2 with focallengthf, the block (i.e., the representative cross section
a) is
imaged onto the plane b. A schlieren stop V in the center of focal plane g
stops all directrays like c and d. Thus, in the absence of scattering, the image
plane b will be dark. Ray e is scatteredby air bubble A (actually, an infinity of
rays is generated in the scattering process), travels past schlierenstop V, and
generates an image of A at A'. Hence, inhomogeneities in the medium Uwill
be imaged as bright objects on dark
a background (dark field imaging).
224
Figure 6.42 Two-dimensionalimageobtainedwiththeapparatus
(From Ref. 36.)
of Fig. 6.41.
225
Selected Applications
Figure 6.43 4f configuration for schlieren imaging.
It is obvious that a propagating sound field may be regardedas a special
distribution of refractive index variations and hence visualized by schlieren
methods. This was first accomplished by Toepler in 1867 [37]. His basic
method hasbeen used for acoustic visualizationof transducers ever since, as
was mentioned in the introduction.
The problem at hand is to analyze whatwe see exactly when we visualize
a
sound field by schlieren methods. The simplest way
to attack the problem is
to use the straight undiffracted ray analysis (SURA) that was developed in
Chapter 3. Let us assume that in Fig. 6.43 a three-dimensional sound field
s(x, y, z, t ) propagates in theXdirection, contained within the boundaries
of
the glass block. This will give rise
to a refractive index variation&(x, y, z, t),
which, according to the three-dimensional generalizationof (3.6), is given by
&(x, y, z, t)=C’s(x, y,
2 9
t)
(6.108)
According to the three-dimensional generalization of (3.17), the phase8 of
the light at the output of the glass block is given by
L
8(x, y L, t ) = -kv J&(x, y, z, t ) dz
(6.109)
0
where we have left out constant terms.
If Ei represents the phasor of the uniform, normally incident light field,
then the field E(x, y, L, t ) is given by
Chapter 6
226
if it is assumedthat lela1 (weak interaction).
Let us now model the sound fieldas
1
s(x, y , z, t ) = -&(x, y , z) exp(jQt - jKx) + C.C.
2
(6.11 1)
where S, denotes the complex sound profileand c c stands for the complex
conjugate.
Upon substituting (6.1 11) into (6.108), the result into (6.109), and the
overall resultinto (6.1 lo), we find
y , L, t ) = Ei
L
--1 jkvC’Ei exp(-j&) exp(jQt)jSe(x,y , z) dz
2
0
L
1
--jkvC’Ei exp(jKx) exp(-jQt)jSi(x,y, z) dz (6.112)
2
0
We recognize theh t term in (6.112) as the background light, the second
term as the upshifted scattered light and the third term as the downshifted
scattered light. These three components will come to separate foci in the
focal plane g of Fig. 6.43 due to the fact that their nominal directions of
propagation differ by twice the nominal Bragg angle. We now replace the
schlieren stop of Fig. 6.43 by a knife edge that blocks both. the background
light and the downshifted light. The phasor field of the upshifted light in
plane b is thengiven by
L
1
E+(x,y ) = -- jkvC’Ei exp(-jKx) exp(jQt)lS.(x, y, z) dz
2
0
(6.1 13)
where we have ignored the image inversionby the 4f system. Whatwe see in
plane b is the image intensity
(6.1 14)
It is not immediately obvious what (6.1 14) represents. However, if S, is
predominantly independent of z-say, the sound is generated by a long ( 2 )
transducer of limited height (-the
schlieren picture shows the evolution
227
Selected
in the Y direction of a one-dimensional sound field along the propagation
direction:
This evolution is evident, for example, in Fig. 2.4, where the left side of
the picture represents the IFresnel field of the sound and the right side
evolves graduallyinto the Fraunhofer field (far field).
In the general case, the interpretation of (6.114) is not self-evident. It
should be noted though that (6.113) represents a projection in the sense of
tomography. Projections at different angles may be obtained by rotating the
sound cell about the X and Y axes. By proper tomographic processing and
addition of such projections, it should be possible to reconstruct the entire
three-dimensional sound field. In Ref.
38 it is argued and demonstrated that,
in two dimensions, such a reconstruction is equivalent to Bragg diffraction
imaging.
The method we have used to analyze schlieren imaging (SURA) is no
longer applicable at high frequencies;it runs into the same difficulties as we
have encountered with simple sound column configurations where @>l.
How then shall we proceed in this potentially useful high-frequency case?
A straightforward approach is to use the three-dimensional plane-wave
interaction analysisto be discussed laterin Sec. 8.5. A treatment along those
lines may be found in Ref. 39. However, it is more instructive to start from
first principles based on the wave vector diagram. Such an approach has
been followed in Ref. 40.
Figure 6.44 is a drawing from that reference
illustrating wave vectors of soundand light participating inthe interaction.
X'
X
Figure 6.44 Wave vector diagram for three-dimensional schlieren interaction.
(From Ref.40.)
228
Chapter 6
In the drawing the sound propagates nominally in the X direction and the
light in the Z direction. The incident light is a plane wave characterized by
the vector QO. The sound is three-dimensional, and vectors OA, OB, and
OC represent some plane waves in the (continuous) spectrum of the sound.
These particular vectors havebeen chosen so as to form wave vector
diagrams with the incident plane
wave vector QO, giving upshifted diffracted
vectors QA, QB, and QC. The wave vector triangle QOA lies in the X-Z
plane, and the other two are generated by rotating QOA about the Z axis. It
is clear that in general the sound vectors selected in the interaction form a
continuum that lies on the hollow cone OABC. By the same token the
diffracted light vectors lie on the hollow cone
QABC.
We shall denote the fictitious sound field that is represented by the
selected sound vectors on the hollow cone by S+, and the (real)
corresponding diffracted light field by E+. Their angular spectra will be
denoted by S+and 8,. Now, two points are worth noting:
1. The selected sound vectors on the hollow cone with apex at 0 all have
the same z component K,. The same applies to the upshifted vectorskz
on the hollow cone with apex
at Q . Therefore, both these fields represent
diffraction-jree patterns (generalized diffraction-free beams[41]) that do
not spread by propagation.
2. The diffracted light vectorskd and the selected sound vectorsK have the
same X and Y components (kd,=K, and kdy=Ky), and hence the two
corresponding fields have the same pattern (E+ QT S+). This pattern, of
course, is the schlieren imageof S.
In Ref. 39 it has been shown experimentally that indeed the schlieren
image does not suffer propagational diffraction. Figure 6.45, taken from
that reference, shows how the schlieren pattern of part of a sound field is
invariant to propagation. Figure 6.46 shows the amplitude of the selected
sound spectrum along the circle bounding the hollow
cone.
Now let us derive the form of the selected sound field S+, which, as we
have seen, is identical to the schlieren image. The angular spectrum S+
follows fromthe angular spectrumS by a filter operation
S+=R+$
(6.116)
where the filterR+ expresses the factthat the K, components are constant:
KZ=- K sin @B
(6.117)
229
Selected
.Figure 6.45 Experimental schlieren image of a sound cell formed by a 4f system
at various distances z from the image plane: (a) z=O, (b) z=2J (c) z=4J (d) z=6J
The light is incident so as to maximizethe complexity of the image. The part of the
image near the transducer is not shown. (FromRef. 40.)
as follows from the triangleQOA in Fig. 6.44. Following Ref.42 we now use
K, and K, as the spectrum variables rather than YA and ?/A as we did in
Sec. 3.3 [for a definitionof yand V, see Fig. 8.12@)].The filterR+may now
be defined as
l
?
+
+
=2n6(K, K sin $B)
(6.118)
where 2n is an appropriate scaling factor.’
The soundfield S+at some arbitrary x is given by
s+=s(PS+)
(6.119)
where s denotes the Fourier transform and P the propagator, i.e., the
factor P=exp(-jlY,x) that accounts for the phase shift upon propagation.
230
Chapter 6
Figure 6.46 Amplitude of the selected sound spectrum along the circle bounding
the hollow cone of diffracted rays. The width of the pattern is due to the finite
illumination aperture. (From Ref.40.)
Selected Applications
23 1
[See (3.13 1) with K cos y=K,]. By the same token
S+=P-' 3-
-' (S+)
(6.120)
We now substitute (6.116) into (6.1 19):
s+=s ( P A+
(6.121)
and then apply (6.120) to S, instead of to S+, and substitute into (6.121):
s+=s[P A+P-'
s -'(S)]
(6.122)
because the operatorsE A+,and P-' commute, (6.122) may be written as
s+=s[R+F" (S)]
(6.123)
Finally, using the convolution propertyof the Fourier transform,we get
where * denotes two-dimensional convolution.
It is seen readily that (6.124) results in the following expression for the
schlieren image
00
exp(- j k i sin $B )S(x,y, z)dz
S+= exp(j k i sin &)
(6.125)
d
In our case of a limited sound field as in Fig. 6.43 the integration limits
may be replaced by0 and L.
Now the sound phasorS is related to the sound profileSe, used in(6.1 1 l),
as follows:
1
2
1
= -S(x, y,z)exp(jQt) + C.C.
2
~(x,y,z,t)=-S,(x,y,z)exp(jC2t-jlYx)+c.c.
(6.126)
so that
S=Se exp(-jKi)
(6.127)
Chapter 6
232
Substituting (6.127) into (6.125) gives
(6.128)
For low-frequency operation where QGl, it maybeshown that the
(6.128) varies but little overz. In
exponential term under the integral sign in
that case
S+ = exp(-j&l
-
Se(x,y, z) dz
(6.129)
Thus, (6.128) predicts a schlieren image identical to (6,113), which was
derived‘with the SURAmethod valid for Q e l . This gives us some
confidence in the correctness ofour analysis.
At high frequencieswe must use (6.128) or (6.125). Let us consider in the
latter an integration along the Bragg line 2’ (Fig. 4.5) that is inclined at an
angle (PB with respect to the Z axis:
zf =z, yf=y,x’=x+z sin +B
(6.130)
As a first approximation it may be assumed
that
S(xf, yf, z)=exp[-jfi
sin
(PB]s(%
y, z)
(6.131)
i.e., the amplitude 1
4 does not change appreciably over this small distance.
Then (6.125) may be written as
(6.132)
Thus, the schlieren image, even
at high frequencies, is seento consist of an
integration of the sound field,but along Bragg lines rather than along lines
parallel to the Z axis. Most important is that S+ still constitutesa projection
in the tomographic sense.
To a first order in OB, the diffracted light field has the form of (6.132)
multiplied by the interaction term- */zjkvC’=-l/&C=-ju [39,42]:
on
S(x’, y’, z’)dz’
E+(x,y)= -juE,
4
(6.133)
233
Selected .Applications
6.12
PROBING OF SURFACE ACOUSTIC WAVES
Surface acoustic waves (SAWS) have become of increasing importance in
signal processing [43], and a noncontacting probeis needed for checking of
diffraction, reflections, etc. Similar to the situation in bulk waves discussed
in Sec. 6.10, there exist two ways of acousto-optically probing an acoustic
surface wave. In the first method, thewave is considered to form a reflective
phase grating on the surface of a mirrorlike substrate. A relatively broad
beam of coherent light is incident at an angle. The reflectedbeamis
accompanied by two diffracted orders, one of which is measured by a
photodetector [44].Because the height of the corrugated surface is very
small, of the order of 10-lo m, it acts as an ideal Raman-Nath grating, but
diffracts only very little light. Therefore, the dominant noise in the detection
system is of thermal origin, and, consequently, the technique has all the
disadvantages of direct detection [45]. Heterodyne detection with its
inherent shot noise (quantum noise) character is inherently superior, and
as discussed in Sec. 6.10.This time,
fortunately may be used in the same way
however, a polarization method is not feasible, and knife edge detection is
commonly used. As before, the explanation may be given either in terms of
periodic beam deflection or in terms of heterodyning. Because the SAW
interaction is much simplerthan the bulk interaction discussed in Sec.6.10,
' / l /
/
/SUBSTRATE
/ / /
//
Figure 6.47 Probing of surface acoustic waves by focused light beam and knife
edge (heterodyne) detection.
234
Chapter 6
we shall give a brief analysis of the former at this,time. A typical setup is
shown in Fig. 6.47.
A beam of coherent lightof total power P is almost normally incidenton
a surface acoustic wave of depth h. When the wave moves through the beam
focus, it will periodically, at the sound frequency, deflect the reflected beam
by ?A@, where
A@=2hK
(6.134)
The spot (far field)on the photodetector isof angular width28, and, hence,
the fractional peak variationof the input power is ofthe orderof AN20. If a
is the sensitivity ofthe photodetector in amp/W, the peak RF current out of
the diode equals
(6.135)
From (6.135) it is clear that, because K=QlV, the sensitivity increases
linearly with the acoustic frequency. A limit is reached, however, when the
wavelength A becomes equal to the sized of the focused spot, because at that
point the net effect
just averages out. Thus, we may expect maximumcurrent
when d ~ N 2i.e.,
, 8=iv2d=AIA. From (6.135) we then find that
Now let uslook at the heterodyne model. This time, the interpretation of
Fig. 6.47 is different. The dotted lines now do not represent a periodically
deflected beam, but rather two diffracted beams separated by A@=AIA.The
distance from the focus to the photodetector is denoted by E. The fields on
the photodetector are sketched in Fig. 6.48. For simplicity, the diffracted
spots are shown as squares of size 281x281 rather than as circles. The
undiffracted center spot0 with amplitudeEi partially overlapsthe upshifted
diffracted spot O+ with amplitude E+ and the downshifted spot 0-with
amplitude E-. The relative displacementsof the spots areIA4=1ivA. E+ and
E- are given by
(6.137)
where the peak phase shiftv (Raman-Nath parameter) is givenby
v=2kh
(6.138)
.
235
Selected Applications
Figure. 6.4% Heterodynecurrentcontributions on thesurface of the photodetector.
Wherever there is overlap with
current densities are givenby
the undiffracted beam, the heterodyne
Z h = aE,,EI, = j ( f ) l E o r
(6.139a)
(6.139b)
It is clear from (6.139)
that in the regionI, where all three beams overlap, the
plus and minus contributions cancel locally. In regions
IV and V, there is no
overlap; hence, no current. Finally, the output from I1 cancelsthat from I11
unless the latter region is shielded from the light by a mask. This is, however,
precisely what the knife edge K does. With K in place, the total current is
then equalto the output from I1 alone:
I, =Zk x ( 2 l O ) x ( ~ ) = - j ( f ) / E 0 ~
h
~2x1'~-
(6.140)
Realizing that the total power P is given by
P=IE01*(281)~
(6.141)
236
Chapter 6
we find with (6.138)and (6.140)
(6.142)
which is equivalent to (6.135), but, in addition, gives some information
about the phase of the current. The factor -j refers to the fact that the
maximum current is caused for maximum slope (under the focus) of the
SAW, a factwhich we did not take into account in the deflector model.
it is readily seen from Fig. 6.48 that
As for the maximum current IIplmax,
this occurs when area I1 reaches the maximum value of le. This happens
when the displacement lUL equals le, the same conditionwe derived before.
Hence, the two models are equivalent. However, only the heterodyning
model reveals what happens when d>N2, namely, a complete cutoff when
d=A. After this, i.e., whend>A, no more signal is generateddue tothe lack
of overlap of the heterodyning fields. In the near field, however, the beams
do overlap, and running amplitude fringe patterns occur periodically due
to
Talbot imaging [13]. A matching amplitude grating, placed in one of these
positions, followed by a photodetector, will again produce a heterodyne
current [151. A complete review of various kinds of heterodyne probing may
be found in Ref. 46.
6.13 SIGNAL PROCESSING
Acousto-optic devices are widely used for signal processing.
Most nontrivial
applications makeuse of the fact that, with the information signal
modulated on the acoustic carrier, the acou_sto-optic
cell acts as a temporary
storage medium that can be addressed in parallel by a wide optical beam.
The diffracted light then carries a moving “image”
of the modulation signal.
As was pointed out in Chapter 1, parallel processing for display purposes
was pioneered in the 1930s and further developed in the 1960s. More general
signal processing evolved gradually in the 1960s and 1970s. This, too, has
been reviewed in Chapter 1.
Acousto-optic signal processing may take placein the image plane or the
frequency plane of the associated optical system. Detectionof the resulting
signals may be by baseband or by heterodyning techniques. Integration for
higher resolution may be over space
or overtime.Finally, the second
dimension may be used to further increase frequency resolution, or to
display special timefrequency distributions.
In this section we shall concentrate on the fundamental mechanisms
common to all these techniques. An extensive discussion of architectures
may be found in Ref. 47.
Selected Applications
237
6.13.1 Image Plane Processing
Figure 6.49 shows how a Bragg cell may be thought of as containing a
moving image of a modulated sound beam. The electronic signal to the
transducer is represented by the analytic signal e‘(t) exp(in,t), where Q,
denotes the carrier frequency and e’(t) is the modulation signal. The real
physical signal at the transducer is given by Re[e‘(t) exp(ind)l. BY allowing
e’(t) to be complex, both phase and amplitude modulation can be modeled.
For general phase modulation,we write
e’(t)=expp(t)]
(6.143)
where, for an FM signal,
<p(t)=acos(sLmO
(6.144)
and for a chirp signal
(6.145)
Figure 6.49 Bragg cell carrying an amplitude image of the modulation signal
e(-x+ V&). (left) Upshifted interaction; (right) downshifted interaction. (From
Ref. 48.)
238
Chapter 6
For conventional AM modulation, we write
e'(t)= 1 +acos(R,t)
(6.146)
The analytic signal in the soundcell may be written as
So(t) = d[t -
$1
exp(jRct- jKx)
(6.147)
= e(x - Ct) exp(jR,t
- jKx)
as indicated in Fig. 6.49.
As discussed in Sec. 6.11, we may write for the phasor of the upshifted,
scattered light
El(t) 0~ jEe(-x+ Vst) exp(jL$t-jkpx)
(6.148)
where E is the incident light and p the nominal Bragg angle. Note that in
Fig. 6.49 (left), taken from Ref. 48, the scattered light is indicated by its
analytic signal, which includes the factor exp0.m).
In (6.148) the exponential part expresses a plane wave travelling upward at
an angle p, and having been upshifted in frequency by Rc.The first factor
indicates that the light carries an amplitude image of the modulation signal.
For downshifted interaction, the situation is as in Fig. 6.49 (right), and
the scattered light may be expressed as
El 0~
jEe*(-x+ Vst) exp(-jR,t+jkpx)
(6.149)
In the rest of the discussion, we will emphasize upshifted operation.
One of the simplest signal processing operations to perform is convolution
with a fixed mask pattern. The principle of operation is shown in Fig. 6.50.
As shown, an image of the sound amplitude in upshifted light is formed on
the amplitude mask g(x), i.e., the phasor amplitude distribution immediately
to the left of the mask is given by
En]-a jEe(x+ Vst) exp(jact+jkpx)
(6.150)
and immediately to the right of the mask the field is
Err,+ 0~ jEe(x+ Vst)g(x) exp(jR,t +jkpx)
(6.15 1)
A photodetector with pinhole aperture is situated in the focal plane of the
lens L3 and centered on the diffracted beam. The offset of the pinhole
Selected Applications
239
Sound in cell:
e(c"+\t ex pj(Q2 - K x I
transparency
current from
photo detector
Pinhole on axis
Signal to transducer
e'( t
in diffracted light
expj(ln, t
je ( x + V , t ) e x p [ j ( o t ~ , ) t + j k p x ]
Figure 6.50 Convolution of signal carried by Bragg cell with a fixed mask. (From
Ref. 48.)
corresponds to an angle - p in the Fourier transform plane. The amplitude
at the aperture Eapis therefore given by the inverse Fourier transform
evaluated at that angle:
(6.152)
Finally, the current i(t) out of the photodetector is proportional to the
square of the amplitude Eap.
Ih(x +
I-
i(t) =
l2
V't)g(x) dx
(6.153)
Note from Fig. 6.50 that in practice the function e(x+V,t) is limited by
the aperture to a range x= -D/2 to x=D/2. Thus the integration limits in
(6.153) should be replaced by these proper limits. It is readily seen that
(6.153) represents a finite convolution if we use a reflected coordinate system
for the mask plane so that g(x) is replaced by g ( - x ) . Then (6.153) may be
written as
Chapter 6
240
(6.154)
If no pinhole is used in front of the photodetector, as shown in Fig.6.51
(left), then the currentis given by
i(t) = jle(x +V’t)g(-x) dxrdx
(6.155)
d
Equation (6.155) represents a less useful output than (6.154), but even
(6.154) is limitedto positive values.
If the pinhole is not centered on the diffracted beam, but offset by an
angle as shown in Fig. 6.51 (right), then the current is given by
If, using (6.149), we applythe same reasoningto the downshifted light,we
find that (6.153) must be replaced by
all light collected
I
3
Current from
photodetector:
2
j ( e ( x t + x ) g ( x ) l dx
24 1
Selected
i(t)
a / j e * ( x + V s t ) gdx
(x)
(6.157)
4
with the pinhole aperture being placed in the center of the downshifted
beam, and g(x)referring to the nonreflected X coordinate.
The fact that the conjugate signal appears in(6.157)makes this a
correlation rather than a convolution. Here, too, we are limited to positive
values of the output. Moreover it is difficult to make g ( x ) a complex
function; phase-shifting masks arenot easy to construct.
To achieve a more general phaseand amplitude mask function,
we use the
heterodyning technique shown in Fig.
6.52. Here the undiffracted light is
not
stopped, but modified by an optical filter to appear in plane c (the former
mask plane) as reference field &(x). For the sake of simplicity, a simple
symbolic light path has been shown in Fig. 6.52.It goes without sayingthat
in actuality the path could be quite involved and the optical filter quite
complex. If we now observethe currentat Q, delivered by the photodetector
(without pinhole) as a result of the heterodyning of the reference field and
the Bragg diffracted field,we find
W
I ( Q , ) a E: ( x ) e ( x+ Vst)exp(jkj3x) dx
(6.158)
a
Sound in cell
eCrc+Vst )expj(fiCt-Kx)
Signal to
transducer
e'(t)exp(jilct)
yeterodyne current a t L?c:(E,(xle(Vst+x)exp(jk~x)d~
Reference field
for shaping
reference field
Image of sound cell
in diffracted light
j e (~+~,t)exp[j(wtSl,)ty'kpx]
Figure 6.52 Processing by heterodyning with reference field. (From Ref.48.)
242
Chapter 6
where I(aJis a time-varying complex amplitude (time-varying phasor).
It will be clearthat we have a wide choice forthe 'effective mask function"
geff(x) [compare with (6.153)]:
(6.159)
geff(X)=Er*(x)exp(ikDx)
Moreover, theoutput is not squared, but directly proportionalto the finite
correlation of e and gd. Thus, if for example, E,(X)=er(x) exp(ikpx) (i.e.,
propagating in the same direction as the upshifted beam), then
(6.160)
which represents atrue correlation.
Note, by the way, that a correlation can alwaysbechanged into a
convolution by the use of a reflected coordinateand a conjugate operation,
i.e., the convolution of h(x) and g(x) is equal to the correlation of h(x) and
g*(-).
To derive (6.158) we have implicitly applied the quasi theorem (Sec. 6.6),
because we superimposed the reference fieldand the upshifted field in plane
c, although the photodiode (assumed infinite) is located in plane
d, which is
an inverse Fourier transform plane ofc. According to the quasi theorem, it
makes no difference where we calculate the heterodyning current. Had we
calculated thiscurrent in plane d,we would have found
-
I(a,)a Is-'*[e(x)
exp(jk/3x)]s"[E~(x)]
(6.161)
dx'
S
where x' is the coordinate in plane d, proportional to the spatial frequency
f x . Using Parseval's theorem and the general properties of the Fourier
transform, it is readily shown that (6.161) is indeed equivalent to (6.158).
Thus, image plane processing and frequency plane processing are quite
equivalent, and in most cases the simple insertion an
ofappropriate lens will
transform one into the other. Nevertheless, it is instructive to consider
frequency plane processing in its own right. We will take up that subject in
the next section.
6.13.2 FrequencyPlaneProcessing
We will now analyze the operation of the Bragg cell of Fig. 6.49 in terms of
frequencies, i.e., we considerthe soundcell to be a frequency analyzer rather
than a signal delay line addressed
by light.
243
Selected
First we write the modulation signal e'(t) of Fig. 6.49 in terms of its
frequency spectrumS(Q,,,):
(6.162)
As showninFig.
6.53, alens L1 focuses both the diffracted and
undiffracted light in planeb. In particular, the light diffracted by the carrier
at Q, is focusedat P,corresponding to a deflection angleof p relative to the
horizontal, or 2p relative to the incident light. The lightat P,therefore, has
a frequencyw+Q, and anamplitude proportionalto the carrier, i.e.,to S(0).
Now, a frequency Qm of e'(t) will correspond to a frequency Qc+Qm of the
total signal in the sound cell. This frequency will diffract atlight
a somewhat
larger angle with respectto the incident light:
Q m
A( 2p) = -
kV,
Figure 6.53 Braggcellasspectrumanalyzer
(From Ref. 48.)
(6.163)
in frequencyPlaneProcessing.
244
Chapter 6
The light diffractedby this component will cometo a focus inQ such that
the distance PQ=xm’ is given by
(6.164)
where f is the focal length of lens L].Thus, in Q there exists a light
amplitude proportionto S(Qm) with a frequency ofo+Q,+Qm.
Continuing this reasoning, wefind that the entire spectrum of e f ( t )
exp(iR,t) is displayed around point P . Not only are the amplitudes S(Qm)
correctly represented by the light amplitudes, but there exists also a one-toone frequency correspondence. What is displayed around P is the actual
physical spectrum of e’(t) exPGact), each frequency being superimposed
upon the light frequencym.Thus, the phasor light field E(x, t ) along b may
now be written as
j:
E(x, t ) = j exp(jQct) S(Q,)S(x -x,,)
exp(flmt)dx,,
(6.165)
’,x and Qm is as in (6.164).
where the relation between
Expression (6.165) is approximate because no account has been taken of
the light spreading dueto the finite aperture sizeD. If this is done, the delta
function in(6.165) must be replaced by a sinc function:
6(x -x&)+sinc
nD(x - XL)
f;l
(6.166)
Now assume that a reference field&,(X) at frequency o is also incident at
plane b and that all the light is collectedby a photodetector, as indicated in
Fig. 6.54, which shows one particular component at Qm being focused in
plane b. It is readily seenthat the time-varying amplitude of the
total phasor
heterodyning current at Q, is given by
ea
I(Qc, t ) = JS(Qm)E;(x’)S(x*-x&) exp(ji2,t) dx’dx&
(6.167)
which may be writtenas
(6.168)
Selected Applications
Diffracted light field
jexP(i(w+S1~+S1,)t)S(S1;Zm)
245
Diffracted light field
je(-x+Vst)exp[j(W+fiJt+jk~x]
Figure 6.54 Relation between frequency plane (left) and image plane (right)
processing. (From Ref.48.)
where dxm’ a a,,,
[see (6.163)] and
(6.169)
Let now S‘(&) represent the frequency spectrumof l(Qc,
t), i.e.
(6.170)
Comparing (6.168) and (6.170) we find
S’(Qm)aS(&)Eh*(Qm)
(6.171)
Equation (6.171) sums up the essence of frequency plane processing: the
reference fieldEb*(Qm) acts asan arbitrary phase and amplitude filter for the
signal spectrumS(Qm).
Figure 6.55 shows a practical system for frequency plane processing[49].
Although the light paths are somewhat complicated, the reader
will have no
difficulty recognizing the essential elements of Figs. 6.53 and 6.54. The
reference field E h is generated by the box labelled “L.O. Optical System,”
which operates on the diffracted light in a separate optical path.
In a specific
Chapter 6
246
application, the optical system consists of a simple lens as shown in Fig.
6.56. This causes a curved reference field:
[
Eb = exp - j - ( x
2kR
-x,)
'1
(6.172)
where x
, denotes the Xcoordinate ofthe focus formed by the lens, a distance
R in front of the detector plane. The coordinate x is related to the temporal
Figure 6.55 A practical system for frequency plane processing. (From Ref.49.)
Local Oscil
I
Figure 6.56 Details of the reference field used in Fig. 6.55. (From Ref. 49.)
247
Selected Applications
sound frequenciesfprojected on this plane by lens L1 with focal length 11 in
the followingway:
(6.173)
Substituting (6.173) into (6.172), we find
(6.174)
Thus, the referencefieldisequivalent
to asimple parabolic filter or
dispersive delay line. The dispersion can be adjusted by changing R, i.e.,
varying the position of reference lens
L2.
Some results obtained with this device are shown in Fig. 6.57 in which
pulse stretching (a measure of dispersion) is plotted vs. radius of curvature
of the reference field.
“Theoretical
2
0
5
I
1.5
2
W
3
3.5
Experimental
4
4.5
5
Figure 6.57 Pulsebroadening vs. referencefieldradius
Ref. 49.)
R
(cm)
of curvature.(From
248
Chapter 6
6.13.3 TimeIntegratingProcessing
In an interesting variant upon the schemes outlined above, the illuminating
light is modulated and the integration is carried out over time rather than
space [50, 511. In the simplest example, the configuration of Fig. 6.50 is
slightly modified by re-imaging the field immediately to the right of the
mask (plane c) upon image plane d by means of lens Ls. This is shown in
Fig. 6.58. In image plane d the image of c falls upon an integrating
photodetector arrayA . Assume that the illuminating light amplitude is given
by Ein(t), then with (6.151) the field
E,+ is given by
E m + cc jEin(t)e(X+
V.J)g(x)
exp(iln,t+jkpx)
(6.175)
Ignoring imageinversion by the lens, the field at the integrating
photodetector is proportional to this. The local accumulated charge at the
array q(x) may be written as
(6.176)
\A
2f
Em+
Figure 6.58 Modification of the configuration of Fig. 6.50. The plane c is imaged
upon plane d.
Selected
249
where T is the integration time of the device, and the mask has been
removed [g(x)= l].
For conveniencewe shall write(6.176) as
(6.177)
where u =xlV..
It is clear that (6.177) represents a correlation with the correlation shift
given by -U. The values are given by the charges on the diodes located at
x=uV., and are read out of the diode array in a time-sequential fashion,
by
appropriate electronics.
As follows from (6.177) the correlationis necessarily restricted to positive
functions. However, negative functions are allowed by proper biasing. To
that purpose we choose
le(t+u)I2= V22[1+s2(t)]
where sl(t),s2(t)>
- 1. If it is
(6.179)
assumedthat
(6.180)
0
then (6.177) may be written as
T
q(u)aIs,(t)s*(t+u)dt+biasterm
(6.181)
0
which indicates atrue correlation of real functionsover a finite time interval.
of
It is also possibleto perform Fourier transforms with the configuration
Fig. 6.58. To achieve that, we choose s2 as a chirp and SI as the same chirp
modulated by the signals(t) to be analyzed:
s1(t)=s(t)
coss(uhnt+at2)
s2(t)=cos(uhnt+at2)
substituting (6.182) and (6.183) into (6.181), we find
(6.182)
(6.183)
Chapter 6
250
T
q(u) Is(r)cos(2w,r
0
+ w,u + 2ar2 +au2 +2atu) dt
(6.184)
i
+
0
s(r)cos(2aru + w,u
+au2)dt + bias
term
NOWwe choose ~ @ 1 2 ~ ~ t l mand
a x ,make 12autlmaxof the order of the
bandwidth of s(r). In that case the first term in(6.184) integrates out to zero
and q(u) may be written
T
exp(-jw,u-
I
juu2)fs(r)exp(-j2atu) dr
0
(6.185)
We recognize in the integral the finite Fourier transform S(0=2au) of s(t).
Let us write
exppfl2ua)l
S(2ua)=IS(2ua)l
(6.186)
then (6.184) becomes
Equation (6.187) indicates that a chirp-type fringe pattern cos[wmu+au2
-@(2au)] develops across the photodiode array with the fringe amplitude
modulated by IS(2au)l and its phase by 4(2au). With appropriate electronic
processing both amplitude and phase can be retrieved; hence, in this mode
of operation atrue Fourier transform can be obtained.
It is noteworthythat the frequency resolution6f= UT, i.e., it depends on
T rather than on the transit time z of the sound through the light beam,
as is
the case with space integrating configurations. The frequency range isbyset
Af=2~1ul=2a(D/V,)=2ag where D is the aperture of the sound cell. The
chirp rate mustbelimited
so thatthe maximum range 2aT of the
instantaneous frequency doesnot exceed the bandwidth B of the soundcell:
2aTSB. Putting everything together we iind that N=AflS f S B g where N is
the number of resolvable frequencies. This number is still the same
as in the
space integrating configuration;however, in the time integrating method, a
trade-off between Sf and Af is possible by varying
a and T.
So far, the Y dimension has not been used in any of the signal processors
discussed. A simple example of its use is the extension of the Fourier
transformer just discussed to parallel operation [52, 531. Figure 6.59 shows
such an arrangement taken fromRef. 53.
25 1
Selected Applications
2-0 TIME INTEGRATWG
DETECTOR
LED
- 0
PI
LI
L2
L3
L4
p3
h(0
TOP VIEW
SIDE VIEW
Figure 6.59 Multichannel time-integratingspectrum analyzer. (From Ref. 53.)
In the horizontal plane
(X)the operationis much as described above, with
s(t)=sn(t)modulating the chirp of the incident light. The subscript n refers
to an array of independent, light-emitting diodes (LEDs) arranged in the Y
direction. As shown in the top view, lens L2 collimates all light sources
horizontally to wide beams for illumination of the sound cell in plane P2.
Lenses L2 and L3 image the sound cell horizontally upon plane P3, which
contains a two-dimensional arrayof time integrating detectors.
The side view shows how in the vertical direction ( Y ) ,lens L1 collimates
the individual light source
to narrow beams passing through the small height
of the sound cell. LensL4 re-images theLEDs in the vertical direction upon
the arrayof detectors.
It will be clear that this configuration performs a Fourier transform
operation on a numberof signals sn(t)in parallel.
Instead of using the Y axis for parallel operation, a second sound cell may
be aligned along it, as shown in Fig. 6.60, also taken from Ref. 53. Careful
252
Chapter 6
LI
L2
PO
PI
p2
p3
Figure 6.60 Time-integrating triple product processor. (From Ref. 53.)
analysis will show that in this configuration the signal a(t) from the light
source LED will be multiplied by b(t-xlV,)=b(t-ut) of the horizontal
sound cell and c(t-ylVs)=c(t-u2) of the vertical sound cell. (No bias
voltages are shown in the drawing,but it is assumed they have been applied
appropriately.) The integrated chargeq(u1,u2) may be written as
T
0
A device characterized by (6.188) is called a triple product processor and
may be used for many different purposes
[54].
In one interesting application [53] the signals a, b, and c are chosen as
follows:
a(t)=s(t) cos(oht+at2)
(6.189)
Selected
253
b( t) =S( t )
(6.190)
c(t)=cos(aht+at2)
(6.191)
By doing the same kind of analysis as we did on (6. 18l), and making
suitable assumptions aboutah, we find
q(u) a cos[ahu+au2-O(2au)]lF(2au)(
(6.192)
where IF1 and 8 are the magnitude and phase of the so-called ambiguity
function
(6.193)
Notice that F displays correlations along U I (Xdirection) and frequencies
along 242 ( Y direction).
It is also possible to display coarse frequency resolution along UI and fine
frequency resolution alongut. For this the signalsb and c in (6.188) have to
be a fast chirpand a slow chirp, respectively [55].
For more detailed information about acousto-optic processors for
time-frequency representations, see Ref 56. Bi-spectral processing is treated
in Ref. 57, and a processor for synthetic aperture radar is discussed in Ref.
58.
REFERENCES
1. Gottlieb, M., Ireland, C. L. M., and Ley, J. M. Electro-optic and Acousto-Optic
Scanning and Deflection,Marcel Dekker, New York
(1983).
2. Korpel, A., ZEEE Spectrum, 5,45 (1968).
3. Gordon, E. I., Proc ZEEE, 54, 1391 (1966).
4. Chu, R.S., Kong, J. A., and Tamir, T., J; Opt. SOCAm., 67, 1555 (1977).
5. Chu, R.S. and Kong, J. A., .lOpt. SOCAm., 70, 1 (1980).
6. Magdich, L.N. and Molchanov, V.Y., .lOpt. Spectrosc., 42,299 (1977).
7 . Korpel, A., Adler, R.,and Desmares, P., Paper 11.5, International Electron
Devices Meeting, Washington, D.C.(1965).
8. Korpel, A., U.S. Patent 3,424,906, Dec. 30, 1965.
9. Korpel, A. Adler, R.,Desmares, P., and Watson, W. App. Opt., 5, 1667 (1966).
10. Goutzoulis, A. P. and Pape, D. R.,eds. Desing and Fabrication of Acousto-optic
Devices, Marcel Dekker, New York(1994).
11. Korpel, A., Adler, R.,Desmares, P., and Watson, W., ZEEE .l Quantum
Electron., QE-I, 60 (1965).
254
Chapter 6
12. Korpel, A., and Whitman, R.L., Appl. Opt., 8, 1577 (1969).
13. Goodman, J. W., Introduction to Fourier Optics, McGraw-Hill, New York (1968).
14. Gradhsteyn, I. S. and Ryzhik, 1. M., Table of Integrals, Series and Products,
Academic Press, New York(1965).
15. Korpel, A., Laub, L. J.,and Sievering, H.C., App. Phys. Lett., 10,295 (1967).
16. Korpel, A. and Whitman, R.L., .lOpt. SOCAm., 8,1577 (1969).
17. Whitman, R. L. Laub, L. J.,and Bates, W. J., ZEEETrans. Sonics Ultrasonics,
SU-15, 186 (1968).
18. Laub, L. J., “Apparatus and methods for scanning phase profilometry,” U.S.
Patent 3,796,495,March 12, 1974.
19. Laub, L. J., Paper ThB16,Meeting Opt. Soc. Am., New York, Spring 1972.
20. Hecht, D. L., IEEE Trans., SU-24, 7 (1977).
21. Korpel, A. App. Phys. Lett., 9,425 (1966).
22. Korpel, A. ZEEE Trans., SU-15, 153 (1968).
23. Korpel, A. “Acousto-Optics,” in Applied Solid State Science, Vol. 3 (R. Wolfe,
ed.), Academic Press, New York(1972).
24. Korpel, A. “Eikonal Theory of Bragg Diffraction Imaging,” in Acoustical
Holography, Vol. 2 (A. F. Metherell and L. Larmore, eds), Plenum, New York
(1970).
25. Korpel, A. and Young, A., Acta Polytechnica Scandinavica, Applied Phys., 150,
221 (1985).
26. Szengessi, 0.I., Proc ZEEE, 60, 1461 (1972).
27. Korpel, A. Proc SPIE, 232,90 (1980).
28. Korpel, A. Znt. 1 Nondest. Test., I , 337 (1970).
29. Korpel, A. 1 Ac. SOC Am., 49, 1059 (1971).
30. Smith, R. A., Wade, G., Powers, J.,and Landrey, C. J., 1 Ac SOCAm., 49,1062
(1971).
31. Korpel, A. “OpticalImaging of UltrasonicFields by Acoustic Bragg
Diffraction,” Ph.D Thesis, Universityof Delft, The Netherlands (1969).
32. Korpel, A. and Mehrl, D. J., App. Opt., 28,43534359(1989).
33. Korpel, A. and Mehrl, D.J., Proc 1988 ZEEE UltrasonicsSymposium, pp.
735-737 (1988).
34. Mehrl, D.J. Liu, Z. C., and Korpel, A., App. Opt., 32 5112-5118 (1993).
35. Haykin, S. Communication Systems, Wiley, New York (1978).
36. Korpel, A., Kessler, L. W., and Ahmed, M., .lAc Soc. Am., 51, 1582 (1972).
37. Toepler, A. Poggendofl7s Ann., 131,33,180 (1867).
38. Chen, Y. M., Ph.D Thesis, University of Iowa (1994).
39. Mehrl, D. Korpel, A., and Bridge, W. App. Opt., 29,47664771(1990).
40. Korpel, A., Yu, T.T.,Snyder, H. S. and Chen, Y M., .l Opt. Soc Am., II(A),
2657-2663 (1994).
41. Durnin, J. 1 Opt. SOCAm., 4(A), 651 (1987).
42. Korpel, A., Mehrl, D., and Lin, H.H., Proc 1987IEEE Ultrasonics Symposium,
pp. 515-518 (1987).
43. Acoustic Surface Waves (A. A. Oliner, ed.), Springer, New York(1978).
44. Ippen, E. P., Proc ZEEE, 55,248 (1967).
Selected Applications
255
45. Yariv, A, Optical Electronics,Third Ed., Holt, Rinehart andWinston, New York
(1985).
46. Whitman, R. L. and Korpel, A. Appl. Opt., 8, 1567 (1969).
47. Berg, N. J. and Pellegrino, J. M., eds, Acousto-Optic Signal Processing, Marcel
Dekker, Inc., New York, 1995.
48. Korpel, A. “Acoustic-optic Signal Processing”Optical Information Processing
E. Nesterikhin and G. W. Stroke, eds.), Plenum Press, New York, p.171 (1976).
49. Whitman, R., Korpel, A., and Lotsoff, S., “Application of Acoustic Bragg
Diffraction to OpticalProcessing Techniques,” Proc.Symp.
Mod. Opt.,
Polytechnic Press, Brooklyn, New York,
p. 243 (1967).
50. Sprague, R. and Koliopoulis C., App. Opt., IS, 89 (1976).
51. Sprague, R. A., Opt. Eng., 16,467 (1977).
52. Casasent, D. and Psaltis, D., App. Opt., 19,2034 (1980).
53. Psaltis, D. Proc. ZEEE, 72,962 (1984).
54. Lohmann, A. Proc. ZEEE, 72,989 (1984).
55. Lee, J. N., and Vanderlugt, A., Proc ZEEE, 77,1528 (1989).
56. Nathale, R. A., Lee, J. N., Robinson, E. L., and Szu, H. H., Opt. Lett., 8, 166
(1983).
57. Kauderer, M. H., Backer, M. F., and Powers, E. J., App. Opt., 28,627 (1989).
58. Psaltis, D., and Wagner, K., Opt. Eng, 21 ,822 (1982).
v.
This Page Intentionally Left Blank
7
Related Fields andMaterials
In this chapter we want to ultimately arrive at a more precise description of
the intrinsic acousto-optic properties of liquids and solids than the
simplified one we have used so far. Before doingthat, however, we shall give
a brief review of acoustics and anisotropic optics in order to provide the
necessary foundation.
7.1
ACOUSTICS
Although the word “acoustics” originally was synonymous with “audible
acoustics,” the technical meaning has shifted graduallyto the point that the
term now also connotes ultrasonic propagation in liquidsand solids in the
frequency range from l to 4 GHz. In liquids, an acoustic wave consists of
density variationsAplpo (called condensation) brought about by hydrostatic
pressure [l]. There exists a simple relationship between excess pressure
p and
condensation S:
Sound waves in liquids are longitudinal,i.e., the particle displacement, say5,
is in the direction of propagation, say X. The situation for an infinitesimal
257
258
Chapter 7
cube of liquid is shown in Fig.
7.l(a). Applying Newton's second law yields
where p has been approximated by po. Also, the fractional density change
equals the fractional change in displacement:
Combining (7.1),
(7.2)
and
, (7.3)gives the wave equation
with travelling wavesolutions of the form
where the sound velocity Vis given by
Note from (7.1) that A, the bulk modulus, signifies stiffness, i.e., resistance
against compression.
A longitudinal wave in an isotropic solid or along certain specific
directions in a crystal is characterized by equations similar to (7.1-7.6),
with (7.1)written as
where, in comparison with the parameters pertaining to liquid, 0 1 1 = -p,
611=-s=d@Sx, and CII=A.The subscript l1 in 0 denotes a stress in the
1(X) direction acting on a plane perpendicularto the 1(X) axis. Similarly, 611
denotes a fractional extension (tensile strain) in the 1 direction of a line
element along the same direction. The parameter c11 is called a stiffness
constant. Alongother directions, the subscripts 22 and 33 are used. It will be
clear that in an isotropic mediumcll=c22=cg3.
Related Fields and Materials
259
Figure 7.1 (a) Compression of elementary cube in longitudinal wave motion. (b)
Distortion and sideways displacement
of cube by shear wave.
Chapter 7
260
It is the simple kind of wave considered so far that we have used in our
acousto-optic interactions. Beforewe discuss more complicated wave
motions, it is of practical interestto derive some simple relations concerning
power flow and particle velocity. With (7.3) the particle velocity u=dglSt
may be written as
U = - - jd
SdX
dt
so that with (7.5)
U = U0 cos(Qt-cfi)
(7.9)
where
U, =sa"=s,v
K
(7.10)
In atypical
isotropic solid such as glass, the strain is limited to
approximately
so that, with V d x lo3 ds,
particlevelocities are limited
to -0.4 d s .
The total kinetic and potential energy stored per unit volume in thewave
is given by
J
W = O.5PoUo2 -
m3
(7.1 1)
and is related to the power flowor intensity I,( ~ / m *
as)follows:
W="1
V
(7.12)
[Equation (7.12) is most easily remembered by thinking of,awaveof
intensity, I, filling up a cube of1 X 1X 1 meter with energyW during the time
(1/V) it takes thewave to traverse the cube.] Combining(7.10-8.12) yields
I , = 0.5povu;(7.13)
= 0.5pov3s;
The quantityro V is called the mechanical impedance
of the medium. If(7.6)
is substitutedinto (7.13), we find with (7.1) and writing V= V / V
26 1
Related Fieldsand Materials
OSP;
I, =POV
(7.14)
or the equivalent expression for solids
0.5T;
I, =POV
(7.15)
where POand TOdenote peak values of pressure
and stress.
Note that (7.13-7.1 5). are analogousto the electromagnetic case withpoV
taking the place of the intrinsic impedanceq=
U0 substituting forHo,
and TOor POsubstituting forEO.From (7.13-7.15) it follows that
m,
TO(or Po)=po VU0
(7.16)
The analogy is very useful; it enables, for instance, the calculation of
(perpendicular incidence) reflection coefficients between different mediaby
using the corresponding electromagnetic relations.
Equation (7.13) is of great importance in acousto-optic applications
as SO
is directly proportional to An and hence to the Raman-Nath parameter
v=kAnL. We will discuss that in more detail later, but it will be obvious
already from (7.13)that media withlow Vwill, in general, require less power
to achieve a certain SO,and hence An. Such media are therefore more
efficient from an acousto-optic pointof view. Comparing, for instance, glass
and water, we find that typically Vglass~3Vwater,and also pglass~3.5&ater, SO
that Is,glassZlOOIs,water for equal strain (condensation). Unfortunately,
acoustic losses (which generally increase with frequency) are much higher in
liquids than in solids, so that the improved efficiency can only be realizedat
relatively low frequencies, e.g.,fc20-50 MHz for water.
In a longitudinal (dilatational) wave, the elementary cube is made to
change its volume only,not its essential shape as is evident from Fig.
7.l(a).
In shear (distortional) waves, the cube changes its shape only. This is
illustrated in Fig. 7.l(b), which shows a distorted and displaced cube. Note
that the displacement q is perpendicular to the direction of propagation X.
The active force actingon the element in theY direction is the result
of shear
forces 0 2 1 acting to the left and the right
(7.17)
Note that the subscript21 indicates a force acting in the Y)
2( direction on a
Chapter 7
262
plane perpendicular to the 1(X) direction. Note also that, with respect to
Fig. 7.1, the usual condition of linear vibration holds, i.e., 012=021 [2]. The
total distortion anglea+P=&=& is called the engineering shearstrain (in
most nonengineering conventions, half the total angle is called the shear
strain and denoted &I), and the following relation applies:
021=c66&1
(7.18)
where C66 (the notation will be clear later) is a rigidity constant.It is readily
seen from Fig.7.l(b) that, for small anglesa and p,
(7.19)
It should be noted that the quantity a-& called the rotation, plays a
negligible role in acoustical dynamics[2].
In what follows, we shall often denote XJXI, ~ 3 x 2 ZJX~,
,
g+, 7 7 3 5 2 ,
From (7.17-7.19) we find, realizingthat d2t/dydx=0,
c+&.
(7.20)
leading to shear or transverse waves of the kind
(7.21)
where V,, the shear velocity, is given by
(7.22)
As for particle velocity and powerflow, relations analogous to (7.8-7.12)
apply, whereas formal analogies
to the electromagnetic field may
be drawnin
the sameway as before.
As far as acousto-optics is concerned, it is important to note that
longitudinal wave motion causes changes in the densityof dipoles and may
cause changes in the orientation of dipoles, whereas shear
waves only affect
the orientation. In liquids, where hydrostatic pressure is the active agent,
only density changes .are involved,
apart from peculiar effects due to
streaming of the liquid,to be discussed later.
Related Fields and Materials
263
It will be clear that in the general case of a crystal, the stress-strain
relations are more complicated than (7.7) and (7.10). Internal oriented
bonding forces will cause complex cross-coupling of motions such that the
generalized form of Hooke’s law is given by [3]
(7.23)
An often usedconstant is Young’s modulus l?
(7.25)
which represents the stiffness of a cube free to contract laterally when
extended longitudinally; it determines the velocity of propagation in a rod
thin compared to the wavelength. Also frequently used is Poisson’s ratio v
that is the ratio of lateral contraction to longitudinal extension in theabove
example:
v=-
a
+ P)
With respect to the rather cumbersome notation of (7.23), it should be
remarked that a more efficient abbreviated notation is also in use and is
264
Chapter 7
so that (7.23) becomes
where repeated indices are to be summed over. The cgs are generally called
stiffness constants.The inverse relationto (7.28) is
where the sus are called compliance constants. An extensive list of elastic
constants may be found in Ref.5.
To analyze acoustic wave propagation in crystals, the matrix relations
(7.28) have to be used rather than the simple scalar relations we have used
before. This makes for a much more complex situation. For an arbitrary
propagation direction in the crystal, there exist generally two quasi-shear
waves (particle motion not quite transverse) and one quasi-longitudinal
wave (particle motion not quite longitudinal) [2]. For specific directions
along the crystal axes, the waves may be proper longitudinal and shear
waves. In an isotropic solid, there exist in every direction one longitudinal
wave and infinitely many shear waves(i.e., the transverse direction of
particle motion or polarization is arbitrary). In liquids, only longitudinal
waves can propagate.
The three types of wave motion possible in an arbitrary direction in a
crystal havemutually
orthogonal displacements and different phase
velocities. The latter dependon the direction of propagation.It is customary
to show this graphically by the construction of a so-called slowness surface
that essentially marks the endpoints of wave vectors. (As K=WV, a simple
scaling I K / f l l = 1/V transforms the locusof wave vectors into such a slowness
surface.) An example is given in Fig. 7.2 that shows the cross sections of
three slowness surfaces with the cubeface
of a GaAs crystal [2]
(piezoelectric effects ignored). The three curves shown refer to the three
265
Related Fieldsand Materials
PURE SHEAR,
QUASILONGITUDINAL
[1001
Figure 7.2 Slowness curves in cube face of GaAs.(Adapted from Ref. 2.)
possible modes discussed before, in this case two quasi-waves and one pure
wave. It should be noted that the amount of diffraction (spreading) of a
beam propagating in a particular medium is proportional
to the curvatureof
the slowness surface. Thus, the shear
wave “beam” represented by point B in
Fig. 7.2 exhibits the same amount of diffraction as in the isotropic case
because it lies on a circular slowness surface. On the other hand, the quasilongitudinal wave at A exhibits less diffractionthan would correspond to an
isotropic circle throughthat point. By the same token, the quasi-shearwave
at C exhibits more diffraction. It is clear that such effects are of great
importance in acousto-optic applications, as they may well determine the
interaction efficiency and useful length of a soundcell.
Another important effect of anisotropy isthat, in general, the direction of
wave propagation isnot the sameas that of energy propagation.The latter is
always perpendicular to the slowness surface, whereas the former is in the
direction of the radius. Thus, in point D of Fig. 7.2, S, is the propagation
direction (wave normal), whereas se designates the energy flow (Poynting
266
Chapter 7
vector). The severe “beam walk-off’ that this effect may cause is strikingly
illustrated in Fig. 7.3 [6]that shows a quasi-longitudinal wave in quartz
propagating at a slant from the generating transducer at left bottom. Note
that the wavefronts arenot parallel to the direction (se) the beam as a whole
is travelling in.
In addition to the bulk waves discussed so far, there also exists a surface
acoustic wave [7]. This is a mixed shear and longitudinal evanescent wave
that clings to the surface of a solid.Its amplitude decays rapidly (i.e., within
one wavelength) away from the surface, and it has generally the lowest
velocities of allacoustic waves. Recently,it has become of importance alsoto
acousto-optics [8], where it is used to interact with optical thin film guided
waves in planar configurations.
As in electromagnetics, it is possible
to guide acousticwaves in plates, rods
[9], and thin surface strips [8]. The analysis of such structures is, however,
much more complicated than inelectromagnetics,because
of the
phenomenon of mode conversionat boundaries.
Figure 7.3 Quasi-longitudinal wave in quartz. (From Ref. 6.)
Related
7.2
and Materials
267
OPTICAL ANISOTROPY
As in acoustics, the crystal structure imposes severe constraints on the
possible modes of propagation. Thus, optical propagation in an arbitrary
direction is, in general, only possible for two plane waves with well-defined
directions of B, H, E, and D. Although B=pH, because of the absence of
magnetic anisotropiceffects, the relation betweenD and E is more involved:
Dj=qiEi
i, j = 1,2, 3
(7.30)
where again summation is over repeated indices.
The consequence of (7.30) is that, for the two possible plane waves
referred to above, E and D are not in the same direction. It may be shown
[lo] that the wave normal S, (i.e., the direction of the wave vector k) is
perpendicular to the plane of D and H, whereas the direction se of the
Poynting vector is perpendicular to the plane of E and H. The general
situation is illustrated in the diagram of Fig. 7.4. The two possible wave
motions in the direction S, do have orthogonal directions of D but not
necessarily of E. Also, their phase velocities are, in general, different.As in
acoustics, it is possible to construct “slownesssurfaces’’(calledindex
surfaces in optics) consistingof the endpoints of wave vectors whose length
depends on direction. At any point, the direction of the Poynting vector is
perpendicular to the index surface. It will be obvious that such wave vector
Figure 7.4 Field vector configuration for anisotropic propagation.
268
Chapter 7
surfaces are of great importance in establishing Bragg conditions in acoustooptics. We will return to this later.
There exists a convenient construction, due to Fresnel, for finding phase
velocities and polarizations of D of the two waves possible in any given
direction. It depends on the fact that,
by choosing a coordinate system along
the principal dielectric axes of the crystal, eq.(7.30) maybe expressed in the
simple form
D1
DF
IEI
(7.31a)
~22E2
(7.31b)
The construction referred to above now consists of the following. First,
construct the ellipsoid
(7.32)
where n?I=&iII&, n22=~22/~0,
and n33=~33/~o.
The ellipsoid defined by (7.32)
and shown in Fig. 7 4 a ) is called optical indicatrix, index ellipsoid, or
ellipsoid of wave normals.
Next, consider a wave propagating in the direction S, and construct a
plane through 0 perpendicular to S, as shown in Fig. 7.5(b). The
intersection of this plane with the index ellipsoid is the ellipse L. The
directions of the two possible dielectric displacements D, and Db now
coincide with the major and minor axes of L. The appropriate refractive
indices for the two allowed plane waves are given by the lengths of the two
semi-axis. Thus, wave “a” is characterized by na=a, and wave “6” by nb=b.
Both k, and kb are, of course, in the direction of S., In general, there exist
two directions spl and sPz for which the intersection L degenerates into a
circle. These directions are called the optical axes of the crystal. A wave
propagating in the direction of an optical axis is degenerate, i.e., its D
polarization is arbitrary in the plane of L, with no effect on propagation
velocity. In cubic crystals and amorphous media such as glass and plastic,
rill =n22=n33, and the ellipsoid degenerates into a sphere. Each direction
forms an optical axis, and the medium is called isotropic.It is media of this
kind that we have considered so far. When two of the ellipsoid’s semi-axis
are equal, say n11=n22=n0, n33=ne, the medium is called uniaxial. The
optical axis is in the 3 direction,
no is called the ordinary refractive index,
ne
the extraordinary refractive index. The corresponding waves are called
269
Related Fieldsand Materials
zI
a
Figure 7.5 (a)Indexellipsoid.
characteristics in direction.,S
b
(b) Constructionforfindingpropagation
ordinary and extraordinary waves. If no<n,, the ordinarywave travels faster,
and the crystal is called positive uniaxial; if no>ne, it is called negative
uniaxial. Because, as we will see later, uniaxial crystals have rather unique
acousto-optic properties, we will consider them in some more detail here.
Figure 7.6(a) shows a positive uniaxial crystal with a wave propagating in
the S, direction in the Y-Z plane, at an angle 8 relative to the optical axis. It
is seen readily that of the two allowed waves, the ordinary one (a) is always
subject to the same refractive index na=no. The extraordinary wave (b),
however, sees a refractive index nb that depends on 8. Let us now plot the
loci of na and nb when 8 varies from0 to 2z. These are shown in Fig. 7.6(b).
As na is constant, it lies on the circlena(8)=no, while it is readily proved[l l]
that nb(8) is defined bythe ellipse
(7.33)
It will be clearthat curves no(@ and nb(e) are the cross sections with Y-Z
the
plane of the index surfaces discussed before. Multiplying n, and nb with ko
Chapter 7
270
IZ
a
b
Figure 7.6 (a) Wave propagation in direction S, in the YZ plane of positive
uniaxial crystal. (b) Angular dependence of ordinary and extraordinary refractive
index on angle of propagation.
results in two curves giving the loci ofwave
the vectors k,and kb. As we will
see later, such curves are of great convenience in the analysis of anisotropic
acousto-optic diffraction.
7.3
ELASTO-OPTICS
So far we have assumedthat there exists a simple scalar relation between
the
refractive index variationAn and the sound amplitudeS:
A?l=c'S
(7.34)
To the extent that S stands for the condensation in a liquid, thisis, in fact,
quite true. In that case, C' may be found from the Lorentz-Lorenz relation
v21
n2 -1
ANa
n2 +2
=
where A is a constant, N the density of dipoles,
(7.35)
and a the molecular
ields
Related
and Materials
27 1
polarizability. Following Pinnow [13], we differentiate (7.35) with respect to
N.Taking into account that N a p,we find
(7.36)
where
(7.38)
The condensation S with which we have characterized a longitudinal sound
wave is related to the density changeby
(7.39)
with (7.34), (7.36), and (7.39), letting Ap+dp, h 4 n , we find
(7.40)
It is clear that (7.40) represents both the effect of a change in density of
dipoles as well as the change in molecular polarizability due
to compression
as expressed by the factor h.In liquids the latter effect is small,and hence
C' is approximately given by (7.40) withh = O . For water (n= 1.33), we find
C'=O.31; for mostother liquids, the value is closeto this. In isotropic solids,
the factor Ilo is not always negligible; more important, however, is the fact
that in such solids, AO ispolarization-sensitive: the mediumbecomes
birefringent under stress, This is, of course, plausible as the dipoles tend to
line up parallel to the strain.
In crystals, the situation is far more complicated. In the first place, the
dipoles tend to move along constrained directions whenever a strain is
applied. This means that in regard to the factor A, the polarization
sensitivity is a complicated (although linear) function of the various strains,
and the latter themselves, according to Sec. 7.1, depend on the acoustic
propagation direction in a nontrivial way. Also, pure shear waves do not
cause any density variation at all; any effect they exert must come about
through a direct (anisotropic) change in the polarizability, In view of the
above, it is clear that the simple relation (7.36) is no longer sufficient in the
general case.It must be replacedby a tensor relationthat expresses the effect
Cha#ter 7
272
of dilatational or distortional strain in any directiori on the propagation of
light of any polarization in any other direction. The vehicle for doing so is
the strain-induced deformation of the index ellipsoid discussed in Sec. 7.2.
In its most general form,
this ellipsoid may be writtenas
(7.41)
where
XI=X,
m=y, x3=z,
n l = h l l , n2=n22, n3=n33, n4=n23,
ns=n31, and
n6=n12.
As we have seen before,for the unperturbed ellipsoid
(7.32) when referred
to the principal dielectricaxes, the coefficients, n4, n5, and n6 vanish.
However, upon the application of strain, they may acquire a finite value. By
the same token, the coefficientsnl, n2, and n3 may change in value. We may
thus say that strains both distort the ellipsoid and rotate it in space. All this
may be succinctly expressed bythe relations
(7.42)
where the repeated index convention applies,and the S,s are the abbreviated
strains as defined in Sec.7.1. The material constantsp are called strain-optic
coefficients. The number of independent p s varies from 36 in a triclinic
crystal to three or four in a cubic crystal. Inan isotropic solid, only twop s
are neededto describe all phenomena:
(7.43)
with allother coefficients zero.
In actual fact, not even(7.42)issufficientlygeneral
to describe all
phenomena. Nelsonand Lax [l 5,161 have shownthat not only shear but also
( l / n 2 ) if the mediumis optically
local rotation maycausechangesin
birefringent. (With respect to Fig. 7.l(b), a shear is defined as a+P, a
rotation as a-P, the latter signifying a rotation of the centerpoint of the
distorted cube.) The necessary additional parameters are directly related to
the coefficientsn1-n6 [15,17].
It is of interest to consider an application of (7.42). Let us assume that in
an amorphous medium, a longitudinal sound
wave is propagating in the+X
direction [Fig.7.7(a)], so that the appropriate strain is denoted by
r?~~ldxl=Sl.
Let the light propagate nominally in the + Z direction and be
273
Related Fields and Materials
a
Figure 7.7 (a) A longitudinal wavepropagating
propagating alongX with particle motionin Y.
along X. (b) Shearwave
polarized along theX axis. It is evident that we have to consider changesin
i.e.,pll is the appropriate coefficient. We find with (7.42)
nl due to SI,
If the light is polarized in the Y direction, the relevant change is in
the appropriate coefficient p21:
n2
and
274
Chapter 7
An2
=- O S ~ : ~ , , S ,
(7.45)
Evidently, the acousto-optic scattering coefficient will depend
on the optical
polarization. This makespossible an interesting effect.If the light is
polarized in the direction at 45" to the X axis, i.e., X in Fig. 7.7(a), it is
readily seen that the effective scattering is determinedby
However, if p1lfp21, light will now also be scattered into the orthogonal
polarization Y' with an effective strain-optic coefficient [l71
Both (7.46) and (7.47) may be derived easily by considering polarization
components along Y and 2.
An example of pure orthogonal scattering is shown in Fig. 7.7(b). Here, a
shear wave propagates in the +X direction with particle motion along Y
(i.e., d&/dXl+a&dX2=S6). As the light propagates in +Z and hence is
polarized in theX-Y plane, the affected coefficients of the optical indicatrix
are n1, nz, and n6 with corresponding strain optic coefficients p16, p26, and
p a . According to (7.43), the first twop s vanish in an amorphoussolid. The
coefficient p6 creates a term 2 A ( l / n 6 ) x 1 x 2 = 2 p ~ S a x ~inx the
~ index ellipsoid
of (7.41). It may be shown that this changes the indicatrix from a sphere
(amorphous solid) to an ellipsoid with axes along X ' , Y', and 2 [l 13. The
change in lengthof the initially equalX and Y' semi-axis is given by
If now the light is polarized in either the
X or Y direction, the scattered light
with polarization in the same direction vanishes because of the opposite
signs of A n x * and Any,. Light is, however, scattered with polarization in the
orthogonal direction. Its amplitude is determinedby an effective strain-optic
coefficientp66=0.5(pll-p12). Note that a shear wave with particle motion in
the Z direction (i.e., d&/dx1+d&/dx3=S5) wouldaffect n l , n2, and n6
through p15, p25, and p65), all of which are zero according to (7.43). Such a
shear wave would thus not cause any scattering. This is somewhat plausible
when it is realized that in that case, the distorted rectanglesof Fig. 7.l(b) lie
in the X-2 plane and not in the plane of optical polarization.
Related
275
and Materials
From the examples above, it will be clear that the analysis of the general
interaction case can be quite complicated, with the effective strain-optic
coefficient usually a linear combination of the ones defined by (7.42) [18].
Tabulated values of the latter coefficients may be found in many review
articles and application-oriented books [14,17-201.
For actual applications, it is convenient to use a figure of merit that
Pd
indicates the relative efficiency of a material in terms of diffracted power
per unit sound intensityI,. For a weak interaction,we have
pda
(&I2
(7.50)
and, from (7.44) and (7.49, ignoring tensor aspects,
n6p2SZ
(7.51)
With (7.13) we find
(7.52)
where M2 is a commonly used figure of merit [21] that lends itself to easy
experimental evaluation [21,22]. Other figures of merit, M I and M3, are
more specifically relatedto special devices such as light deflectors [18,23].
In this book, wehave throughout used simple constants C' (3.6), (3.10)
and C (3.76) to denote the relation between An and S. It will be seen with
(7.44) that
C = -0.5n3p
(7.53)
C= -n2p
(7.54)
where p is the one opto-elastic
constant characterizing liquids. Some liquids,
however, exhibit dynamic birefringence under the action of a sound field
[24].Wellabove
the relaxation frequency of the liquid, it acts as an
amorphous solid in the sense
that it exhibitstwo coefficientspi 1 and p21 such
that
2&
PI1 - P21 =n37
(7.56)
where 'tis the relaxation timeand Ssthe so-called Sadronconstant [25].
Chapter 7
276
REFERENCES
1. Thurston, R. N., “Wave Propagation in Fluids and Normal Solids,” in Physical
Acoustics, Vol. IA (W. F! Mason, ed.), Academic Press, New York
(1964).
2. Auld, B. A., Acoustic Fields and Waves in Solids,Wiley, New York (1973).
3. Kolsky, H.,Stress Waves in Solids, Dover, New York (1963).
4. “Standards on Piezoelectric Crystals,” Proc IRE, 3 7 1391 (1949).
5. Randolf-BornsteinTables, NewSeries(Hellwege, K. H. and Hellwege,A. M.,
eda), Springer Verlag, New York(1979).
6. Staudt, J. H. and Cook, B. D., J: Acoust Soc. Am., 41: 1547 (1967).
7. Acoustic Surface Waves(Oliner, A. A. ed.), Springer Verlag, New York
(1978).
8. Tsai, C. S., ZEEE Trans., CAS-26 1072 (1979).
9. Meeker, T. R. and Meitzler, A. H., “Guided Wave Propagation in Elongated
10.
11.
12.
13.
14.
Cylinderand Plates,” in PhysicalAcoustics, Vol. IA(Mason, W.
P.
ed.),
Academic Press, New York(1964).
Born, M. and Wolf, E., Principles of Optics, Pergamon, New York (1965).
Yariv, A., Optical Electronics, Holt, Rinehartand Winston, New York (1985).
Von Hippel, A. R., Dielectrics and Waves, Dover, New York (1956).
Pinnow, D. A., ZEEEJ: Quant. Electron., QE-6:223 (1970).
Nye, J. F., Physical Properties of Crystals, Oxford University Press, New York
(1 960).
15. Nelson, D. F. and Lax, M., Phys. Rev. Lett., 24: 379 (1970).
16. Nelson, D. F. and Lax, M., Phys. Rev. B, 3: 2778 (1971).
17. Korpel, A., “Acousto-Optics,” in Applied Solid State Science (Wolfe, R. ed.),
Academic, New York (1972).
18. Dixon, R. W., J: Appl. Phys., 38 5149 (1967).
19. Musikant, S., Optical Materials, Marcel Dekker, New York (1985).
20. Gottlieb, M.,Ireland, C. L. M. and Ley, J. M., Electro-Optic and Acousto-Optic
Scanning and Deflection,Marcel Dekker, New York(1983).
21. Smith, T. M. and Korpel, A., ZEEEJ: Quant. Electron., QE-1: 283 (1965).
22. Dixon, R. W. and Cohen, G. M.,Appl. Phys. Lett., 8: 205 (1966).
23. Gordon, E. I., Proc ZEEE, 5 4 1391 (1966).
24. Riley, W. A. and Klein, W. R., .
l
Acoust. Soc Am., 4 5 578 (1969).
25. Jerrard, H.C., Ultrasonics, 2: 74 (1964).
8
Special Topics
In this chapter we willdiscussbrieflysome
aspects of acousto-optic
diffraction that fall somewhat outside the scope of this book,
but that are of
relevance to device applications or are otherwise of intrinsic interest. The
chapter also contains a complete three-dimensionalweak interaction
formalism using the plane-wave spectraof arbitrary fields.
8.1 ANISOTROPICBRAGGDIFFRACTION
In Sec. 7.3 it was pointed out that a polarization change in scattering is
likely to occur upon interaction with a shear
wave. If the medium is optically
anisotropic, then the incident and scattered k vectors may be of unequal
lengths. This changes the condition for Bragg diffraction in some essential
aspects, which we will now investigate.
Let the interaction mediumbe a positive uniaxial crystal with the optical
indicatrix oriented as shown in Fig. 7.6(a). We assume that a pure shear
wave with polarization along Y propagates in the direction of the optical
(2)
axis. The shear wave causes orthogonal scattering of an extraordinary
incident wave, propagating in the Y-Z plane, intoan ordinary wave
propagating in the same plane.It will be clear that the wave vector triangle
must now be constructed subject to the constraints imposed by Fig. 7.6(b).
277
278
Chapter 8
Such a construction is shown in Fig.
8.l(a). Note that for a given k,two kls
are possible:kla mediated by Id, and k l b mediated by b.When $i increases,
Ilr, will decrease, indicating Bragg angle behavior for frequencies down to
zero. In this limit, ki and kla are parallel and opposite to Id.
Another interesting aspect of anisotropic diffraction is that multiple
forward scattering, and therefore Raman-Nath diffraction, is, in general,
not possible. In Fig. 8.l(a), for instance, there does not exist, in general, a
second wave vector Kb, equal in length to K, and directed upward, that
would rescatter the kl, on the circle into k'l, on the ellipse. It is also
puzzling at first glance that kl, represents upshifted light (positive Doppler
a
b
Figure 8.1 Anisotropicinteractioninpositiveuniaxialcrystal.(a)Shearwave
along opticalQ axis, light wave inX- Y plane. (b) All waves inX- Y plane.
Special Topics
279
shift), whereas it is clearly directed downward with respect to the sound
(negative diffraction angle). The paradox is resolved when
we realize that the
sound “sees” the incoming lightki as upshifted by a larger amount than the
subsequent downshift. In other words, the total diffraction angle is positive.
From all of this, it appears that new and unexpected effects are possible in
anisotropic diffraction.
A popular confimration is one in which, with referenceto Fig. 7.6(a), the
sound and light waves all propagate in theX-Y plane perpendicular to the
optic axis. It is evident in this case that the two appropriate refractive
indices, no and ne, do not depend on the direction of light propagation. A
diagram analogous to Fig. 8.1(a) is shown in Fig.8.l(b). It is clear that
and it hasbeen shown by Dixon[l] that
Note that if ne=no, eqs. (8.2) and (8.3) revert to the familiar ones for
isotropic Bragg diffraction.
An interesting configuration is shown in Fig. 8.2(a). Here, the interaction
is collinear (Bragg angle equals 90°), yet occurs
at a relatively low frequency
m i c a l values for F’in are of the order of 10-100 MHz [l]. The maximum
frequency for which interaction can take place applies to the collinear
scattering configuration of Fig. 8..2(b).
Finally, Fig. 8.3(a) shows a scattering configuration that is particularly
suited for wide bandwidth beam steering [2,3]. The center frequency,
280
Chapter 8
a
b
Figure 8.2 Collinear interaction in uniaxial crystal. (a)
frequency.
Low frequency, (b) high
corresponding to K,,is given by
The wide bandwidth in deflector applications is due to the fact that K, is
tangential to the inner circle. Hence, to a first order, its direction does not
change when Q deviates from Q,. For a given width of the angular planewave spectrum of the sound, the applied frequency may varyby a relatively
28 1
Special Topics
a
b
Figure 8.3 Tangential interaction in uniaxial crystal. (a) Basic configuration at
center frequency,(b) explanation of wide bandwidth, and (c) second-order operation.
282
Chapter 8
C
Figure 8.3 (Continued).
large amount before no more acoustic plane waves are available for
interaction. This may be seen fromthe construction in Fig.8.3(b), similar to
the one used before in isotropic Bragg diffraction (Fig. 6.6).As in Sec. 6.3
[eq. (6.131, we assume that the bandwidth is determined by the condition
L&= k2z.From Fig. 8.3(b) if follows that
giving, with criterion(6.13,
-=(!E)
B
0.5
K
For the relatively large Q to which (8.8)applies, we see that the bandwidth
has indeed improved over
that of an isotropic deflector(6.16).
So far, we have, in our discussion of anisotropic deflection, dealt with
linear polarization only. In certain crystals, the birefringent aspects of
propagation and acousto-optic diffraction apply to clockwise and
counterclockwise polarization components rather than orthogonal linear
polarization components. This is called optical activity. A case in point is
Te02 (paratellurite)that is used extensivelyfor low-frequency “birefringent”
283
Special Topics
beam deflection [4]. An interesting application of Te02 is described by
Chang and Hecht [5]. To increase resolution in deflection, they use the
second-order mode of Bragg diffraction, shown in Fig. 8.3(c). The incident
light is scattered into kl byK1, and subsequently rescattered by K2 (=K1)
into k2. It is of interest to analyze this with the tools developed in the
preceding chapters. It is obvious that we are dealing with a case of two
simultaneous pure Bragg interactions. With a combination of (3.101),
(3.102) and (3.105), (3.106);we
find
dE2
- -0.25jkCSE,
"
dz
(8.9)
dE1 = -0.25jkCS * E2
- 0.25jkCSE,
(8.10)
dEo - -0.25jkCS * E,
dz
(8.11)
dz
"
Note that eqs. (8.9) and (8.11) are conventional coupling equations and
quite analogous to (3.101) and (3.102). Equation (8.10) is a combination of
both the latter equations and describes simultaneous contributions to E1
from E2 and Eo. The solutionsto (8.9-8.1 1)may be written, with the proper
boundary conditions,
Eo= EiCOS'
z)
--
[2k
(8.12)
(8.13)
(8.14)
where it is assumed that the interaction lengthis limited to z=L.
It is of interest to compare these expressions with
(3.103), (3.104) for firstorder Bragg diffraction. We notice that 100% diffraction is still possible; the
required value of v ( v = d 2) is, however, larger by a factor d 2 than the
value needed ( v = z ) for first-order Bragg diffraction. Also note that the
284
Chapter 8
maximum of lE# equals 0.5lEf at v=dg 2). The other half of the powerat
that point is equally divided between the zeroth
and the first order.A plot of
this interesting behavior is shown in Fig. 8.4.As for the bandwidth of the
second-order deflector, the same construction may be appliedto Fig. 8.3(c)
as used in Fig. 8.3(b).It is seen readilythat the bandwidth is governed
by the
same considerations as in a first-order deflector if the frequency sensitivity
of kl is ignored.We find
a factor of two smaller than the first-order case(6.16). Thus, the benefits of
larger deflection angle are nullified by the decrease in bandwidth, and the
total number of resolvable points remains the same.
If in the configurationof Fig. 8.3(c), ki and kl are interchanged, then two
orders, kz (upshifted) and kl (downshifted), are generated simultaneously
due to the degeneracyof the Bragg angle condition.
A theoretical analysisof
this case, along the lines of (8.9-8.11), has been given by Warner and coworkers [4].
In light of the above examples, it will be evident that a comprehensive
theory of anisotropic light diffractionwould probably betoo unwieldy to be
of much use. Nevertheless,it is of interestto see how sucha theory could be
constructed from basic notions
of induced cross-polarization.An attempt at
such a formulation has been made by Cherykin and Cherkov [q,to whom
the reader is referred for
further details.
1.o
0.8
0.6
0.4
0.2
0.0
Figure 8.4 Interaction behavior in anisotropic second-order Brag diffraction.
Special
8.2
285
Topics
ACOUSTO-OPTICTUNABLE FILTERS
In acousto-optic tunable filters, use is made the
of inherent selectivityof the
diffraction process to electronically move the optical passband. Most filters
of this kind use anisotropic interaction, whichis the reason they are
discussed in this chapter. Before analyzing a typical anisotropic filter, let us
consider two simple isotropic configurations.
Perhaps the simplest wayof
making a tunable optical filter or
spectrometer is shown in Fig.
8.5. It depends for its operation on the change
in direction of the deflected beam when the light wavelength changes, and
functions asfollows.
A parallel beam of quasi-monochromatic light, ray a in the figure, is
incident at an appropriate angle on a low-Q Bragg cell operating at a
frequency$ The width of the beam is limitedto D by the entrance pupilof
the sound cell. The diffracted beamb is focused by a lens of focal length F
on a pinhole in the focal plane, situated
at XO. For the diffracted beamto fall
on the pinhole the wavelength of the light must satisfy the relation
(8.16)
or
r+
L
4
I
Figure 8.5 Isotropic acousto-optic filer using a pinhole for selectivity.
Chapter 8
286
In deriving these equations, we ignore the refraction of the light when
leaving the sound cell. By changing the frequency of the sound (and hence
A), a desired optical wavelength can be made
to pass through the pinhole. In
other words, Fig. 8.5 represents an electronically tunable optical filter.
The width M of the passband may be estimated as follows. Thespot size
formed by the focused beam b is approximately FAJD. A change M moves
the center of thespot by F A ( ~ ~ B ) = F A NThus,
A.
the passband is determined
by the condition
FAA --FA
"
A
D
or
AA=-AA
D
(8.18)
The spectral resolutionR=AJAA is then given by
R = -D
A
(8.19)
In the device of fig. 8.5, the pinhole aperture is essentialto the operation.
The selectivityof the soundcell itself plays no role,
as the cell has a low Q. It
is, however, possible to reverse this situation by removing the pinhole and
relying on the Q of the sound cell instead. The maximum deviation AA is
then determined by the condition that the change in the Bragg angle equal
the angular widthof the sound spectrum:
A A
"_
2A
A
-L
(8.20)
or
(8.21)
A similar condition determines the acceptance angle v/ of both the above
devices:
A
L
(8.22)
A more common configmatichi uses collinear anisotropic interaction, as
shown in Fig. 8.6. The appropriate condition for interactionis given by eq.
(8.4):
I
Special Topics
287
Figure 8.6 Collinear on-axis and off-axis upshifted operation.
h V
f =-
4
(8.23)
where An=ne “ n o
The tolerance on the interaction consists off being indeterminate to the
extent Af= l/- V/L,
where z is the sound transit time and L the collinear
interaction length. With(8.23)we find
(8.24)
where
R=-L h
&
(8.25)
The acceptance angle of the device may be estimated from Fig.8.6. This
K) and
shows upshifted interactionboth for the on-axis case (sound vector
the off-axis case (sound vector
K’,angle I,@. It is readily seenthat
288
Chapter 8
(8.26)
Hence,
(8.27)
But
2w v 2w
K,"Ka=AKa=--27rAf "-=v V L L
(8.28)
From (8.27)and (8.28) it follows that
(8.29)
Because of the square root dependence, this device has a relatively wide
acceptance angle. This is also intuitively obvious from Fig.
8.6, which shows
that &' changes but slowly when vincreases.
Figure 8.7 shows a typical anisotropic collinear filter, together with its
tuning curve[A.More detailed informationmay be found inRefs. 8-1 1.
8.3
LARGEBRAGGANGLEINTERACTION
In the case of large Bragg angles, the direction of the incident light is closer
to parallel than perpendicular to the sound. A typical interaction
configuration for downshifted scattering is shown in Fig. 8.8(a), and for
upshifted scattering in Fig. 8.8(b). In the analysis to follow we shall limit
ourselves to downshifted scatteringat the exact Bragg angle.
Following [12], we firstnote that the boundary conditions for this case are
given by
Eo=Ei
for x=O
(8.30a)
E-I=O
for x=L
(8.30b)
We start from the generalized two-dimensional Raman-Nath
(4.33), leaving out all ordersother than 0 and -1:
equations
289
Special Topics
REJECTED
LIGHT
ACOUSTIC
ON
INCIDENT
LIGHT
q-15
il
fSELEtTED
L I y T
-c
POLARIZER
140s
2
z
x>W
130
-
120
-
110
-
100
-
90
-
I
I
I
I
I
I
I
I
1
80-
3
S
ANALYZER
IEZOELECTRIC
70
-
w 50-
40
-
3020
I
350
400
I
I
I
I
I
I
450
500
550
600
650
700
J
WAVELENGTH, nm
Figure 8.7 Collinear CaMo04 acousto-optic tunable filter. (From Ref. 7.)
290
Chapter 8
a
c
ound
L
V
Figure 8.8 ConfigurationforlargeBragganglescattering.(a)Downshifted
interaction, (b) upshifted interaction. (Adapted from Ref.12.)
V:E~(p)+kZE,(p)+0.5kzCS(p)E_,(p)=0
(8.31)
(8.32)
In analogy to (4.34),we now assume &(p) and E-l(p) to be plane waves
propagating the X direction with amplitudes Eo(x)and E-I(x). The sound
wave is as given by (4.35).Assuming a rectangular sound column model and
29 1
Special Topics
slow variations of Eo(x) and E-l(x), i.e., ignoring second derivatives,we may
readily derive the following coupled equations from (8.31)
and (8.32):
-jkCS
-dEo
="dx 4 sin#B
(8.33)
E-l
dE-l - +jkCS *
E
O
dx
4sin$,
(8.34)
"
With the boundary conditions (8.30a) and (8.30b), we find the following
solution:
Eo = Ei
cosh[kCISl(L -x) /4sin #B]
cosh[kClSIL/4 sin#B]
jm
E-1= -
S*
Ei
sinh[kCISI(L-x)/4sin $B]
cosh[kCISIL/4 sin# B ]
We note that the interaction behavior, shown in Fig. 8.9, is of
an essentially
different character than in the caseof small Bragg angles. There no
is longer
any periodic behavior, but rather a gradual transfer of energy from EO
backward into E- 1. This transfer reaches lOO?! only for L +CO.
Upshifted
interaction [Fig. 8.8(b)] shows a similar behavior starting from x=O rather
than x=L.
0.2
Q 4 1 ,\;
,
0.0
Figure 8.9 Interaction behavior of large Bragg angle scattering.
292
Chapter 8
8.4 ACOUSTO-OPTIC SOUND,AMPLIFICATION AND
GENERATION
In Chapter 3 we discussed the possibility of sound amplification and
generation, implied in the quantum-mechanical interpretation of
downshifted interaction, Energy conservation on a per quantum basis
written as
appears to hint at both possibilities because one photon is released to the
sound for every photon generated in the downshifted light. Overall energy
conservation may be writtenas
(8.38)
P,'+ P/I +P,'= 0
where the primed PSdenote net powers flowing into the interaction region.
Let Nj(j=O, - 1,S) be the corresponding net numberof quanta per second
flowing into this region at frequency mi. Then,
j = O , -1,
P)=NjhWjy
S
(8.39)
where 0s denotes a. Now (8.37) implies that the N,s are equal as there exists
a one-to-one quantum exchange. Thus, we may write
@o
@-I
@S
From (8.39) and (8.40), it then follows that
(8.41)
(8.42)
Note that multiplying (8.41) by W 0, (8.42) by @-I, and subtracting the latter
from the former gives(8.38).
Equations (8.41) and (8.42) are derived here from quantum-mechanical
considerations. They can be extendedto include more frequencies[131, such
Special Topics
293
as would, for instance, occur in Raman-Nath diffraction. Rather startling
perhaps is the fact that (8.41) and (8.42) can be derived classically for a
general system of nonlinear interactions through a nondissipative process.
Details may be found in the classical papers of Manley and Rowe [14]. The
equations (8.41) and (8.42) are aspecialcase ofwhat are called the
Manley-Rowe relations.
It is of interest to calculate the actual sound amplification for a practical
configuration, sayalow-frequency
(-40 MHz) Braggcellused
for
modulation. Such cells are characterized by diffraction efficiencies 7 on the
order of 100% per 0.1W of sound power. Thus,
(8.43)
where CSis of the order of 10 W-1. With regard to (8.42), the amount of
power U s
delivered to the sound beam equals-PIs, and by the same token
P - I equals -P'-l. Hence,
Combining (8.43) and (8.44), we find
(8.45)
In an experiment at 45 MHz by Korpel and co-workers [15], the following
parameter values applied: Pi G lo3W, C,2 10, ws/w-l G
APs/Ps2
The results of the experimenthavebeendiscussedin
Chapter 2 and
presented graphically in Fig. 2.10.An experiment by Chiao and co-workers
[ l q has demonstrated amplificationof thermal sound (stimulated Brillouin
scattering) in quartz at a frequency of 30 GHz and a Bragg angle of go", at
which all beams are collinear. It is clear from (8.45) that for such high
frequencies the generated sound power may be much larger; frequently,
it is,
in fact, the source of internal damage in crystals subject to intense laser
pulses.
The factthat the Manley-Rowe relations (8.41) and (8.42) may be derived
classically makes us suspectthat there may be a classical explanation for the
sound amplification discussed above. This is indeed the case. The force
driving the sound field is wave
a of radiation pressure causedby the moving
interference pattern of incident and diffracted light [12, 17-19]. For large
angle downshifted Bragg interaction [see Fig. 8.8(a)], it is found that
[l21
294
Chapter 8
Is
=Is0
r(L-X)
COS’ r L
COS’
(8.46)
(8.47)
with
0.5
r
(:I,)
(8.48)
where I,o is the initial sound intensity,
I, is the amplified sound intensity,and
it has been assumedthat the incident light intensity hasnot been noticeably
depleted.
A plot of I - ] and I, as a function of x,for TL=0.7, is shown in Fig. 8.10.
Note that the sound increasesin the “Xdirection, the scattered light in the
-Xdirection. For rL=ld2, the sound and diffracted light intensities become
infinite, and the devicenow functions as abackward wave oscillator.
However, the assumption of constant pump intensity is violated at that
point, and the simplified theory breaks down.It does, however, indicatethat
a threshold effect exists in stimulated Brillouin scattering.
As a h a 1 point of
interest, it should be noticed that the Manley-Rowe relations are satisfied
locally in the interaction described by (8.46) and (8.47). It may be shown
readily that
2 ,
Figure 8.10 Interactionbehavior of amplifiedsoundandscatteredlightin
optically pumped backward wave amplifier.
Special Topics
295
(8.49)
In isotropic substances, collinear sound generation is possible for high
frequencies only. The 90" Bragg angle requires that K=2k or ws=(Vlc)w.
Typical sound frequencies in solids for visible light pumping
are of the order
of 10'O Hz. By using anisotropic diffraction in crystals, as in Fig. 8.6, it is
possible to generate sound at lower frequencies (-lo8 Hz). Details may be
found in Refs. 20and 21.
8.5
THREE-DIMENSIONALINTERACTION
So far, we have limited our discussion by and large to two-dimensional
interaction, although mostof the basic equations, e.g., (4.32), (4.41), (4.108),
are formulated in three dimensions.It is, however, very difficultto work out
the consequences of the full dimensional equations. The simple rectangular
sound column, being such a convenient fiction in two dimensions, to
fails
be
even that when a dimension is added. The plane-wave interaction formalism
loses its nice one-to-one correspondence aspect; as remarked before, one
incident plane wave may nowinteract with a sound of sound and light wave
vectors, as shown in Fig. 8.11(a). Consequently, even for weak interaction,
simple wave interaction equations such as(4.54) and (4.55) do not exist. As
pointed out in Sec. 6.1, if it is really necessary to analyze three-dimensional
configurations, the volume scattering integral(4.37) is used [21, 221. Yet this
approach seems cumbersome, even more so when it is realized that, in
principle, a single integration should suffice. This is so because the soundmodulated polarization over which the spatial integral extends is itself
proportional to the product of sound and light fields. These fields satisfy
Helmholtz's equation, and, hence, their N3 spatial values are completely
determined by their N values in onearbitrary cross section, or, alternatively,
their N values in their angular plane-wave spectrum. In short, the (F)*
values specifying the combined fields in the volume integral exceed the
necessaryminimum of (P)2by N. Thus, by using an appropriate
formalism, it should be possible to reduce the triple integration to a single
one. It is plausible to try and develop such a formalism by using angular
plane-wave spectra. The fact that any particular scattered plane wave is
contributed to by a one-dimensional distribution of sound and light waves
[i.e., those with wave vectors on the cone of Fig. 8.1
l(a)] would presumably
A theory along
account for the one integration remaining in this formalism.
these lines, but in a somewhat conceptual form, was developed by Korpel
296
Chapter 8
IZ
Y
b
Figure 8.11 (a) Cone of incident light vectors ki, interacting with cone of sound
vectors K to generate scattered light vector kl. (b) Interpretation of corresponding
mathematical formalism.
Special Topics
297
[l21 using the Hertz vector formulation of Sec. 4.1 1. The basic equations
derive from (4.145),(4.163), and (4.166) with the weak scattering
assumption that the scattered light e’ is very weak relative to the incident
light ei. For time-harmonic quantities, we find in terms of the arbitrary
spatial component phasorsrI+(r), rI-(r), S@),and E&).
n+(r)= &j
exp(-jkR) dz,,
(8.50)
(8.51)
+
where dz,,=dx’dy’dz’, and the subscripts
and
downshifted light.
Next, all fields are decomposedinto plane waves:
-
refer to up- and
S(r) = JG,(K) exp(-JK.r) d o K
(8.52)
Ei(r)= JGi(k) exp(-/k-r) d o k
(8.53)
n+(r)= JG+(k)exp(-Jk.r)
do,
(8.54)
n-(r)= JG-(k) exp(-Jk.r)
dok
(8.55)
where doK and dok are infinitesimal areas on the half-spheres that form the
loci of the endpoints of K and k. We recognize in the Gs the plane-wave
spectra of the interactingfields.
Analogously to the development in Sec.4.5, it can now be shown that far
away from the interaction region
(r-w)
(8.56)
298
Chapter 8
Substituting (8.52) and (8.53) into (8.50) and (8.51), evaluating the integral
for R+w, and comparing the result with
(8.56) and (8.57), we find that
The mutual relationof the vectors k, kB, and Kg is illustrated in Fig.8.1 l(b).
As expected, the final results involvean integration over a one-dimensional
distribution of interacting wave vectors lying on a cone.
Although (8.58) and (8.59) give a compact account of three-dimensional
interaction, these equations are not immediately applicable to real-life
situations. Recently, a more practical version was developedby Korpel and
co-workers [23]. In this formalism, the angular plane-wave spectra E(@,$’)
and S(y, 7‘) are defined as in (3.133) and (3.144) (with the assumption of
weak interaction), extended to two dimensions. Paraxial propagation is
assumed, the sound along X, the light along Z. The angles 4, Cp,’ y, y ‘ are
shown in Fig. 8.12(a) and 8,12(b). Following a method similar to the one
just discussed and retaining terms downto second order in 4, Cp’, y , y ’, the
following interaction equations are obtained.
A geometric interpretation of (8.60) is shown in Fig. 8.13
for a specific
direction of incident light. The angles $ and @’
in (8.60) are denoted by $1
Special
IY
299
Topics
X
IY
X
b
Figure 8.12 Propagationanglesandwavevector
(From Ref. 23).
for (a) sound and (b) light.
300
Chapter 8
Figure 8.13 Interpretation of paraxial mathematical formalism. (From Ref 23.)
(8.62)
@:=($)Yt
(8.63)
Special Topics
301
From the argumentof S, it follows that
y=-m,+”-y9’+(k)y’’/2
K
With (8.62), (8.63), eq. (8.64) may also be written as
F -@B
(8.65)
It will be seen from Fig. 8.13 that (8.63) defines the cone, with apex P,of
scattered wave vectors, whereas (8.65) defines the cone, with apex 0, of
sound vectors.
It may
be
shown
readily
that for two-dimensional
a
situation
characterized by
(8.66)
Equations (8.60) and (8.61) revert to the two-dimensional interaction
equations (3.160) and (3.161) derived earlier.
In many cases, (8.60) and (8.61) may be written in a simpler form. If the
angular spectra of sound and light are characterized by angular widths A@,
A#, A x A y ’ ,then it may be shown that the (K/k)y‘*terms may be ignored if
(8.68)
A sufficient condition for ignoring the
y ’ $ l term is
(8.69)
while the term with(Klk)y may be ignored if
(8.70)
where the notation(A@, Aj)min means the smaller ofA@iy Ayl
302
Chapter 8
If, as is often the case, (8.68-8.70) are all satisfied, then
k-l(@,@')
= - 0 . 2 5 j k C ~ i ( $ + 2 @ B , @$ *'()- @~- m
@B,
-cD
y')
($1
(8.72)
A typical example of an interaction configuration that may be analyzed
conveniently is the interaction of two Gaussian beams shown in Fig. 8.14.
The sound beam with axis in the X-Y plane and waist L (2L between lle
amplitude points) at ys causes diffraction of the incident light beam with
axis X - 2 and waist w at the origin.
The following expressions apply:
S(0, x, y , z ) = So exp
1
(8.73)
Figure 8.14 Interaction of Gaussian sound and light beam. (From Ref. 23.)
Specia1 Topics
303
(8.74)
(8.75)
(8.76)
L
It is readily shown that (8.68-8.70) apply if
U’ <<
(8.77)
L
If (8.77) is assumed, the substitution of (8.74) and (8.76) into (8.60) yields
-k2w2($- 2@8- @i)2- k2w2qS2- K2L‘(-@, +@)2
4
1
(8.78)
It is readily seen that a maximum occurs for @i= - @B, @ = @ B as is to be
expected. Upon changing ys, the amplitude of this wave, is then seen to trace
out the y-dependence of the sound beam.
8.6
SPECTRAL FORMALISMS
In the usual definition of the plane-wave spectrum, the phase reference is
located at the observation point on the nominal axis of propagation, say, the
2 axis. We will call such a spectrum with a shifting phase reference point a
local angular spectrum. The components of this spectrum change in phase
when the field propagates. This change expresses, of course, what is
commonly called diffraction.
In a nonhomogeneous or nonlinear medium, there will be additional
phase and/or amplitude changes imparted to the components by material
interaction or self-interaction. In acousto-optics configurations, such
changes come about through interaction with a sound field. Two kinds of
effects then should be kept in mind: diffraction effects and interaction
effects. The local plane-wave spectrum takes both into account in a
straightforward manner.
ussed
304
Chapter 8
The virtual plane-wave spectrum is the local plane-wave spectrum at z
back-propagated to the origin through the unperturbed medium. By
definition its value does not depend on z if no interaction mechanism is
involved. If interaction of some kind takes place, then the virtual plane-wave
spectrum changes with z, i.e., it depends on which particular local planewave spectrum is back-propagated. The advantage of using this kind of
spectrum is that diffraction effects are already included
by its very definition.
The propagation equations then describe interaction effects only, and are
generally easier to handle. Its application to acousto-optic interaction has
been
3.3.
f
It is sometimes more convenientto consider the spatial profile of the field
only, rather than including the spatial carrier. Such an approach makes it
possible, for example,to devise algorithms for acousto-optic interaction
that
circumvent the necessity of processing the high resolution spatial carrier
[24]. The spectrum of the profile is in all essential respects analogous
to the
spectrum of a modulating signal in communications [25]. Like the latter, it
suffices for the analysis of signal processing, here to be understood as
interaction and propagation.A discussion of this method-sometimes called
the Fourier transform Approach-in the context of acousto-optics may be
found in Refs.26 and 27.
The spatial profile spectrum discussed above is a local spectrum in the
same sense as the conventional angular plane-wave spectrum. Like the latter,
it too may be back-propagated to the origin.It then becomes avirtualprofile
spectrum, implicitly accounting for diffraction effects in the unperturbed
medium, and explicitly representing interaction in the perturbed medium.
A unified treatment of spectral formalisms, with relevance to acoustooptics has beengiveninRef.
28. In this section we shall follow that
treatment closely.
8.6.1 The Local and the Virtual Plane-Wave Spectrum
In Sec. 3.3.1 we defined the plane-wave spectrum for two dimensions in
terms of the angle 4. For the present discussionwe will extend the definition
to three dimensions and use the wave vector components kx and ky rather
than the angles 4 and 4':
Special Topics
305
where
(8.80)
for paraxial propagation.
The spectrum so defined is called thevirtual spectrum, because, according
to (8.79), the local field atz is obtained by propagating allthe plane waves in
the spectrum to z as if there were no interaction. The virtual plane-wave
spectrum is obtained by back-propagating the actual local plane-wave
spectrum to the origin asif the medium were unperturbed:
(8.81)
where the local planewave spectrum is definedby
(8.82)
with the inverse
(8.83)
According to (8.81) and (8.82), the relation between the virtual spectrum
and the local spectrum is givenby
(8.84)
If no interaction is involved, the virtual plane-wave spectrum &kx, ky;z)
2. If there is interaction, then it depends on which
particular plane-wave spectrum is being back-propagated, i.e.,it depends on
z. The advantage of using this kind of spectrum
that
is diffraction effects are
included by its very definition. This isnot the case with the local spectrum
is independent of
306
Chapter 8
A(kx,ky; z). Each of the plane waves in the spectrum propagates as
exp( -jk,z) in the nominal propagation directionso that
The quantity H is called the propagator. It specifies the evolution of the
spectrum in the absence of interaction. For the virtual spectrum the
propagator equals unity in the absence
of interaction.
For paraxial propagationwhere lkxl,Ikylekwe may write
(8.86)
8.6.2 The
Local and Virtual Spatial Profile Spectrum
It is sometimes more convenientto consider only the spatial profile
and take
the spatial carrier for granted. Such an approach makes it possible, for
example, to devise algorithms for acousto-optics interactions that
circumvent the necessity process the high frequency carrier [24]. This is
implemented by writing
where E e is the spatial profile.In communication theory thisquantity would
[25].
be called the complex envelope
The local profile spectrumis defined as
with the inverse
Substituting (8.87) into (8.88), we fmd readily, after comparing with(8.84)
z)z)=exp(ikz)A(kx,
ky;ky; Ae(kx,
(8.90)
The propagationlaw for A , follows readily from that of A by substitution
(8.90) into (8.85):
307
Special Topics
(8.91)
where
H,(k,,k,)=exp(jkz-jkz,z)=exp
( y;
j-+j-
7;)
(8.92)
for paraxial propagation.
A virtual profile spectrum may now be defined
by back-propagation:
-
A,&, k,,; z) = exp
(8.93)
Substituting (8.90) into (8.93) and then (8.84) into the result, we find
Ae(kx, ky; z)=E(kx, ky; Z)
8.6.3
(8.94)
SpecialAcousto-OpticProfileSpectra
Here we are concerned with spectra of a particular order
n that propagate in
a particular direction k,. It then seems reasonable to define spatial profiles
yen in the following way:
En(x,y, z)=
v&,
y, z) exp(-ik,xx-jk,yy-jk,zz)
(8.95)
By taking the Fourier transform
of (8.93), we find
(8.96)
where
YfW=9”’(yot)
(8.97)
We readily find the propagation law for Y e n from (8.95) by invoking the
propagation law for A, [(8.91) with A , replaced by A,,]:
308
Chapter 8
where
in the paraxial approximation. The propagatorHenis sometimes called the
transfer function for propagation [27].
A virtual acousto-optic profile spectrum can be defined in the usual way
by back-propagating:
qen
(kx, k y ;Z )
For convenience, the various quantities introduced, their mutual relations,
and their propagators are shown Table
in
8.1.
Table 8.1 List of Various Quantities Describing Propagation, Their Definitions,
Mutual Relations, and Propagating Factors [Adapted fromRef 28.1
Quantity
Propagator
(transfer function)
Definition
real physical field
e=Re[E exp(jot)]
g=E, exp(-j kz)
E n = v m exp(-jk,
r)
A=9-'+[E]
E=A exp(jk, z)
Ae=Y-'[Et]=A exp(jkz)
y=Y"[ven]
exp[-j(k-kxz/2k-kyz/2k)z]
1
exp~(kxz+kyz)z/2k]
e~plj(k~~+2k,,~k,
+kyz+2kn,ky)z/2k]
&A,
exp[-j(kxz
+ky2)z/2k]=B
qm=6(kx knXky kny;z)
= Y e n exp[-j(kx2+2knXkx
+ky2+2knyky)z/2k]
+
+
1
1
REFERENCES
1. Dixon, R. W., IEEE .lQuunt. Electr., QE-3: 85 (1967).
2. Lean, E. G. H., Quate, C. F. and Shaw, H. J. App. Phys. Let., IO: 48 (1967).
3. Collins, J. H., Lean, E. G. H. and Shaw, H. J. App. Phys. Lett., 11: 240 (1967).
4. Warner, A. W., White, D. L. and Bonner, W. A., .l
Appl. Phys., 43: 4490 (1972).
Special Topics
309
5. Chang I. C. and Hecht, D. L.App. Phys. Lett., 27: 517 (1975).
6. Parygin, V. N. and Chirkov, L. E. Sov. 2 Quant. Electr., 5: 181 (1975).
7. Chang, I. C. Acousto-optic tenable feelers, inAcousto-optic Signal Processing
(N. J. Berg and Y N. Lee, eds.) Marcel Dekker, W, 1983.
8. Harris, S. E. and Wallace, R. W. .lOpt. Soc Am. 5 9 774 (1969).
9. Xu Jieping and Stroud, R., Acousto-Optic Devices,Wiley, New York (1992).
10. Goutzoulis, A. P. and Pape, D. R. (eds.). Design and Fabrication of AcoustoOptic Devices, Marcel Dekker, New York (1989).
11. Magdich, L. N. and Molchanov, V. Ya. Acoustooptic Devices and Their
Applications, Gordon & Breach, New York(1959).
12. Korpel, A. “Acousto-0ptics,”in Applied Solid State Science,Vol. 3 (R. Wolfe,
ed.), Academic Press, New York,1972.
13. Weiss, M. T. Proc IRE, 4 5 113 (1957).
14. Manley, J. M. and Rowe, H. E. POCIRE, 44: 904 (1956).
15. Korpel, A., Adler, R. and Alpiner, B. App. Phys. Lett., 5: 86 (1964).
16. Chiao, R. Y, Townes, C. H. and Stoicheff, B. F! Phys. Rev. Lett., 12: 592 (1964).
17. Quate, C. F., Wilkinson, C. D.W. and Winslow, D. K. Proc IEEE, 53: 1604
(1965).
18. Kastler, A. C. R Acad. Sci. B, 259 4233 (1964);
260:77 (1965).
19. Caddbs, D. E. and Hansen, WW
. . Lab. Phys. M. L.Rept. 1483 (1966).
20. Hsu, H. and Kavage, W. Phys. Lett., 15: 206 (1965).
21. Piltch, M. and Cassedy, E. S. Appl. Phys. Lett. 17: 87 (1970).
22. Gordon, E. I. Proc. IEEE, 54: 1391 (1966).
23. Korpel, A., Lin, H. H. and Mehrl, D. J. .lOpt. SOC Am.A., 4: 2260 (1987).
24. Venzke, C., Korpel, A. and Mehrl, D. App. Opt., 31: 656 (1992).
25. Haykin, S. Communication Systems, 3rd Ed., Wiley, New York (1994).
26. Tam, C. W. A Spatio-temporal Fourier Transform Approachto Acousto-optic
Interactions, Ph.D. Thesis, Syracuse University(1991).
27. Banerjee, P. P.
and Tarn, C. W. Acustica, 74 181 (1991).
28. Korpel, A., Banerjee, P. and Tam, C. W. Opt. Comm, 97: 250 (1993).
This Page Intentionally Left Blank
Appendix A
Summary of Research
and Design Formulas
GENERAL PARAMETERS (Fig. A . l )
K=21r
-
A denotes soundwavelength.
k = 21r
h
ildenotes light wavelength in medium.
Q =GL
h
Klein and Cook parameter.
v=k,AnL
Raman-Nath parameter. k , denotes propagation
constant in vacuum. See also after P.
An= -0.5nipS0
Sound-induced peak refractive
index
also after V.
no
index
Refractive
P
Appropriate elasto-optic coefficient
7.3).(Sec.
s0=(2l~~"
v-3y.5
Amplitude of condensation Ap/p or strain (Sec.
7.2).
A
change. See
of medium.
31 1
312
Appendix A
L
L
4
Figure A.l
Is
Sound intensity.
Po
V
Density of medium.
Sound velocity.
Figure of merit (Sec. 7.3).
P
lvl=kv(
Acoustic power radiated by transducer.
)
0.5M2PL
h
2k=X
@BzK2A
Bragg angle. For large angles @B*sin
RAMAN-NATH DIFFRACTION (Fig. A.2)
Conditions:
Q
.
1
,
Qv41
Diffraction angles:
+n=+o+2n+~
Diffracted orders:
In=lEn12=I&In(v')12
@E,
313
Summary of Research and Design Formulas
Figure A.2
where
v’ = v
sine(
WOL
A plot of ( A 3 with +0=0, is shown in Fig. 3.3. For a standing sound wave
Refs.: Sec. 3.1, eqs (3.13), (3.14), (3.45-3.47), (3.49).
profiled sound column, see Sec. 3.35, eq. (3.173).
For diffraction by a
BRAGG DIFFRACTION (Fig. A.3)
Conditions:
Q 1, Q/v ))l
and A$ small enough so that only Bragg orders are generated.
Diffracted orders:
3 14
Appendix A
-l
.
Figure A.3
Plots of (A.7) and (A.8) are shown in Fig. 3.9. For conventional Bragg
diffraction, A@=O and
(A. 10)
Plots of (A.9) and (A.10) are shown in Fig. 3.8. Identical expressions apply
for the - 1 order. Refs.: Sec. 3.2.3, eq. (3.124). For diffraction of a profiled
light beam, see Sec. 3.3.4, eq. (3.168) and (3.169).
BRAGG MODULATOR (Fig. A.4)
Relative (small) modulation index for weak interaction:
(A.ll)
where Km=2dAmand A,,, is the modulation wavelength. An identical
expression applies for - 1 order operation. Refs.: Sec. 6.5, eqs. (6.44), (6.51).
Summary of Research and Design Formulas
315
Figure A.4
BRAGG DEFLECTOR (Fig. A.5)
Deflection angle:
(A. 12)
~ the Bragg angle at the center frequency
Fe.
where @ B is
Deflected intensity for weak interaction
as a function of F:
(A.13)
Refs.: Sec. 6.3, eqs. (6.12), (6.14-6.16).
Bandwidth (one half-frequency difference between zeros):
(A. 14)
where QCis the Q at center frequency. Identical expressions apply for the
-1
order. For beam steering deflectors, see Eq.6.25. For anisotropic deflectors,
see Sec. 8.1.
316
Appendix A
Figure A.5
WEAK INTERACTION OF ARBITRARY FIELDS IN TERMS OF
PLANE WAVES
Two-Dimensional Interaction (Fig.A.6)
81(g)=-O.25jkC~(-g+g~)BXg-2g~)
(A. 15)
8-1(g))=-o.25jkcS"*(-g-gB)Bxg+2gB)
(A. 16)
where C=-pn,2 and S" and B are angular plane-wave spectra with phase
reference at the origin. An interpretation of (A.15) and (A.16) is shown in
Fig. 3.14. Refs.: Sec. 3.3, eqs (3.138), (3.144), (3.160), and (3.161).
317
Summary of Research and Design Formulas
Three-Dimensional Interaction (Fig. A.7)
&(@,@’)=-0.25jkCI&
-@+$B
“ l
-Y@’+-Y’’,Y’
k
(A. 17)
d(y’lA)
L
For the definition of 4,
(8.61).
(A. 18)
@‘,
yand y‘, see Fig. 8.12. Refs.: Sec. 8.5, eqs. (8.60),
STRONG INTERACTION OF ARBITRARY FIELDS
Two-Dimensional Case in Terms of Plane Waves
di
-
m,
-= -jaE,,-lSi-l- jaE,,+IS;+l
6%
”
Figure A.7
4
(A. 19)
Appendix A
318
where (see Fig. 4.5)
(A.20)
and a=0.25kC=-0.25kpn$ Refs.: Sec. 4.7. For explicit solution by path
integrals, see Sec. 4.8.
Two-Dimensional in General Terms
V,2E,(p)+k2E,(p)+0.5k2CS(p)~~-~(p)
+0.5k2CSr(p)E,+l(p)=0
(A.22)
Three-Dimensional in General Terms
v~,(r)+k~E,(r)+O.Sk~CS(r)E,-,(r)
+O.5k2CP(r)En+l(r)=O
(A.23)
Refs.: Sec. 4.4, eqs. (4.32), (4.33).
EIKONAL THEORY
Amplitude of diffracted ray at interaction point:
where the interaction pointS, is defined by the local ray triangle (Fig. 4.12).
kss+ksi=ksl
(A.24)
Summary of Research and Design Formulas
319
Also,
(A.25)
(A.26)
(A.27)
(A.28)
and A&)= -?if
C>O, T if C<O.Similar relations applyto the
4
- 1 order.
Refs.: Sec. 4.9;eqs. (4.99), (4.117), and (4.1 19); Sec.6.9.2.
VECTOR EQUATIONS
(A.29)
(A.30)
(A.31)
e(r, t)=ef, t)+e’(r, t)
d2n(r,t )
e(r,t)=V[V,~(r,t)]-~€~-
(A.32)
at2
ei(r,t)=VIV.ni(r,t)]-p$o
e’(r,t)=VIV.n‘(r,t)]-pU,&o
d(r, r)=EoCs(r, t )
Refs.: Sec. 4.1
1.
d2ni(r,t )
dt2
(A.33)
dZd(r,t )
dt2
(A.34)
(A.35)
320
Appendix A
REFERENCES
1. Grottlieb, M., Ireland, C. L. M.. and Ley, J. M. Electro-Optic and Acousto-
Optic Scanning and Deflection,Marcel Dekker, New York (1983).
2. Korpel, A. “Acousto-Optics”, inApplied Solid State Science, Vol. 3 (R.Wolfe,
ed.), Academic Press, New York (1972).
Appendix B
The Stationary Phase Method
The stationary phase method appliesto the evaluation of the following type
of integral [l]:
I = Limk,,
i
h(x) exp[jkg(x)] dx
4
where h(x) and &x) are real functionsof x and k is a real constant. Now, in
any part of the integration interval where dg/dx# 0, the phase angle kg(x)
will go through many multiples of 2a if k becomes very large. In that case,
Re{explikg(x)]}=cos[kg(x)] will oscillate very rapidly and, unless h(x) has
discontinuities, this will completely average out the real part of the integral
(B.1). The same holds for the imaginary part of the integral involving
sin[kg(x)]. The only point on X where there is a contribution to (B.l) is the
so-called stationary phase pointx,, where dg/dx=O.
Expanding g(x) around the stationary phase point x,, the integral (B.l)
may be written
where g"(x) = dg/dx.
32 1
322
AppendixB
Substituting a new variable x‘=x-xp into (B.2), we find (€3.4) using the
following integral formulap],
[exp(-p’x’
(:j2)
kqx) dx = -exp
P
Re(p’) 2 0
The integration rangeof (B.l) need not always be - W to QJ.If k+ QJ,then
any integration range is okay, as long as it includes x,. If there are more
stationary phase points, their contributions
have to be summed.
REFERENCES
1. Erdelyi, A. Asymptotic Expansions, Dover, New York (1954).
2. Gradshteyn, I. S. and Ryzhik, I. M. Table of Integrals, Series and Products,
Academic Press,New York (1965).
Appendix C
Symbols and Definitions"
Local angular planewave spectrum.
Local profile spectrum.
Material constant: a=kCl4.
Light velocity in medium.
Light velocity in vacuum.
Material constant such that An= C'S; C'= -0.5Pno3
Refractive index variation.
Amplitude of plane wave of refractive index variation; may
be complex: An= C 'S.
Generalized light phasor with time variation slow relative to
the associated frequencyo
Spatial profile of light field.
Phasor referring to the general nth-order field.The associated
frequency equals o : En(x, z, t)=En(X, z) exp(jnR t).
Phasor referring to general nth-order field. The associated
frequency equals o+nQ. See above.
323
324
Appendix C
As above.
En@)
Ef"'(x, t; z)
As
above
three
indimensions.
Virtual phasor field along the X axis,i.e.,back-projected
physical field E(x, r, z).
EP)(x, t; z)
En(")(x; z)
Asabove but referring to nth order: En(")(x, t; z)=En(")(x;z)
exp(jnL2 t).
Seeabove.Back-projectedfieldEn(x,
2).
&(")(x)
Short for En(")(x; m),nn'.
EP)(x)
Short for
EP)(x;
-m)=E,(x,
En
Amplitude of nth-order plane wave, with the origin as a
phase reference point.
En(z)
Virtual plane-wave amplitude. The corresponding real
physical fieldis &(X, Z)=En(z)X exp(-jkx sin P-jkz cos $,,).
The corresponding virtual (back-projected) field is En(")(x,
Z)=En(z)X exp(-jkx sin e n ) .
-5
Amplitude of incident plane wave.
Sometimes
shorthand for
z) when clear from the context.
the incident light field Ei(x,
E($; 2)
Virtual angular plane-wave spectrum oflightwithphase
reference at the origin. The corresponding physical field is
0).
k sin$
x exp(-jkx sin$ - jkz cos 9) d 2K
The corresponding virtual (back-projected)
field is
E'"(x;
2) =
p($;2)
k sin$
x exp(-jkx sin$) d 2K
&(e)
Short for E n ( @ , W), n# i.
J%e)
Short for E,(@, -m); sometimesused
from context.
EO
Dielectric constant
medium.
of
E '(r,
0
forEl(@, z)whenclear
Time-varying part ofdielectric constant.
325
Symbols and Dejnitions
EO+&
'(r, t).
Dielectic constant of vacuum.
Fourier transform operator.
Inverse Fourier transform operator.
Angle used for optical waves; positive counterclockwise with
respect to the 2 axis.
Propagation angleof nth order: 4 n = $ o+n@ B .
Incident angles.
Bragg angle:4 B z 0% IA.
Phase of the sound.
Angle used for sound waves; positive clockwise with respect
to the X axis.
Light intensity of the nth-order planewave: In=lEnI2.
Light propagationconstant in medium.
Sound propagationconstant.
Light propagationconstant in vacuum.
Light wavelength in medium.
Light wavelength in vacuum.
Sound wavelength.
Permeability of vacuum.
Refractive index: n(x, z,t)=no+6n(x, z,t).
Constant part of the refractive index.
Elasto-optic coefficient.
Three-dimensional position vector.
Two-dimensional position vector.
Generalized sound phasor with time variation slow relative
to
d.The associated frequency equalsR.
Time-independentsound
equals d.
phasor. Associatedfrequency
Appendix C
326
S
Plane-wave sound amplitude; sometimes shorthand for the
field S(x,z) when clear from the context.
Angular plane-wave spectrum of sound with phase reference
at the origin. The physical fieldat (x,z) is given by
Ksin y
2n
x exp(- jivz sin y - jKx cos y ) d -
,+e
z)=S*[z($,-e
S(x2,z)=s[z($
S(&,
B),
21. Sound field along a Bragg line.
E),
z]. Conjugate sound field along a
Bragg line.
V
Raman-Nathparameter,
peak phase shift: v=kGJC'IISI
=kLIqIqI2=2aLIq=KvAn L.
Accumulated peak phase shift
:v'(z) = 0 . 5 z l klCllS(z)ldz.
V
Sound velocity.
Susceptibility of medium.
Peak time-varying susceptibility.
Point on Bragg line:xn+=z($n+$
B).
"-4
B).
Point on Bragg line:;x =z($
Eikonal function.
kY,{r)+KYls(r)-kYI(r).
Angular light frequency.
Angular sound frequency.
Index
Acoustic holography, 29
Acoustic microscope, 29
Acoustic surface waves (see Surface
acoustic waves)
Acoustics, 257-266
Acousto-optic tunable filters, 285-288
anisotropic, 287
isotropic, 285
Amplification, of sound
by acousto-optics,25,292-295
Amplitude, of diffracted ray, 23, 11 5
Angle convention
in three dimensions, 299
in two dimensions,36
Angular planewave
spectrum, 20,21,71,304
coupling ofwaves in, 101
of light field, definition, 67
rescattering in, 71
of sound field, definition, 68
Anisotropy
in acoustics, 264-265
[Anisotropy]
in optics, 267-270
Backward wave amplifier, 294
Beam deflector, 16, 19, 175-182
using birefringence, 27,279
Beam steering, 19,178-181
Born approximation, 15
Bragg angle, 6
incidence at multiple
values of, 140-143
large values of, 288,292
tracking of, 19
Bragg diffraction, 6-8, 58-61
in anisotropic crystals,277-285
criteria for, 63,64
in downshiftedlupshifted
configuration, 59, 70
eikonal theory of, 109-118
equations, 60
higher order, 140-143
in birefringent medium, 283
327
Index
[Bragg diffraction]
imaging, 21,22,206-219
in pulsed mode, 23
rules of, 209
by sampling, 23,219-223
in three dimensions, 214-219
using birefringence,27
sound field probing by, 23,219-223
Bragg lines, 102,105,160
Brillouin scattering, 24
Broad light beaminteraction, 77-79
Condensation, 257
Coupled modes, 26,86
Coupled waves, 20,21
Correlator, 19
Corrugated wave front, 37
Curved sound wave fronts
strong interaction, 80-83, 118-125
weak interaction, 116-1 18
Debye-Sears criterion (see
Raman-Nath criterion)
Doppler shift ofscattered
light, 14, 17,24,42
Dynamic birefringence, 275
Eikonal theory, 23
of Bragg diffraction, 109-118
of Braggdiffraction imaging, 210-214
Elasto-optic coefficient, 26,272
Elasto-optics,270-275
Electro-opticeffect, 16
Energy, conservation of,
in interaction, 24,292
Feynman diagrams, 10
in curved soundwave front
interaction, 118
in ideal Bragg diffraction, 108, 109
in intermodulation products, 203
in Monte Carloalgorithm, 160
in scattering calculations, 106, 107
Figure of merit, 26
Fourier plane, processing in, 19,
242-248
Fourier transform, use in angular
plane wave spectrum, 67-69
Fourier transform method, 154-156
Fresnel diffraction, 11
Fresnel image,23,28
Frequency shifting, 17
for electro-optic heterodyning, 28,246
for interferometricvisualization, 28
Gaussian beam
strong interaction with sound
column, 172-175
weak interaction, 169-172, 302, 303
Generating function, 13
Generation, of sound by
acousto-optics,25,292-295
Grating
cascaded, for analysis, 52-56
induced by sound
wave, 7, 13,27,40-44
induced by surface acoustic
wave, 27
Helmholtz equation, 14
Hertz vector, 129
Heterodyning, 17, 19,28
as explanation of modulator, 186
in SAW probing, 233-236
in signal processing, 241,248
in sound field probing,219-223
Hologram, 14,26
Hooke’s law, 263
Image plane, signal
processing in, 19,237-242
Index ellipsoid, 268
Intermodulation products, 200-206
Klein-Cook parameter, 9,49
Longitudinal wave, 257
Lorentz-Lorenz relation, 26,270
Manly-Rowe relations, 293
Mathieu functions, 11-13, 16, 89
329
Index
Measurements of optical
fields, 193-196
Modal theory, 90
Modulator, 17
bandwidth of, 184-189
Momentum, conservation of,
in interaction, 24
Monte Carlo simulation, 156-1 67
Multiple scattering, 71, 106
NOA method, 143, 144
Normal modes, 11,26,88-90
Numerical approach, 135-168
by carrierless split-step
method, 147
by eigenvalues, 143
by Fourier transforms, 154
by integration, 137
by matrix multiplication, 146
by Monte Carlo simulation, 156
for multiple Bragg incidence, 140
by successive diffraction, 144
Oblique incidence,4 5 4 8
Optical axes, 268
Optical indicatrix (see
Index ellipsoid)
Orthogonal scattering, 273-274,277
Parallel processing, 17,236
Paraxial propagation, 40
Path integrals, 106-109
Phased array deflector(see
Beam steering)
Phase modulation, 42
Phase shift
accumulated, through sound
field, 80
through sound beam, 45
Phase synchronism, 20,58
Phasor
definition of, 36
generalizationof, 39
Photon-phonon collision, 24,292
Plane wave
analysis of Braggdiffraction
imaging, 206,210
correspondence, 21
coupling, 70-72
spectrum (see Angular plane
wave spectrum)
strong interaction in two
dimensions, 98-106
weak interaction in three
dimensions, 295-303
weak interaction in two
dimensions, 23,72-77,96-98
Polarization, of scattered light, 125
Poisson’s ratio, 263
Poynting vector, of sound, 265
Profiled light beam,77-79
Profiled sound column,79-83
Profilometer, 198,200
Q (see Klein-Cook parameter)
Q spoiler, 29
Quasi theorem, 189-193
Radiation pattern, of sound, 71
measurement of, 21,76
Raman-Nath,
criterion, 49
equations, 13, 15, 55, 56, 87
generalized, 14,92
truncation of, 136
normalized parameter, 90
parameter, 42
regime, 56
Ray
amplitude ofdiffracted ray, 23, 115
bending, 14,49, 50,
imaging by, 23,
focusing, 51, 52,
theory (see Eikonal theory)
tracing
diagrams, 111
in Bragg diffraction imaging, 210
in curved wavefront
interaction, 11 1, 122
330
[Ray1
trajectories, 10
Recursion relations, 15, 57
Rescattering (see Multiple
scattering)
Sadron constant, 275
SAW (see Surface acoustic waves)
Scattered field, calculation of, 129
Schaefer-Bergmann patterns, 18
Schlieren
imaging ofsound
fields, 23,223-233
diffraction free patterns in, 228
and tomography, 227
Scophony system, 17, 19
Shear wave, 262
Signal processing
heterodyning in, 241-248
in frequency plane,242-248
in image plane,237-242
with time integration, 248-253
spectrum analyzer, 251
triple product processor, 252
use of Quasitheorem in, 190
Slowness surface, 264
Sound visualization
by Bragg diffraction
imaging, 21, 22, 206-219
by Bragg diffraction
sampling, 23,219-223
by Schlieren imaging, 223-233
Index
Spectra, 303-308
local planewave, 305
profile, 306-307
acousto-optic profile, 307-308
virtual plane wave, 304
Spectrum analyzer, 19, 182-184,251
Split-step method, 147-150
carrierless for acoustooptics, 150-1 54
Standing wave, of sound, 38
Stationary phase method, 321-322
Surface acoustic waves, 266
probing of, 233-236
visualization of,28
Transition probability, 161, 162
Uniaxial crystal,268-269
anisotropic interaction in, 278
beam deflector using, 282
collinear interaction in, 280
Vibration measurement, 196-198
Virtual field, 54,69,73
Wave vector diagram, 20,25,70
for beam deflector, 177,280
X-ray diffraction, 6, 9
Young’s modulus, 263
Download