Advanced applications: an overview
James B. Breckinridge
28 July 1997
Abstract
Applications of materials in white-light optical imaging and remote-sensing systems are discussed. Image formation is described in terms of wavefronts, and the influence of materials on the quality of images is given. The rationale for why some space optics structures are large is presented. Other topics are applications of imaging spectrometers, adaptive optics, and the control of unwanted radiation. Optical materials limit next-generation high-performance optical systems.

1. INTRODUCTION

Optical telescopes, instruments, and focal planes are used together to image distant scenes and to enable measurements of their characteristics. Optics and materials science are intimately intertwined. It is the interaction of light with matter that enables us to control the direction light travels, to measure its intensity, and to extract information of importance about the characteristics of the scene.

Today, I want to leave you with the knowledge of how the properties of optical materials are used to control the image formation process, and how recent research into new optical materials, structures, and controls has led to new, previously unimagined capabilities for imaging systems.

New materials development enables new optical instruments, and, similarly, innovative applications of optical systems drive new materials development. Materials are used in optical systems in several ways: One is to control the sphericity of the wavefront; another is to control the separation between optical elements; another is to control unwanted radiation—either thermal background emission or scattered light, as in a coronagraph; and still another is to enable spectral analysis of the scene.

2. WHAT IS AN OPTICAL SYSTEM?

Properties of electromagnetic radiation, whose measurement reveals an aspect of the nature of the source or the intervening medium, are 1) intensity as a function of position, 2) polarization state, and 3) intensity as a function of wavelength. Optical instrument systems are built to measure these quantities, and their variation with time, across a two-dimensional scene. The intensity measurement can be a time average, a measure of intensity fluctuations with time, or a measurement of the time of arrival at the detector of each individual photon across the image scene.

A white-light source (object) is placed to the left of the optical system, and radiation flows left to right. A fairly accurate model of an extended field-of-view white-light scene is that it is composed of an ensemble of non-interacting (non-coherent) point sources (intensity delta functions), each with a different intensity to represent the scene intensity changing with (x, y). Each of these point sources is then mapped through the optical system to form an image at the image plane on the right. If the optical system operates on each point source in the same manner, then the optical system is said to be “spatially linear.”
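This point-source model is easy to make concrete. The sketch below is my illustration of the model, not code from the paper; the Gaussian kernel is an assumed stand-in for a real point-spread function (PSF). Spatial linearity means the image of an incoherent scene is the convolution of the scene intensity with the PSF:

```python
# Minimal sketch of the "spatially linear" incoherent-scene model.
# Assumed example: the Gaussian kernel is a stand-in for a real PSF.
import numpy as np
from scipy.signal import fftconvolve

scene = np.zeros((128, 128))
scene[40, 40] = 1.0   # non-interacting point sources (intensity
scene[80, 90] = 0.5   # delta functions) of different intensities

# Stand-in blur kernel; a physical PSF would come from the pupil (Sec. 2.2).
yy, xx = np.mgrid[-8:9, -8:9]
psf = np.exp(-(xx**2 + yy**2) / 8.0)
psf /= psf.sum()

# The system maps every point source identically, so the image is a
# superposition of shifted, scaled PSF copies: a convolution in intensity.
image = fftconvolve(scene, psf, mode="same")
```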

A point source on the object gives a spherical wavefront which diverges or expands spherically outward. The optical system operates on the expanding wavefront by turning it around from an expanding or diverging wavefront into a converging spherical wavefront, converging to a point in the image space. An optical system is said to have “wavefront errors” if the converging wavefront is not spherical. The deviation from a spherical wave is called an “aberration.” In general, these aberrations are field dependent.

Only a portion of this wavefront is operated on by the optical system to form the image. If we could invent an optical system to collect the entire wavefront as it expands into four pi steradians and turn that wavefront around to make it contract, or converge on a point in image space, then there would be no diffraction effects. Both diffraction effects and wavefront errors affect image quality. The dominant error in most imaging systems is wavefront error; that is, the optical system does not produce a perfect converging wave.

In addition, the transmittance of most optical systems is wavelength or color dependent, and, in general, the “color” of the object is changed as it is mapped through the optical system. The transmittance also varies with polarization state, and this variation differs from one optical system to another.

2.1 Diverging to converging

In the previous paragraph, we saw that image formation requires a bend or change in the curvature of a wavefront, from expansion to contraction. This bending is done by the interaction of the wavefront with matter. It is the interaction between an incoming electromagnetic complex-amplitude wavefront and materials that enables the formation of images. Reflection of a wavefront from a curved surface coated with a specularly reflecting material, or transmission of the wavefront through a dielectric lens of uniform or non-uniform index of refraction, are common methods used to change the direction of the complex-amplitude wavefront. Materials affect the performance.

2.2 Complex amplitude, phase, and image formation

An image is formed on a detector, for example, film, a solid-state focal plane, or an image-intensifier cathode. The detector is a material that converts the photon energy into thermal energy or electrons. Focal plane materials are sensitive to intensity or power and not directly to complex amplitude wavefronts. Good image formation in broad-band white light requires that all of the spherically converging optical wavefronts combine at one point on the focal plane. The intensity distribution that makes up the image plane irradiance distribution (the image) is the modulus squared of the sum of all of the complex amplitude waves across the field. The optical path lengths of all the rays that converge to an image point must be precisely the same to form a quality diffraction-limited image.
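As a minimal numerical sketch of this process (my illustration, with an assumed quarter-wave of defocus; none of this code is from the paper), the pupil's complex amplitude is propagated to the focal plane with a Fourier transform, and the detector records only the squared modulus:

```python
# Point-spread function from a pupil with a wavefront error, in the
# standard Fourier-optics approximation. Assumed illustrative values.
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2) <= 1.0            # circular clear aperture

# Wavefront error W in waves; W = 0 would give the diffraction limit.
W = 0.25 * (X**2 + Y**2) * aperture        # quarter-wave of defocus (assumed)

pupil = aperture * np.exp(2j * np.pi * W)    # complex amplitude at the pupil
field = np.fft.fftshift(np.fft.fft2(pupil))  # complex amplitude at focus
psf = np.abs(field)**2                       # detector senses intensity only
psf /= psf.sum()
```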

3. THE ROLE OF OPTICAL MATERIALS IN IMAGE FORMATION

Optical materials and structures science and technology establish the limits on image brightness and image quality.

3.1 Image brightness

3.1.1 Introduction.

An important aspect of image formation is the brightness of the image. Image brightness is determined by the size (diameter) of the light-collecting area and by the internal relay optics. The structure around the optics must both support the spacing of the optical elements to tolerances of less than one micrometer and be large enough to pass enough radiation to the focal plane to give the desired signal-to-noise ratio or image brightness across the scene. In this section, we define optical throughput and explain why, for many applications, telescopes cannot be made miniature; indeed, for exploration of the deep universe, the largest apertures possible are needed. These large precision apertures stress the limits of modern materials science and structures. Structures 1 to 20 meters in size must be held to within 10⁻⁶-meter tolerances for both ground and space optics. In this section, we will ray-trace an optical system to show the relationship between image brightness and the size of the structure.

3.1.2 Ray trace and the optical invariant.

First-order optical design includes system analysis to ensure that as much of the energy from the source as needed to give a required signal-to-noise ratio falls on the detector.

In this section, we use the ray-trace equations to show that the area-solid angle product is an invariant throughout an optical system and must be preserved if one wants to maximize the power on the detector for the system's resources of aperture and focal length. This invariant carries the name “Lagrange or Helmholtz invariant.”

A measure of the capability of an optical system to pass radiation is called by several names: “luminosity,” “étendue,” or the “throughput.” The unit for this quantity is either cm·radian or cm²·steradian.

3.1.2.1 Ray-trace equations.

In the paraxial approximation, which we shall use here, the optical system is represented by a series of pupil and image planes. There is only one object plane, but an optical system can have several image planes embedded within it. Similarly, there is only one system pupil plane buried within it, and a compound optical system will have several images of this pupil plane. When the system is viewed from object or image space, however, only one pupil plane appears.

We will find that describing an optical system using two rays, the chief ray and the marginal ray, is sufficient for most design and analysis tasks. Figure 1 shows a schematic of any optical system reduced to an object, pupil, and image plane.

Figure 1. Perspective view of object plane, pupil plane, and image plane used to show the definition for marginal rays and angles, chief rays and angles, and ray heights (y). The chief ray passes from the edge of the object plane through the center of the pupil plane to the edge of the image plane. The marginal ray passes from the center of the object plane through the edge of the pupil plane and then crosses the axis at the image plane.


The chief ray is indicated by the solid line in Figure 1. It is the ray traced from the extremity or edge of the object and that passes through the center of the pupil. Since this ray passes through the center of the pupil, it “sees” no power, and the ray does not bend. The pupil plane is defined as the plane normal to the system axis at the point where the chief ray, as viewed from either image or object space, passes through the optical system axis.

The marginal ray is indicated by the dashed line in Figure 1. The marginal ray is the ray traced from the center of the object through the rim or edge of the pupil. At the pupil, this ray is refracted, and if the system is an imaging system, the marginal ray crosses the axis at the image plane.

The chief ray makes angle u̅ to the system optical axis, and the marginal ray makes angle u to the optical axis at the object plane, as shown in Figure 1. Note that the angles do not change when a ray traverses a medium of constant index of refraction. Angles are therefore assigned the subscript of the medium. For example, the chief ray angle between the object and the pupil is indicated by u̅1 (which also equals u̅2), and the marginal ray angle is indicated by u1. Note that the angles and the height of the chief ray from the system axis are indicated by the variables u̅ and y̅. The angles and height of the marginal ray from the system axis are indicated by the variables u and y without the bar. Note also that at the object plane, the height of the marginal ray, indicated by y1, is zero. Also, at the image plane, y3 is zero. At the pupil plane, the height of the chief ray, indicated by y̅2, is zero.

We will use the paraxial ray-trace equations to derive important relationships between the marginal and chief rays. Consider any two rays propagating through an optical system. These two rays are not necessarily the chief and marginal rays. For the purpose of distinguishing between these two rays here, we shall refer to them in barred and unbarred notation.

The ray angle after refraction at an interface between materials of index n1 and n2, with power at surface 2 of ϕ2, is given by

$$ n_2 u_2 = n_1 u_1 - y_2 \phi_2 \qquad (1) $$

where the power ϕ2 is

$$ \phi_2 = \frac{n_2 - n_1}{R_2} $$

and where R2 is the radius of curvature of surface 2, and n1 and n2 are the indices of refraction on either side.

If we select another ray angle and indicate this with a bar over it, u̅2 (after refraction at a surface of power ϕ2 into a medium of index n2) is

$$ n_2 \bar{u}_2 = n_1 \bar{u}_1 - \bar{y}_2 \phi_2 \qquad (2) $$

After this ray leaves a refractive or “ray-bending” surface, it travels in a straight line (assuming the ray is propagating in an isotropic medium).

The height of a ray at surface 2 is given by y2. It is a function of the ray height at surface 1 (given by y1), the thickness t1, and the angle u1. The height of this ray at surface 2 is

$$ y_2 = y_1 + t_1 u_1 \qquad (3) $$

By symmetry, for the ray indicated by a bar over the top, we repeat the above argument and can write

$$ \bar{y}_2 = \bar{y}_1 + t_1 \bar{u}_1 \qquad (4) $$

The power ϕ2 is a characteristic of the surface and is therefore the same for both rays, barred and unbarred. We combine equations (1) and (2), each solved for ϕ2, to give

$$ \bar{y}_2\,(n_1 u_1 - n_2 u_2) = y_2\,(n_1 \bar{u}_1 - n_2 \bar{u}_2) $$

Using equations (3) and (4), and rearranging so that all the terms for plane 1 are on the left-hand side and those for plane 2 are on the right-hand side,

$$ n_1(\bar{u}_1 y_1 - u_1 \bar{y}_1) = n_2(\bar{u}_2 y_2 - u_2 \bar{y}_2) \equiv H \qquad (5) $$

This equation shows that there is a property of an optical system that is true at any two arbitrary planes within the system. This is called the Lagrange invariant, and the symbol H is often used as the shorthand representation.
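The intermediate algebra, written out for completeness (my reconstruction of the step the text describes): substituting equations (3) and (4) into the combined refraction equation gives

$$ (\bar{y}_1 + t_1\bar{u}_1)(n_1 u_1 - n_2 u_2) = (y_1 + t_1 u_1)(n_1\bar{u}_1 - n_2\bar{u}_2). $$

Expanding both sides, the common term $n_1 t_1 u_1 \bar{u}_1$ cancels, and grouping the plane-1 quantities on the left and the plane-2 quantities on the right recovers equation (5):

$$ n_1(\bar{u}_1 y_1 - u_1\bar{y}_1) = n_2\big(\bar{u}_2(y_1 + t_1 u_1) - u_2(\bar{y}_1 + t_1\bar{u}_1)\big) = n_2(\bar{u}_2 y_2 - u_2\bar{y}_2). $$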

If we rewrite equation (5) with the object plane on the left-hand side, that is, y1 = 0, and the pupil plane on the right-hand side, that is, y̅2 = 0, we have the marginal and chief rays respectively, and then we write

$$ n\, u_1 \bar{y}_1 = -\,n\, \bar{u}_2\, y_2 \qquad (6) $$

where we have set n1 = n2 = n, since both the object and the pupil are immersed in a medium of the same refractive index. The minus sign reflects the paraxial sign convention for the chief ray angle; it disappears when the equation is squared below.
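A minimal paraxial ray-trace sketch makes the invariance concrete (an assumed example with arbitrary values; the function names and numbers are mine, not the paper's). Two rays are transferred to a single refracting surface with equation (3), refracted with equation (1), and H from equation (5) is evaluated before and after:

```python
# Verify the Lagrange invariant H = n*(u_bar*y - u*y_bar) across a
# transfer plus a refraction. All numeric values are assumed examples.

def refract(n1, n2, u, y, phi):
    """Paraxial refraction, equation (1): n2*u2 = n1*u1 - y2*phi2."""
    return (n1 * u - y * phi) / n2

def transfer(y, u, t):
    """Paraxial transfer, equation (3): y2 = y1 + t1*u1."""
    return y + t * u

def lagrange(n, u, y, u_bar, y_bar):
    """Lagrange invariant, equation (5): H = n*(u_bar*y - u*y_bar)."""
    return n * (u_bar * y - u * y_bar)

n1, n2 = 1.0, 1.5            # air to glass
phi = (n2 - n1) / 50.0       # surface power for R2 = 50 mm

y, u = 0.0, 0.1              # unbarred ray: starts on the axis
y_bar, u_bar = 5.0, -0.02    # barred ray: starts at the object edge

H_before = lagrange(n1, u, y, u_bar, y_bar)

t = 40.0                     # propagate both rays 40 mm to the surface
y, y_bar = transfer(y, u, t), transfer(y_bar, u_bar, t)
u, u_bar = refract(n1, n2, u, y, phi), refract(n1, n2, u_bar, y_bar, phi)

H_after = lagrange(n2, u, y, u_bar, y_bar)
print(H_before, H_after)     # both print -0.5: H is preserved
```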

3.1.2.2 Conservation of area solid-angle product and detector power.

An object space source will have a certain radiant emittance (watts per meter squared). The optical system collects this radiation and re-images it onto the focal plane detector.

Consider an object plane and the pupil plane separated by distance d. We learned above that the Lagrange invariant must be satisfied to ensure no vignetting (obstruction of light by the optical system) within the system, and thus to ensure that the power falling on the entrance aperture will strike the image plane.

To compute the power at the detector of the sensor system, one needs first to compute the power at the telescope aperture received from the source. Radiative transfer takes place between two surfaces: a surface on the object and the surface of the entrance aperture. To identify an area on the object, the chief ray height at the object, y̅1, is rotated out of the meridional plane to sweep a circle, so the area of the object plane, Ao, is Ao = πy̅1². We now calculate the apparent solid angle subtended by the pupil as viewed from the object. The marginal ray angle u1 is converted into a solid angle, given by

$$ \Omega_p = \pi u_1^2 \qquad (7) $$

Ωp is the apparent solid angle subtended by the pupil as viewed from the object.

For small angles, u1 = y2/d, where y2 is the marginal ray height at the pupil, so the solid angle of the pupil as seen from the object is Ωp = Ap/d² = πu1². If we assume the medium to be air, and let the index of refraction, n, be 1, then multiplying both sides of (6) by π and squaring them, we find

$$ \pi^2 u_1^2 \bar{y}_1^2 = \pi^2 \bar{u}_2^2 y_2^2 \qquad (8) $$

This is rewritten to give

$$ A_o \Omega_p = A_p \Omega_o \qquad (9) $$

where Ao is the area of the object being imaged through the optical system, Ωp is the solid angle of the pupil as it appears when viewed from the object, Ap is the area of the pupil, and Ωo is the solid angle of the object as viewed from the pupil. Measurement of faint sources requires large-area collecting surfaces, and the area-times-solid-angle rule expressed in equation (9) must hold at all planes within an optical system.
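A short numeric check of equation (9) under assumed geometry (the distance and ray heights below are arbitrary illustration values, not from the paper):

```python
# Check Ao*Omega_p = Ap*Omega_o for an object-pupil pair, equation (9).
import math

d = 1000.0       # object-to-pupil distance (assumed, mm)
y1_bar = 20.0    # chief ray height at the object = object radius (assumed)
y2 = 50.0        # marginal ray height at the pupil = pupil radius (assumed)

u1 = y2 / d            # marginal ray angle at the object
u2_bar = y1_bar / d    # chief ray angle magnitude at the pupil

Ao, Omega_p = math.pi * y1_bar**2, math.pi * u1**2
Ap, Omega_o = math.pi * y2**2, math.pi * u2_bar**2

print(Ao * Omega_p, Ap * Omega_o)   # equal: the throughput is conserved
```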

Many scientific and engineering applications of optical systems require measurements of very faint scene information at the threshold of detection. The largest collecting area possible, consistent with the ability of structures and materials technology to hold the sub-micrometer wavefront tolerances, is required.

3.2 Image quality

Structural elements within an optical system are used to control the spacing between surfaces and sets of surfaces. The simplest example is a structure used to control the separation between a single lens and the focal plane. The lens has its wavefront-bending capability completely within it: the fact that it is a solid block bounded by two curved surfaces, separated by a material of uniform index n, means the optical power is fully defined. Deformations within the glass produced by, say, mechanical stress, temperature gradients, or thermal nonuniformities will distort the converging wavefront and possibly produce an unacceptable image. For now, consider the lens to be perfect and used at its designed wavelength and temperature. The structure separating the lens from the focal plane is used to control the first-order aberrations of tilt and defocus, which are corrected by “tuning” the structure.

Wavefront aberration errors of astigmatism, coma, spherical, distortion, and field curvature are properties of the optical element. These aberrations are “frozen” into the image-forming element, and tilt of the element or translation (defocus) cannot correct them. The structure that holds the lens cannot be modified to improve system performance.

3.2.1 Adaptive optics and opto-mechanical alignment.

A relayed image of the telescope pupil provides the location within an optical system at which wavefront errors can be corrected over the largest field of view.1 In Section 3.1, we saw that the area-times-solid-angle-product relationship must not be violated if we are to have an efficient optical system.

Consider a 10-meter telescope whose wavefront errors one might want to correct with a 10-centimeter clear-aperture adaptive optics element. The demagnification, or minification, factor is 100, and angles on the adaptive optics element are multiplied by a factor of 100. If the desired field-of-view radius (chief ray) is 0.1 degrees, then, in the presence of this demagnification factor, the chief ray at the relayed pupil is 10 degrees, an angle far too large for a practical relay.
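Written as a worked equation (the symbols for the aperture diameters and the field angle are my notation, not the paper's):

$$ \theta_{\mathrm{relay}} = \frac{D_{\mathrm{primary}}}{D_{\mathrm{AO}}}\,\theta_{\mathrm{field}} = \frac{10\ \mathrm{m}}{0.10\ \mathrm{m}} \times 0.1^{\circ} = 10^{\circ}. $$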

Opto-mechanical alignment in the wavefront-shear direction, or in a direction normal to the system axis, is made much more sensitive as a result of using a system with significant demagnification for the adaptive optics element. Consider the example used above of a 10-meter aperture imaged onto a 10-centimeter adaptive optics element. The 10-meter aperture will, most likely, be a surface of revolution that is a conic surface or a segment of a conic surface. The optical axis of the 10-meter primary as projected through the optical system will fall on the 10-centimeter adaptive optics element. The axis of the adaptive optics element must be aligned with (superimposed upon) the axis of the primary to a precision 100 times tighter than would be required for the primary mirror were no adaptive optics used. The Hubble Space Telescope with its Wide Field Planetary Camera successfully solved a similar problem, using an on-orbit adaptive optics system.2

4. IMAGING SPECTROMETERS AND OPTICAL MATERIALS

Today, the value of imaging spectrometry has become well recognized, and there is a need for new optical materials and structures development to improve system spectral efficiency and reduce the deleterious effects of wavelength-dependent polarization. Acousto-optic tunable filters, liquid-crystal tunable filters, and other imaging spectrometry components, such as diffraction gratings, nonlinear films, and optical coatings, require new materials science research.

5. CONTROL OF SCATTERED LIGHT FOR PLANET DETECTION

Unwanted radiation within optical systems caused by scattered light or thermal emission often limits the performance of optical systems. Breckinridge,3 along with others, recognized these limits to extra-solar planet detection. Little materials science work has been done to control scattered light in telescopes for high-contrast observations. Materials science research focused on this technology issue will have great benefits for science.

6. CONCLUSION

The interaction of light and matter enables us to control the direction light travels, to measure its intensity, and to extract information of importance about the characteristics of the scene. Applied materials science research and development performed in response to optical remote-sensing system requirements has opened new vistas for humanity. Future new capabilities require focused, applied research and development programs.

ACKNOWLEDGMENT

This paper was prepared by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

REFERENCES

1. A. B. Meinel and M. P. Meinel, “Two-stage optics: high acuity performance from low acuity optical systems,” Opt. Eng. 31, 2271 (1992). https://doi.org/10.1117/12.59946

2. J. B. Breckinridge and H. J. Wood, “Space Optics: Introduction,” Applied Optics 32, 1677–1680 (1993). https://doi.org/10.1364/AO.32.001677

3. J. B. Breckinridge, T. G. Kuper, and R. V. Shack, “Space Telescope Low-Scattered-Light Camera: A Model,” Opt. Eng. 23, 816–820 (1984). https://doi.org/10.1117/12.7973388
© (1997) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
James B. Breckinridge "Advanced applications: an overview", Proc. SPIE 10289, Advanced Materials for Optics and Precision Structures: A Critical Review, 1028903 (28 July 1997); https://doi.org/10.1117/12.279812