Designing a new, large, complex observatory: learning the strategic lesson of newness from our experience on the James Webb Space Telescope

Jonathan W. Arenberg, Tiffany Glassman, Elysia Starr, Reem Hejal, Till Liepmann, Charles Atkinson, Nina Altshuler, Annetta Luevano, Marc Roth, Perry Knollenberg

14 March 2024
Abstract

We formulate the lessons Northrop Grumman personnel have learned from their work on development of the James Webb Space Telescope. These lessons are strategic in nature and bear on the common behavior during development of all large complex systems, such as astrophysics missions, also known colloquially as Flagships. To justify the expense, a Flagship must be a large leap in scientific capability, demanding new architectures and technologies coupled with an intolerance to risk. We define “The Problem of Newness” based on our experience and data from Webb’s development. This unseen hand was present during Webb, and it is only in retrospect that we have been able to define it and present it as a lesson for the future. Future missions, Flagships in particular, should recognize the challenge of newness as a natural consequence of development and take steps to minimize its impact.

1. Introduction

This paper is being written just after the completion of the first year of science operations of the James Webb Space Telescope. The authors are grateful for the chance to contribute to this amazing mission and proud of our contributions to the team's success. Our experience spans the entire period of development and ranges from detailed discipline engineering to two chief engineers. We have learned a great number of lessons; most are what could be called tactical, specific to particular instances and aspects of the Webb program. A truly comprehensive paper of every lesson learned on Webb would be far beyond the scope of a single journal article and the resources available to us, so we make no claims that this is a comprehensive list of lessons from Webb.

We have chosen to make this paper about the largest lesson from our experience on Webb. Specifically, we focus on the considerations that are due to the unique development of a Flagship mission and the consequences thereof, which we call the “The Problem of Newness.” We firmly believe that recognizing and managing the impacts of “The Problem of Newness” is the strategic lesson to be learned from our experience on Webb. Since the conditions that result in “The Problem of Newness” are the sine qua non of a Flagship, this lesson is broadly applicable and independent of the science mission.

Figure 1 maps out the attributes of the Webb observatory. First is a large leap in science empowered by the large, novel, deployable optics which are cooled passively by the sunshield. Next, the design must leverage the new technologies and architectures to produce sufficiently high performance to meet the science requirements. Finally, there must be confidence in the performance, to complete, verify, and validate the design and ultimately sell off and then operate the observatory.

Fig. 1 Map of the factors driving the development of the James Webb Space Telescope.

Figure 1 is also a map to our discussion. Section 2 begins with a discussion of what Webb is designed to do and what empowers the observatory to achieve these science goals, addressing the top branch of the diagram in Fig. 1 labeled, “Leap in Science Capability.” The latter part of Sec. 2 is devoted to answering the question, why is Flagship development different from other developments? Some of the scope of the middle branch, “High Performance Design,” is the subject for Sec. 3, which describes the developmental history of Webb’s design, specifically knowledge of the design. Section 4 addresses the other aspects of the middle design branch, dealing with model evolution, confidence in prediction and performance reserves. The third branch, “Confidence in Performance,” namely verification and validation, is the subject of Sec. 5. Section 6 gives our summary and a list of lessons for future Flagship class missions. In Secs. 2 to 5, there is narrative discussion of the section’s objective illustrated by anecdotes from the mission.

2. Why Is Flagship Development Different?

First, a Flagship, such as Webb, must deliver a large leap in science capability and the ability to collect data to answer the most pressing science questions at the time of formulation. At an emotional level, it must be clearly head and shoulders above the previous generation of similar instrumentation. A visible expression of the kind of leap that Webb has made is shown in Fig. 2, an early release image from 2022 showing the same object as viewed by the Spitzer Space Telescope and Webb. Webb’s leap was powered by new systems architectures and technologies. Any future Flagship will also need to be a big leap in its area of science, the foundation of any Flagship mission.

Fig. 2 Comparison of the same region of the sky as viewed by the Spitzer Space Telescope and by the James Webb Space Telescope; the increase in resolution and light-gathering power is clearly evident. Image credit: NASA/ESA/CSA/STScI.1

The need for systems that are a large leap in capability demands new technologies, architectures, and materials. Understanding the behavior of such systems requires detailed modeling of new configurations and an understanding of the performance sensitivities. It also requires exploring new regimes of environmental conditions and understanding the effect these have on the system.

New Environments

The unusual environments, requirements, and architecture of Webb led to a need for new testing of materials, use of non-standard materials, analysis of sensitivity to new environmental effects, etc.

Anecdote 1: New Materials Testing

An example of the impact of new environments is the high temperatures JWST experienced, associated with launch and ascent aerothermal heating as well as long-term direct and concentrated sun exposure. These environments required new tests of thermal control materials and composites. Many existing thermal control constructions, such as multi-layer and single-layer insulation, were not compatible with the predicted high temperatures. Composites, although stable at cold temperatures, will creep as they approach their glass transition temperature (Tg), as all plastics do. In both cases, appropriate thermal control coatings were studied to ensure the hardware could perform in these environments.

Anecdote 2: Contamination Considerations

The architecture and layout of hardware for the Webb telescope brought concerns over contamination front and center. In addition to the typical concerns about particulate fallout and molecular layer build up, there was a risk of ice buildup on the optics and the sunshield, changing their emissivity and surface radiance. This led to a first principles analysis of the impacts of contamination on optical transmission and emissivity.2,3,4

To make accurate predictions of accumulation during the long assembly and integration phases of Webb, a detailed spreadsheet was developed by Northrop contamination control engineering. In order to get good long-term molecular film growth rates, long-term measurements were made without changing the wafers. The result was startling: the thickness of molecular layers did not evolve linearly and instead reached an equilibrium value.5,6 This equilibrium thickness is much smaller than is conventionally predicted.
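To illustrate the kind of behavior the long-term measurements revealed, the following minimal sketch contrasts a conventional linear growth model with a first-order saturation model; the rate constant and equilibrium thickness are illustrative placeholders, not Webb program values.

```python
import numpy as np

def film_thickness_nm(t_days, d_eq_nm=50.0, tau_days=200.0):
    """Toy first-order saturation model of molecular film growth:
    d(t) = d_eq * (1 - exp(-t / tau)).
    Contrast with the conventional model d(t) = rate * t,
    which grows without bound."""
    return d_eq_nm * (1.0 - np.exp(-t_days / tau_days))

# Rate a linear model would infer from a single early measurement
early_day = 30
linear_rate = film_thickness_nm(early_day) / early_day  # nm/day

for t in (30, 180, 365, 730, 1460):
    print(f"day {t:4d}: saturating {film_thickness_nm(t):5.1f} nm, "
          f"linear extrapolation {linear_rate * t:6.1f} nm")
```

Under the saturating model, a growth rate inferred from early measurements wildly overpredicts the long-term film thickness, which is the error the equilibrium observation corrected.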

A Flagship is a major capital asset, an engine of scientific exploration and discovery, and a foundation of advancement in the field. Broad general scientific capability is also desired in addition to the performance against the scientific cases driving the design.

Flagships are designed to be key to the achievement of NASA and community science goals: long lived, high performance, and expensive. They are the very embodiment of class A missions, which are the least tolerant to risk of any programs in the NASA portfolio.

In most discussions and studies of Flagships, cost, namely the total integrated cost, is central. However, during development, it is the available yearly funding that is the biggest factor in program success. In the early years of development, year-to-year funding is typically increasing. However, the US Federal budget is often administered through a continuing resolution (CR), under which programs are given the prior year's allocation. This non-technical reality causes work to be deferred, creating gaps in development and increasing the cost and risk of the program.

3. Webb's Design Evolution

Webb’s science mission7 drives the need for the large primary mirror (PM) and operating temperature in the 40 K range. The final design for the James Webb Space Telescope was the result of many tens of millions of engineer-hours of effort and was not fully mature at conception. Figure 3 consists of panels of artist’s renderings depicting Webb’s design from conceptual design review (CoDR, 2004) through Mission Critical Design Review (MCDR 2010).

Fig. 3 The evolution of the Webb design from CoDR, through PDR, to CDR.

Even with this curated view of the design evolution of Webb, significant change is evident. The CoDR design had a primary mirror larger than the eventual flight diameter of 6.5 m, composed of 37 mirror elements, and a very boxy sunshield with compound dihedral angles. The aperture was reduced to 6.5 m to meet budgets while still delivering Webb's required science, and the 37-mirror primary was found to be less than optimal at that diameter. Some of the thinking behind this trade is used as a textbook example of early systems design.8 Even more telling, the tight-fitting, rectangular sunshield could not provide a large instantaneous field of regard and offered little, if any, protection for observatory roll about the telescope's optical axis.

By the time of Mission Preliminary Design Review (MPDR) in 2008, the design had significantly advanced. The sunshield had a single dihedral angle and its final interface to the spacecraft was defined. The sunshield was now an irregular hexagon, with its edges in a catenary shape. At this point in development, the sunshield was deployed by six booms. The accumulated momentum from solar pressure was to be managed by adjusting the pitch of the forward part of the sunshield. The spacecraft’s solar arrays were deployed in a direction perpendicular to the telescope optical axis on each side of the spacecraft. It is also clear that the PM had not yet acquired its “frill,” the black Kapton closeout necessary for stray light mitigation.9,10 The deployable tower assembly and the area of the core were visible to the primary mirror and not completely closed out thermally.

The critical design review (CDR) version of the design illustrates the sleek “light-lines” used to create straight sunshield edges which close out the view of the hot lower regions of the sunshield to the mirror, a stray light risk. Also in evidence, but not in their final configurations, are the “bib” and “frill” closeouts to mitigate stray light from the sky and from the hot, 80 to 120 K, core region around the deployable tower. At the time of CDR, the sunshield’s geometry was fixed, not adjustable in flight, and the aft-flap, seen in the lower right panel, had been added. This aft flap has an adjustable angle that was set prior to launch to minimize momentum build up over the field of regard, a key on-orbit scheduling concern.11 Finally, the boom-based deployment had been replaced by the unitized pallet structures (UPS) that simplified membrane management and deployment.

The design continued to evolve after CDR. The bib stray-light closeout no longer rotates down, a motion that had left a gap where light could leak; the final flight design is fixed to both the telescope and the sunshield rim. The frill stray-light closeout went through several iterations before a flight configuration was determined. Most notably, the instrument radiators were relocated from the sides of the integrated science instrument module (ISIM) to a deployable radiator at the back of the observatory, the aft deployable instrument radiator (ADIR).

A typical or traditional system development begins with formulation and a baseline design, culminating in the system requirements review (SRR). At the SRR, requirements are set, and tolerances are allocated and flowed down to lower levels of the system. The SRR is followed by preliminary design and a review, the preliminary design review (PDR). The critical design follows the PDR, culminating in a CDR. The underlying assumption here is that requirements are identified and complete at SRR; as we shall see, this was not the case for Webb.

The development of Webb proceeded under the paradigm of a "typical" development, and SRR was held in 2004. At this point, the major requirements documents were released, and the integrated product teams (IPTs) got to work designing. What was not realized at the time was the immaturity of the systems models used to derive the requirements, relative to their ultimate size and complexity. The subject of requirements completeness and maturity is taken up in Sec. 4.

Due to funding limitations, development across the IPTs, for the spacecraft, sunshield, and telescope, did not occur at the initially planned coordinated cadence. The telescope, which was on the program critical path, was funded, while the spacecraft slowed significantly: spacecraft staffing was minimal, with the ramp-up deferred until later in the program. In fact, at the time of mission CDR, the spacecraft had not completed its CDR, which was one of the reasons a final review of the system, called the system look back review (SLR), was added to the schedule. SLR was held in May 2014.

The de-phasing of the original plan for system development impacted the program in various ways. At the very root of the staggered development was the obvious difference in maturity between interfaces, i.e., between the telescope and the rest of the system. Typically, interface decisions were made with an eye on the clock, allowing the telescope to remain on schedule. The result on occasion was an unplanned, and unappreciated at the time, shift of risk to the other side of the interface.

Impact of Staggered Development

Priorities dictated focus on difficult problems in the telescope system early on, such as building suitable mirrors, postponing typical activities, such as completion of spacecraft- and observatory-level specifications and interface requirements and control documents (IRCDs), that could affect the telescope.

Anecdote 3: Observatory Vertical Lift

Observatory vertical lift specifications were left out of the telescope requirements, leading to a postponed effort after the telescope detailed design was completed. The reason for this oversight was that the telescope assembly and integration flow required only horizontal configuration supports. As such, there were no lift points on the telescope for vertical lift, challenging assembly and integration operations. As a matter of fact, observatory integration was originally to be done in a "cathedral" utilizing the horizontal lift points mounted at the base of the telescope legs. Luckily, common sense finally prevailed and lift points were introduced at the top of the telescope, facilitating integration and testing (I&T). Late design changes had to be implemented to accommodate this requirement.

Anecdote 4: Star Tracker Location

Common sense also dictates that the optimal location for star trackers is the telescope. However, the star tracker is a function of the spacecraft and was therefore originally designated to be mounted on the spacecraft. A complex truss system to mount the star trackers to the spacecraft was originally envisioned. However, thermal distortion requirements for the star trackers and their support structure could not be met by this configuration. With the telescope design well ahead of the spacecraft, it was deemed very difficult to mount the star trackers on the telescope structure itself at that late stage of its development. An effort was then undertaken to build a simplified star tracker support assembly (STSA) mounted on the spacecraft for launch, with supports that are deployed on orbit, including the star tracker radiator, which is mounted on a spacecraft panel. Upon deployment, the star trackers were practically mounted on the telescope, since the STSA inboard interface to the telescope was to its deployable tower hub. Care had to be taken to ensure sufficient clearances upon deployment to avoid any unintended sneak paths that would degrade the "1 Hz" isolator that separates the telescope from the spacecraft. This is a typical concern for vibration isolators on complex programs, similar to the wheel isolators of Chandra program heritage that were updated to allow replacement of obsolete viscoelastic damping parts and softened to improve jitter performance.

Anecdote 5: Flight Software Interfaces

Early on, commands between the science instruments and their control computers resident in the ISIM and flight software (FSW) in the spacecraft were restricted to only the scenarios thought of at the time of requirements definition. There was extra work that went into the ISIM-Optical Telescope Element (OTE) to SC IRCD because there were "trust" issues with the on-board script subsystem development by the ISIM side of the house. In the end, the operations of the observatory must include both the SC FSW and the FSW that resides within the ISIM. Because the observatory-level design was postponed, some additional commands were tracked and configured via the IRCD, which caused inefficiencies.

As is clear from Fig. 3, the system that was ultimately built and flown for the JWST mission evolved significantly from the proposal era. The final design embodies the essence of the architecture from even the concept study era: a large deployable PM, passively cooled to achieve the needed temperature. But almost all the implementations of the architecture evolved on the basis of analysis and test in the design phase.12

4. Understanding Design Performance: Webb's System Model Evolution

The system design process is recognized as an iterative one, described as the doctrine of successive refinement. This doctrine is illustrated in the NASA Systems Engineering Handbook13 and reproduced as Fig. 4. Under this doctrine, the system is modeled to predict its performance and identify performance failures. The essence of the design process is the mitigation of the identified performance failures. The design's performance is modeled with increasing resolution until all requirements are met and greater resolution is not required, a point usually determined by test and independent analysis. At this point, the system model is representative of the system and all its relevant interactions at the level of resolution necessary to meet mission requirements.

Fig. 4 The doctrine of successive refinement8 (Image credit: NASA).

We now develop a mathematical formalism for the evolution of the systems models, the requirements, and the system tolerances. This formalism provides a framework for examining model evolution and its effects on the design process, capturing the impacts of increased model resolution and its implications. Figure 5 shows a flowchart of the steps in the process of revising a system model; the symbols are explained below.

Fig. 5 The flow of activities in maturing the system model and design.

The k'th iteration of the system model, represented by the operator $M_k$, is an abstraction and is taken to mean the process or operations by which the system performance, represented by the technical performance metrics $\kappa_i$, is generated from the system parameters $w_i$. The index $k$ should be viewed as the model version number and takes on values from 0, representing the initial model, through $\Omega$, the last model version. At some iteration $k = k^*$, the requirements are developed for each $\kappa_i$; the tolerance band for each metric is represented as the interval $[\alpha_i, \beta_i]$. For every iteration of the model, each $\kappa_i$ is checked to see if its value falls in this interval; if it does, the requirement is met, and if not, there is a performance failure. The design proceeds until all details of the design are in the model and all requirements are met.

The first of these iterations, model zero, $M_0$, can be written as

Eq. (1)

$$\begin{bmatrix} \kappa_1 \\ \kappa_2 \\ \vdots \\ \kappa_n \end{bmatrix} = M_0 \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_a \end{bmatrix}.$$

The design point, or ideal performance, is when all of the elements of the performance vector take on their ideal values, denoted $\bar{\kappa}_i$; this occurs when all of the design parameters take on their ideal values $\mu_j$. So, for the ideal set of system parameters,

Eq. (2)

$$\begin{bmatrix} \bar{\kappa}_1 \\ \bar{\kappa}_2 \\ \vdots \\ \bar{\kappa}_n \end{bmatrix} = M_0 \begin{bmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_a \end{bmatrix}.$$

Perfection of all system parameters is not possible, so we can view Eq. (2) as the center of the distribution for the $\kappa_i$; the real values will deviate from $\bar{\kappa}_i$. It is possible to calculate the variance (and standard deviation) $\sigma_{\kappa_i}^2$ of each $\kappa_i$ when all the $w_j$ are independent, namely:8

Eq. (3)

$$\sigma_{\kappa_i}^2 = \sum_{j=1}^{a} \left( \frac{\partial \kappa_i^k}{\partial w_j} \right)^2 \sigma_{w_j}^2.$$

(When the $w_j$ are not independent, there is an additional term for the covariance between pairs of variables. This term is omitted from our current formulation for clarity and without loss of generality.) In Eq. (3), the partial derivative is usually called the sensitivity of $\kappa_i$ to $w_j$ for the k'th model. The second term, $\sigma_{w_j}^2$, is the variance in $w_j$. Given that we know the mean and variance of each $\kappa_i$ as a function of the $\sigma_{w_j}^2$, we can select the $\sigma_{w_j}^2$ to give a probability $P_i$ that each requirement for $\kappa_i$ is met with sufficient confidence for the program. The assignment of values to the $\sigma_{w_j}^2$ is the allocation of the system tolerances.
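As a concrete illustration of Eqs. (1)-(3), the sketch below propagates parameter tolerances through a stand-in model using finite-differenced sensitivities; the model, parameter values, and tolerances are invented for illustration and are not Webb quantities.

```python
import numpy as np

def model(w):
    """Stand-in system model M_k: maps parameters w to metrics kappa."""
    w1, w2, w3 = w
    return np.array([w1 * w2 + 0.1 * w3,      # kappa_1
                     np.sqrt(w1) + w3 ** 2])  # kappa_2

mu = np.array([4.0, 2.5, 1.0])          # ideal parameter values mu_j
sigma_w = np.array([0.05, 0.02, 0.01])  # allocated tolerances sigma_w_j

# Finite-difference the sensitivities d(kappa_i)/d(w_j) of Eq. (3)
eps = 1e-6
kappa_bar = model(mu)
S = np.empty((kappa_bar.size, mu.size))
for j in range(mu.size):
    dw = np.zeros_like(mu)
    dw[j] = eps
    S[:, j] = (model(mu + dw) - kappa_bar) / eps

# Eq. (3), independent-parameter case: var(kappa_i) = sum_j S_ij^2 sigma_wj^2
var_kappa = (S ** 2) @ (sigma_w ** 2)
print("kappa_bar   =", kappa_bar)
print("sigma_kappa =", np.sqrt(var_kappa))
```

Allocation then runs this calculation in reverse: the $\sigma_{w_j}$ are adjusted until each $\sigma_{\kappa_i}$ implies an acceptable probability of meeting its requirement interval.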

For a typical system with little or no development and a good system model, this can be completed by SRR. Webb did hold an SRR, but the system models (as we shall soon see) were far from complete. As the system model evolves, the next iteration looks like

Eq. (4)

$$\begin{bmatrix} \kappa_1 \\ \kappa_2 \\ \vdots \\ \kappa_n \end{bmatrix} = M_1 \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_a \\ \vdots \\ w_b \end{bmatrix}.$$

In this example, the number of $w_j$ has increased from $a$ to $b$, the operator is now $M_1$, and the sensitivities are different, so this leads to a different set of values of $\kappa_i$ and $\sigma_{\kappa_i}$:

Eq. (5)

$$\sigma_{\kappa_i}^2 = \sum_{j=1}^{b} \left( \frac{\partial \kappa_i^1}{\partial w_j} \right)^2 \sigma_{w_j}^2.$$

The process of increased model fidelity and detail continues through the design phase and into integration, test, and ultimately operation. We will represent the ultimate development of the model before launch as the $\Omega$ (last) revision, depicted in Eq. (6). In this example, the number of performance metrics has increased, as has the number of system parameters.

Eq. (6)

$$\begin{bmatrix} \kappa_1 \\ \kappa_2 \\ \vdots \\ \kappa_n \\ \vdots \\ \kappa_N \end{bmatrix} = M_\Omega \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_a \\ \vdots \\ w_z \end{bmatrix}.$$

This final model has the performance metrics $\kappa_i^\Omega$ with variances (uncertainties) given as

Eq. (7)

$$\sigma_{\kappa_i}^2 = \sum_{j=1}^{z} \left( \frac{\partial \kappa_i^\Omega}{\partial w_j} \right)^2 \sigma_{w_j}^2.$$

The discussion that has led from Eqs. (1)-(7) is by design a general description of what might be expected to happen on a development program. To examine the specific case of Webb's development, we plotted the number of nodes in the model, as a surrogate for the number of system parameters, over time from proposal to delivery (Fig. 6). In this plot, each data symbol is a recognized configuration-controlled version of the Northrop Grumman Observatory thermal model, each one a major update in terms of design fidelity. The first thing to note is that there are 49 data points on this plot and they are roughly evenly spaced in time, showing that the development of this model was a significant and continuous effort. The figure also shows that the size of the model, i.e., the number of variables and system parameters, increased markedly, with the count at proposal time <5% of that at delivery. More tellingly, at SRR the model contained about 7% of the final node count.

There are two major rapid increases in nodal count; the first is prior to MCDR, when the nodal count nearly doubles. This was due to the inclusion of a very detailed model of the telescope structure in preparation for MCDR, as well as the opening stages of understanding the heating of the telescope during launch (launch and ascent heating). The former problem was closed with the reconfiguration of the instrument radiators and the addition of the ADIR. Launch and ascent heating took a great deal of analysis and changes in the launch attitude profile before it finally succumbed.

It is also well worth noting that the data in Fig. 6 were not collected, collated, and plotted until late 2022, a year after launch. We as a team did not ask for or look at these data earlier; the realization that the SRR-era versions were very sparse compared to the final model would have been a very good insight during the engineering change boards, when various product teams were complaining about systems engineering "changing requirements" when in fact the proper description was "completing requirements."

Fig. 6 Number of nodes in the Northrop Grumman observatory thermal model for the James Webb Space Telescope, 2002 to 2022. Also shown on this plot are major program milestones: proposal, SRR, MPDR, MCDR, and SLR.

4.1. Coping with Incomplete System Knowledge

If we take the difference of Eqs. (7) and (3), we get an expression for the "knowledge deficit" for the i'th performance metric and the k'th model:

Eq. (8)

$$\Delta_i^k = \sum_{j=1}^{z} \left( S_{ij}^{\Omega} \right)^2 \sigma_{w_j}^2 - \sum_{j=1}^{m_k} \left( S_{ij}^{k} \right)^2 \sigma_{w_j}^2.$$

Here $S_{ij}^k$ denotes the sensitivity $\partial \kappa_i^k / \partial w_j$, and the second summation, from $j=1$ to $m_k$, runs over the number of system parameters relevant to the k'th model. Equation (8) gives the difference between the last estimate of $\sigma_{\kappa_i}$ and the current one. When $k=0$, this is the deficit in knowledge of the system performance. From examination of Eqs. (3), (5), and (7), we can see that these sums are positive definite, that is, every term is positive; so, assuming a reasonably sound prediction of $\kappa_i^0$, the magnitude of $\sigma_{\kappa_i}$ will be known early in the program and is largely invariant over time. This means that the probability of success, of $\kappa_i^\Omega$ being acceptable, comes down to managing $\sigma_{\kappa_i}$. Since the form is positive definite, new knowledge always makes balancing harder. This problem was addressed by adding a reserve, which looks like:

Eq. (9)

$$\sigma_{\kappa_i}^2 = \sum_{j=1}^{m_k} \left( S_{ij}^{k} \right)^2 \sigma_{w_j}^2 + R_k^i.$$

The term $R_k^i$ is the reserve for the i'th performance parameter and the k'th system model. (The units of $R$ in this formulation are the same as those of the variance in the i'th parameter. We have chosen to write this term linearly as $R$, but some authors prefer to explicitly denote the exponent (2) for the same value. Either way, this is an explicit expression for reserves, both contingency and margin.) This reserve is reduced as more terms that impact the i'th performance metric are added, increasing $m_k$. This allows the previously defined tolerances $\sigma_{w_j}$, $j = 1, 2, \ldots, m_{k-1}$, to remain the same as new system knowledge is absorbed.
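The bookkeeping implied by Eq. (9) can be sketched as follows: the total variance allocation for a metric is held fixed while the reserve absorbs newly discovered variance terms, until new knowledge exhausts it. All numbers are illustrative, not Webb budget values.

```python
# Sketch of Eq. (9): a fixed total variance allocation, with the reserve
# R_k absorbing variance terms as model revisions reveal them.
# All numbers are illustrative.
total_allocation = 1.00  # budgeted sigma_kappa_i^2, fixed at k = k*

# Known variance terms S_ij^2 * sigma_wj^2 at each model revision k
terms_by_revision = [
    [0.30, 0.20],                    # M_0: two terms known
    [0.30, 0.20, 0.15],              # M_1: a new parameter appears
    [0.30, 0.20, 0.15, 0.25],        # M_2: more system knowledge
    [0.30, 0.20, 0.15, 0.25, 0.20],  # M_3: reserve exhausted
]

for k, terms in enumerate(terms_by_revision):
    reserve = total_allocation - sum(terms)  # R_k^i in Eq. (9)
    status = "OK" if reserve >= 0.0 else "REBALANCE NEEDED"
    print(f"M_{k}: known variance {sum(terms):.2f}, "
          f"reserve {reserve:+.2f}  [{status}]")
```

The "REBALANCE NEEDED" case corresponds to the point where previously allocated tolerances must be revisited or the design changed.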

The determination of $R_0^i$, the initial design reserves, is usually a difficult process. Any Flagship mission is a new design problem for the reasons discussed in Sec. 2. The final design is not known, and there is little to no historical data from which to estimate $R_0^i$ for the science-based $\kappa_i$.

For the key technical performance metrics, such as thermal and mass, a list of threats, liens, and opportunities (TLO) was developed and maintained. The TLO was able to convey at a glance the state of a key metric with all the current knowledge. The TLO process implicitly expresses $\Delta_i^k$, as the knowledge deficit can also be interpreted as performance risk. It was just such a formulation of the thermal TLO that made clear the need for the inclusion of the ADIR to complete the thermal "return to green" effort and close the thermal design of the observatory.

Not all engineering disciplines use a reserve as shown in the analysis that developed Eq. (9), a “linear” reserve. Some, such as dynamics, rely on a multiplicative factor, where the predicted performance, jitter amplitude as an example, is multiplied by a scalar factor to cover model incompleteness and other uncertainties. This multiplicative factor on Webb was called a modeling uncertainty factor.

For system parameters, such as power, mass, communication link margins, and the like, there are AIAA, NASA, and even contractor standards that specify expected growth and the required reserves. Many of these specific reserve plans are specified by contract or documented in the systems engineering management plan or a similar document.14-16 Even with some level of agreement on margin or reserve management, new challenges can arise on a program such as Webb.

Impact of Incomplete System Knowledge

In many areas, the system models were incomplete early in the program and there were often interactions between different disciplines and different parts of the program that were not understood until late in the process.

Anecdote 6: Tracking Mass and CG for LV Compatibility

One such example was the need to manage not only the total mass, the zeroth-order moment of the mass distribution, but also the movement of the center of gravity (CG) and the total inertia, the first and second moments of the mass distribution. The need to control the location of the CG arose at MPDR, when it was declared that the current prediction was not compatible with the launch vehicle. A method had to be implemented to predict and report this motion, as well as to set reaction limits that would trigger more aggressive remediation. This method for predicting the migration of the CG was developed in 2008. Subsequently, a plan for reporting and managing the CG location and its possible final location was agreed to. This process was used to manage the evolution of the location of the CG and its uncertainty. As history notes, JWST's launch was very efficient; by extension, the algorithm used to manage CG can also be considered successful.
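The moment bookkeeping described above can be sketched as follows, with a point-mass component list whose names, masses, and positions are entirely hypothetical; the zeroth, first, and second moments are the total mass, CG, and inertia tensor that had to be tracked against launch vehicle limits.

```python
import numpy as np

# Hypothetical component list: (mass [kg], position [m] in a launch frame).
# Names, masses, and positions are invented for illustration.
components = [
    (2400.0, np.array([0.0, 0.0, 2.1])),   # "telescope"
    (1800.0, np.array([0.0, 0.2, 0.8])),   # "spacecraft bus"
    (1100.0, np.array([0.1, -0.3, 1.2])),  # "sunshield"
]

total_mass = sum(m for m, _ in components)           # zeroth moment
cg = sum(m * r for m, r in components) / total_mass  # first moment (CG)

# Second moment: inertia tensor about the CG, point-mass approximation
inertia = np.zeros((3, 3))
for m, r in components:
    d = r - cg
    inertia += m * (np.dot(d, d) * np.eye(3) - np.outer(d, d))

print(f"mass = {total_mass:.0f} kg, CG = {np.round(cg, 3)} m")
print("inertia about CG [kg m^2]:")
print(np.round(inertia, 1))
```

In practice, each component also carries an uncertainty, and it is the propagated uncertainty in CG location, not just the point estimate, that must be reported against the launch vehicle limits.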

Anecdote 7: Charging Considerations and Multi-Disciplinary Coordination

Establishing a spacecraft charging methodology early in the program is important to avoid problems later. An issue arose when combining conflicting thermal and coupled electro-static discharge (ESD) requirements. A significant length of wire shielding on the harnesses behind each mirror segment was removed to address thermal concerns. Shielded harnesses are used in part to mitigate coupling effects from discharges of nearby materials, so the removal of harness shielding made downstream electronics vulnerable to ESD signals and potential damage. Because of the cryogenic operating temperatures on the telescope side of the sunshade, materials that would normally not charge and discharge at the low electron flux levels outside Earth's magnetosphere became potential electrostatic discharge hazards, because their electrical conductivity drops many orders of magnitude at those temperatures. Significant effort was also required to define the nominal and worst-case electron charging environments for transit through the radiation belts and during cruise to and operation at L2. Verification of this change from the original design required analysis and test to determine if materials would reach discharge thresholds and then to demonstrate that nearby discharge sources would not damage the actuator and sensor electronics.

Anecdote 8: System Level Audits of Critical Alignments

The small range of motion of the cryo-actuators on JWST made the primary-to-secondary alignment a critical alignment. Focus is determined by the final position of the secondary mirror (SM) with respect to the PM. Since this crosses several areas of expertise (structural, optical, electro-mechanical) and hardware ranging from the mirror blanks to the deployed SM support structure, a dedicated thread analysis was conducted to compute the final distance between the deployed surfaces of the SM and the PM. This consisted of a geometric combination of all contributions to the PM-SM spacing from drawings and final component end item data packages. The audit discovered a ~1 mm PM-SM vertex position discrepancy. This is very significant, since the discrepancy was of the same order as the remainder of the cryo-actuator focus range after deployment from the mirror snubbers. In addition, this would have been a very difficult issue to correct if it had been discovered after the OTE assembly was completed (e.g., in cryo-vac testing). The likely solution would have been to compensate by reducing the active actuator ranges on both the SM and PM, creating a risk to the overall telescope alignment. It turned out that the flight SM had been made ~1 mm thicker than originally specified, for which a waiver had been granted at the mirror level. However, the waiver had not resulted in a balancing compensation elsewhere in the OTE system until after the audit. Interestingly, the spare SM was not out of thickness tolerance.

Anecdote 9: Integrated Design for Mechanical Interfaces

Complex logistics amplified the difficulties in defining specifications for mechanical interfaces. The efforts required detailed coordination between the affected stakeholders and numerous iterations of system-level analyses. Many mechanical interfaces existed between the telescope and the spacecraft element (SCE), most of which were also deployable interfaces, such as the primary structure interfaces between the four legs of the telescope and the spacecraft. A new non-explosive actuator (NEA) had to be designed for this interface to withstand the launch load and enable low shock levels into the telescope. This was an issue realized early in the program, requiring a large effort for planning and testing. The large preload required on these four NEAs was the subject of numerous structural tests and of preload monitoring for extended periods under integration environments as well as during SCE and observatory vibration testing.

Internal mechanical interfaces within subsystems were numerous, and they often could not be tackled except through iterative observatory-level analyses. These required accurate structural models to be maintained, improved, and informed by tests at various stages of the program on components, subsystems, systems, elements, and the observatory. As part of the integrated modeling activities, structural model requirements were specified early in the program to ensure model accuracy. It was important to make sure that the models represented the latest designs, correlated well with test results, and were compatible with the observatory system model. The needed accuracy made the observatory physical model quite large toward the end of the program, creating problems with limitations on model size. Model reduction methodologies were employed as needed and where appropriate, given the analysis objectives.

5. Confidence in the Performance of the Design: Verification

The third fork in the map in Fig. 1 is that of achieving confidence in the performance of the design. Verification was part of program development on Webb from early development through to the present.17,18 Moreover, the link between design and verification has long been recognized as critical to the enterprise of space telescopes.19

The verification architecture used on Webb can be described as a modified "test as you fly" approach: where possible, traditional methods of verification were used. Thus, much of the spacecraft verification for Webb is the same as for other missions. The countable few test-as-you-fly deviations, driven by hardware size and other test considerations, are listed in Table 1. This approach, though strained by Webb's large physical extent, can be considered an evolution of the same path used on Chandra.20

Table 1 Summary of "test as you fly" deviations.

1. Sparse optical testing.
Reason/issue: Cost of a 6.5 m diameter cryogenic test flat is prohibitive.
Mitigation approach: Center-of-curvature interferometer at NASA Johnson Space Center (JSC) measures the full aperture simultaneously with the sparse aperture test; test approach reviewed by non-advocates.

2. No system-level thermal balance test.
Reason/issue: Size of the observatory; too large for a deployed test.
Mitigation approach: Testing at the space vehicle (spacecraft bus and stowed sunshield), ISIM, and OTE+ISIM levels; 1/3-scale sunshield; full-scale model of the flight core region (over the spacecraft and the bottom of the telescope cavity) with flight-like materials and construction (Core 2); independent validated models used to integrate to the system level.

3. Observatory electrical tests not performed with ISIM at operational temperatures.
Reason/issue: Bus in the chamber compromises cryo-optical testing.
Mitigation approach: Most electrical connections from warm to cryo are digital (by design); critical harness connections not broken after test; reference pixels provide RT trending to identify issues.

4. Sunshield and OTE deployments not tested at the system level at operational temperatures.
Reason/issue: NG processes allow for room-temperature testing using drag measured at operational temperatures at a lower level of assembly.
Mitigation approach: Material choices minimize binding concerns; full deployment testing done after observatory-level vibration testing; mechanisms qualified to operational temperature ranges; validated tools used to verify deployment margins.

JWST is of such a size that a deployed test in the observatory flight configuration was not possible. Simply put, there is no chamber large enough to allow for a full thermo-optical test of the entire observatory. Even if such a chamber existed, a full-aperture optical test would be prohibitively expensive (see Table 1). Further, developing a flight-like interface at the edge of the sunshield would have been an expensive, if not impossible, challenge. So the team, NASA and its contractors, took the approach of verifying thermal performance by use of experimentally validated models. The testing and model verification activities are shown in Fig. 7. Each of these tests was carefully configured and carried pass/fail criteria.

Fig. 7 JWST observatory thermal testing and model verification activities.

For example, the test of the core region was intended to understand the flow of conducted parasitic heat leaks. The first incarnation of this test was done early in the program, in 2009, as conducted parasitics are the main heat leaks from the hot to the cold side of the observatory. When the first core test, Core 1, no longer represented the system well, the test was repeated as Core 2, with a more flight-like representation of the hardware.21,22

The one-third scale sunshield, tested at Northrop in 2009, proved telling. On initial analysis of the data, the heat flows and temperatures did not agree with pre-test predictions. On further review, it was determined that the "gray body" approximation (absorptivity and emissivity assumed equal) that had previously been used was not sufficient to explain the results, and a new method, the "non-gray" analysis (a more general, band-specific determination of absorptivity and emissivity), was developed. This new analysis is far more time intensive but, thanks to the understanding gained from the one-third scale sunshield test, it was used only where the physics demanded it and was not required universally. Over one year into Webb flight operations with the system working nominally, it is clear that the approach followed by the Webb program is a viable approach to complex thermal system verification.
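The distinction that drove the "non-gray" method can be illustrated with a short sketch: total emitted power computed with a single gray emissivity versus a sum of band-specific contributions. The band edges and emissivity values below are invented for illustration and are not Webb material properties.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann
SIGMA = 5.670e-8                          # Stefan-Boltzmann [W m^-2 K^-4]

def band_power(T, lam_lo, lam_hi, n=20000):
    """Blackbody power [W/m^2] emitted between two wavelengths [m]."""
    lam = np.linspace(lam_lo, lam_hi, n)
    spectral = (2.0 * np.pi * H * C**2 / lam**5
                / np.expm1(H * C / (lam * KB * T)))
    return np.trapz(spectral, lam)

T = 300.0  # surface temperature [K]
# (lambda_low, lambda_high, band emissivity): illustrative values only
bands = [(1e-6, 5e-6, 0.10), (5e-6, 30e-6, 0.60), (30e-6, 300e-6, 0.85)]

non_gray = sum(eps * band_power(T, lo, hi) for lo, hi, eps in bands)
gray_eps = 0.5  # a single assumed "gray" emissivity
print(f"gray:     {gray_eps * SIGMA * T**4:7.1f} W/m^2")
print(f"non-gray: {non_gray:7.1f} W/m^2")
```

When a surface's emissivity varies strongly across the bands where a given temperature radiates most of its power, the single gray value can be badly misleading, which is what the sunshield test data revealed.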

As JWST is an optical mission, there was considerable effort from the start of the program to think through the verification of optical performance at all levels, from in-process manufacturing through test and flight-level performance.17 Up front was the existential question of how to avoid a Hubble-like issue of flawed optics.23 These lessons were applied through a system-level optical test and an independent review team for the optics, the product integrity team (PIT). The PIT was active from early design through the end of verification. The system-level test was the subject of study and restudy.24,25 Since Webb's optics do not meet wavefront requirements in 1 g, examining the optical results from the system test was challenging. A Monte Carlo based tool, the integrated telescope model, was developed at Ball to help understand what test results would indicate an acceptable telescope.26 Like the other tests in the verification chain, the test criteria were established well ahead of the test so that the outcome, pass or fail, could be quickly and unambiguously established.27,28

The specifics of how to verify the optical performance and wavefront were an intense area of effort, culminating in large part around the time of MCDR. There was a small set (11) of so-called complex requirements, which drive the critical performance parameters and cannot be tested end to end on the ground. Each was given a specific plan that included many layers of checks and double checks (we called them cross checks) on top of the verification itself.26,29-35

Stray light is notoriously difficult to verify, and testing at the system level on something as large and complex as Webb is expensive and practically impossible to make complete enough to be convincing. As such, the stray light performance of Webb was well studied through the design phase of the program.9,10,36-41 Figure 8 presents the verification plan for stray light, circa 2016. It features a number of analyses and cross checks. Of note is the interaction of the mid-IR stray light with both thermal predictions and the influence of contamination on changes in emissivity. This latter effect was studied using a first-principles approach.3,4 Eventually, a program summit meeting was held to review the results and adopt the analysis as the program's approach to this problem.42

Fig. 8 Stray light verification plan, circa 2016.

Complexity of Verification and Validation by Test-Validated Models

The overall approach of verification through test-validated models was very successful, but it required the allocation of substantial resources to planning the tests, developing the models, and understanding the limitations of the test conditions. In the end, this approach on JWST succeeded in validating the models, and the models successfully predicted on-orbit performance.

Anecdote 10: Dynamics Model Validation and Verification by Test

The complexity and time lag in flight hardware readiness required separate dynamic testing for the two elements. Telescope acoustic and vibration testing was conducted at GSFC. Later, the SCE, the spacecraft bus and sunshield, was vibration and acoustic tested at NG in Redondo Beach. The telescope had to be appropriately simulated during SCE testing. Of particular interest was the correct launch load distribution on the four primary SCE-to-telescope interfaces. A special simulator was designed to mimic the actual telescope lower bay and interfaces, as well as to meet the obvious requirements of mass, center of mass, and overall dynamics. The telescope simulator was tested before its usage, for strength as well as with a detailed modal survey, to ensure its dynamics were properly understood and adequate for SCE testing. NEAs were installed at the primary SCE-to-telescope interface for both vibration and acoustic tests, and their preloads were monitored during the SCE vibration tests.

Telescope modal testing was performed for both stowed and deployed configurations using special fixtures, and the test results were used in model-to-test correlation. Similarly, detailed modal survey tests were conducted on the SCE with the telescope simulator to improve the SCE model. The correlated models were combined analytically to generate the test-correlated observatory model, which was used in coupled loads analyses with the Ariane 5, as well as in observatory-level sine vibration test planning and flight worthiness documentation. The deployed version was used in integrated modeling activities to determine line-of-sight and wavefront error contributions from jitter and thermal distortion.

Anecdote 11: Efficient Test, Verification, and Validation for Operations

Stored command sequences (SCSs) were developed early for use in ISIM testing and, because they were available early, I&T took advantage of the synergy to use them as soon as the power on/off sequences were understood and adequately tested. The SCS tools being in place early supported rapid creation of single-load data file patches that could be loaded to the FSW for use by the engineering model test bed, I&T, and operations.

Coordinating fault management validation testing (rather than just requirements verification) earlier in the test cycle would have helped to highlight unique Webb operation and fault scenarios. Validation testing uncovered issues that were corrected for launch but could have been addressed earlier.

Anecdote 12: Complexity of Alignments Testing for a Large Optical System

For JWST, the integration of such a complex optical system required significant investment in alignment metrology designs, well-defined coordinate systems, and a variety of alignment tools (e.g., coordinate measuring machines, laser trackers, and photogrammetry). Alignment was a consideration from the beginning of the program, with alignment-sensitive structures and interfaces defined on drawings and testing planned to validate important parts of the alignment budget. For example, model predictions of the distortion of the UPS structures under load were validated by testing at the unit level, and the test-correlated models were used in the on-orbit predictions. The predicted shifts of the OTE backplane from room temperature to cryogenic temperatures were also test-validated, at the coupon level and during the OTE and ISIM level optical testing at JSC.

A well-defined coordinate system can be critical to meeting overall alignment performance. Though coordinate systems were defined with alignments in mind, there were unpredicted complexities that caused problems during the process. For example, the OTE M-coordinate system was based on a very small reference structure that did not constrain tip-tilt rotations well. The references for this coordinate system were also covered once the aft-optics subsystem was installed; therefore, the coordinate system had to be transferred to multiple secondary references. The spacecraft J-coordinate system was referenced to a structure that distorted in 1 g conditions, resulting in extra uncertainty in all measurements referenced to this coordinate system. All of these issues were ultimately overcome, and the alignment of the entire observatory was successful. But a lot of time was spent solving problems that could have been mitigated with better planning of coordinate system references early in the program.

Part of the early planning is thinking about what level of modeling is appropriate at each phase in the program. Models will inevitably get more complex during the program. You need to understand your problem well enough at the beginning to know what models will be needed and which will need to be integrated together. Think about what inputs have a significant impact on the performance and spend your time on those items. If a term has a small impact, it is ok to just give the prediction a large uncertainty and not spend a lot of energy checking the prediction. If a term could have a large impact, focus more resources on improving predictions, checking all inputs and assumptions, etc.
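One way to make this triage concrete is sketched below: rank each error-budget term by its share of the total variance and flag where modeling effort pays off. The term names, values, and the 10% threshold are all illustrative choices, not program numbers.

```python
# Sketch: rank error-budget terms by share of total variance and decide
# where to spend modeling effort. Terms, values, and the 10% threshold
# are illustrative, not program numbers.
terms = {
    "thermal distortion":       0.120,
    "deployment repeatability": 0.045,
    "gravity release":          0.030,
    "coating uniformity":       0.004,
    "metrology noise":          0.001,
}  # each value: S_ij^2 * sigma_wj^2, in (metric units)^2

total = sum(terms.values())
for name, var in sorted(terms.items(), key=lambda kv: -kv[1]):
    share = var / total
    action = ("refine model, check inputs" if share > 0.10
              else "carry a generous uncertainty")
    print(f"{name:26s} {share:5.1%} of variance -> {action}")
```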

6. Summary and Lessons Learned

We have formulated these lessons as being a result of the special conditions of a Flagship mission, namely new technologies, architectures, and designs, performance well beyond the current state of the art, and the low risk tolerance that comes with being a major community and national asset. The endemic risks that come with such a mission cannot be ameliorated; they need to be recognized and planned for.

  • 1. The design challenge of a Flagship, one intrinsic to the very nature of such a mission (the full nature of the system is not known until late in the program), should be embraced fully and widely known by the design team and management. The evolution of system knowledge is evidenced by the example of the Webb observatory system thermal model, Fig. 6.

  • 2. The design will naturally change over development as problems are uncovered and solved, graphically shown in Fig. 3. This should be expected and is a consequence of learning about the system performance. If the design is not changing, technical and management leadership should inquire as to the reasons.

  • 3. Check and recheck assumptions and analytical products that serve as design assumptions (particularly those made early in the program). In the early phases, designs and solutions are not mature and depend on the experience of the program staff. The underlying assumptions must be validated to provide a rigorous foundation for a new mission. Anecdotes 2 and 7 provide examples of this lesson.

  • 4. Figure 5 shows the flow of activities that correspond to Eqs. (1)-(9). The requirements set at SRR were not reconsidered in the face of evolved system knowledge. Given the challenges of designing a new system, a future Flagship will do well to always have a "requirements check" run of the (integrated) model, where all parameters are set to their acceptable extreme values and system performance is reconfirmed as standard practice.

  • 5. Assuming that (financial) resources will arrive on time early in the program is overly optimistic: only 3 of the last 48 U.S. Federal budgets have not required a CR, and the last budget passed on time was in 1996. Expect a CR and plan the program accordingly.

  • 6. Verification is an integral part of program design. As shown in Sec. 5, the Webb team was thinking about verification from the very beginning of the program. As smooth as it was, verification was still a long and expensive process. As we proceed to the Habitable Worlds Observatory, with its challenging requirements, even more attention should be paid to verification planning, even in the current embryonic program development stage.

  • 7. Typically, lessons learned activities are conducted at the end of a program, which can make such exercises problematic. The program should hold regular lessons learned meetings with the team and record the results, throughout all stages of development. This will help with identifying changes that need to be made on the program as it evolves as well as capturing lessons from every phase for future missions. Over the course of a long program, thought must be given to how to preserve knowledge as people come and go.

  • 8. Model development is a time consuming and expensive effort. Re-use of designs allows for reuse of models. Reuse can be complete when the design is identical, or partial when the same design is used for an “adjacent” purpose. We reused the Webb thermal model to demonstrate that the same basic architecture could be used for the Origins mission. The model was used to show what design changes were required to achieve Origins’ goals.43 Other recent studies have shown that reuse of designs can achieve large savings for subsequent missions.44

Lessons learned from earlier programs have become standard practice for space science mission development. We would be remiss if we did not add our full-throated endorsement of these lessons as well.

  • Develop technology early.

  • Retire risk as early as possible.

  • Assess and manage risk on a regular basis.

  • Focus the entire team on mission success and avoid “stove pipes.”

For all of us, contributing to the development and ultimate success of the James Webb Space Telescope has been the job of a lifetime. We hope that these lessons inform and inspire the success of the next generation of great space observatories.

Disclosures

The authors declare no interests, financial or otherwise, in the content of this paper.

Code and Data Availability

The data used in this paper are presented in Fig. 6 and are available from the figure.

Acknowledgments

This paper was prepared on Northrop Grumman internal funds and the personal time of the authors. We would like to thank our anonymous reviewers for their time and suggestions, all of which have been included in this final revision. We especially want to thank our colleague, Jeff Puschell, who read the early manuscript with great care and provided substantive and actionable comments.

References

1. G. S. Wright et al., "The mid-infrared instrument for JWST and its in-flight performance," Publ. Astron. Soc. Pac. 135, 048003 (2023). https://doi.org/10.1088/1538-3873/acbe66

2. J. Arenberg, "Effects of ice on the transmission of the James Webb Space Telescope," Proc. SPIE 6692, 66920S (2007). https://doi.org/10.1117/12.736281

3. J. W. Arenberg et al., "Determination of emissivities of key thermo-optical surfaces on the James Webb Space Telescope," Proc. SPIE 9143, 91433Q (2014). https://doi.org/10.1117/12.2055514

4. J. Arenberg et al., "Radiance from an ice contaminated surface," Proc. SPIE 9904, 99046G (2016). https://doi.org/10.1117/12.2234487

5. J. W. Arenberg, M. Macias, and R. C. Lara, "A semi-empirical method for the prediction of molecular contaminant film accumulation (conference presentation)," Proc. SPIE 9952, 995202 (2016). https://doi.org/10.1117/12.2237795

6. J. W. Arenberg et al., "Long term observations of molecular film accumulation (conference presentation)," Proc. SPIE 12224, 122240R (2022). https://doi.org/10.1117/12.2632896

7. J. P. Gardner, "The James Webb Space Telescope mission," Publ. Astron. Soc. Pac. 135, 068001 (2023). https://doi.org/10.1088/1538-3873/acd1b5

8. P. A. Lightsey and J. W. Arenberg, Systems Engineering for Astronomical Telescopes, SPIE Press, Bellingham, Washington (2018).

9. P. A. Lightsey et al., "Stray light performance for the James Webb Space Telescope," Proc. SPIE 9143, 91433P (2014). https://doi.org/10.1117/12.2055485

10. P. A. Lightsey, "Stray light field dependence for the James Webb Space Telescope," Proc. SPIE 9904, 99040A (2016). https://doi.org/10.1117/12.2233062

11. M. E. Giuliano and M. D. Johnston, "Multi-objective evolutionary algorithms for scheduling the James Webb Space Telescope," in Proc. Eighteenth Int. Conf. Automated Planning and Scheduling (ICAPS'08), 107-115 (2008).

12. J. Nella et al., "James Webb Space Telescope (JWST) Observatory architecture and performance," Proc. SPIE 5487, 576-587 (2004). https://doi.org/10.1117/12.548928

13. "NASA systems engineering handbook," https://www.nasa.gov/reference/systems-engineering-handbook/

14. "Standard: mass properties control for space systems (ANSI/AIAA S-120A-2015(2019))," https://arc.aiaa.org/doi/book/10.2514/4.103858

15. "Standard: electrical power systems for unmanned spacecraft (AIAA S-122-2007)," https://arc.aiaa.org/doi/book/10.2514/4.479144

16. G. Karpati et al., "Resource management and contingencies in aerospace concurrent engineering," AIAA 2012-5273, AIAA SPACE 2012 Conf. & Expos. (2012).

17. C. B. Atkinson et al., "Integration and verification of the James Webb Space Telescope," Proc. SPIE 5180, 157-168 (2003). https://doi.org/10.1117/12.506410

18. M. Menzel et al., "The design, verification, and performance of the James Webb Space Telescope," Publ. Astron. Soc. Pac. 135, 058002 (2023). https://doi.org/10.1088/1538-3873/acbb9f

19. J. A. Crooke et al., "Developing a NASA strategy for the verification of large space telescope observatories," Proc. SPIE 6271, 627108 (2006). https://doi.org/10.1117/12.670209

20. D. Boyd, M. Freeman, and N. Lynch, "The CHANDRA X-ray observatory: thermal design, verification, and early orbit experience," (2000).

21. P. Cleveland et al., "James Webb Space Telescope core 2 test - cryogenic thermal balance test of the observatory's 'core' area thermal control hardware," https://ntrs.nasa.gov/api/citations/20160013539/downloads/20160013539.pdf

22. G. Cataldo et al., "Model-based thermal system design optimization for the James Webb Space Telescope," J. Astron. Telesc. Instrum. Syst. 3, 044002 (2017). https://doi.org/10.1117/1.JATIS.3.4.044002

23. L. D. Feinberg and P. H. Geithner, "Applying HST lessons learned to JWST," Proc. SPIE 7010, 70100N (2008). https://doi.org/10.1117/12.786490

24. T. L. Whitman and T. R. Scorse, "Optimizing the cryogenic test configuration for the James Webb Space Telescope," Proc. SPIE 6271, 62710B (2006). https://doi.org/10.1117/12.672994

25. C. Atkinson et al., "Architecting a revised optical test approach for JWST," Proc. SPIE 7010, 70100Q (2008). https://doi.org/10.1117/12.788021

26. J. S. Knight et al., "Integrated telescope model for the James Webb Space Telescope," Proc. SPIE 8449, 84490V (2012). https://doi.org/10.1117/12.926814

27. R. A. Kimble et al., "James Webb Space Telescope (JWST) optical telescope element and integrated science instrument module (OTIS) cryogenic test program and results," Proc. SPIE 10698, 1069805 (2018). https://doi.org/10.1117/12.2309664

28. P. A. Lightsey et al., "James Webb Space Telescope optical performance predictions post cryogenic vacuum tests," Proc. SPIE 10698, 1069804 (2018). https://doi.org/10.1117/12.2312276

29. A. A. Barto and P. A. Lightsey, "Optical performance modeling of the James Webb Space Telescope," Proc. SPIE 5487, 867-874 (2004). https://doi.org/10.1117/12.550088

30. B. McComas et al., "Optical verification of the James Webb Space Telescope," Proc. SPIE 6271, 62710A (2006). https://doi.org/10.1117/12.672448

31. A. R. Contos et al., "Bringing it all together: a unique approach to requirements for wavefront sensing and control on the James Webb Space Telescope (JWST)," Proc. SPIE 6271, 62710Z (2006). https://doi.org/10.1117/12.669072

32. A. A. Barto et al., "Optical performance verification of the James Webb Space Telescope," Proc. SPIE 7010, 70100P (2008). https://doi.org/10.1117/12.790483

33. A. R. Contos et al., "Verification of the James Webb Space Telescope (JWST) wavefront sensing and control system," Proc. SPIE 7010, 70100S (2008). https://doi.org/10.1117/12.786984

34. J. S. Knight, P. Lightsey, and A. Barto, "Verification of the observatory integrated model for the JWST," Proc. SPIE 7738, 773815 (2010). https://doi.org/10.1117/12.858349

35. D. A. Porpora et al., "Use of living technical budgets to manage risk on the James Webb Space Telescope optical element," Proc. SPIE 9904, 99043Y (2016). https://doi.org/10.1117/12.2228620

36. P. A. Lightsey and Z. Wei, "James Webb Space Telescope Observatory stray light performance," Proc. SPIE 6265, 62650S (2006). https://doi.org/10.1117/12.672102

37. Z. Wei and P. A. Lightsey, "Stray light from galactic sky and zodiacal light for JWST," Proc. SPIE 6265, 62653C (2006). https://doi.org/10.1117/12.672287

38. T. W. Liepmann, "Cryogenic stray light testing of the James Webb Space Telescope: an easy approach," Proc. SPIE 7439, 743913 (2009). https://doi.org/10.1117/12.825087

39. 

D. L. Skelton, “Applying the tool: stray light cross-checks of the James Webb Space Telescope,” Proc. SPIE, 7731 77313U https://doi.org/10.1117/12.856722 PSISDG 0277-786X (2010). Google Scholar

40. 

P. A. Lightsey and Z. Wei, “James Webb Space Telescope stray light performance status update,” Proc. SPIE, 8442 84423B https://doi.org/10.1117/12.924852 PSISDG 0277-786X (2012). Google Scholar

41. 

S. O. Rohrbach et al., “Stray light modeling of the James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM),” Proc. SPIE, 9947 99470K https://doi.org/10.1117/12.2238827 PSISDG 0277-786X (2016). Google Scholar

42. 

J. Arenberg, “Emissivity calculation review,” (2013). Google Scholar

43. 

J. W. Arenberg et al., “Alternate architecture for the Origins Space Telescope,” J. Astron. Telesc. Instrum. Syst., 7 (1), 011006 https://doi.org/10.1117/1.JATIS.7.1.011006 (2021). Google Scholar

44. 

J. W. Arenberg, “Conceiving and implementing cost effective astrophysics flagship missions,” Proc. SPIE, 12180 1218005 https://doi.org/10.1117/12.2630607 PSISDG 0277-786X (2022). Google Scholar

Biography

Jonathan W. Arenberg is currently the chief mission architect for Science and Robotic Exploration at Northrop Grumman. Prior to working on Webb, he worked on the Chandra X-ray Observatory and co-invented the starshade. During his tenure on JWST, he held several positions: design integration lead, observatory systems engineering (OSE) deputy, and OSE manager, ultimately becoming chief engineer. He is an associate fellow of the AIAA and a fellow of SPIE.

Tiffany Glassman is currently the chief engineer for Civil Space in the Northrop Grumman Strategic Space Division, focusing on the development of the Habitable Worlds Observatory mission concept. She held many roles on JWST, including observatory alignments thread lead and sunshield verification lead.

Elysia Starr was the fault management lead for the James Webb Space Telescope. She now supports Next-Gen Polar and functional management for Northrop Grumman Space Systems. She is an aerospace engineering graduate of Syracuse University with over 25 years of space industry experience. On December 25, 2021, after 15 years on the program, she witnessed the culmination of her work and dedication with Webb’s successful launch and activation.

Reem Hejal was the lead mechanical systems engineer for the JWST program. She is now retired.

Till Liepmann joined the JWST team in 2006 as a systems engineer. He completed the Optical Telescope Element (OTE) verification plan and moved into the role of OTE alignment lead. Finally, after delivery of the OTE to test chamber A in Houston, he continued working on the sunshield until 2014.

Charles Atkinson spent 24 years on JWST, most recently as the JWST chief engineer after serving as the deputy telescope manager. Before JWST, he was responsible for the integration and alignment of the Chandra X-ray Telescope, among other EO systems. He has been awarded the Robert H. Goddard Exceptional Achievement Award in Engineering, the NASA Exceptional Public Service Medal, the AIAA Goddard Astronautics Award, the Aviation Week Program Excellence Award, and the NASA Distinguished Public Service Award.

Nina Altshuler is a senior engineer and is currently the spacecraft charging subject matter expert (SME) for SSSD at Northrop Grumman. During her time on JWST, she specialized in addressing all areas of the program potentially at risk from spacecraft charging effects, work that included performing analysis and significant plasma lab testing.

Annetta Luevano is a senior staff materials and process engineer with Northrop Grumman who supported several roles in the development of the James Webb Space Telescope, culminating in the position of Parts, Materials, and Process Manager supporting the delivery and launch of the telescope.

Marc Roth was the space vehicle lead for mechanical systems on JWST.

Perry Knollenberg was the thermal systems lead for the final 13 years of JWST, leading the completion of the thermal design, fabrication, integration, testing, verification, launch, and commissioning of the observatory. He also led the thermal technology development and design for the Northrop Grumman team that won the nuclear-powered Jupiter Icy Moons Orbiter program, was the thermal analysis lead on the Chandra X-ray Observatory, and served in leadership positions on a number of other NASA and classified programs.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Jonathan W. Arenberg, Tiffany Glassman, Elysia Starr, Reem Hejal, Till Liepmann, Charles Atkinson, Nina Altshuler, Annetta Luevano, Marc Roth, and Perry Knollenberg "Designing a new, large, complex observatory: learning the strategic lesson of newness from our experience on the James Webb Space Telescope," Journal of Astronomical Telescopes, Instruments, and Systems 10(1), 011209 (14 March 2024). https://doi.org/10.1117/1.JATIS.10.1.011209
Received: 13 September 2023; Accepted: 7 February 2024; Published: 14 March 2024
KEYWORDS: Design, Observatories, Telescopes, James Webb Space Telescope, Systems modeling, Space operations, Performance modeling
