Traditionally, the space industry has been highly risk-averse, due in large part to the high cost of projects. The NewSpace industry, a term describing the recent emergence of private-sector funding in space activities, takes an entirely different approach to the challenges and economics of the space industry.
The emergence of NewSpace companies with satellite launch capabilities has put pressure on the prices charged by established commercial launch providers. It has also adjusted the requirements for on-board electronics that can be put into orbit. With the cost of launch reduced, it becomes easier to replace technologies over time, so satellite designers need not drive risk to zero; they can calculate risks and make design adjustments accordingly. This “fly/re-try” philosophy opens the door for new commercial enterprises bringing a different approach to satellite technologies. Missions with greater investment in satellite functionality, or that travel far greater distances, will still require a level of risk reduction that may be cost-prohibitive for the moment.
As the barriers to entry are lowered and space becomes more accessible, more technologies can be usefully deployed in orbit. There is a risk, however, that as developing companies deploy emergent technologies, their approach to testing may prove inefficient and expensive, and may not even meet the application’s needs.
Even for the most experienced engineer, reducing the cost of test is challenging. Many testing stages are required throughout the life cycle of a satellite payload design. It’s advisable to create a model that can apply to any stage of the life cycle, from design and validation through manufacturing and final certification.

Figure 1: Simplified model of the cost of test.
The simplified model (shown in Figure 1) separates the cost of test into four distinct categories: “Test Step Cost,” “Diagnostics and Repair” (or redesign of payload components and/or systems), “Overhead” and constant costs, and “Product Waiting.” Each category contains elements that commonly fall within its bounds, depending on the product’s stage in the life cycle.
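One plausible way to formalize this model (the equation below is our reading of Figure 1, not part of the original) is as a simple per-unit sum:

$$C_{\text{test}} = C_{\text{step}} + C_{\text{diag}} + C_{\text{overhead}} + C_{\text{wait}}$$

where $C_{\text{step}}$ covers executing the test steps themselves, $C_{\text{diag}}$ covers diagnostics, repair, or redesign, $C_{\text{overhead}}$ covers overhead and constant costs, and $C_{\text{wait}}$ covers product waiting (units idled while queuing for test).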
A common approach is to simply determine which individual variables have the greatest impact on reducing the cost of test. For every stage of the product life cycle, a new test system is defined that implements the changes needed to reduce costs. Once the design is in place, variables in the cost of test model become fixed, making changes difficult or nearly impossible. In addition, as needs arise for more complex systems with wider bandwidths, optimizing individual variables is no longer effective.
The solution to this problem is twofold: first, rather than looking at individual variables in the cost of test model, look at the entire test system; second, design the test systems while defining product requirements, rather than as the need arises.
To demonstrate the value of looking from a system and product life cycle level, we’ll consider three specific ways to reduce the cost of test: reducing time, increasing accuracy, and improving confidence. It’s easy to associate reducing time with increasing measurement speed, or accuracy with buying an instrument with better specifications, but taking a system-level approach has a far greater impact on test costs. To reduce time at a system level, consider the overall program schedule, from product design and development through manufacturing and delivery. Instead of focusing on instrument-level accuracy, consider improving the whole test system’s accuracy. Instead of trying to increase confidence in a single measurement, consider working to improve the consistency of test results throughout each stage of the product life cycle.
A holistic approach includes designing test systems at the requirements definition stage, rather than determining testing needs as they arise during the product life cycle. Per the U.S. Government Accountability Office (GAO), 80 to 90 percent of a program’s total cost is locked in during the product requirements definition phase, even though only about 10 percent of the total lifetime cost has been spent by then. The locked-in costs include test systems developed much later in the program, which indicates a very large potential for savings if the test system is defined upfront with the product.

Figure 2: Defining test systems upfront with the product can potentially reduce cost.
There is a lot of pressure on the final satellite product to behave as close to the design as possible. If the requirements for each design component are too relaxed, the overall system uncertainty will be too great, at best reducing the yield during final testing and at worst causing the satellite to fail during operation.
The acceptable margin in the link budget (which represents the minimum power received by the payload such that the resulting bit error rate meets system requirements) is limited by the accuracy of the test systems used throughout development. If each system’s measurement uncertainty is known, it’s possible to manage variation between systems and correlate test results from one life cycle stage to the next. This significantly reduces the cost of test by eliminating much of the time currently required to re-verify results when using multiple stations or moving to a new stage in the life cycle.
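As a minimal sketch (not from the article, with hypothetical values throughout), combined measurement uncertainty is commonly estimated as a root-sum-of-squares of independent contributions, and pass/fail limits are then guard-banded by that uncertainty:

```python
import math

def combined_uncertainty(contributions_db):
    """Root-sum-of-squares (RSS) combination of independent,
    uncorrelated measurement-uncertainty contributions, all in dB."""
    return math.sqrt(sum(u ** 2 for u in contributions_db))

# Hypothetical uncertainty contributions of one test station (dB)
station = [
    0.30,  # signal analyzer amplitude accuracy
    0.20,  # source level accuracy
    0.15,  # cable/adapter loss drift and mismatch
    0.10,  # calibration-standard uncertainty
]

u_station = combined_uncertainty(station)

# Guard-band the pass/fail limit: the device must beat the required
# link margin by the station's uncertainty to be declared passing.
required_margin_db = 3.0  # hypothetical link-margin requirement
test_limit_db = required_margin_db + u_station

print(f"station uncertainty: {u_station:.2f} dB")
print(f"guard-banded limit:  {test_limit_db:.2f} dB")
```

The smaller each station’s verified uncertainty, the less of the link margin the guard band consumes, which is one way improved accuracy translates directly into yield.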
The typical approach to managing system-level measurement uncertainty consists of two parts: increasing test margins through analysis of individual system components, and using an extensively measured “golden device” to verify system performance.
System performance is often approximated by assessing the wideband performance and measurement uncertainty of the individual instruments in the system. The consequences of increasing test margins this way mount as systems become more complex, bandwidths grow wider, and frequencies increase: the large amount of headroom required drives up the cost of test, signal routing and conditioning elements such as cables and adapters are neglected, and relying on an unverified uncertainty introduces the potential for non-repeatable test results. Verifying multiple systems against a “golden device” allows the systems to be tested repeatably, but gives no way to tell a system’s accuracy relative to the accepted international system of units (SI units).
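To illustrate why neglecting signal routing matters, compare an instrument-only uncertainty estimate with one that includes the cables and adapters; the values here are again hypothetical:

```python
import math

def rss(values_db):
    # Root-sum-of-squares combination, assuming independent contributions
    return math.sqrt(sum(v ** 2 for v in values_db))

instruments_only = [0.30, 0.20]  # analyzer and source accuracy (dB)
full_signal_path = instruments_only + [
    0.25,  # cable loss drift with flexure and temperature
    0.15,  # adapter repeatability
    0.20,  # mismatch ripple at the test port
]

print(f"instrument-only estimate: {rss(instruments_only):.2f} dB")
print(f"full-path estimate:       {rss(full_signal_path):.2f} dB")
```

With these made-up numbers, the full-path uncertainty is roughly 40 percent larger than the instrument-only estimate, headroom that an instrument-level analysis silently gives away and that a “golden device” cannot reveal.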
To provide a greater amount of measurement accountability in the satellite industry, the device must be traceable. Traceability is the property of a measurement result, or the value of a standard, whereby it can be related to the international system of units via national metrology institutes, through an unbroken chain of comparisons, all with stated uncertainties. The great advantage of traceability is that a measurement result is known to lie within a stated, derived uncertainty of the value reported.
Traceability is the key to ensuring system performance and managing variation among systems. Using traceable transfer standards to verify performance allows test systems to be designed around the measurements, rather than around an individual device or program. This ensures that from business concept through final acceptance testing, the results will be consistent, which dramatically reduces the cost of test.
Conclusion
Through a systematic approach to test that starts at the design phase of any development, we can decrease program costs. By using traceable instruments, we can assure our measurements are accurate from design to deployment. Accurate measurement equipment allows for more efficient test, with wider margins, reliable results, and less maintenance over time. This article has explored ways to make test less expensive, more efficient, and more reliable, breaking with the conventions used by satellite designers today.