There was once a time when a temperature sensor was nothing more than a piece of silicon which changed its resistance as the temperature changed. A purely analog device, it contained no logic elements. When the application processor needed to know the current temperature, it had to measure the voltage at the sensor’s output and then perform an analog-to-digital conversion in order to calculate the temperature value.
Today, system designers can instead use a temperature sensor which provides a digital output. ‘Smart’ digital sensors offer various benefits to users, including improved accuracy and built-in compensation, diagnostics and fault-finding capabilities, configurable filters and programmable interrupts. A digital sensor can also use algorithms to derive a value which cannot be directly measured – such as indoor air quality – from one which can, such as the concentration of volatile organic compounds in the air. And because the measurement signal is digitized in the sensor, the overhead on the application processor is greatly reduced, helping to simplify the design and to lower system power consumption.
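To make this concrete, the sketch below shows one way such a derivation might look. It is a minimal illustration only: the function name, breakpoints and scaling are invented for this article and do not represent the actual (proprietary, temperature- and humidity-compensated) algorithm of a device such as the CCS811.

```c
#include <stdint.h>

/* Illustrative only: map a measured TVOC concentration (ppb) onto a
 * simple 0..500 indoor air quality index. The breakpoints and scale
 * factors here are hypothetical, not those of any real sensor. */
static uint16_t iaq_index_from_tvoc(uint32_t tvoc_ppb)
{
    if (tvoc_ppb < 220)                              /* good     */
        return (uint16_t)(tvoc_ppb * 50 / 220);
    if (tvoc_ppb < 660)                              /* moderate */
        return (uint16_t)(50 + (tvoc_ppb - 220) * 50 / 440);
    if (tvoc_ppb < 2200)                             /* poor     */
        return (uint16_t)(100 + (tvoc_ppb - 660) * 100 / 1540);
    return 500;                                      /* severe   */
}
```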
A digital sensor, however, is no longer a simple component like its analog forerunner, because it runs embedded software. And software introduces a new element of risk. The performance of the sensor hardware can be characterized very precisely and documented in the product datasheet. In addition, industry-standard processes such as the automotive industry’s PPAP (Production Part Approval Process) enable the quality of production units to be quantified in a verifiable way. As a result, the system designer can have a high level of confidence in the expected performance of a sensor’s hardware under known operating conditions.
But how can the designer achieve the same level of confidence in a sensor’s embedded software?
After all, the risk of malfunctions caused by embedded software is real: preliminary findings published in November 2016 on the European Space Agency’s website suggest that the failed landing of its Schiaparelli module on the surface of Mars was caused by a fault in the module’s control software, triggered by an unexpected sensor output condition lasting no longer than one second. Given the cost and visibility of space exploration projects, it is reasonable to assume that few embedded systems were subjected to more rigorous testing in 2016 than the Schiaparelli module – yet even so, a bug remained in the software at launch.
Few designers have to equip their products to withstand the conditions to which a space module is exposed, but most do have a duty to achieve a specified quality level and expected operating lifetime. This calls for some method of estimating the probability of an embedded software error; this method must be appropriate to the budget, development resources and development timescale which constrain the designer.
This article helps to address this requirement by describing the ways in which a sensor manufacturer can support verification of the software embedded inside a sensor IC.
Mission or safety-critical sensor solutions
Some types of sensors are used in complex measurement applications in which guaranteed reliability is essential. Examples of sensors of this type that ams supplies include:
- Flow sensors (see Figure 1) used in cold water meters: the invoice that the utility company sends to the customer depends on the accuracy of the sensor’s output.

Fig. 1: TDC-GP30 ultrasonic flow converter
- Gas sensors (see Figure 2): the safety or even the lives of occupants of a building could depend on the sensor’s ability to reliably detect dangerous concentrations of pollutants.

Fig. 2: CCS811 gas sensor solution
- Biosensors (see Figure 3): optical heart rate measurements made by a wearable health monitor affect a doctor’s ability to diagnose a patient’s condition properly.

Fig. 3: AS7000 biosensor
In all these applications, a digital sensor combines measurements made in hardware with data processing performed in software. The scope for error in hardware can readily be characterized within the operating parameters of the device, and documented in the datasheet.
But to characterize the scope for error in the sensor’s software, it is best to break the problem down into parts.
Almost every digital sensor consists of the following main elements (see Figure 4):
- An analog front end which performs the raw data acquisition
- Driver code which accesses the hardware
- Algorithms which perform data processing and analysis
- Glue logic to pass the data on to the application itself

Fig. 4: architecture of a typical sensor solution
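The sketch below illustrates how these four layers might be separated in code. All names, the register read and the scale factors are hypothetical placeholders, not the firmware of any particular ams device.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical layering of a digital temperature sensor's firmware,
 * mirroring Figure 4. Names and scale factors are illustrative only. */

/* Driver layer: the only code that touches the hardware. Here a
 * constant stands in for a real ADC register read. */
static uint16_t afe_read_raw(void)
{
    return 0x0800;                            /* placeholder ADC count */
}

/* Algorithm layer: pure data processing, no hardware knowledge. */
static int32_t temp_from_raw(uint16_t raw)
{
    return (int32_t)raw * 125 / 16 - 40000;   /* invented scaling, milli-deg C */
}

/* Glue logic: packages the result for the host application. */
typedef struct {
    int32_t temp_mdeg_c;
    bool    valid;
} sensor_result_t;

sensor_result_t sensor_update(void)
{
    uint16_t raw = afe_read_raw();            /* acquisition (driver)   */
    sensor_result_t r = {
        .temp_mdeg_c = temp_from_raw(raw),    /* processing (algorithm) */
        .valid       = true                   /* hand-off (glue logic)  */
    };
    return r;
}
```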
This layered architecture means that every digital sensor includes different software components with different requirements:
- Some software elements are coupled to a specific hardware component
- Some software elements have critical timing
- Some of the software demands a high computing capability
- Some code has no specific requirements
Different faults may potentially be found in these different parts of the code base, and test routines must be carefully tailored to uncover each type of fault (see Figure 5). For instance:
- In algorithms, the tests should look for rounding and signed/unsigned errors, as well as buffer overflows
- In the components (which access hardware), it’s important to test that the software responds properly to time-outs
- At the interface to other systems, such as to a host microcontroller or applications processor, the software must cope with being overloaded with interrupts
These categories are reflected in the test programme to which a sensor’s software is subjected: the code is designed in such a way that it can be broken down into sections for testing. These smaller sections require few interfaces to other parts of the code, and are kept as independent of the calling code as possible.

Fig. 5: on-chip firmware (ROM code) in an ams sensor solution’s on-chip microcontroller
Testing algorithm code is straightforward: for a given set of input values, the tester knows exactly what output to expect. Since this section of code does not need to know where the data is coming from, it’s not dependent on the sensor hardware.
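A minimal host-side sketch of such a test might look as follows. The averaging function and its golden values are hypothetical, but they show how a known input yields an exactly known output, and how a naive implementation (one using a 16-bit accumulator, say) would be caught by the overflow case.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical algorithm under test: average of four samples,
 * rounded to nearest. The uint32_t sum guards against the overflow
 * a naive uint16_t accumulator would suffer. */
static uint16_t avg4(const uint16_t s[4])
{
    uint32_t sum = (uint32_t)s[0] + s[1] + s[2] + s[3];
    return (uint16_t)((sum + 2) / 4);     /* +2 rounds to nearest */
}

int main(void)
{
    const uint16_t a[4] = {10, 11, 11, 11};
    assert(avg4(a) == 11);                /* golden value: 10.75 rounds to 11 */

    const uint16_t b[4] = {65535, 65535, 65535, 65535};
    assert(avg4(b) == 65535);             /* would fail if the sum overflowed */
    return 0;
}
```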
By contrast, testing driver code (for instance, to verify that the sensor performs register accesses correctly) requires simulation of the hardware.
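The sketch below shows one way the hardware might be simulated. The register map, bit names and retry limit are all hypothetical; the point is that the stuck-sensor case exercises the driver’s time-out path, as called for above.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simulated register file standing in for the sensor hardware.
 * Addresses and bit positions are hypothetical. */
#define REG_STATUS 0x00
#define REG_DATA   0x01
#define STATUS_RDY 0x01

static uint8_t fake_regs[2];
static uint8_t reg_read(uint8_t addr) { return fake_regs[addr]; }

/* Driver routine under test: poll for data-ready, with a bounded
 * number of retries instead of an unbounded busy-wait. */
static bool read_sample(uint8_t *out)
{
    for (int tries = 0; tries < 100; tries++) {
        if (reg_read(REG_STATUS) & STATUS_RDY) {
            *out = reg_read(REG_DATA);
            return true;
        }
    }
    return false;   /* time-out: hardware never signalled ready */
}

int main(void)
{
    uint8_t v;
    fake_regs[REG_STATUS] = 0;           /* simulate a stuck sensor...      */
    assert(!read_sample(&v));            /* ...and check the time-out path  */

    fake_regs[REG_STATUS] = STATUS_RDY;  /* simulate data ready */
    fake_regs[REG_DATA] = 42;
    assert(read_sample(&v) && v == 42);
    return 0;
}
```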
Testing sections of code in this way efficiently finds bugs such as overflows, out-of-bounds buffer accesses and incorrect loop termination conditions. But these tests alone are not sufficient to verify the functionality of the complete sensor system.
More than the sum of its parts
More than two millennia ago, Aristotle observed that the whole is more than the sum of its parts. This dictum certainly applies to embedded software: after each section of software has been tested discretely, it is essential to verify that all the sections co-operate properly (see Figure 6).
In particular, the test engineer must ensure that the software can perform correctly in any hardware environment in which it might be used. For instance, a mobile phone is a far more demanding hardware environment than a stand-alone sensor device, because the mobile phone will typically trigger many more interrupts to the sensor. Stress tests are therefore required to check that the various interfaces in a sensor IC can withstand interrupt overload without dropping a single byte of data.
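One way such a stress test might be sketched on the host is shown below: a simulated interrupt storm pushes bytes into a FIFO far faster than the consumer drains it, and the test asserts that every byte that was accepted arrives in order. The buffer size and names are illustrative.

```c
#include <assert.h>
#include <stdint.h>

/* Host-side stress sketch: a FIFO fed by a simulated interrupt burst.
 * The test asserts that every accepted byte is later popped in order,
 * i.e. overload is either absorbed or rejected, never silently dropped. */
#define FIFO_SIZE 64
static uint8_t  fifo[FIFO_SIZE];
static unsigned head, tail;

static int fifo_push(uint8_t b)            /* returns 0 if full */
{
    if ((head + 1) % FIFO_SIZE == tail) return 0;
    fifo[head] = b;
    head = (head + 1) % FIFO_SIZE;
    return 1;
}

static int fifo_pop(uint8_t *b)            /* returns 0 if empty */
{
    if (head == tail) return 0;
    *b = fifo[tail];
    tail = (tail + 1) % FIFO_SIZE;
    return 1;
}

int main(void)
{
    unsigned pushed = 0, popped = 0;
    for (int burst = 0; burst < 1000; burst++) {
        /* simulated interrupt storm: push far faster than we pop */
        for (int i = 0; i < 7; i++)
            if (fifo_push((uint8_t)pushed)) pushed++;
        uint8_t b;
        if (fifo_pop(&b)) {                /* slow consumer */
            assert(b == (uint8_t)popped);  /* in order, nothing lost */
            popped++;
        }
    }
    while (popped < pushed) {              /* drain the backlog */
        uint8_t b;
        int ok = fifo_pop(&b);
        assert(ok && b == (uint8_t)popped);
        (void)ok;
        popped++;
    }
    return 0;
}
```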
System testing also needs to verify the system’s sequencing. For instance, the initialization and calculation routines might each be verified independently, but the system software must call them in the right order, otherwise the sensor will fail.
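A simple guard of the kind sketched below (with hypothetical names) makes such a sequencing fault fail loudly during system test instead of producing silently wrong results.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative sequencing guard: the calculation routine refuses to
 * run before initialization, so a wrong call order is caught in
 * system test rather than yielding silent garbage. */
static bool initialized;

static void sensor_init(void) { initialized = true; }

static int sensor_calculate(void)
{
    assert(initialized);        /* sequencing contract */
    return 42;                  /* placeholder result  */
}

int main(void)
{
    sensor_init();                               /* correct order: init first... */
    return sensor_calculate() == 42 ? 0 : 1;     /* ...then calculate            */
}
```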

Fig. 6: the ams generic test framework
Finally, the operation of the software in unusual conditions must also be verified. The Schiaparelli module failed after encountering an unexpected sensor output condition, and of course it is impossible to test for every extreme event to which a sensor could be exposed. The software checks performed are therefore aimed at verifying performance across a very wide range of non-standard conditions. For example, sensor software is routinely tested while the device draws an artificially high current, to check that it withstands unusual fluctuations in the power supply.
How to prove the integrity of a sensor’s software
The theoretical aim of the software testing programme for any given sensor IC is to provide the user with a 100 percent guarantee that the sensor system will function properly at all times, in all applications and in every operating condition. This is, after all, what the user would ideally want.
In practice of course, as the example of the Mars landing module shows, there is no such thing as a 100 percent guarantee. In its absence then, how can the user estimate the probability of a sensor IC’s failure due to a software bug?
This question is hard to answer precisely. In the field of automotive electronics, however, a functional safety standard exists: ISO 26262. This provides a framework for predicting the failure rate of a system in its intended application, and imposes a rigorous process for analyzing failure modes and measuring failure rates under each mode.
The more rigorous a test process, however, the longer it takes and the more it costs. An ISO 26262-style validation process is usually inappropriate for consumer products for reasons of both cost and time. But reputable consumer product manufacturers still need to have high confidence in the quality and reliability of the sensor ICs that they use.
Summary
The operation of a digital sensor IC’s hardware can be precisely characterized and documented in the product’s datasheet. The proper functioning of the IC also depends, however, on its embedded software, which implements functions such as drivers, algorithms and interfaces.
Bugs or faults in the software have the potential to impair or disable the performance of good hardware, but unlike the hardware, the software functions of an IC are not fully documented in its datasheet. How, then, can design engineers evaluate the performance, integrity and reliability of a sensor IC’s software?
This article sets out to show the system design engineer how to judge the risk posed to system operation by a sensor IC’s software, and how to evaluate the methods used by a sensor IC’s manufacturer to verify the software’s quality. It describes the typical functions of a sensor’s software, and the ways in which they can fail. It then describes the value of testing sections of code, as well as the system as a whole.
It finishes by describing ams’s internal software quality standard, which users of its sensor ICs can examine to help them evaluate the company’s processes for verifying the quality of the software element of the sensor ICs that it ships to customers.