Sensor Data Fusion: More than Just Sensor Integration

By Carolyn Mathas

Contributed By Electronic Products

Sensor fusion is more than the sum of the individual parts; it is an exhaustive effort to integrate hardware and software. This article highlights some of the more recent strides in sensor technology and how hardware and software will need to be tied together to work well.

To begin, it should be noted that the terms “sensor fusion” and “sensor integration” are often used interchangeably, but in fact differ, as is evident in the following two definitions and Figure 1.¹

Sensor Fusion: The combining of sensory data or data derived from sensory data such that the resulting information is in some sense better than would be possible when these sources were used individually.

Multisensor Integration: The synergistic use of sensor data for the accomplishment of a task by a system. This differs from sensor fusion in that sensor fusion includes the actual combination of sensory information into one representational format.

Figure 1: Block diagram of sensor fusion and multisensory integration.

Sensor fusion will continue to define next-generation intelligent systems. It occurs when a system combines the input of multiple sensors and uses the captured information to make intelligent decisions that no single sensor's data could support. This integrated data provides a means to achieve goals that are not achievable using a single sensor or a series of independent sensors.

The combined data delivers information that is more accurate, complete, and precise than the data gathered by the stand-alone, non-integrated sensors used in the past.
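The accuracy gain from combining overlapping measurements can be illustrated with a minimal inverse-variance weighting sketch: two noisy sensors observing the same quantity are fused so that the result has lower variance than either sensor alone. The sensor values and variances below are illustrative, not drawn from any specific device.

```python
def fuse(readings):
    """Fuse independent noisy estimates of the same quantity by
    inverse-variance weighting. The fused variance is lower than
    that of any individual sensor."""
    # readings: list of (value, variance) pairs
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, readings)) / total
    return value, 1.0 / total

# Two sensors measuring the same distance (meters)
fused_value, fused_var = fuse([(10.2, 0.04), (9.9, 0.09)])
print(fused_value, fused_var)
```

The fused estimate lands between the two readings, weighted toward the more trustworthy (lower-variance) sensor, and its variance is smaller than either input's.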

It is not always the individual sensors onboard that complete the task at hand, as was previously the case. Today, it is more likely to be synergistic information provided by several sensory devices that, taken together, accomplish a task. Using multiple sensors allows for multiple viewpoints, greater spatial and temporal coverage, a reduction in ambiguity, and greater precision than individual sensors can deliver.

Applications for multisensory integration and fusion include:
  • Remote sensing
  • Equipment monitoring
  • Biomedical systems
  • Transportation systems

However, even with sensor fusion, there are still challenges. Potential device error, noise, and flaws in the data-gathering process, as well as the extraction of meaningful data, can still be problematic. Other challenges include the ability to react to alterations in environmental conditions. The amount of information that goes unused or is discarded is high, and ascertaining what will and will not be needed is an exercise left to the experienced design engineer. A major challenge is to correctly weigh the cost and enhanced-performance tradeoffs that sensor fusion demands. When each individual sensor can infer or conclude on its own, and many sensors are integrated for a specific application, the results are clearly enhanced data, speed, and redundancy.

There is little doubt that substantial redundancy benefits occur as a result of sensor fusion as information from sensors overlaps. Today, substantial development is taking place in the techniques used to handle data that is sometimes partial in nature, or uncertain. Also important is solving bandwidth challenges in a net-centric environment.

Areas of recent research focus include localized or distributed algorithms, in which sensor nodes communicate only with sensors in close proximity. These algorithms are particularly valuable when there is a node failure or a change in the network. Their downside, however, is that performance suffers in a more global sense. Another challenge is the ability, or lack thereof, to track mobile targets with microsensors, given their inherent constraints. The trick is to have them communicate with some, but not all, sensors in order to prevent energy-supply drain and overloaded networks.
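The localized approach described above can be sketched as a simple distributed-averaging (consensus) loop: each node nudges its estimate toward the mean of its immediate neighbors only, with no global communication, and the network still converges to the global average. The ring topology, step size, and values here are illustrative assumptions, not any particular deployed algorithm.

```python
def consensus_step(values, neighbors, alpha=0.3):
    """One round of localized averaging: each node moves toward the
    mean of its immediate neighbors only (no global communication)."""
    new = []
    for i, v in enumerate(values):
        nbr_mean = sum(values[j] for j in neighbors[i]) / len(neighbors[i])
        new.append(v + alpha * (nbr_mean - v))
    return new

# Ring of four nodes; each node knows only its two adjacent neighbors
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
values = [10.0, 12.0, 11.0, 13.0]
for _ in range(50):
    values = consensus_step(values, neighbors)
print(values)  # all nodes converge toward the global mean
```

Because every update is symmetric between neighbors, the network average is preserved each round, so the nodes settle on 11.5 without any node ever seeing all the data.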

From games to tablets

Kionix, for example, continues to develop embedded algorithms and advanced software, such as sensor fusion, to provide added value to end applications through the power of motion. The company uses a proprietary deep reactive-ion etching (DRIE) process, which provides superior performance in its MEMS structure. The company’s “XAC” sense element delivers high stability, shock, and temperature performance and minimizes reflow work. Consumer product “tear-downs” have shown Kionix accelerometers in products ranging from the Xbox Kinect to Samsung Galaxy tablets to Motorola Droid mobile phones and the Barnes & Noble Nook. Kionix products are also used in the automotive and health-and-fitness industries.

The company’s KXTF9 (Figure 2) is a tri-axis ±2-, ±4-, or ±8-g silicon micromachined accelerometer with integrated orientation, tap/double-tap, and activity-detection algorithms. The sense element is fabricated using a plasma micromachining process. Acceleration sensing is based on the differential capacitance arising from acceleration-induced motion of the sense element, which uses common-mode cancellation to decrease potential errors arising from process variation, temperature, and environmental stress.

The sense element is hermetically sealed at the wafer level by bonding a second silicon lid wafer to the device using a glass frit. A separate ASIC device packaged with the sense element delivers signal conditioning and intelligent user-programmable application algorithms.

Figure 2: Block diagram of the KXTF9 by Kionix.

The accelerometer is packaged in a 3 x 3 x 0.9 mm LGA plastic package operating from a 1.8 to 3.6 VDC supply. An I²C interface is used to configure the chip and to read updates from the orientation, directional-tap detection, and activity-monitoring algorithms.
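Reading such a device over I²C typically means fetching raw output registers and converting the two's-complement counts into g units on the host. The sketch below shows that conversion for an assumed 12-bit, left-justified sample at a ±2-g range; the register layout and resolution are illustrative assumptions, not the KXTF9's documented map.

```python
def raw_to_g(msb, lsb, g_range=2.0):
    """Convert an assumed 12-bit, left-justified two's-complement
    accelerometer sample (two bytes read over I2C) into g units."""
    raw = (msb << 4) | (lsb >> 4)      # assemble the 12-bit sample
    if raw & 0x800:                     # sign-extend two's complement
        raw -= 1 << 12
    counts_per_g = (1 << 11) / g_range  # 1024 counts per g at +/-2 g
    return raw / counts_per_g

print(raw_to_g(0x40, 0x00))  # 0x400 = 1024 counts -> 1.0 g
print(raw_to_g(0xC0, 0x00))  # 0xC00 -> -1024 counts -> -1.0 g
```

On real hardware, the two bytes would come from consecutive output registers via an I²C read; always confirm the bit width, justification, and register addresses against the device's data sheet.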

9- and 10-DoF

A typical sensor fusion solution that combines a 3D accelerometer, 3D gyroscope, and 3D magnetometer is called a 9-DoF (nine degrees of freedom) or 9-SFA (nine sensor fusion axes) solution. Taking a closer look at the types of sensors used in mobile devices, it is easy to see that 3D accelerometers, 3D gyroscopes, and 3D magnetometers are becoming standard features. For instance, modeling the orientation of a rigid body, including airplanes, RC toys, sport watches, smartphones, etc., can be implemented using the Direction Cosine Matrix (DCM) algorithm together with a magnetometer, a gyroscope, and an accelerometer. The DCM algorithm calculates the orientation of a rigid body with respect to the rotation of the earth.

The calibrated sensor readings are fed to the DCM algorithm, which provides a complete measurement of the orientation relative to the earth’s magnetic field and the direction of gravity, expressed as the Euler angles (roll, pitch, and yaw). In applications such as smartphones, low-power MCUs such as the Texas Instruments MSP430F5xx can handle all the communication with the motion sensors via the I²C protocol.
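The static case of what a DCM filter maintains dynamically can be sketched directly: roll and pitch come from the gravity vector reported by the accelerometer, and yaw (heading) comes from the magnetometer after tilt compensation. The axis conventions and sign choices below are assumptions for illustration; real designs must match the conventions of their particular sensors.

```python
import math

def static_orientation(ax, ay, az, mx, my, mz):
    """Roll/pitch from the gravity vector, yaw from the
    tilt-compensated magnetometer (static-case sketch, radians).
    Axis and sign conventions are illustrative assumptions."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Rotate the measured magnetic field back into the horizontal plane
    mxh = mx * math.cos(pitch) + mz * math.sin(pitch)
    myh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    yaw = math.atan2(-myh, mxh)
    return roll, pitch, yaw

# Device flat and level, pointing toward magnetic north
r, p, y = static_orientation(0.0, 0.0, 1.0, 0.3, 0.0, 0.5)
print(r, p, y)  # all three angles are zero in this pose
```

A full DCM (or Kalman) filter adds the gyroscope: it integrates angular rate for fast response and uses these accelerometer/magnetometer references to correct the slow drift of that integration.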

Looking forward, there is no reason why, if we include one additional sensing quantity, sensor fusion applications cannot extend to a 10-DoF (or 10-SFA) solution.

The Analog Devices ADIS16480 iSensor device, for example, is a complete inertial system that includes a triaxial gyroscope, a triaxial accelerometer, and a triaxial magnetometer, as well as a pressure sensor and an extended Kalman filter (EKF) for dynamic orientation sensing. Each inertial sensor in the ADIS16480 combines iMEMS technology with signal conditioning that optimizes dynamic performance.

Factory calibration characterizes each sensor for sensitivity, bias, alignment, and linear acceleration (gyroscope bias). As a result, each sensor has its own dynamic compensation formulas that provide accurate sensor measurements.
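The kind of compensation such factory calibration enables can be sketched as a bias subtraction followed by a 3x3 matrix multiply that folds sensitivity scaling and axis-misalignment correction together. The bias and matrix values below are illustrative placeholders, not ADIS16480 calibration data.

```python
def apply_calibration(raw, bias, correction):
    """Apply factory-style compensation to a 3-axis reading:
    subtract the per-axis bias, then multiply by a 3x3 matrix
    combining sensitivity and cross-axis alignment correction."""
    centered = [r - b for r, b in zip(raw, bias)]
    return [sum(correction[i][j] * centered[j] for j in range(3))
            for i in range(3)]

bias = [0.02, -0.01, 0.03]                 # illustrative offsets (g)
correction = [[1.001, 0.002, 0.000],       # near-identity matrix:
              [0.000, 0.998, 0.001],       # small scale and
              [0.003, 0.000, 1.002]]       # cross-axis terms
out = apply_calibration([0.02, -0.01, 1.03], bias, correction)
print(out)  # roughly [0, 0, 1]: pure 1-g reading after compensation
```

In a factory-calibrated part these coefficients are measured per device over temperature and stored on-chip, so the host simply reads already-compensated outputs.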

It is also evident that sensor fusion requires substantial MCU power. As a result, STMicroelectronics has announced details of a miniature smart sensor that combines a three-axis accelerometer and an embedded microcontroller in a compact 3 x 3 x 1 mm LGA package for advanced custom motion-recognition capabilities.

STMicroelectronics has combined the microcontroller, operating as a sensor hub that runs sensor-fusion algorithms, and a high-precision three-axis digital accelerometer into a single package it calls iNEMO-A (to distinguish it from the company’s current iNEMO accelerometer/gyroscope series). The device reduces the demand on the host controller and application processor and decreases power consumption in portable devices. Both benefits deliver greater freedom and flexibility to the design of motion-enabled consumer electronics. The integration of high-resolution linear-motion sensing and the sensor hub in a single package also promises to increase system robustness and is said to be ideally suited for board-layout optimization.

  1. Wilfried Elmenreich, “An Introduction to Sensor Fusion,” Research Report 47/2001, Vienna University of Technology, 2001.

Disclaimer: The opinions, beliefs, and viewpoints expressed by the various authors and/or forum participants on this website do not necessarily reflect the opinions, beliefs, and viewpoints of Digi-Key Electronics or official policies of Digi-Key Electronics.

About this author

Carolyn Mathas

Carolyn Mathas has worn editor/writer hats at such publications as EDN, EE Times Designlines, Light Reading, Lightwave and Electronic Products for more than 20 years. She also delivers custom content and marketing services to a variety of companies.

About this publisher

Electronic Products

Electronic Products magazine serves engineers and engineering managers responsible for designing electronic equipment and systems.