Sensor fusion is the combining of sensory data from multiple sources to create a more complete picture than any of the independent sources could provide on its own. In sensor fusion, this enhanced completeness refers to information that is more complete, accurate, or reliable than the independent sources alone would warrant. In other words, sensor fusion is an example of two plus two equaling five; the combination of data sources yields extra sensory information that would not otherwise be available. Sensor fusion is typically used in surveillance operations involving television cameras, sonar, and radar, as well as in geological surveillance using seismic or magnetic sensors. The process takes place through either a centralized or a decentralized approach, depending on which party is responsible for combining the data into a single whole.
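The "two plus two equals five" effect can be made concrete with a small numerical sketch. The function below is purely illustrative (not from any real library): it fuses two independent, noisy readings of the same quantity by weighting each reading by the inverse of its variance, a common textbook approach. The fused estimate ends up with a lower variance than either sensor achieves alone.

```python
def fuse(reading_a, var_a, reading_b, var_b):
    """Inverse-variance weighted average of two independent readings.

    Each reading is weighted by 1/variance, so the more reliable sensor
    dominates; the fused variance is smaller than either input variance.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_reading = (w_a * reading_a + w_b * reading_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused_reading, fused_var

# Two sensors measure the same temperature with different noise levels.
estimate, variance = fuse(20.5, 4.0, 21.1, 1.0)
print(estimate, variance)  # fused variance (0.8) is below both 4.0 and 1.0
```

Note that the fused variance, 1 / (1/4 + 1/1) = 0.8, is smaller than the better sensor's variance of 1.0, which is the sense in which the combination offers information neither source warrants by itself.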
Perhaps the simplest way to think about sensor fusion is to picture a home security system consisting of multiple surveillance cameras set up throughout the different rooms of a house. If all of the cameras are linked to one central room containing a television monitor for each camera, there will be a wall of images representing the aggregate surveillance data for the entire house. This aggregate illustrates the benefit of fusing the images into a single whole; with all of the camera feeds broadcast to one central location, it becomes much easier to track the movements and activities of individuals in the house.
This can be contrasted with a situation where there is only a single television screen and the observer must cycle through the different cameras to obtain the desired image. The observer is still getting the exact same data, but the fact that the information arrives in disparate pieces — as opposed to a seamless whole — makes the surveillance process far more difficult to execute. Gathering data on an intruder in a sensor-fused environment provides extra information; in certain cases, overlapping camera coverage zones will provide a multi-angle view of the intruder, making identification and observation that much easier. In a non-fused environment, cycling through images on a single screen deprives the observer of these multi-angle benefits. Although the views are exactly the same, the observer benefits more when the images are fused together.
This is a decentralized example of sensor fusion; the observer must piece together the camera feed data using his own judgment and knowledge. This can be compared with a centralized sensor fusion environment, where a party at a central location combines the data sources before forwarding the end result to the client. When selecting one approach or the other, experience is an important determining factor. Where the client has expertise at sorting through and meshing together disparate pieces of data, a decentralized approach allows the client to apply his professional judgment to the raw data. Where the client has less experience, a centralized approach allows more skilled individuals to sort through the raw data, forwarding only the most relevant information to the client.
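The contrast between the two architectures can be sketched in a few lines of code. This is a hypothetical illustration under simplified assumptions (the "fusion" step is just an average, and the function names are invented for the example): in the centralized case a central party reduces the raw feeds to one result before it reaches the client, while in the decentralized case the client receives every raw feed and applies its own fusion logic.

```python
def centralized(raw_feeds):
    """A central party fuses the raw feeds and forwards only the end result.

    Here the fusion step is a simple average; a real system would apply
    whatever combination logic the central operator deems appropriate.
    """
    return sum(raw_feeds) / len(raw_feeds)

def decentralized(raw_feeds, client_fuse):
    """All raw feeds are forwarded; the client supplies its own fusion logic."""
    return client_fuse(raw_feeds)

feeds = [20.5, 21.1, 20.8]
print(centralized(feeds))  # client sees only the fused value

# An experienced client might apply different judgment to the raw data,
# e.g. trusting the highest reading during an alarm condition.
print(decentralized(feeds, max))
```

The design trade-off mirrors the text: `centralized` hides the raw data but requires skill only at the center, while `decentralized` hands the raw feeds — and the burden of judgment — to the client.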