This thesis addresses the problem of detecting and tracking objects in a scene using a distributed set of sensing devices at different locations and, in general, a mix of sensing modalities. The goal is to combine the data in an efficient but statistically principled way to achieve optimal or near-optimal detection and tracking performance.
Using the Bayesian framework of measurement likelihood, sensor data
can be combined in a rigorous manner to produce a concise summary of
knowledge of a target’s location in the state space. This framework
allows sensor data to be fused across time, space and sensor modality.
When target motion and sensor measurements are modeled correctly,
these “likelihood maps” are optimal combinations of sensor data. By
combining all of the data without thresholding for detections, targets with a
low signal-to-noise ratio (SNR) can be detected where standard
detection algorithms may fail. For estimating the locations of multiple targets,
the likelihood ratio is used to provide a sub-optimal but useful representation of
knowledge of the state space. Because the cost of computing
likelihood or likelihood-ratio maps over the entire state space is prohibitive for
most practical applications, an approximation computed in a distributed fashion is proposed and analyzed.
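The likelihood-map recursion summarized above can be illustrated with a minimal sketch. This is not the thesis's implementation; it assumes a 1-D gridded state space, two range-only sensors with Gaussian noise, and a simple random-walk motion model, with all names, positions, and parameters hypothetical:

```python
import numpy as np

# Hypothetical 1-D gridded state space (target position in metres).
grid = np.linspace(0.0, 100.0, 201)

def measurement_likelihood(z, grid, sensor_pos, noise_std):
    """Unnormalized p(z | x) for a range-only sensor with Gaussian noise."""
    predicted = np.abs(grid - sensor_pos)
    return np.exp(-0.5 * ((z - predicted) / noise_std) ** 2)

def diffuse(p, sigma_cells):
    """Random-walk motion model: Gaussian blur of the map along the grid."""
    half = int(3 * sigma_cells)
    offsets = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (offsets / sigma_cells) ** 2)
    kernel /= kernel.sum()
    return np.convolve(p, kernel, mode="same")

# Fuse unthresholded measurements from two sensors over several time steps.
posterior = np.ones_like(grid) / grid.size  # uninformative prior
true_pos = 42.0                             # hypothetical target position
rng = np.random.default_rng(0)
for _ in range(5):
    posterior = diffuse(posterior, sigma_cells=2.0)  # predict (motion model)
    for sensor_pos, noise_std in [(0.0, 3.0), (100.0, 3.0)]:
        z = abs(true_pos - sensor_pos) + rng.normal(0.0, noise_std)
        posterior *= measurement_likelihood(z, grid, sensor_pos, noise_std)
    posterior /= posterior.sum()                     # update and normalize

estimate = grid[np.argmax(posterior)]
```

Because no detection threshold is applied, every raw measurement contributes to the map, which is what allows energy from a low-SNR target to accumulate over time rather than being discarded.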
This distributed method is tested in simulation for multiple sensor modalities,
highlighting cases where it is and is not a good approximation of the centralized calculation.
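The relationship between the centralized and distributed calculations can be sketched as follows. In this illustration (all sensor models and parameters hypothetical, not taken from the thesis), each node computes a local log-likelihood map over the shared grid; summing those maps at a fusion center reproduces the centralized result exactly when measurements are conditionally independent given the state, and approximation error appears only when the local maps are compressed (here, coarsely quantized) before transmission:

```python
import numpy as np

grid = np.linspace(0.0, 100.0, 201)  # shared hypothetical state-space grid

def local_log_likelihood(z, sensor_pos, noise_std):
    """Per-node log p(z | x) for a range-only Gaussian sensor (illustrative)."""
    predicted = np.abs(grid - sensor_pos)
    return -0.5 * ((z - predicted) / noise_std) ** 2

sensors = [(0.0, 3.0), (50.0, 4.0), (100.0, 3.0)]  # (position, noise std)
true_pos = 42.0
rng = np.random.default_rng(1)
zs = [abs(true_pos - s) + rng.normal(0.0, n) for s, n in sensors]

# Centralized: one node sees all raw measurements.
central = sum(local_log_likelihood(z, s, n) for z, (s, n) in zip(zs, sensors))

# Distributed: each node ships only its local log-likelihood map,
# and the fusion center adds them.
distributed = np.zeros_like(grid)
for z, (s, n) in zip(zs, sensors):
    distributed += local_log_likelihood(z, s, n)

# Distributed with lossy compression: quantize each local map before
# transmission; the summed error is bounded by (step / 2) per node.
step = 0.5
quantized = np.zeros_like(grid)
for z, (s, n) in zip(zs, sensors):
    quantized += np.round(local_log_likelihood(z, s, n) / step) * step
```

Working in the log domain turns the product of likelihoods into a sum, which is what makes the per-node maps additive and the communication cost independent of how many raw measurements each node collected.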
Detection and tracking examples using measured data from multi-modal sensors (radar, EO, seismic) are also presented.