Multisensor Data Fusion
Recently, I’ve come across a technical term that crops up a lot in the engineering world of IoT and mobile devices: Sensor Fusion. In its nominal usage, it means combining the output of multiple common mobile sensors, like gyros, compasses, and GPS, into a synthesized, more application-useful signal. A more precise term for that would be “multisensor data fusion”. That is, data from multiple sensors being combined together.
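To make that concrete for myself, here’s a tiny sketch of the idea (not from any particular library, and the readings are made up): a complementary filter that blends a gyro’s angular rate with an accelerometer’s gravity-based tilt estimate into a single pitch angle. The gyro integrates smoothly but drifts over time; the accelerometer is noisy but drift-free; fusing them gives you something more useful than either sensor alone.

```python
import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse a gyro rate and an accelerometer tilt reading into one pitch estimate.

    pitch      -- previous pitch estimate, radians
    gyro_rate  -- angular rate about the pitch axis, rad/s
    accel_x/z  -- accelerometer readings along two body axes, m/s^2
    dt         -- time since the last sample, seconds
    alpha      -- how much to trust the gyro vs. the accelerometer
    """
    gyro_pitch = pitch + gyro_rate * dt            # integrate the gyro (smooth, but drifts)
    accel_pitch = math.atan2(accel_x, accel_z)     # tilt from gravity (noisy, but drift-free)
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# Walk a few fabricated samples through the filter.
pitch = 0.0
for gyro_rate, ax, az in [(0.05, 0.30, 9.80), (0.04, 0.35, 9.79), (0.02, 0.32, 9.80)]:
    pitch = complementary_filter(pitch, gyro_rate, ax, az, dt=0.01)
print(f"fused pitch estimate: {pitch:.4f} rad")
```

The single `alpha` knob is the whole trick: crank it toward 1 and you mostly believe the gyro, lower it and the accelerometer pulls the estimate back toward gravity.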
More broadly, sensor fusion can include synthesizing the outputs of a single sensor over time, or of an array of similar sensors in an environment; human stereoscopic vision is an example of the latter.
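And for the single-sensor-over-time flavor, the simplest sketch I can come up with is an exponentially weighted moving average, where each new reading is blended with the sensor’s own history (again, purely illustrative, made-up numbers):

```python
def smooth(readings, beta=0.9):
    """Temporal 'fusion' of one sensor: exponentially weighted moving average.

    Each new reading is blended with the running estimate, so the output leans
    on the sensor's own history rather than on a second sensor.
    """
    estimate = readings[0]
    smoothed = []
    for r in readings:
        estimate = beta * estimate + (1 - beta) * r   # old estimate vs. new sample
        smoothed.append(estimate)
    return smoothed

print(smooth([20.1, 20.4, 19.8, 20.0, 25.0, 20.2]))  # the 25.0 spike gets damped
```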
I’m largely an application software kind of guy, so these terms are novel to me.
I like to take concepts to an unusually absurd apex or nadir1. So then, would the human brain be an exquisite sensor fusion implementation?
-
1. “Multisensor data fusion” can be sung to Supercalifragilisticexpialidocious.