Technical Considerations



The aim of this page is to explain our stance on the main technical aspects of image fusion, based on considerable hands-on experience gained while developing our systems and on feedback from customers. Although a 'Comments' area is not provided, please feel free to contact us for further technical discussion.

While some of the considerations presented here may hold for other fusion applications and settings, the scope of our work is focused on the following:

  • Thermal (LWIR) and Vis (incl. NIR) sensors
  • Real Time
  • Pixel-level fusion
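To make the "pixel-level" setting concrete, here is a minimal sketch of the simplest such scheme: a weighted average of two co-registered single-channel frames. The function name, the fixed 8-bit assumption, and the equal default weighting are illustrative choices, not a description of our product's algorithm.

```python
import numpy as np

def fuse_pixel_level(vis: np.ndarray, thermal: np.ndarray,
                     alpha: float = 0.5) -> np.ndarray:
    """Weighted-average pixel-level fusion of two co-registered frames.

    Assumes both inputs are uint8 and already aligned to the same
    resolution; real systems must handle registration first.
    """
    if vis.shape != thermal.shape:
        raise ValueError("frames must be co-registered to the same shape")
    fused = alpha * vis.astype(np.float32) + (1.0 - alpha) * thermal.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```

Practical pixel-level methods (e.g. multi-scale or gradient-based fusion) replace the fixed weights with locally adaptive ones, but the per-pixel structure is the same.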

Why fusion?

  • For human viewers, combine complementary information from the different spectral channels in a way that reduces the cognitive load of observing (and interpreting) multiple independent displays.
  • See through glass and cope with bright light sources: each channel covers conditions that block or saturate the other (e.g. glass is opaque to LWIR but not to visible light)
  • Thermal information, e.g. detecting the 'liveness' of a target (person or animal)
  • Reduced bandwidth requirements when video transmission is needed (a single fused stream instead of two)

Why thermal AR?

  • Hands free
  • Head-up (don’t have to look away from the ‘target’)
  • Better perception enabled by see-through fusion (a thermal image is not how humans are used to seeing the world, and on its own can be unintuitive)
  • Better localization of thermal data

Synchronization and Latency

Mismatched timing between the sensor streams, or excessive end-to-end latency, leads to:

  • visual artifacts (ghosting and misregistration of moving objects)
  • control problems (when real-time actions are performed based on the fused image as input)
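One common way to keep two free-running sensors in step is to pair frames by timestamp and drop pairs whose skew is too large to fuse cleanly. The sketch below is illustrative; the function name and the 20 ms tolerance are assumptions, not values from our systems.

```python
import bisect

def pair_frames(vis_ts, thermal_ts, max_skew_ms=20):
    """Pair each visible-frame timestamp with the nearest thermal one.

    Both lists are sorted timestamps in milliseconds. Pairs whose skew
    exceeds max_skew_ms are dropped rather than fused, since fusing
    misaligned frames produces the visual artifacts noted above.
    """
    pairs = []
    for t in vis_ts:
        i = bisect.bisect_left(thermal_ts, t)
        # Nearest candidate is either just before or just after t.
        candidates = thermal_ts[max(0, i - 1):i + 1]
        if not candidates:
            continue
        nearest = min(candidates, key=lambda c: abs(c - t))
        if abs(nearest - t) <= max_skew_ms:
            pairs.append((t, nearest))
    return pairs
```

Hardware-triggered capture avoids the problem at the source; timestamp pairing is the fallback when the sensors cannot share a trigger.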


Visualization / Representation

The benefits of fusion are task dependent; fusion does not always help, and content from one channel can mask content from the other.

Cognitive fusion: give greater visual weight to the content that is semantically most important to the operator.

Color representation significantly increases the number of different materials that can be discriminated in a scene. Coloring should be intuitive and task based, and color constancy is important.
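A simple example of task-based coloring is to keep the visible channel as near-grayscale scene context and let thermally hot content pop out in a single warm channel. This particular channel mapping is an illustrative sketch of the idea, not the coloring scheme used in our systems.

```python
import numpy as np

def color_fuse(vis: np.ndarray, thermal: np.ndarray) -> np.ndarray:
    """Fuse two co-registered uint8 frames into an RGB image.

    Visible detail fills green and blue (near-grayscale context);
    thermal highlights raise only the red channel, so hot targets
    stand out in red against a familiar-looking scene.
    """
    rgb = np.stack([
        np.maximum(vis, thermal),  # R: lifted where thermal is hot
        vis,                       # G: visible detail
        vis,                       # B: visible detail
    ], axis=-1)
    return rgb.astype(np.uint8)
```

Perceptually tuned schemes (e.g. color-transfer methods that match daytime palettes) build on the same principle while preserving color constancy across frames.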

Quality metrics: the meaningful observation-based measures are user performance on detection, recognition, tracking, or classification tasks, rather than image statistics alone.
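Objective proxies are still useful for quick comparisons between fusion schemes. One of the simplest is the Shannon entropy of the fused image's histogram, a crude measure of information content; the sketch below is one such proxy and, as noted above, does not substitute for observer studies.

```python
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """Shannon entropy of an 8-bit image's gray-level histogram (bits/pixel).

    Higher entropy loosely indicates more information content in the
    fused result; it says nothing about task-level usefulness.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())
```

Other common objective metrics (mutual information with the source images, edge-transfer measures such as Q^AB/F) follow the same pattern of scoring the fused image against its inputs.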

AR case:

What matters is conveying what it is like to be in the environment, not how a camera-captured image of it looks.

Explore further

1) Toet, A. (2011). Cognitive Image Fusion and Assessment. In: Ukimura, O. (Ed.), Image Fusion. InTech. ISBN 978-953-307-679-9.

2) Krebs, W.K. & Ahumada, A.J. (2002). Using an image discrimination model to predict the detectability of targets in color scenes. In: Proceedings of Combating Uncertainty with Fusion - An Office of Naval Research and NASA Conference, April 22-24, 2002, Woods Hole, MA.