See-and-avoid remains the gold standard for collision avoidance in aviation, especially under visual flight rules (VFR). There is simply no single system that can detect all traffic, from airliners to single-engine piston aircraft, paragliders and UAVs. Looking out is therefore still part of the routine even for airline pilots – especially when flying through Class E airspace shared with General Aviation (as is shockingly common, for instance, in Germany and Switzerland).
Alas, human vision is notoriously limited, and pilots have more than just this one job to focus on. So why not help the humans (and, similarly, air traffic control on the ground) by pointing out traffic they may have missed, complementing cooperative systems like transponders/TCAS, ADS-B or FLARM?
Together with the Computer Vision Laboratory (CVLAB) of EPFL in Lausanne, we have developed a leading-edge architecture and a set of algorithms to do exactly that. It detects and classifies other manned aircraft and certain other hazards in a live video stream from a single onboard camera, running on hardware that fits into a GA cockpit. This information can then be used to track and locate targets, to issue warnings to the pilots, and even to automatically recommend and execute avoidance manoeuvres. Some of the algorithms are based on the well-researched YOLO architecture, using tools from deep learning and convolutional neural networks (CNNs) to achieve high accuracy while running in real time. Others rely on classical computer vision techniques.
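The post does not publish the actual pipeline, but one core post-processing step of any YOLO-style detector is non-maximum suppression (NMS): the network proposes many overlapping boxes per target, and NMS keeps only the highest-scoring one per object. A minimal illustrative sketch (not the project's code; box format and threshold are assumptions):

```python
# Minimal sketch of non-maximum suppression (NMS), the post-processing step
# a YOLO-style detector uses to collapse overlapping detections of the same
# target (e.g. several candidate boxes around one aircraft) into one.
# Boxes are (x1, y1, x2, y2) in pixels; purely illustrative.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, iou_threshold=0.5):
    """Keep the highest-scoring boxes; drop lower-scoring overlaps.

    detections: list of (box, score) with box = (x1, y1, x2, y2).
    """
    remaining = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        # Discard every remaining box that overlaps the kept one too much.
        remaining = [d for d in remaining if iou(best[0], d[0]) < iou_threshold]
    return kept

# Two overlapping candidates around one target, plus one distinct target:
dets = [((10, 10, 50, 50), 0.9), ((12, 12, 52, 52), 0.8), ((100, 100, 140, 140), 0.7)]
print(len(nms(dets)))  # the two overlapping boxes collapse into one → 2
```

In a real-time system this runs once per video frame on the raw network output, before the surviving boxes are handed to the tracker and the warning logic.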
A crucial ingredient of deep learning methods is a large set of sample data that can be fed to the algorithm for training. This data first needs to be annotated by humans, which can be a very tedious task. CVLAB has developed tools to make this efficient, and together we have annotated a large set of video data from many flights conducted under various traffic and environmental conditions.
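To make the annotation step concrete, here is a hedged sketch of what a single annotated video frame might look like, converted to the normalized (x-center, y-center, width, height) label format commonly used to train YOLO-style detectors. The record layout, class name, and image size are illustrative assumptions, not the project's actual data format:

```python
# Hypothetical annotation record for one video frame, converted to the
# normalized box format commonly used for YOLO-style training.
# All names and sizes here are illustrative assumptions.

def to_yolo_label(box, img_w, img_h):
    """Convert a pixel box (x1, y1, x2, y2) to normalized
    (x_center, y_center, width, height), each in [0, 1]."""
    x1, y1, x2, y2 = box
    x_center = (x1 + x2) / 2 / img_w
    y_center = (y1 + y2) / 2 / img_h
    return (x_center, y_center, (x2 - x1) / img_w, (y2 - y1) / img_h)

# A human annotator drew a box around a glider in frame 1042 of a 1280x720 video:
annotation = {"frame": 1042, "class": "glider", "box": (320, 180, 384, 212)}
print(to_yolo_label(annotation["box"], img_w=1280, img_h=720))
```

Normalizing to the image size makes the labels independent of camera resolution, which matters when footage from different flights and cameras is pooled into one training set.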
Check out some results in the video below.