The use of drones keeps growing, and innovative applications continue to emerge. From breaking world records to assisting pollination, and everything in between, there are countless creative ways in which drones can be used.
Now, a team of researchers from Carnegie Mellon University in the U.S. is working on a system that will use drones to help identify the worst-hit areas after disaster strikes.
A large number of people around the world own drones. These are typically used by amateurs to snap stunning aerial pictures, just for fun. But when disasters strike, people are also flying their drones above damaged zones in order to capture the devastation.
The team at Carnegie Mellon says this amateur footage can be extremely useful for rapid damage assessment, hence its focus on finding ways to put it to use.
The team is using its knowledge of artificial intelligence (AI) to develop a system that will be able to automatically detect buildings and reveal the level of damage they have sustained.
The hope is to find a faster and more efficient way to assess damage.
"Current damage assessments are mostly based on individuals detecting and documenting damage to a building," said Junwei Liang, a Ph.D. student in CMU's Language Technologies Institute. "That can be slow, expensive and labor-intensive work."
By using drones, closer images and videos can be captured from multiple viewpoints and angles — a step up from current satellite imaging methods, which capture only a single overhead angle.
The point is to draw on the extensive video footage captured by anyone flying a drone above the damage-stricken area, rather than depending solely on first responders' systems. "The number of drone videos available on social media soon after a disaster means they can be a valuable resource for doing timely damage assessment," explained Liang.
The team's system overlays masks on the parts of a building in the video that appear damaged. It then classifies the damage as light or serious, or determines whether the building has been destroyed entirely.
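To give a rough idea of how mask-based damage grading can work, here is a minimal sketch in Python. It assumes the model has already produced a per-pixel building mask and a per-pixel damage mask, and it grades severity by the fraction of building pixels flagged as damaged. The function name, the thresholds, and the two-mask setup are illustrative assumptions, not details from the CMU system.

```python
# Hypothetical sketch: grade a building's damage level from segmentation
# masks. Thresholds and category names are illustrative assumptions.
import numpy as np

def classify_damage(building_mask: np.ndarray, damage_mask: np.ndarray) -> str:
    """Grade severity by the fraction of building pixels marked damaged.

    building_mask: boolean array marking pixels belonging to the building.
    damage_mask:   boolean array marking pixels flagged as damaged.
    """
    building_pixels = building_mask.sum()
    if building_pixels == 0:
        return "no building"
    damaged_fraction = (building_mask & damage_mask).sum() / building_pixels
    if damaged_fraction < 0.1:   # illustrative cutoff for light damage
        return "light"
    if damaged_fraction < 0.6:   # illustrative cutoff for serious damage
        return "serious"
    return "destroyed"

# Toy example: a 4x4 building with its top half flagged as damaged.
building = np.ones((4, 4), dtype=bool)
damage = np.zeros((4, 4), dtype=bool)
damage[:2, :] = True  # top half damaged -> damaged fraction of 0.5
print(classify_damage(building, damage))  # -> "serious"
```

In a real pipeline, the masks themselves would come from a trained segmentation network run on the drone footage; the thresholding step above is only the final, simplest part of the task.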
Their findings will be presented at the Winter Conference on Applications of Computer Vision (WACV 2021), which will take place virtually next year.