Programmers can manipulate the maps on smartphones or the satellite images on computers to make them appear different from how they really are. Now researchers are warning that the technology could be used to doctor images of public lands and natural disasters.
Scientists found that the emergence of "deep fakes" — doctored videos that use an algorithm to replace the person in an original recording with someone else in a way that looks authentic — could bleed into geographical information sciences (GIS), changing the appearance of landscape satellite images.
Researchers call this "location spoofing," and as artificial intelligence technology becomes more advanced, some say it could be used to dupe people into thinking public lands, protected areas and national parks look different from how they actually are.
"This isn’t just Photoshopping things. It’s making data look uncannily realistic," said Bo Zhao, a professor of geography at the University of Washington who works on artificial intelligence and GIS, in a statement. "The techniques are already there. We’re just trying to expose the possibility of using the same techniques and of the need to develop a coping strategy for it."
Zhao headed a team that published research on location spoofing in April in Cartography and Geographic Information Science. They found that programmers can easily change the colors of buildings, make images look clearer or fuzzier and even splice in landscapes from different cities without the fakery being detected.
"You can create an image with Beijing-style architecture and combine it with Chicago-style and New York-style buildings — even rural and urban landscape — and it can all look real," said Chengbin Deng, a co-author of the study and a professor at the State University of New York at Binghamton. "With a lot of data sets, these images can look real to the human eye."
Deng said the same techniques could easily be used on protected areas and national parks to modify colors and their brightness, or even to add land features that aren’t actually there.
When it comes to natural disasters and climate change, the differences between a true and a doctored image can be subtle, but they could have major implications as technology and social media advance.
For example, Pierre Markuse, a writer at Sentinel Hub, a satellite imagery service, used the same data in a satellite image of the 2018 Camp Fire in California to create his own image.
His reproduction, which he wrote about in a 2019 Sentinel Hub article, makes the flames look brighter and the forest greener than the original image.
"None of these versions is ‘wrong,’ we just visualized the data available in a different way," he wrote in the article. "And since we are talking about images being perceived as fake, it should be noted that neither his nor my visualization really depicts a real-life view as perceived by the human eye, since we both added shortwave infrared data, which the human eyes can’t see."
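Markuse's point — that the same raw satellite data can be rendered in very different ways — can be illustrated with a toy sketch. The band values, gain and gamma parameters below are hypothetical, assuming NumPy arrays of reflectance values in the 0–1 range; real Sentinel-2 processing involves many more steps.

```python
import numpy as np

def to_rgb(swir, nir, red, gain=1.0, gamma=1.0):
    """Map three spectral bands onto RGB channels with an adjustable
    brightness gain and gamma stretch. Same data, different look."""
    stack = np.stack([swir, nir, red], axis=-1).astype(float)
    stretched = np.clip(gain * stack, 0.0, 1.0) ** gamma
    return (stretched * 255).astype(np.uint8)

# Hypothetical reflectance values (0-1) for a single "burning" pixel:
# fires show up bright in shortwave infrared, which eyes can't see.
swir = np.array([[0.8]])
nir = np.array([[0.3]])
red = np.array([[0.2]])

subtle = to_rgb(swir, nir, red, gain=1.0, gamma=1.0)
vivid = to_rgb(swir, nir, red, gain=1.5, gamma=0.7)
# Both renderings come from identical band data; only the mapping differs,
# so neither is "wrong" -- and neither matches what a human eye would see.
```

Raising the gain and lowering the gamma makes the fire pixel noticeably brighter, even though the underlying measurements never change — which is exactly why two honest visualizations of the same scene can look so different.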
It’s not as harmless when the images are completely made up, as was the case with a 2019 image of a "devastating" Central Park fire that never happened.
In the article, he described how, after just 10 minutes in Photoshop, he was able to create the same kind of fake satellite image himself.
"But here is also where it is getting harder. Not all people know how to access and look for satellite data," Markuse wrote.
"So a satellite picture showing what looks like tanks rolling through a city, or burning and destroyed buildings after an alleged airstrike, is relatively easy to fabricate but then hard to fact-check for a big chunk of the population."
Deng said the team wanted to use their research to warn of the misinformation that could come from location spoofing, while also pointing researchers and policymakers to some of the technology's potential benefits.
One way, he said, is to use the algorithm as a precise scenario generator. Researchers can input a scenario — for example, a protected space under proposed conservation policy — and simulate how the landscape will react and develop over time.
"We can simulate a scenario where we allow urban deforestation and then let it grow back to see what happens," Deng said. "It really could be an advanced simulation tool to support visualization."
The technology could also be used to clear up satellite images that are clouded or blurry from high air moisture, like those of many coastal cities. The algorithm could simulate a "cloud-free" view of a city, he said.
But in the wrong hands, the technology could be used to misrepresent trade secrets or fabricate environmental code violations, which could become a matter of national security, he said.
"We don’t advocate for people to use the technology for bad, doing that would very clearly violate the ethics of technology," Deng said. "But now, it’s on our radar if something does go bad."