
Deep Learning Digs Deep: AI Unveils New Large-Scale Images in Peruvian Desert

Researchers at Yamagata University in Japan have harnessed AI to uncover four previously unseen geoglyphs — images on the ground, some as wide as 1,200 feet, made using the land’s elements — in Nazca, a seven-hour drive south of Lima, Peru.

The geoglyphs — a humanoid, a pair of legs, a fish and a bird — were revealed using a deep learning model, making the discovery process significantly faster than traditional archaeological methods.

The team trained its deep learning model on an IBM Power Systems server with an NVIDIA GPU.

Using open-source deep learning software, the researchers analyzed high-resolution aerial photographs to flag candidate geoglyphs, as part of a study that began in November 2019.
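The article does not name the specific software or model architecture, so the following is only a minimal sketch of the general approach, assuming a small convolutional classifier applied to fixed-size tiles of an aerial photograph. The architecture, tile size, and class names here are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: score fixed-size tiles of an aerial image as
# "geoglyph candidate" vs. "background" with a small CNN. Hypothetical
# architecture and parameters; not the study's actual pipeline.
import torch
import torch.nn as nn

class GeoglyphTileClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Three small conv blocks followed by a binary classification head.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # 2 classes: background / candidate

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def score_tiles(image, model, tile=256, stride=128):
    """Slide a window over a (3, H, W) aerial image and return
    (top, left, candidate_probability) for each tile, best first."""
    model.eval()
    results = []
    _, h, w = image.shape
    with torch.no_grad():
        for top in range(0, h - tile + 1, stride):
            for left in range(0, w - tile + 1, stride):
                patch = image[:, top:top + tile, left:left + tile].unsqueeze(0)
                prob = torch.softmax(model(patch), dim=1)[0, 1].item()
                results.append((top, left, prob))
    return sorted(results, key=lambda r: r[2], reverse=True)

if __name__ == "__main__":
    # A random tensor stands in for a high-resolution aerial photograph.
    aerial = torch.rand(3, 1024, 1024)
    model = GeoglyphTileClassifier()
    print(score_tiles(aerial, model)[:5])
```

In a workflow like the one the study describes, the highest-scoring tiles would then be reviewed by archaeologists and confirmed through onsite surveys.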

Published this month in the Journal of Archaeological Science, the study confirms the deep learning model’s findings through onsite surveys and highlights the potential of AI in accelerating archaeological discoveries.

The deep learning techniques that are the hallmark of modern AI are used in a variety of archaeological efforts, whether analyzing ancient scrolls discovered across the Mediterranean or categorizing pottery sherds from the American Southwest.

The Nazca Lines, a series of ancient geoglyphs dating from 500 B.C. to 500 A.D. (most likely from 100 B.C. to 300 A.D.), were created by removing darker stones on the desert floor to reveal the lighter-colored sand beneath.

The drawings — depicting animals, plants, geometric shapes and more — are thought to have had religious or astronomical significance to the Nazca people who created them.

The discovery of these new geoglyphs indicates the possibility of more undiscovered sites in the area.

It also underscores how technologies like deep learning can enhance archaeological exploration, offering a more efficient way to uncover hidden sites.

Read the full paper.

Featured image courtesy of Wikimedia Commons.
