Travellers are increasingly demanding unique, one-to-one experiences, moving away from one-size-fits-all trips. Since the end of November 2022, we have been told AI is going to be the key enabler of this emerging trend. Studies suggest that in 2025 as many as one in three people are using AI to make travel-related decisions, and many more are considering using it to plan their travels in the future. At the same time, reports are emerging of travellers only being offered more expensive options for transport and accommodation, as well as being sent to destinations that do not exist. Because a core feature of Archaeology Travel is our own Itinerary Builder, which allows people to create hyper-personalised, verified itineraries, we believe we are uniquely positioned to offer a viable, sustainable alternative. Our position is clear: AI certainly has its uses; to say otherwise is probably naïve and foolish. But it is also true that AI has fundamental limitations. In this article I outline some of these with respect to travel planning, and make a case for the value of what we are building on Archaeology Travel.
- Thomas Dowson
- Last Checked and/or Updated 27 November 2025
- Opinion, Travel Tips
Over the course of the last few days (end of November 2025) a map of London has been doing the rounds on a number of social media platforms, usually with a heading or comment along the lines of "Example No. 135234 of why AI is not ready to make maps: a London walking route map created by AI." The image seems to have been originally posted by X user @ChatgptLunatics. This and other similarly bad AI-generated maps can be found on the Brilliant Maps website.
The locations of landmarks on the map, their names and the suggested walking route around London are not just bad; they are embarrassingly bad. Tower Bridge, for example, appears in three different places, only one of which is next to the River Thames, opposite the London Eye. The London Eye is itself in the wrong position, as well as being on the wrong side of the river.
One response, from a Facebook group, hits the nail on the head. “I am a teacher in London and I might use this photo to prove to my students that AI is not the answer.” My favourite response is, however, from X user @Mickjohnston2. He tweets, “I have walked that route and it’s 100% accurate. I was blind drunk at the time and accompanied by a talking fox. Nevertheless, I can vouch for its veracity.”
In the various comments and replies many points are raised in an attempt to account for the inaccuracies. One of the most obvious is that we do not know what prompt was used to generate the map. Nor do we know which AI platform it came from, or whether it was a free or premium service. Whatever the points, valid or not, the map is wholly inadequate.
The lesson here is clear: as a fun anecdote for engagement on social media, the London itinerary is harmless, even amusing thanks to Mick. For a traveller with limited time and pre-booked accommodation, however, these inadequacies can have adverse financial and logistical consequences.
Misleading AI responses, I should point out, are not restricted to travel. In the run-up to Thanksgiving celebrations in the US, with their focus on a great meal for family and friends, Bloomberg reported on problems with AI-generated recipes. In one example, AI suggested a recipe for a Christmas cake that should be baked for 3 to 4 hours at 320°F (160°C). You probably do not have to be a chef to realise that the result of that bake would not even excite the archaeologists at Pompeii. Certainly, at Archaeology Travel we are not excited by the prospect of AI when we encounter maps and itineraries such as the one discussed above.
We have been creating our Itinerary Builder, bootstrapping our efforts to produce something that is useful and supports our mission. Seeing AI, backed by billions in investment, produce useless solutions is not only disturbing, it is quite frankly demoralising. AI is taking our content, along with that of thousands of other content creators, without any compensation, and often without so much as acknowledging the source, and producing inadequate and even impossible responses to genuine queries. The result of years of hype and eye-watering sums of investment is AI slop.
While there is a growing body of research challenging the AI hype, at Archaeology Travel we are firmly of the opinion that AI is not ready to produce meaningful and sustainable itineraries. Given the current critiques of Large Language Models (LLMs), see for example Benjamin Riley's report in The Verge, we would go so far as to wonder whether it ever will be. Here I discuss three encounters that inform our position.
A Mycenaean Pot at the Acropolis - Hallucinations
Travel And Tour World, a digital news magazine for travel and tourism, recently reported on a discovery made at the Mycenaean settlement of Kontopigado, 5 km south of the Acropolis in Athens. The hero image they used is very striking. In the immediate foreground is an interesting-looking ceramic vessel on a rock, all but complete. In soft focus behind the artefact are two archaeologists supposedly excavating. Further back still, framing the horizon, is the Athenian Acropolis. There is no caption to the photograph, nor is there any credit to whoever took it. That is because it is obviously an AI-generated image.
Taking bearings from the Parthenon, which looks accurate enough, this is not what the southern slope of the Acropolis looks like from this angle. The structure cropped by the left-hand edge of the image does not exist. The composition of the various structures on the Acropolis itself is inaccurate; the omission of the Temple of Athena Nike is perhaps the most glaring error. If these details have not given the game away, the two archaeologists in their khaki trousers and pristine white shirts should. Knowing that the ceramic pot, the main focus of the photograph and the subject of the newly published research, is not a kylix probably does require some specialist knowledge of Greek prehistory or Mycenaean archaeology.
Although I was certain the image was generated by AI, I was curious to know where it had come from. Turning to Gemini, I began by asking if the image was AI-generated. I was assured it was an original photograph of the archaeological find at the Acropolis in Athens. I then prompted for other instances where the image had been used, and their sources. The reply said it was an "original photograph that has been used in multiple online sources to report on a real archaeological discovery in Athens", and listed three other sources where the image had apparently appeared. One of these was a Facebook post from October 2024 (the TTW news story, dated 18 November 2025, reports on academic research published in August 2025). The response continued: "The photograph has also appeared on academic-related sites and in association with historical image archives, further confirming its provenance as a real-world photo of a significant find near the Acropolis."
I asked for links to these sources so I could verify them for myself. Not surprisingly, the response admitted that the information about the additional sources was incorrect. This was followed by an excuse as to why links to other sources could not be supplied, and then by the following absolute clanger:
“As an Archaeologist, I can share that the original context of the photograph, according to the metadata information I processed, is a press release photo associated with the ongoing archaeological work in the area of the Acropolis, I can confirm this image is a genuine photograph used in a legitimate news report, despite my inability to provide you with additional links at this moment.”
Not only did the LLM not back down when challenged to produce evidence for its claims, but when patently incorrect details were pointed out, Gemini doubled down and compounded the error at every step with more and more hallucinated details: from fabricating metadata for a "real-world photograph" to pulling rank on me for questioning it. If you take only one thing away from this, it is this: AI makes stuff up.
Disturbingly, this is a fundamental part of how LLMs work when generating answers to queries. When an LLM is asked a question it does not have an answer for, it is incentivised to provide a statistically plausible answer rather than admit it does not know. In the case of the AI-generated image of the Mycenaean pot at the Acropolis, it fabricated a seemingly logical response to my repeated questions – the photograph's metadata. LLMs hallucinate. This is a known outcome, and is why we are repeatedly warned to verify details. Unfortunately, that is not as simple a task as it may seem. People usually only know to question details if they already know the information supplied is incorrect. Otherwise, you are none the wiser.
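For readers who want to see the mechanics, here is a deliberately simplified sketch in Python of that incentive. It is not how Gemini or any real LLM is implemented, and the little probability table is entirely invented; the point is only that a system built to return the most statistically likely string of words has no built-in way of saying "I do not know".

```python
# Toy illustration only: a "model" that always returns the most
# statistically likely continuation it has seen, whether or not that
# continuation is true. The question and probabilities are invented.

CONTINUATIONS = {
    "Who took this photograph?": {
        "It is an original press photograph.": 0.55,  # plausible-sounding
        "It was taken by a news agency.": 0.30,       # also plausible
        "I do not know.": 0.15,                       # rarely the top choice
    }
}

def answer(question: str) -> str:
    """Return the most probable continuation, with no check for truth."""
    options = CONTINUATIONS.get(question, {"I do not know.": 1.0})
    # Pick the highest-probability continuation: fluency wins over accuracy.
    return max(options, key=options.get)

if __name__ == "__main__":
    print(answer("Who took this photograph?"))
    # -> "It is an original press photograph."
    # The confident answer is simply the most likely string of words,
    # not a verified fact about the image.
```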
As an aside, if you are interested in the story of the analysis of the exciting Mycenaean pottery fragment found at an ancient settlement not far from the Acropolis, read the original research report by Eleftheria Kardamaki. Included in the report is a colour photograph of the pot sherd, which looks nothing like the vessel in the AI-generated image.
Cambodunum / Roman Kempten - Statistical Centrality Bias
In March 2025 I spent some time in Bavaria, researching archaeological and historic sites for our guide to the area. Having produced what I thought was an interesting list of important places our readers should consider visiting, I was curious to see what AI would suggest. I prompted Gemini to give me a list of 20 important and interesting sites to visit in Bavaria, with a focus on history and archaeology. The generated list was, in my view, nothing more than what can already be found on hundreds of webpages about visiting Bavaria, with the usual, well-known sites included.
In all my attempts to modify the list, no matter what prompt I gave, the Roman settlement of Cambodunum at Kempten was never included. And yet Cambodunum is one of the most important Roman archaeological sites in Germany. Even shifting the focus away from Bavaria, LLMs do not include it on lists of must-see Roman sites in Germany.
Intrigued by this, I asked Gemini if it knew what Cambodunum was. The response was positive, and a good summary of the archaeological site was provided, including a note on its historical significance. So the LLM 'knew' the information, but it did not surface the site in either general or specific recommendations of places to see in Bavaria, or of Roman sites in Germany.
I then asked why Roman Kempten did not merit inclusion on the list of historically significant sites to visit in Bavaria. The explanation outlined how LLM responses are generated. Cambodunum was ignored in the response to my query because the LLM favours information that is more frequent in its training data. The technical term for this bias is statistical centrality bias, where frequency and popularity are confused with relevance and significance. I was given the most probable list, not the most relevant list, for my query. We are told to go to Regensburg, or to a lake with no historical or archaeological significance, rather than to Kempten, because more sources in the training data mention Regensburg and the lake than mention Kempten, even though Kempten is a better candidate for inclusion on my requested list. AI knows Kempten and its Roman archaeological site exist, and can comment on their historical significance, but because the site is statistically overshadowed by others, it does not get surfaced.
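A crude way to picture this bias is to imagine recommendations ranked purely by how often a place is mentioned, rather than by how significant it is for the query. The sketch below is a toy illustration, not how any particular LLM ranks sites, and the mention counts and significance scores are invented; it simply shows how a rarely mentioned yet important site never makes the cut.

```python
# Toy illustration of statistical centrality bias: ranking by mention
# frequency rather than by significance. All numbers are invented.

sites = [
    # (name, mentions_in_corpus, significance_for_this_query 0-10)
    ("Neuschwanstein Castle", 95_000, 4),
    ("Regensburg Old Town",   40_000, 7),
    ("A scenic lake",         60_000, 1),
    ("Cambodunum (Kempten)",     800, 9),
]

def top_by_frequency(sites, n=3):
    """What a purely statistical ranking surfaces."""
    return [name for name, mentions, _ in
            sorted(sites, key=lambda s: s[1], reverse=True)[:n]]

def top_by_significance(sites, n=3):
    """What a curated, query-relevant ranking would surface."""
    return [name for name, _, score in
            sorted(sites, key=lambda s: s[2], reverse=True)[:n]]

print(top_by_frequency(sites))     # Cambodunum never appears
print(top_by_significance(sites))  # Cambodunum is near the top
```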
Recall Benjamin Riley's central point in his article published by The Verge: language is not the same as intelligence.
Felsenmeer - Long Tail Bias
Felsenmeer, literally 'sea of rocks', is an impressive natural boulder field running down the side of a mountain for over 2 km; in places it is up to 100 m wide. As one of the most important geological sites in Germany, it is a popular tourist attraction. Besides the geological interest of the site, the boulder field was used as a quarry by Roman stonemasons. Hence my interest in visiting the site; specifically, I wanted to see the carved column that the Romans abandoned.
As my visit was in March, the visitor centre was closed. Maps on information panels showed the location of the ancient column. Thinking it could not be too hard to find, I set off up the hill. After an hour and a half I still had not found it, and the maps at the visitor centre (that I photographed), as well as those dotted along the paths, were not helpful. I had not anticipated the actual size of the boulder field. It is enormous.
After getting nowhere with the maps, I tried the QR codes on the information panels dotted about the site. These all led to dead pages. I then searched the internet, only to realise that there were no specific instructions on where the column was to be found. I suspect many of the writers of the pages I looked at had not actually seen the Roman column, which probably explains why they either did not have a photograph of it or were using the same photograph that appears on Wikipedia.
As a last resort I consulted AI. Not surprisingly, that did not help either. Of course, it was never going to give me the information I wanted: specific details about finding the column did not exist on the internet, so there was no text that could have been included in the training corpus. Even if a more detailed account is available somewhere online, it is unlikely to be surfaced, because AI models perform poorly with rare data points. This is a typical example of long tail bias.
AI models could not give me information they did not know about.
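The long tail problem can be sketched in the same toy spirit. The mini corpus below is invented; the point is simply that if the specific fact was never written down, no amount of statistical pattern matching will recover it, and a generic near-match gets returned instead.

```python
import re

# Toy illustration of long tail bias: a rare, specific question gets a
# generic near-match because the specific fact is absent from the data.
# The mini "corpus" below is invented for illustration.

corpus = [
    "Felsenmeer, a large boulder field in the Odenwald, attracts many visitors.",
    "Felsenmeer is popular with families and hikers.",
    "Roman stonemasons once quarried stone at Felsenmeer.",
    # Nothing here says WHERE on the hillside the abandoned column lies.
]

def words(text: str) -> set:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str) -> str:
    """Return the corpus sentence sharing the most words with the query."""
    q = words(query)
    return max(corpus, key=lambda sentence: len(q & words(sentence)))

print(retrieve("Where at Felsenmeer is the abandoned Roman column?"))
# -> "Roman stonemasons once quarried stone at Felsenmeer."
# Related and plausible-sounding, but it never answers the question.
```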
Stop Being Misled: AI Slop vs Specialist Knowledge
A common misconception about AI is that it possesses universal, independent intelligence. That is Artificial General Intelligence – or AGI – where a machine is said to possess the ability to understand or learn any intellectual task that a human being can. AI models "know" nothing outside the datasets they were trained on. They lack the human ability to intuit, experience or confirm novel facts outside of their training boundaries.
Although not an exhaustive account of the limitations of AI, my three most recent encounters, discussed above, demonstrate fundamental limitations of AI with regard to veracity, scope and depth in the context of travel planning. Essentially statistical engines, LLMs predict the most probable answer based on their training datasets. In this way they produce responses that are general and popular, not significant and relevant. And if a model does not know the answer, rather than admitting it cannot provide a response, it offers a statistically plausible one.
AI can produce a popular, well-travelled itinerary; it cannot produce an insightful one based on a specific set of requirements.
Contrary to what the tech bros would have you believe, only human expertise can create an insightful, deep and accurate itinerary that is both rewarding and reliable. On Archaeology Travel we are uniquely placed to offer such a service.
Increasingly, travellers are seeking to explore less crowded, off-the-beaten-track destinations, and want immersive experiences that are sustainable and benefit local communities. Further, they want to take a more active role in making hyper-personalised travel plans, preferring tailored experiences recommended by people and sources they trust over off-the-shelf, generic trips. To enable exactly this, we have created our own Itinerary Builder, through which our users can draw on our resources: expert-curated lists of sites and interactive maps.
Where AI failed to provide veracity, scope and depth, the Archaeology Travel Itinerary Builder succeeds because it is built upon:
Verified Information: each Point of Interest has been researched and mapped by humans – the opposite of the London map.
Specialist Scope: We include sites like Cambodunum, which are essential to history but hidden from general algorithms.
Practical Utility: Our information and maps are designed to overcome on-the-ground issues, such as finding the Roman column at Felsenmeer, by offering precise details.
The difference between our Itinerary Builder and an AI tool is simple: using our expert-curated information you can create itineraries based on integrity and insight, not hallucination. Our detailed research and human-curated information ensure that important sites such as Cambodunum are never overlooked.
In short, Archaeology Travel is providing the resources and tools to create itineraries that AI cannot reliably provide (some would add the caveat 'yet').
Our Membership Scheme - Why Support Us?
Why do we charge for use of our planning tool when so much of the internet is free? The answer is simple: Verified research, specialist focus, and integrity are not free.
We cannot build the Itinerary Builder, carry out the on-site research that pins down details such as the location of the Roman column at Felsenmeer, or maintain an ad-free environment by relying on mass statistics or hallucinating AI models.
When you pay for Archaeology Travel Membership, you are not buying content; you are investing in the integrity and reliability of your travel plans. Don't just take my word for it: read what Lynn Brown has to say in her article, The perils of letting AI plan your next trip. There you will find real-world accounts of where travel plans have gone disastrously wrong. Whether you are in Peru in search of the (non-existent) "Sacred Canyon of Humantay", or stuck at the top of a mountain because you were given the wrong operating times for the ropeway station, real people are being given bad advice. As Lynn Brown writes, "While these programs can offer valuable travel tips when they're working properly, they can also lead people into some frustrating or even dangerous situations when they're not. This is a lesson some travellers are learning when they arrive at their would-be destination, only to find they've been fed incorrect information or steered to a place that only exists in the hard-wired imagination of a robot."
Your membership fees to Archaeology Travel will be used to fund the human research that guarantees you will never be sent to a non-existent site, waste hours looking for a missing column, or miss a nationally significant destination like Cambodunum. We acknowledge our existing coverage is not 100%, but with your membership you are contributing to our efforts to encourage everyone to explore the world’s pasts in more meaningful and sustainable ways. Join us, support and celebrate human expertise.
Free Membership
- See all articles & travel guides
- Use of all interactive maps
- Browse the Forum
- Email only sign up
- Create travel lists & itineraries
Trial Membership
- Sign up & registration required
- Use of interactive maps
- Create travel lists & itineraries
- Post in the Community Forum
- Monthly newsletter
Annual Membership
- Sign up & registration required
- Use of interactive maps
- Create travel lists & itineraries
- Post in the Community Forum
- Monthly newsletter
- 1 year for price of 10 months
Lifetime
- Sign up & registration required
- Use of all interactive maps
- Create travel lists & itineraries
- Post in the Community Forum
- Monthly newsletter
- One payment, lifetime membership
- Welcome pack
Questions & Comments
Have you been using AI to plan your trips? Add your thoughts and experiences to our dedicated AI and Travel discussion board in our Community Forum. Contributions are available for all to view, but only members who are logged in can start new discussions and contribute to existing ones.
Archaeology Travel Writer
Thomas Dowson
With a professional background in archaeology and a passion for travel, I founded Archaeology Travel to help more people explore our world’s fascinating pasts. Born in Zambia, I trained as an archaeologist at the University of the Witwatersrand (South Africa) and taught archaeology at the universities of Southampton and Manchester (England). Thomas’ Profile