Large language models (or generative pre-trained transformers, GPTs) need more reliable factual-accuracy checks before they can be trusted for Search.
These models excel at creative applications such as storytelling, art, and music, and at generating privacy-preserving synthetic data. They fail, however, at consistent factual accuracy: AI hallucinations and the limits of transfer learning affect ChatGPT, Bing Chat, and Google Bard alike.
First, let’s define what AI hallucinations are. A hallucination occurs when a large language model generates information that is not grounded in factual evidence but is instead shaped by biases in its transformer architecture or by erroneous decoding. In other words, the model makes up facts, which is problematic in domains where factual accuracy is critical.
Ignoring consistent factual accuracy is dangerous in a world where accurate, reliable information is paramount to combating misinformation and disinformation.
Search companies should reconsider “re-inventing search” by mixing Search with unfiltered GPT-powered chat modalities to avoid potential harm to public health, political stability, or social…