When AI answers get poisoned
Airfind news item
Published on March 21, 2026.
Questions have been raised in China about whether the answers given by artificial intelligence (AI) are truly objective. The World Consumer Rights Day TV show revealed that some organizations had mass-published sponsored articles, fabricated product reviews, and invented expert credentials to manipulate the data that large language models draw upon. This practice, known as generative engine optimization (GEO), is a set of techniques designed to influence what AI models retrieve, cite, and recommend. Dubbed "poisoning" the AI, it may lead to consumers buying faulty products on the strength of fabricated recommendations, legitimate businesses being squeezed out, and platforms losing the user trust their value depends on. The challenge lies in enforcement, especially when manipulated content originates elsewhere before being ingested by models. Following the broadcast, some platforms have already introduced clearer labeling for commercial content and adjusted their recommendation algorithms.