

Of late, my biggest concern is certain parties feeding LLMs a different version of history.
Search has degraded so badly that LLMs are often the better path to answering a question. But, as everyone knows, they are only as good as the data they were trained on.
Do we, as a society, move past basic search to a preference for AI to answer our questions? If we do, how do we ensure that the history fed into the models is accurate?
“It’s not a tech issue. It’s a political issue.”
It kinda is a tech issue if the output is skewed because nefarious parties are feeding the model garbage.
If they control the tech and what's being fed into it, then the process is ripe for manipulation.