Fail, fail, fail, fail, succeed

Training Data (Part 1)


AI LLMs (large language models) generate their responses to prompts (questions) based on the data they’re trained on. So their accuracy depends on the veracity of the information they’ve “read.”
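
To make that dependence concrete, here’s a toy sketch in Python (purely illustrative, nothing like a real LLM’s internals, and the training text is made up): a tiny next-word model that can only parrot whatever text it was trained on. Feed it inaccurate text and its “answers” change accordingly.

    import random
    from collections import defaultdict

    # Toy "training data": swap accurate text for inaccurate text and the
    # model's output changes with it. (Real LLMs are vastly more complex,
    # but the dependence on training text is the same basic idea.)
    training_text = (
        "the paper reported the bridge opened in 1931 . "
        "the paper reported the bridge opened in 1931 ."
    )

    # Build a simple next-word table: word -> list of words seen after it.
    next_words = defaultdict(list)
    tokens = training_text.split()
    for current, following in zip(tokens, tokens[1:]):
        next_words[current].append(following)

    def generate(prompt_word, length=8):
        """Generate text by repeatedly sampling a word seen after the last one."""
        out = [prompt_word]
        for _ in range(length):
            candidates = next_words.get(out[-1])
            if not candidates:  # the model has never seen this word
                break
            out.append(random.choice(candidates))
        return " ".join(out)

    print(generate("bridge"))  # e.g. "bridge opened in 1931 ..."

The point of the toy: the model has no notion of truth, only of what its training text said.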

Major newspapers like The New York Times have sued OpenAI for using their reporting as part of their chatbots’ training sets.

See the problem here? The Times’s reportage, while not perfect, has at least been researched and vetted by fact-checkers. Taking that out of the AI’s training set can’t be good for the system’s accuracy…