Like humans, ChatGPT favours examples and ‘memories’ – not rules – to generate language


A new study published in PNAS, led by researchers at the University of Oxford and the Allen Institute for AI (Ai2), has found that large language models (LLMs) – the AI systems behind chatbots like ChatGPT – generalise language patterns in a surprisingly human-like way: through analogy rather than strict grammatical rules. The research challenges a widespread assumption about LLMs: that they learn to generate language primarily by inferring rules from their training data. Instead, the findings suggest that, much like humans, LLMs draw on stored examples – 'memories' of language they have encountered – and generalise to new words by analogy with those examples.
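To make the contrast concrete, here is a minimal, purely illustrative sketch (not the study's actual model or task): a rule-based generaliser that always applies one fixed pattern, versus an analogy-based one that copies the behaviour of the most similar stored example. The word list, the noun-forming task, and the string-similarity measure are all hypothetical choices for this example.

```python
# Hypothetical illustration of rule-based vs analogy-based generalisation.
# Task (assumed for this sketch): derive a noun from a novel adjective.
from difflib import SequenceMatcher

# Stored 'memories': adjectives paired with their attested noun forms.
MEMORY = {
    "happy": "happiness",
    "dark": "darkness",
    "scarce": "scarcity",
    "pure": "purity",
}

def rule_based(adjective: str) -> str:
    """A strict rule: always derive the noun with '-ness'."""
    return adjective + "ness"

def analogy_based(adjective: str) -> str:
    """Copy the suffix used by the most similar stored example."""
    nearest = max(
        MEMORY,
        key=lambda known: SequenceMatcher(None, adjective, known).ratio(),
    )
    if MEMORY[nearest].endswith("ity"):
        # Drop a final 'e' before '-ity' (obscure -> obscurity).
        stem = adjective[:-1] if adjective.endswith("e") else adjective
        return stem + "ity"
    return adjective + "ness"

print(rule_based("obscure"))     # -> obscureness (the rule overgeneralises)
print(analogy_based("obscure"))  # -> obscurity (by analogy with 'scarce')
```

The analogy-based version produces different outputs depending on which stored examples a novel word resembles, which is the kind of example-driven behaviour the article attributes to both humans and LLMs.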
