Columbia CS Professor: Why LLMs Can’t Discover New Science

Teona Gherasim

LLMs have made tremendous progress in modeling human language. But can they go beyond that to make genuinely new scientific discoveries? We sat down with distinguished Columbia CS professor Vishal Misra to discuss this question, along with why chain-of-thought reasoning works so well and what real AGI would look like.

Timecodes:

0:00 Intro
0:32 How LLMs and humans reason through manifolds
4:15 Token prediction, entropy & confidence
8:05 Chain-of-thought reasoning and entropy