Do LLMs know when they've gotten a correct answer?
We show they do, and then use it to help cut reasoning cost (in tokens) by up to 50% without losing accuracy.