Discussion about this post

Neural Foundry:

The recency bias example with the S&P 500 really resonates here - it perfectly illustrates why blindly adding context can actually hurt reasoning. That overconfident YES when the model should've stuck with mean-reversion logic is such a powerful failure mode to highlight. Makes me wonder if there's a way to weight historical context so it competes more evenly with fresh headlines?

Vishnu Vardhan:

What prompt was used for these models, though? That probably played the biggest role in all of these predictions; if the models were asked to imitate an expert and be as skeptical as a human would be after following the news for a while, the results could be very different.
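
As a rough illustration of the framing this comment suggests, here is a minimal Python sketch of a "skeptical expert" prompt. Everything in it is hypothetical: the prompt wording, the message structure, and the example question are illustrative and are not taken from the post.

```python
# A minimal sketch of a "skeptical expert" prompting setup.
# All wording and data below are hypothetical, for illustration only.

SKEPTICAL_EXPERT_SYSTEM_PROMPT = (
    "You are a veteran market analyst. You have seen many confident headlines "
    "turn out to be noise, so you weigh long-run base rates (e.g. mean reversion) "
    "at least as heavily as recent news. State your prediction as a probability "
    "and explain which evidence you discounted."
)

def build_messages(question: str, recent_headlines: list[str], history_summary: str) -> list[dict]:
    """Assemble chat-style messages that pair fresh headlines with historical context."""
    context = (
        f"Historical context: {history_summary}\n"
        f"Recent headlines: {'; '.join(recent_headlines)}\n"
        f"Question: {question}"
    )
    return [
        {"role": "system", "content": SKEPTICAL_EXPERT_SYSTEM_PROMPT},
        {"role": "user", "content": context},
    ]

if __name__ == "__main__":
    # Hypothetical example question and inputs, not from the post.
    msgs = build_messages(
        question="Will the S&P 500 close higher this week?",
        recent_headlines=["Index hits another record high", "Analysts raise targets"],
        history_summary="After comparable multi-week rallies, weekly returns have historically reverted toward the mean.",
    )
    for m in msgs:
        print(m["role"].upper(), "->", m["content"], sep="\n", end="\n\n")
```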

