Lossfunk Letters
Your LLM is a confused oracle
We show that the forecasting accuracy of LLMs depends on what you ask and how you ask it
Nov 26 • Chinmay and Paras Chopra
Future of LLMs might not be Autoregressive
An intro to the world of block diffusion
Nov 24 • Ayush Nangia and Aman Gokrani
Sequential scaling outperforms parallel scaling for LLMs
AI reasoning just got an upgrade: at the same compute cost, sequential thinking (iteratively refining ideas) beats parallel "crowdsourcing" in 95.6% of…
Nov 6 • Aman Sharma
October 2025
Notes on Tiny Recursion Network
aka how a 7M-parameter network gets SOTA on Sudoku-Extreme with 87% accuracy
Oct 31 • Paras Chopra
Do LLMs know when they've gotten a correct answer?
We show they do, then use that signal to cut reasoning cost (in tokens) by up to 50% without losing accuracy
Oct 29 • Aman Sharma
How do LLMs "think" across languages?
LLM performance on reasoning tasks differs by language, and this difference varies from task to task.
Oct 28 • Shourya
What's the point of doing research?
The fun of the struggle is the point
Oct 17 • Paras Chopra
September 2025
How to choose research problems
TL;DR: balance what your heart says against what the community will value
Sep 10 • Paras Chopra
Don't tell LLMs you are an AI safety researcher
Adding "for AI safety research" increased refusals on a harmless paraphrasing task for some top models. Conservative, aligned public models might be…
Sep 5 • Dhruv Trehan
August 2025
Tips on writing your first research paper
aka how to increase your odds of acceptance at an AI/ML conference
Aug 29 • Paras Chopra
What is research and how to do it?
or, can you teach an AI how to do good research?
Aug 12 • Paras Chopra
Notes on Hierarchical Reasoning Model
aka what can we learn from the brain!
Aug 6 • Paras Chopra