Lossfunk Letters
Does spatial context make VLMs better game-playing agents?
And why noisy perception can make them worse.
Apr 2 • Ashish Baghel
March 2026
The Reasoning Illusion: Why LLMs Fail When the Training Data Runs Out
EsoLang-Bench — accepted to the Logical Reasoning and ICBINB workshops at ICLR 2026
Mar 19 • Aman Sharma
Making Large Language Models Speak Tulu: Structured Prompting for an Extremely Low-Resource Language
We use a structured 5-layer prompt to get GPT, Gemini and Llama to generate grammatically correct Tulu, a low-resource Dravidian language, with no…
Mar 10 • Prathamesh Devadiga
Can AI Actually Find Security Vulnerabilities?
We measured AI’s ability to discover new security flaws in the wild
Mar 3 • Ashish Baghel, Akshat Singh Jaswal, and Paras Chopra
February 2026
Are You Getting The Best Version of Your LLM?
We investigate how language and culture are entangled in LLMs
Feb 18 • Shourya
Teaching morality to transformers
We train a custom transformer architecture on MIT Moral Machine data and run interpretability experiments on it
Feb 5 • Mayank Goel
January 2026
Can an AI actually be your research mentor?
An AI research mentor that moves undergrads from "I have no idea" to a paper draft, with stage-aware guidance, tools, and measurable gains.
Jan 21 • Abhinav Rajeev Kumar
Why LLMs Aren't Scientists Yet
A case study of four attempts at autonomous research and getting an AI-written paper published at an experimental conference.
Jan 9 • Dhruv Trehan
December 2025
Dreaming Is the New Thinking
The next leap in intelligence won’t come purely from bigger models; it’ll come from machines that can imagine their own futures.
Dec 19, 2025 • Akshat Singh Jaswal
November 2025
Your LLM is a confused oracle
We show that the forecasting accuracy of LLMs depends on what you ask and how you ask it
Nov 26, 2025 • Chinmay and Paras Chopra
Future of LLMs might not be Autoregressive
An intro to the world of block diffusion
Nov 24, 2025 • Ayush Nangia and Aman Gokrani
Sequential scaling outperforms parallel scaling for LLMs
AI reasoning just got an upgrade: at the same compute cost, sequential thinking—iteratively refining ideas—beats parallel "crowdsourcing" in 95.6% of…
Nov 6, 2025 • Aman Sharma