Lossfunk Letters
Exploring stochastic parrots 🦜 until they become self-aware
Making Large Language Models Speak Tulu: Structured Prompting for an Extremely Low-Resource Language
We use a structured 5-layer prompt to get GPT, Gemini and Llama to generate grammatically correct Tulu, a low-resource Dravidian language, with no…
Mar 10 • Prathamesh Devadiga
Can AI Actually Find Security Vulnerabilities?
We measured AI’s ability to discover new security flaws in the wild
Mar 3 • Ashish Baghel, Akshat Singh Jaswal, and Paras Chopra
Are You Getting The Best Version of Your LLM?
We investigate how language and culture are entangled in LLMs
Feb 18 • Shourya
Most Popular
What is research and how to do it?
Aug 12, 2025 • Paras Chopra
Practical tips on building LLM agents
Jul 22, 2025 • Paras Chopra
Manifesto for doing good science in AI
Jul 7, 2025 • Paras Chopra
What's the point of doing research?
Oct 17, 2025 • Paras Chopra
From an idea to an ML paper in under 1 hour
Jul 1, 2025 • Paras Chopra
Dreaming Is the New Thinking
Dec 19, 2025 • Akshat Singh Jaswal
Latest
Teaching morality to transformers
We train a custom transformer architecture on MIT Moral Machine data and run interpretability experiments on it
Feb 5 • Mayank Goel
Can an AI actually be your research mentor?
An AI research mentor that moves undergrads from "I have no idea" to a paper draft, with stage-aware guidance, tools, and measurable gains.
Jan 21 • Abhinav Rajeev Kumar
Why LLMs Aren't Scientists Yet
A case study of four attempts at autonomous research, and how an AI-written paper got published at an experimental conference.
Jan 9 • Dhruv Trehan
Dreaming Is the New Thinking
The next leap in intelligence won't come purely from bigger models; it'll come from machines that can imagine their own futures.
Dec 19, 2025 • Akshat Singh Jaswal
Your LLM is a confused oracle
We show that the forecasting accuracy of LLMs depends on both what you ask and how you ask it
Nov 26, 2025 • Chinmay and Paras Chopra
Future of LLMs might not be Autoregressive
Intro to the world of block diffusion
Nov 24, 2025 • Ayush Nangia and Aman Gokrani
Sequential scaling outperforms parallel scaling for LLMs
AI reasoning just got an upgrade: at the same compute cost, sequential thinking (iteratively refining ideas) beats parallel "crowdsourcing" in 95.6% of…
Nov 6, 2025 • Aman Sharma
Notes on Tiny Recursion Network
aka how a 7M parameter network gets SOTA on Sudoku-Extreme with 87% accuracy
Oct 31, 2025 • Paras Chopra
Do LLMs know when they've gotten a correct answer?
We show they do and then use it to help cut reasoning cost (in tokens) by up to 50% without losing accuracy
Oct 29, 2025 • Aman Sharma