Lossfunk is a new AI lab that aims to be a cosy home for independent researchers. We aim to be a curiosity-driven alternative to academia and industry. As the founder of the lab, I wanted to share my thoughts on what doing good science means with all incoming researchers, so that we're aligned on culture and values.
I’m sharing it here on our newsletter with the aim of sparking discussion. “Good science” means different things to different people, and what you’ll read below is my personal opinion.
Let me know your thoughts if you have an additional perspective or a nuance that I missed.
What does good scientific research look like?
Good science is about discovering knowledge that’s:
a) Novel: i.e., it hasn’t been explored before
b) Impactful: i.e. once discovered and communicated, it guides future efforts of a community for years to come
And how do you discover something that’s novel and impactful?
By being deeply curious about a topic and asking big questions in it
Deep curiosity is important because greatness cannot be planned; it can only be stumbled upon. So, deep curiosity about a domain is the only way you’ll build enough understanding and mental models to start asking interesting, non-trivial questions at the edge of what we collectively know as humans.
Common pitfalls in AI research
[Pitfall #1] Not reading everything you can on a given domain
You should only stop reading (relevant) papers in your domain when your net new knowledge in that domain starts approaching zero. Unless you know almost everything a specific research community knows, you risk asking questions that have already been answered. (And remember: good science is about novelty first.)
But more important than novelty is impact, and impact requires deep background knowledge. Without a deep understanding of a domain, you simply won’t appreciate what problems exist in that field, how previous approaches have fallen short, and which gaps remain unaddressed.
It takes a lot of effort, but a deep hunger for reading papers in your field is an absolute must.
Also, when you read, read actively. Make notes. Summarise key ideas. Write down what the authors missed and what you can improve upon.
If it helps, here’s an example of a paper reading log that I maintained when I dived into world models in reinforcement learning.

How I do reading on a topic:
I run a deep research query on the topic (via any deep research system: OpenAI, Gemini, Claude, etc.)
I ask the system to prioritise arXiv links and academic research
I read papers and then try to summarise key ideas from memory
As I read more and more papers, patterns emerge and I start noticing problems and gaps in the field
I often use semantic search for new papers via searchthearxiv.com
I maintain a Google Sheet of all experiment/hypothesis ideas as my understanding of the field improves
Remember: without a deep understanding of a field, you risk doing what has been done before and you won’t notice big, unsolved problems in that field
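To make the idea log concrete, here’s a minimal sketch of what such a log can capture. The columns and the single entry are invented for illustration (the original log was a Google Sheet, not code):

```python
import csv
import io

# Hypothetical columns for an experiment/hypothesis log like the
# Google Sheet described above; the entry below is illustrative only.
fieldnames = ["paper", "key_idea", "gap_noticed", "status"]
rows = [
    {
        "paper": "World Models (Ha & Schmidhuber, 2018)",
        "key_idea": "Learn compressed latent dynamics; train the policy inside it",
        "gap_noticed": "Evaluation limited to simple environments",
        "status": "idea",
    },
]

# Write the log as CSV so it can be pasted into a spreadsheet.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

The exact columns matter less than the habit: every paper you read should leave behind a row that records the gap you noticed.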
[Pitfall #2] Thinking incrementally by aiming for papers, not discoveries
A common trap in research is to directly optimize for papers. I understand that the incentives to put out a paper are strong (especially in academia), but know that arXiv is full of papers that are never read or cited by anyone.
The fundamental point of research is not producing papers; rather, it is discovering new knowledge that has significant downstream impact.
Papers lie at the very end of this process of discovery. Papers are important because they force you to make precise, evidence-backed claims, so keeping them in the back of your mind is useful. But, know that a paper that is so trivial that it doesn’t change anyone’s opinion isn’t really contributing anything to our collective knowledge.
Remember: writing papers is easy, but discovering new knowledge that will have downstream impacts is what’s important.
[Pitfall #3] Confusing building things with doing science
In AI research, many people come from an engineering background. As an engineer, your primary job is to build stuff that works in the real world. And that’s hugely valuable, as our economy runs on the useful things in our lives.
But, a scientist’s mindset is different.
A scientist’s job is to make precise claims about a phenomenon and then provide evidence for those claims
The key word here is “precise”. When we’re building things, the focus is on shipping a usable artifact by whatever means possible. For that, an engineer needs to take care of lots of edge cases, develop good user interfaces, combine systems and so on.
Because the focus in engineering is on delivering usable artifacts, as an engineer doing research it’s easy to mistake what works well for a good scientific outcome. But it’s not! What works well is a good engineering outcome, not good science.
To produce a good scientific outcome, you have to carefully think about which specific part of the entire system you can make precise claims about that are both novel and impactful. And once you have these hypotheses, you need to isolate that part and do proper experiments and ablations to gather evidence for your claims.
Remember: things that work well in practice are valuable in the economy, but they’re not scientific artifacts. Science is about discovering knowledge and making precise claims about what you discovered.
If you do have a usable artifact (like a toaster), you can always isolate parts of it to do experiments and discover new phenomena (like what drives evenness of toasting, or how different heating materials interact with the bread). But while doing that, you need to ask yourself: will what you discover be impactful, or merely novel?
[Pitfall #4] Not gathering enough evidence about your claims
All great scientific discoveries come with sufficient evidence in order to be applicable in a variety of contexts.
Imagine if Darwin had only made claims about the beaks of finches, or Einstein had only talked about the behavior of light on trains. The reason Darwin waited 20 years before publishing his magnum opus is that he wanted to gather enough evidence to be convinced that evolution by natural selection was a general phenomenon that applied to all species, not just the finches he first noticed on the Galápagos Islands.
As a scientist, you are obligated to gather evidence in direct proportion to the scope of your claim
If you think your claim is widely applicable, show it in a variety of contexts. Otherwise, make very specific, narrow claims.
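One way to keep yourself honest here is to tie the wording of your claim to the contexts where it actually held. A toy sketch (dataset names and scores below are made up for illustration):

```python
# Sketch: scope a claim to the evidence you actually have.
# All names and numbers are invented for illustration.
results = {
    "dataset_a": {"ours": 0.81, "baseline": 0.78},
    "dataset_b": {"ours": 0.74, "baseline": 0.75},
    "dataset_c": {"ours": 0.88, "baseline": 0.80},
}

# Contexts where the proposed method actually beat the baseline.
wins = [name for name, r in results.items() if r["ours"] > r["baseline"]]

if len(wins) == len(results):
    # Broad claim only if it held everywhere we tested.
    claim = "Our method outperforms the baseline across all tested contexts."
else:
    # Otherwise, narrow the claim to the contexts with evidence.
    claim = f"Our method outperforms the baseline on {', '.join(sorted(wins))} only."
print(claim)
```

In this toy run the method loses on one dataset, so the honest claim is the narrow one; the broad claim would require it to win everywhere you tested.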
[Pitfall #5] Not triple-checking your results
(Added later when Adithya D Silva pointed out this important omission)
Upon getting good results, there’s a natural temptation to share them. After all, you’ve put in so much effort and run so many experiments that didn’t pan out; now that you’ve found promising results, why wouldn’t you tell everyone about them?
Well, the reason to be skeptical is that genuinely novel and impactful results that nobody has stumbled upon before are quite rare, so your prior for discovering them should be low.
It’s wise to first attribute your promising results to a mistake or misinterpretation, and then embark upon finding what “bug” caused them.
So, when you get a good result, ask your friends or colleagues to play devil’s advocate and tell you what you might have overlooked. Maybe the result comes from (unintentional) p-hacking? Maybe you got lucky with a random seed? Maybe there’s an actual bug in your code?
Remember: only when you’ve ruled out all the reasons why your promising results shouldn’t hold should you go ahead and make big claims.
(This is why having reproducible experiments is important, so others can verify your claims and find things you’ve overlooked).
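For the random-seed check in particular, here’s a minimal sketch of repeating a run across several seeds and reporting the spread before trusting a single lucky result. The experiment function and numbers are stand-ins, not anything from the post:

```python
import random
import statistics

def run_experiment(seed: int) -> float:
    """Stand-in for a real training/eval run; swap in your own."""
    rng = random.Random(seed)
    # Toy 'accuracy': base performance plus seed-dependent noise.
    return 0.70 + rng.gauss(0, 0.02)

# Never trust a single run: repeat across seeds and report spread.
seeds = [0, 1, 2, 3, 4]
scores = [run_experiment(s) for s in seeds]
mean = statistics.mean(scores)
stdev = statistics.stdev(scores)
print(f"accuracy over {len(seeds)} seeds: {mean:.3f} +/- {stdev:.3f}")
```

Fixing and reporting the seeds also makes the experiment reproducible, so colleagues playing devil’s advocate can rerun exactly what you ran.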
TLDR: How To Do Good Science
CHOOSE
Pick a domain you're curious about
|
v
READ
Read everything until no new insights
|
v
THINK
Think about what everyone is missing
|
v
HYPOTHESIZE
Put forward precise claims
|
v
EXPERIMENT
Set up careful experiments
|
v
VALIDATE
Gather more evidence in various contexts
|
v
COMMUNICATE
Write a paper
That’s it! Hope you liked the manifesto.
Please leave a comment if you have something new to add!
Paras Chopra is the founder and researcher at Lossfunk.