Roman Grossi • Founder

Indie hacking, startups, resilient systems - and staying sane while building a small company

DeepSeek vs OpenAI: Power, Price, and the Problem of Trust


About DeepSeek

In the last few days, the entire internet has been buzzing about a new OpenAI competitor, the Chinese company DeepSeek, which claims that its models are far more powerful than anything OpenAI currently has (including o1). I have played around with their models a bit, and here is what I can say:

First, DeepSeek models really do handle most tasks better than what OpenAI (ChatGPT) offers. In particular, they produce higher-quality code, and the reasoning mode feels like it is 'thinking' much harder. Summarisation, condensation, and drawing conclusions from text are either on the same level as ChatGPT 4o / ChatGPT o1, or better. And on top of that, everything runs much faster.

Second, their API prices are not just low, they are VERY low: roughly ten times cheaper than OpenAI's.
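To get a feel for what a "ten times cheaper" gap means in practice, here is a back-of-the-envelope cost comparison. The per-million-token prices and the workload below are illustrative assumptions, not official rate cards; check each provider's pricing page for current figures.

```python
# Rough monthly cost comparison between two API providers.
# All prices and token volumes below are assumed, illustrative figures.

def monthly_cost(input_tokens, output_tokens, price_in, price_out):
    """Cost in USD, given token counts and per-million-token prices."""
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Hypothetical workload: 50M input tokens, 10M output tokens per month.
workload = dict(input_tokens=50_000_000, output_tokens=10_000_000)

# Assumed prices (USD per million tokens) for a GPT-4o-class model
# versus a DeepSeek-class model.
openai_cost = monthly_cost(**workload, price_in=2.50, price_out=10.00)
deepseek_cost = monthly_cost(**workload, price_in=0.27, price_out=1.10)

print(f"OpenAI-class:   ${openai_cost:,.2f}")
print(f"DeepSeek-class: ${deepseek_cost:,.2f}")
print(f"Ratio: {openai_cost / deepseek_cost:.1f}x")
```

Under these assumed numbers the ratio lands close to an order of magnitude, which is the kind of gap that changes what products are economically viable to build on top of an API.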

Now to the downsides:

1. There is no memory support (or I could not find it). For me this is currently the killer feature of ChatGPT, and it is the only reason I am willing to stay with it for as long as it takes a full competitor to appear.

2. It is a Chinese company subject to Chinese censorship. For example, it is almost impossible to get information about Tiananmen Square and the massacre carried out there by the communist government. Information about Taiwan is also heavily distorted. You also will not be able to find out that Xi Jinping is Winnie the Pooh. Given all this, the privacy of your conversations is a serious concern.

Sam Altman has become especially animated in recent days, saying he will soon present something truly revolutionary: not AGI, of course, but something much better than anything available today. The competition will be interesting to watch.

I want to raise the topics of privacy and trust on the internet once again. Treat all platforms with maximum scepticism, and always imagine how damaging it would be if some piece of information you share with a platform were later used against you. This is one reason I use ChatGPT Team rather than ChatGPT Plus: OpenAI claims that conversations there will not be used to train new or existing models. There is, of course, a caveat: this still rests on a basic level of trust, and personally I trust American corporations slightly more than Chinese ones.
