As reported in The Decoder, Chinese AI provider Deepseek has released data demonstrating that AI language models could yield substantial profit margins, even when priced significantly below competitors like OpenAI. This transparency offers a rare glimpse into the operational costs and potential profitability of AI services, suggesting a theoretical profit margin of 545% if Deepseek fully monetized its offerings while maintaining its open-source strategy.
Deepseek’s analysis, based on a 24-hour test period, highlights the effectiveness of its resource management. The company processed 608 billion input tokens and 168 billion output tokens, with 56.3 percent of inputs served from a cache, significantly reducing costs. It employs a dynamic resource allocation system, dedicating all server nodes to inference during peak hours and redirecting them to research and training during off-peak periods.
The hardware infrastructure, averaging 226.75 server nodes with eight Nvidia H800 GPUs each, costs $87,072 per day at an estimated leasing rate of $2 per GPU per hour. Each H800 node processes approximately 73,700 input tokens per second during prefill and 14,800 output tokens per second during decoding, with an average output speed of 20 to 22 tokens per second.
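As a rough sanity check, the daily cost figure follows directly from the numbers quoted above. The sketch below simply multiplies them out; it is an illustration based on the article's figures, not Deepseek's own accounting.

```python
# Back-of-the-envelope check of the reported daily hardware cost,
# using only the figures quoted in the article.

NODES = 226.75          # average number of H800 server nodes in use
GPUS_PER_NODE = 8       # Nvidia H800 GPUs per node
GPU_HOUR_COST = 2.00    # assumed leasing cost in USD per GPU per hour
HOURS_PER_DAY = 24

daily_cost = NODES * GPUS_PER_NODE * GPU_HOUR_COST * HOURS_PER_DAY
print(f"Estimated daily GPU cost: ${daily_cost:,.0f}")  # -> $87,072
```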

If Deepseek charged full price for every processed token at its premium R1 model rates ($0.14 per million input tokens for cache hits, $0.55 per million for cache misses, and $2.19 per million output tokens), daily revenue would theoretically reach $562,027. In practice, the company acknowledges that revenues are lower because its standard V3 model is priced below R1, many services are offered for free, and it grants nightly discounts. Currently, only API access generates revenue.
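The quoted token counts, cache hit rate, and R1 prices are enough to roughly reproduce both the theoretical revenue and the 545 percent margin. The snippet below is a minimal sketch using only numbers from the article; small rounding differences from the reported $562,027 are expected, since the exact cache-hit token count is not given.

```python
# Illustrative recomputation of the theoretical daily revenue and profit
# margin from the figures quoted in the article.

INPUT_TOKENS = 608e9        # input tokens processed per day
OUTPUT_TOKENS = 168e9       # output tokens generated per day
CACHE_HIT_RATE = 0.563      # share of input tokens served from cache

PRICE_HIT = 0.14    # USD per million input tokens (cache hit, R1 rate)
PRICE_MISS = 0.55   # USD per million input tokens (cache miss, R1 rate)
PRICE_OUT = 2.19    # USD per million output tokens (R1 rate)

DAILY_COST = 87_072  # estimated daily GPU leasing cost (see above)

revenue = (
    INPUT_TOKENS * CACHE_HIT_RATE * PRICE_HIT
    + INPUT_TOKENS * (1 - CACHE_HIT_RATE) * PRICE_MISS
    + OUTPUT_TOKENS * PRICE_OUT
) / 1e6

margin = revenue / DAILY_COST - 1
print(f"Theoretical daily revenue: ${revenue:,.0f}")  # ~ $562,000
print(f"Theoretical profit margin: {margin:.0%}")     # ~ 545%
```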
This transparency illuminates the growing commoditization of AI services. While substantial profit margins are theoretically possible, market competition, tiered pricing, and the need to offer free services significantly erode actual profits. This contrasts sharply with OpenAI’s recent pricing strategy for GPT-4.5, which commands premium prices far above its predecessors and competitors like Deepseek, despite only modest performance improvements.
Deepseek’s data suggests that language models are becoming commodity services, where premium pricing may no longer reflect actual performance advantages. This puts pressure on Western AI companies like OpenAI, which are reportedly incurring significant operating costs while facing downward price pressures.
This environment may explain why OpenAI’s GTM Manager, Adam Goldberg, recently emphasized the importance of controlling the entire value chain, from infrastructure and data to models and applications. As language models become commoditized, the competitive advantage may shift towards companies that can effectively integrate and optimize their complete technology stack.
In summary, Deepseek’s disclosure of its operational expenses and potential earnings indicates that substantial profit margins are achievable even at lower prices, challenging the premium pricing models of competitors like OpenAI. The company’s efficient resource management and the evolving commoditization of AI services highlight the changing dynamics of the AI industry.
This article originally appeared on The New Digital’s AI and Tech News website.