Matt Asay
Contributing Writer

The DeepSeek lesson

The AI vendors that will end up winning will be those that earn customers’ trust. OpenAI seems to be doing the opposite.

During the past two weeks, DeepSeek unraveled Silicon Valley’s comfortable narrative about generative artificial intelligence by introducing dramatically more efficient ways to scale large language models (LLMs). Without billions in venture capital to spend on Nvidia GPUs, the DeepSeek team had to be more resourceful and learned how to “activate only the most relevant portions of their model for each query,” as Reflexivity president Giuseppe Sette notes.
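What Sette is describing is generally known as a mixture-of-experts architecture, in which a router activates only a few specialist subnetworks per input instead of the whole model. The Python sketch below is a minimal illustration of top-k expert routing only; it is not DeepSeek’s code, and every name in it (moe_forward, router_weights, the toy linear experts) is hypothetical:

    import numpy as np

    def moe_forward(x, experts, router_weights, k=2):
        # Score every expert for this input, but run only the top k of them.
        logits = router_weights @ x      # one relevance score per expert
        top = np.argsort(logits)[-k:]    # indices of the k best-scoring experts
        gates = np.exp(logits[top])
        gates /= gates.sum()             # softmax over just the selected k
        # Only these k experts do any compute; the rest are skipped entirely.
        return sum(g * experts[i](x) for g, i in zip(gates, top))

    rng = np.random.default_rng(0)
    dim, n_experts = 8, 16
    # Stand-in "experts": random linear maps in place of real feed-forward blocks.
    experts = [lambda x, W=rng.standard_normal((dim, dim)): W @ x
               for _ in range(n_experts)]
    router_weights = rng.standard_normal((n_experts, dim))

    y = moe_forward(rng.standard_normal(dim), experts, router_weights, k=2)
    # y was produced while activating only 2 of the 16 experts

Because the skipped experts never run, per-query compute scales with k rather than with the total number of experts, which is broadly where this style of efficiency gain comes from.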

It didn’t take long for everyone to start interpreting DeepSeek’s feat through the lens of their own biases. Closed-model vendors cried foul over theft of training data (given how much of their own training data was lifted from others, the irony police were out in full force), while open sourcerors saw DeepSeek as a natural fulfillment of open source superiority (despite the fact that there is no correlation between being open and winning in tech).

Lost in all this confirmation bias were two big developments, one positive and the other quite negative. First, AI need no longer be dominated by a billionaires’ club. DeepSeek didn’t democratize AI, exactly, but it has shown that entering the AI market needn’t require seed rounds in the hundreds of billions. Second, although there’s no reason to think open approaches to AI will win, there’s every reason to think that OpenAI’s hyper-closed approach will most definitely lose because it’s customer-unobsessed. Winning in AI won’t be about open versus closed, but rather about customer trust.

‘Techno-feudalism on steroids’

I don’t have anything to add to the financial implications of DeepSeek’s approach. As DeepLearning.AI founder Andrew Ng points out, “LLM token prices have been falling rapidly, and open weights have contributed to this trend and given developers more choice.” DeepSeek, by optimizing how it handles compute and memory, takes this to the next level: “OpenAI’s o1 costs $60 per million output tokens; DeepSeek R1 costs $2.19.” As he concludes, the expectation is that “humanity [and developers] will use more intelligence…as it gets cheaper.”
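To make Ng’s figures concrete, here is a quick back-of-the-envelope calculation using the two prices quoted above; the monthly token volume is an assumed workload for illustration, not a number from either vendor:

    # Per-million-output-token prices as quoted by Ng; the monthly volume is
    # a hypothetical workload chosen purely for illustration.
    O1_PRICE = 60.00                  # USD per 1M output tokens (OpenAI o1)
    R1_PRICE = 2.19                   # USD per 1M output tokens (DeepSeek R1)
    tokens_per_month = 500_000_000    # assumed: 500M output tokens per month

    o1_cost = tokens_per_month / 1_000_000 * O1_PRICE   # $30,000
    r1_cost = tokens_per_month / 1_000_000 * R1_PRICE   # $1,095
    print(f"o1: ${o1_cost:,.0f}/mo vs. R1: ${r1_cost:,.0f}/mo "
          f"({o1_cost / r1_cost:.0f}x difference)")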

But who will build the tools to access that AI-driven intelligence? Here’s where things get interesting.

Although it’s fun to eviscerate OpenAI and others for finger-pointing over stolen training data, given these LLM vendors’ propensity to “borrow” copious quantities of others’ data to train their own models, there’s something far more troubling at play. As Me & Qi cofounder Arnaud Bertrand argues, “The far more worrying aspect here is that OpenAI is suggesting that there are some cases in which they own the output of their model.” This is “techno-feudalism on steroids,” he warns: a world in which LLM owners can claim ownership of “every piece of content touched by AI.”

This isn’t open source versus closed source. Closed source software doesn’t try to take ownership of the data it touches. This is something more. OpenAI, for example, is clear(ish) that users own the outputs of their prompts, but that users can’t use those outputs to train a competing model. That would violate OpenAI’s terms and conditions. This isn’t really different from Meta’s Llama being open to use, unless you’re competing at scale.

And yet, it is different. OpenAI seems to be suggesting that its input (training) data should be open and unfettered, but the data others use (including data that competitive LLMs have recycled from OpenAI) can be closed. This is murky new ground, and it doesn’t bode well for adoption if enterprise customers have to worry, even a little bit, about their output data being owned by the model vendors. The heart of the issue is trust and customer control, not open source versus closed source.

Exacerbating enterprise mistrust

RedMonk cofounder Steve O’Grady nicely sums up enterprise concern with AI: “Enterprises recognize that to maximize the benefit from AI, they need to be able to grant access to their own internal data.” However, they’ve been “unwilling to do this at scale” because they don’t trust the LLM vendors with their data. OpenAI has exacerbated this mistrust. The vendors that will end up winning will be those that earn customers’ trust. Open source can help with this, but ultimately enterprises don’t care about the license; they care about how the vendor handles their data. This is just one of the reasons AWS and Microsoft were first to build booming cloud businesses: Enterprises trusted them to take care of their sensitive data.

In this early gold rush for AI, we’ve become so fixated on the foundational models that we’ve forgotten that the biggest market has yet to emerge, and trust will be central to winning it. Tim O’Reilly is, as ever, spot on when he calls out the “AI company leaders and their investors” for being “too fixed on the pursuit or preservation of monopoly power and the outsized returns that come with it.” They forget that “most great companies actually come after a period of experimentation and market expansion, not through lock-in at the beginning.” The AI companies are trying to optimize for profit too soon in the market’s evolution. Efforts to control model output will tend to constrain customer adoption, not expand it.

In sum, AI vendors that want to win need to think carefully about how they can establish trust in a market that has moved too quickly for enterprise buyers to feel secure. Grasping statements, such as OpenAI’s, about model output data don’t help.

Matt Asay

Matt Asay runs developer marketing at Oracle. Previously Asay ran developer relations at MongoDB, and before that he was a Principal at Amazon Web Services and Head of Developer Ecosystem for Adobe. Prior to Adobe, Asay held a range of roles at open source companies: VP of business development, marketing, and community at MongoDB; VP of business development at real-time analytics company Nodeable (acquired by Appcelerator); VP of business development and interim CEO at mobile HTML5 start-up Strobe (acquired by Facebook); COO at Canonical, the Ubuntu Linux company; and head of the Americas at Alfresco, a content management startup. Asay is an emeritus board member of the Open Source Initiative (OSI) and holds a JD from Stanford, where he focused on open source and other IP licensing issues. The views expressed in Matt’s posts are Matt’s, and don’t represent the views of his employer.
