AI “ownership” is becoming a prominent topic as ever more capable models arrive: Seedance 2.0 producing Hollywood-level cinematics, Cursor’s “vibe-code” platform writing enterprise-grade products. The question is this: if our output rivals the big leagues, do we fully own it?

And with AI, the uncomfortable reality is that you can “own” parts of an AI-driven workflow while never owning the model itself, and, depending on jurisdiction, you may not “own” the output in the copyright sense even if a contract says you do.

The rented intelligence problem

With an AI API, you receive access plus a stream of outputs whose legal status changes depending on where you are and how you use them.

There are three distinct questions at play here:

  • Contract: what rights does the provider grant me?
  • Law: what does my jurisdiction recognize as protectable authorship?
  • Control: what can I safely commercialize, defend, and keep from being rug-pulled by a platform change?

Those questions, or rather the answers to them, don’t always agree.

Contract ownership

This is the cleanest layer because it’s written down, but every vendor’s terms differ. If you’re asking whether you legally own AI-generated content, or who owns AI output under OpenAI’s terms, what you usually keep is your input: under OpenAI’s Terms of Use, individuals retain ownership rights in the Input (as between the individual and OpenAI), and under the business Services Agreement, the customer “retains all ownership rights” in Input.

What you may receive are output rights: under OpenAI’s Terms of Use, you “own the Output” to the extent permitted by applicable law, and under Anthropic’s “on Bedrock” commercial terms, “Customer owns all Outputs” (within that agreement’s framework).

The qualifier, “to the extent permitted by applicable law,” is the entire story. The contract can grant you rights against the provider. It cannot rewrite copyright law.

Contract vs Law: human authorship vs “computer-generated works”

According to the U.S. Copyright Office’s registration guidance, material generated by a machine that lacks human authorship is not registrable, and the analysis turns on the human contribution.

Here is where this gets interesting: what, exactly, counts as “enough” human authorship, and how do you prove it when the creative process includes a model?

The UK is structurally different. According to the UK Copyright, Designs and Patents Act 1988, Section 9(3), for a “computer-generated” work, “the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”

According to Japan’s Agency for Cultural Affairs document (“General Understanding on AI and Copyright in Japan”), materials autonomously generated by AI are not considered copyrighted works because they are not “creatively produced expressions” of thoughts or sentiments.  The US wants a human fingerprint, the UK hands credit to the setup crew, and Japan is the strict judge: no human creativity, no copyright, so the “winner” is whoever can show the most human authorship.

Models relying on other models: the quiet dependency chain

A growing share of “AI products” are not a single model. They’re a dependency graph: one vendor for generation, another for moderation, another for speech, another for embeddings.

With AI “agents” dominating this segment, the risk splits three ways:

  • Rights stacking: you’re only as free as the most restrictive license or term in the chain.
  • Output laundering: if Model B transforms Model A’s output, you may still carry constraints from A’s terms or from copyright’s human-authorship rules.
  • Market concentration: dependency chains quietly centralize power in a few model providers, even when the product looks decentralized.
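The “rights stacking” point can be made concrete with a toy model: treat the product as a chain of vendors and propagate every license restriction downstream. This is a sketch for intuition only, not legal analysis; the model names and restriction labels below are hypothetical.

```python
# Toy sketch (not legal advice): "rights stacking" modeled as set-union
# propagation over a dependency chain. Names and restrictions are hypothetical.

def stacked_restrictions(chain, terms):
    """Union the license restrictions of every model in a dependency chain."""
    restrictions = set()
    for model in chain:
        restrictions |= terms.get(model, set())
    return restrictions

# Hypothetical per-vendor terms for a multi-model product.
terms = {
    "gen-model": {"no-competing-model-training"},
    "moderation-model": {"attribution-required"},
    "speech-model": {"no-resale-of-raw-output"},
}

# A product that pipes content through all three carries every constraint,
# i.e. it is only as free as the most restrictive link in the chain.
pipeline = ["gen-model", "moderation-model", "speech-model"]
print(sorted(stacked_restrictions(pipeline, terms)))
```

The design point is that restrictions only accumulate: removing one vendor from the chain can relax the set, but adding one never does.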

Economic and ethical consequences  

The U.S. Copyright Office ties copyrightability to human authorship, which leaves purely machine-generated material in a gray zone. 

And the gray zone gets exploited. Rights-holders have challenged generative AI both on training use and on outputs that can function as substitutes, including Getty Images v. Stability AI and The New York Times v. OpenAI and Microsoft.

Regulation adds a second axis: not “who owns,” but “who must disclose.” Article 50 of the EU AI Act introduces transparency duties around certain synthetic content and deepfakes, shifting compliance work onto deployers and platforms. 

Owning the model is a new asset class

If you don’t own the model, you may still “own” inputs, obtain contractual rights to outputs, and own the human-authored layer you add.

But owning the model, or holding durable rights to run it under a license you can live with, is qualitatively different.

That’s why AI model ownership is emerging as a new kind of asset, one the markets will increasingly price like infrastructure.