Generative AI is one of the most disruptive forces to emerge from the last decade of technological evolution. Its power is as captivating as its controversies. With the click of a button, machines can now produce artworks, compose music, write screenplays, design products, and even mimic human voices — all seemingly “created” by systems trained on existing human-made content. But as the creative output of algorithms grows, so does a fundamental legal and ethical question: who owns what AI creates?
This isn’t just a philosophical exercise. It strikes at the heart of intellectual property (IP) law — a system built to reward original human expression. Generative AI challenges that framework in ways few anticipated. The law has long danced around questions of transformation and inspiration, but AI has forced it into a full sprint. And the results, as we’re now seeing in courtrooms around the world, are far from settled.
The Zhang Jingna Case: Art, AI, and Allegations of Theft
A recent copyright case in Europe has become a touchstone for this debate — not because it directly involved AI, but because it revealed the fault lines of originality and fair use that AI now exploits.
Zhang Jingna, a United States-based Singaporean photographer, sued Luxembourg artist Jeff Dieschburg for allegedly copying one of her photographs — a stylized portrait of South Korean model Ji Hye Park — in his figurative painting. The resemblance between the photograph and the painting was striking. Though Dieschburg claimed artistic license and transformation, the Luxembourg Court of Appeal ultimately sided with Zhang, affirming her copyright and ruling the painting an unauthorized derivative work.
This case is a warning shot. It underscores the importance of protecting the integrity of original works — especially in an era where machines can scan, mimic, and reproduce visual content in seconds. What Dieschburg did with a brush, generative AI can do at scale and speed, with even murkier lines of authorship and intent.
Now imagine a platform that trains an AI model on tens of thousands of Zhang’s images — scraped from the internet without permission — and produces new, eerily similar portraits. The legal system is only just beginning to reckon with such scenarios. The Zhang case may not have involved AI, but it echoes the ethical dilemma that AI now amplifies.
The AI Learning Paradox: Training on Copyrighted Work
At the heart of the IP debate around generative AI lies a contradiction: these models are valuable precisely because they are trained on vast troves of existing creative works — many of which are copyrighted. Text-to-image systems like Midjourney and Stable Diffusion rely on datasets that include illustrations, photographs, graphic designs, and paintings. Similarly, large language models consume books, articles, scripts, and personal blogs to become fluent in written expression.
Tech companies argue that this process constitutes “fair use” — a legal doctrine that permits limited use of copyrighted material without permission, weighing factors such as the purpose and character of the use (including whether it is transformative), the amount taken, and the effect on the market for the original. But critics argue this is a stretch. Training a commercial AI on millions of artworks to then produce new images — some of which clearly reference or resemble the originals — is hardly a classroom exercise.
It’s not surprising that lawsuits have started piling up. Writers, artists, and even software developers have taken legal action against AI firms, claiming their intellectual labor is being appropriated without compensation. The legal outcomes are still evolving, but the public sentiment is clear: authorship and originality must still matter.
Ownership in the Age of Algorithmic Creativity
Another wrinkle is the question of who owns the output of generative AI. If a user types a prompt into an image generator and receives a stunning result, who holds the rights to that image? The user? The developer of the AI? The copyright holders of the training data? Or no one?
In the United States, the Copyright Office has taken a firm stance: AI-generated content without meaningful human input cannot be copyrighted. This means that unless a person has significantly shaped the final product — through substantial editing, selection, or creative arrangement (the Office has indicated that prompts alone are generally not enough) — the work is not protected. It becomes part of the public domain.
This creates both opportunity and vulnerability. On one hand, it encourages open experimentation. On the other, it leaves creators — and businesses — exposed. If you base a brand campaign on AI-generated visuals, someone else could legally reproduce or repurpose them. Worse, if the image closely resembles a copyrighted photo, as in Zhang’s case, you may face a lawsuit yourself.
For businesses, this isn’t just a legal issue. It’s a reputational and operational one. Investing in creative assets that sit on shaky legal ground is a risky strategy. Companies must now vet not just their agencies and designers, but the algorithms behind their content.
Charting a Way Forward: Ethics, Transparency, and Consent
The way forward isn’t to outlaw generative AI. Its potential to democratize creativity, boost productivity, and personalize digital experiences is immense. But ethical and legal scaffolding must evolve in tandem with technical capability.
One area of consensus is transparency. AI developers should disclose what data their models are trained on — including whether copyrighted materials are involved. Consent mechanisms, like opt-outs for creators who don’t want their work included in training datasets, are also gaining traction.
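In engineering terms, an opt-out mechanism can be as simple as filtering scraped material against a registry of sources whose owners have withheld consent before anything enters a training set. A minimal sketch — the registry, domains, and field names below are illustrative assumptions, not any real standard:

```python
# Minimal sketch of honoring creator opt-outs when assembling a training set.
# The registry and domain names are hypothetical, not a real standard.
from urllib.parse import urlparse

OPT_OUT_REGISTRY = {
    "zjn-photography.example.com",   # hypothetical domains whose owners opted out
    "artist-portfolio.example.org",
}

def allowed_for_training(image_url: str) -> bool:
    """Return False if the image's host appears in the opt-out registry."""
    host = urlparse(image_url).netloc.lower()
    return host not in OPT_OUT_REGISTRY

scraped = [
    "https://zjn-photography.example.com/portrait1.jpg",
    "https://open-commons.example.net/landscape.jpg",
]
# Only items from non-opted-out hosts survive the filter.
training_set = [url for url in scraped if allowed_for_training(url)]
```

Real-world proposals work similarly at larger scale — machine-readable signals (akin to robots.txt) that crawlers are expected to honor before ingesting content.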
Another is attribution. Some argue that AI-generated works should come with traceable metadata — a sort of digital chain-of-custody that shows the lineage of influence. This wouldn’t solve all issues, but it would restore a sense of authorship in a system currently defined by opacity.
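The chain-of-custody idea can be made concrete with content hashes: each generated work carries a record linking it to the fingerprints of its sources. A rough sketch, with field names that are illustrative rather than drawn from any existing provenance standard:

```python
# Sketch of "traceable metadata": a derivative work carries a record linking
# it to its sources via SHA-256 content hashes. Field names are illustrative.
import hashlib
import json

def content_hash(data: bytes) -> str:
    """Fingerprint a work's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def provenance_record(work: bytes, sources: list, tool: str) -> dict:
    """Build a chain-of-custody record for a generated work."""
    return {
        "work_hash": content_hash(work),
        "source_hashes": [content_hash(s) for s in sources],
        "generator": tool,
    }

original = b"original photograph bytes"
derived = b"ai-generated image bytes"
record = provenance_record(derived, [original], tool="hypothetical-model-v1")
print(json.dumps(record, indent=2))
```

Industry efforts toward content provenance credentials pursue essentially this structure, embedding signed lineage data in the file itself.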
Lastly, we need innovation in licensing models. Imagine if photographers like Zhang could license their portfolios for AI training under specific terms — perhaps earning royalties based on derivative output, or approving certain use cases while rejecting others. Blockchain and smart contracts could help automate and enforce these agreements. It’s not utopian. It’s just overdue.
Conclusion
Zhang Jingna’s victory in Europe wasn’t just a win for one photographer. It was a reaffirmation of a principle: that creativity has value, and that value deserves protection. As generative AI blurs the boundary between inspiration and imitation, that principle becomes even more urgent.
The intersection of AI and IP law is not a dead-end. It’s a negotiation — between progress and protection, between access and agency. But it must be a fair one. Because if artists, writers, and creators can no longer trust that their work will be respected in the digital commons, they may stop sharing it altogether.
And without those voices — without the Zhangs of the world — AI has nothing left to learn from.
The Writer
Desmond Israel Esq. | Partner, AGNOS Legal Company | Founder, Information Security Architects Ltd | Law lecturer, Ghana Institute of Management and Public Administration (GIMPA) Law School | Member, IIPGH.
For more comments, email: desmond.israel@gmail.com