Synthetic personas: Navigating the new legal frontier of global advertising


As Generative AI (GenAI) transitions from a creative experiment to a core operational tool, multinational corporations are increasingly adopting "synthetic personas" (AI-generated human models) to power their marketing engines. For global brands, this shift offers unprecedented scalability and cost-efficiency. It also introduces a complex web of legal challenges across jurisdictions, one that demands a proactive and harmonized legal strategy.

The "Human Authorship" Paradox: A Global IP Challenge

The most immediate hurdle in utilizing AI-generated personas lies in Intellectual Property (IP) protection. The prevailing consensus among major authorities, including the US Copyright Office (USCO) and its European counterparts, remains firm: copyright protection requires "human authorship." Purely AI-generated outputs, produced without sufficient human creative control, are generally ineligible for copyright protection.

For legal departments, this creates a "Proprietary Gap." While a brand may use these images, it may lack the legal standing to prevent competitors from using the same or substantially similar personas.

Accidental Likeness and the Right of Publicity

A significant risk in GenAI is the "black-box" nature of training datasets. Even when a persona is intended to be purely fictional, the algorithm may generate a face that bears a striking resemblance to a real individual—whether a celebrity or a private citizen.

In jurisdictions with strong Rights of Publicity laws (such as the US) or stringent Personality Rights (as seen in Germany or Brazil), this "accidental likeness" could trigger substantial liability.

The Transparency Mandate: EU AI Act and Beyond

Global regulatory trends are moving toward mandatory disclosure. The EU AI Act, the world's first comprehensive AI regulatory framework, explicitly requires that AI-generated content be labeled as such. Similarly, self-regulatory bodies like the ASA (UK) and CONAR (Brazil) are emphasizing transparency to prevent consumer deception.

Technical Integrity and Algorithmic Bias

From a product liability and consumer protection perspective, the "synthetic" element must be confined to the background or the persona itself. The product (the appliance, the hardware, the technology) must remain a faithful, non-AI-altered representation. Any AI manipulation that misrepresents a product's features or performance could be categorized as "deceptive advertising" under FTC rules in the US or comparable consumer-protection regimes elsewhere, such as the EU's Unfair Commercial Practices Directive.

Furthermore, legal teams must oversee the ethical output of these tools. GenAI models can inadvertently perpetuate racial or gender biases present in their training data. A robust legal review must include an "Inclusion Audit" to ensure that synthetic personas align with the company's Global Diversity, Equity, and Inclusion (DEI) commitments.
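Parts of such a review can be automated. The sketch below is a minimal, hypothetical illustration in Python (the attribute names, target shares, and tolerance are invented for this example, not drawn from any regulatory standard): it tallies how a batch of generated personas is distributed across a demographic attribute and flags values that drift too far from a stated target.

```python
from collections import Counter

def inclusion_audit(personas, attribute, target_shares, tolerance=0.10):
    """Flag attribute values whose share among generated personas deviates
    from the target share by more than `tolerance` (absolute difference).

    `personas` is a list of dicts describing each synthetic persona.
    Returns a dict of flagged values with their target and actual shares.
    """
    counts = Counter(p[attribute] for p in personas)
    total = len(personas)
    findings = {}
    for value, target in target_shares.items():
        actual = counts.get(value, 0) / total
        if abs(actual - target) > tolerance:
            findings[value] = {"target": target, "actual": round(actual, 2)}
    return findings

# Illustrative batch skewed heavily toward one gender presentation
batch = [{"gender": "female"}] * 8 + [{"gender": "male"}] * 2
report = inclusion_audit(batch, "gender", {"female": 0.5, "male": 0.5})
```

A check like this is only a screening signal; flagged batches would still go to human review, and the attributes and thresholds would need to be defined with counsel and DEI stakeholders.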

Conclusion: Legal as an Architect of Innovation

The role of the modern corporate counsel is not to stifle innovation with "no," but to architect the "how." By establishing clear guardrails—focused on transparency, IP integration, and rigorous verification—legal departments can empower creative teams to harness the power of GenAI. In the race to digital transformation, the most successful brands will be those that pair creative audacity with a sophisticated, global-ready legal framework.

Andressa Neri, IP Lawyer at Whirlpool, also contributed to this publication.

 
