What Your Legal Team Needs to Know Before Deploying Generative AI

Generative AI is changing how businesses operate — but its legal landscape is moving just as fast. BNP Paribas, one of the world's largest banks, has published a detailed overview of the key legal risks businesses must address before deploying AI systems, drawing from the experience of its own legal department managing AI adoption at scale.
The article, produced by the bank's LEGAL department and featuring Martin Pailhes (Global Manager, Digital & IP Platform), makes one point clear: legal compliance isn't a final checkbox. It's an active, ongoing function that must be embedded into every AI deployment from the start.
The Three Legal Risk Areas That Matter Most
1. Personal Data Protection
Developing and using generative AI requires large amounts of data, which often includes personal data. Within the EU, the GDPR governs this — but similar frameworks exist in Australia, Japan, Brazil, Canada, and India. Businesses must document the data they collect, define its purpose, and implement security measures before any AI system touches it.
2. Intellectual Property and Copyright
Content generated by AI raises unresolved copyright questions. In most jurisdictions, copyright protection requires sufficient human intervention in the creation process. AI-generated content may not qualify as "original," limiting its legal protection. This matters both for content you create and for the data used to train the models you rely on.
3. Bias and Discrimination
The "black box" effect of AI makes it difficult to explain decisions or predictions. Both the EU AI Act and the GDPR impose requirements around non-discrimination and the explainability of automated decisions that affect individuals. For businesses in regulated sectors, meeting these requirements is not optional.
Key Takeaways
- The EU AI Act is now actively being implemented. One of its first measures, effective February 2025, prohibits specific AI practices, and companies must certify the absence of these practices before deployment
- BNP Paribas has structured its LEGAL team to provide permanent regulatory monitoring, legal analysis, and business-unit guidance — a model worth studying for any enterprise scaling AI
- An "Omnibus AI Act" revision is currently in negotiation, which will adjust timelines and clarify high-risk AI system obligations — regulatory changes are coming, and businesses need early visibility
The message is consistent: understanding and complying with AI regulations is not just a legal obligation. It's how organizations build trust with clients, employees, and partners while protecting their operations.
🔗 Read the full article on BNP Paribas
