Artificial intelligence is transforming the workplace and creating new risks for employers. Among those risks: the rise of “deepfakes,” or fabricated images, video, or audio designed to look and sound real.
As this technology rapidly advances, distinguishing authentic content from AI-generated material is becoming increasingly difficult, even as the tools used to create such material grow more accessible and more convincing.
Against this backdrop, deepfakes are quickly moving from a theoretical workplace concern to a real compliance issue. Employees are already using AI tools to create doctored images, videos, or audio targeting co-workers. In recent cases, employees have allegedly circulated sexually explicit deepfakes or fabricated recordings intended to humiliate colleagues or damage reputations.
Federal and some state lawmakers are responding. Earlier this year, the U.S. Senate unanimously passed the Disrupt Explicit Forged Images And Non-Consensual Edits (DEFIANCE) Act of 2025, which is now pending in the House of Representatives. If enacted, this Act would create a civil cause of action for victims of non-consensual deepfake content.
California, Florida, Illinois, and Tennessee have enacted laws allowing victims of AI-generated deepfakes to pursue civil remedies. While Ohio is not yet on this list, state legislators have introduced several bills to regulate the creation and distribution of deepfake content, particularly in cases involving harassment, fraud, or sexually explicit material.
Employers should keep the following key considerations in mind:
- Existing harassment laws apply. The technology might be new, but courts analyze these claims under the same legal framework that governs traditional workplace harassment and discrimination claims. Content that targets an employee based on their gender, race, sexual orientation, or other protected characteristic may give rise to a hostile work environment claim under Title VII of the Civil Rights Act of 1964 and analogous state laws.
- Failing to act can mean facing a lawsuit. Failing to investigate and address an employee’s complaints or concerns about a deepfake incident can create legal exposure. For example, a Nashville television meteorologist recently filed suit against her former employer for, among other things, allegedly failing to investigate and address sexually explicit deepfakes that used her likeness, underscoring the risks associated with inaction.
- Off-duty conduct can create workplace liability. Many deepfake incidents originate from personal devices or social media outside of working hours. But if the content circulates among co-workers or affects workplace conditions, employers may still face liability under traditional harassment standards.
- Handbooks and policies may need an AI update. Employee handbooks and anti-harassment policies created before the widespread availability of generative AI likely do not address AI misuse or deepfake content. Updating policies and training managers on how to respond to these situations can help reduce risk and demonstrate that the employer takes these issues seriously.
With generative AI tools continuing to evolve, employers should expect these issues to appear more frequently in workplace disputes and litigation. If you have questions about how emerging AI technologies may affect your workplace policies or compliance obligations, please contact any K|W|W attorney.