Partnership on AI Unveils New Case Studies from Supporters of Synthetic Media Framework: Meta, Microsoft, Thorn, and Truepic

Partnership on AI released five new case studies with participation from Meta, Microsoft, Truepic, Thorn, and the Stanford Institute for Human-Centered AI.

Findings Uncover Challenges of Disclosing AI-Edited vs. AI-Generated Content, Managing Label Fatigue Among Audiences, and Other Unexplored Areas of Generative Media Governance

Press release

San Francisco – November 19, 2024 – Partnership on AI (PAI), a nonprofit multistakeholder organization dedicated to responsible AI, today released five new case studies focused on mitigating synthetic media risks, based on implementation of its Synthetic Media Framework. With participation from Meta, Microsoft, Truepic, Thorn, and researchers from the Stanford Institute for Human-Centered AI, the case studies delve into an underexplored area of synthetic media governance known as direct disclosure: the methods or labels used to convey how content has been modified or created with AI.

“Collectively, we must build a foundation for media where transparency and truth go hand in hand. This is true not only for direct disclosures on content itself, but also for how companies transparently disclose their strategies and practices,” said Claire Leibowicz, Head of AI & Media Integrity at Partnership on AI. “Being transparent and reinforcing digital integrity is critical, and will become even more so as AI touches all of the ways people create, distribute, and encounter media in the years to come.”

Each of the five case studies showcases a different blend of the technical and humanistic considerations required for disclosing content and navigating responsible and harmful uses of synthetic media.

“As synthetic media becomes more sophisticated and ubiquitous, ensuring transparency around its creation is key to building and maintaining public trust,” said Rebecca Finlay, CEO at Partnership on AI. “By sharing real-world insights, our hope is to help others navigate the complexities of synthetic media and contribute to a more trustworthy digital landscape.”

Policy Recommendations

Based on the research, PAI urges policymakers and practitioners to consider the following five recommendations to promote transparency, trust, and informed decision-making for media consumers in the AI age.

1. Better define what counts as a material AI use, based on multistakeholder input and user research.

2. Support rich, descriptive context about content, whether or not the media has been generated or edited with AI.

3. Standardize what is disclosed about content, and the visual signals for doing so.

4. Resource and coordinate user education efforts about AI and information.

5. Accompany direct disclosure policies with back-end harm mitigation policies.

PAI’s Synthetic Media Framework was launched in February 2023 and has received institutional support from 18 organizations. In March 2024, the group released ten in-depth case studies focused on transparency, consent, and harmful/responsible use cases. This newest collection complements PAI’s initial body of research with a focus on direct disclosure. To learn more about PAI’s Responsible Practices for Synthetic Media: A Framework for Collective Action, and to read the full collection of case studies, please visit: https://syntheticmedia.partnershiponai.org/

Messages of Support

“The aspiration to deliver transparency into the origins and history of online content is becoming a reality with the adoption of the C2PA media provenance standard. This is supported by the Partnership on AI’s (PAI) Responsible Practices for Synthetic Media Framework that takes an ecosystem-wide approach, acknowledging how tool builders, content creators, and distribution platforms can help. We’re enthusiastic to share what we’ve learned with an update on LinkedIn’s experience to date with implementing the C2PA standard.”
 – Eric Horvitz, Chief Scientific Officer, Microsoft

“Generative AI models’ misuse to create child sexual abuse material (AIG-CSAM) is one of the most pressing technical and policy issues facing the synthetic media domain. Direct disclosure is a necessary but not sufficient practice for addressing the harms of AIG-CSAM. We’re grateful that PAI provided us with the opportunity to highlight the complexity of this problem and suggest ways for stakeholders to respond.”
 – Riana Pfefferkorn, Policy Fellow at Stanford Institute for Human-Centered AI

“At Truepic, we believe that digital media provenance is the foundation of enhancing authenticity and transparency in online content. Participating in PAI’s portfolio of case studies allowed us to showcase how leveraging C2PA-compliant technology can have real-world impact—from safeguarding cultural heritage in conflict zones to bolstering accountability in high-stakes scenarios. We commend PAI for fostering collaboration and driving meaningful progress in synthetic media governance, and we are proud to contribute to these collective efforts to build a more authentic and resilient information ecosystem.”
 – Mounir Ibrahim, Chief Communications Officer and Head of Public Affairs at Truepic

“Partnership on AI acts as an ecosystem leader in building shared understanding and guidance for responsibly building and deploying AI that results in concrete action. As a mission-driven organization dedicated to defending children from sexual abuse, Thorn is glad to support PAI’s efforts with our case study, which highlights some of the concrete action necessary to prevent the misuse of generative AI for furthering child sexual abuse. To have impact, we will all need to work together, and we’re glad to be doing that work collectively with PAI.”
 – Dr. Rebecca Portnoff, Vice President of Data Science, Thorn

About Partnership on AI

Partnership on AI (PAI) is a nonprofit organization that brings together diverse stakeholders from academia, civil society, industry, and the media to create solutions that ensure artificial intelligence (AI) advances positive outcomes for people and society. PAI develops tools, recommendations, and other resources by inviting voices from across the AI community and beyond to share insights and perspectives. These insights are then synthesized into actionable guidance that can be used to drive adoption of responsible AI practices, inform public policy, and advance public understanding of AI. To learn more, visit www.partnershiponai.org.

Media Contact
Holly Glisky
pai@finnpartners.com
