On August 27, 2025, the U.S. Consumer Product Safety Commission (CPSC) held a virtual public meeting to preview its 2026–2027 agenda. Acting Chairman Peter Feldman and Executive Director Brian Lorenze outlined a significant pivot in the agency’s approach to product hazard detection and prevention, one centered on artificial intelligence and predictive analytics.
CPSC plans to move beyond reactive enforcement, aiming instead to anticipate hazards through modernized injury surveillance and data modeling. Rather than relying solely on traditional methods like consumer portal reports or mandated company disclosures, the agency is integrating AI to mine injury trends from broader sources, including social media and online reviews.
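The CPSC has not published technical details of this data-mining effort, but the general approach it describes, surfacing hazard signals from unstructured consumer text, can be illustrated with a minimal, hypothetical sketch. Everything below (the keyword list, the sample reviews, the threshold) is assumed for illustration and does not reflect any actual CPSC system.

```python
# Minimal sketch of keyword-based hazard-signal mining from consumer review text.
# All data, keywords, and thresholds are hypothetical illustrations; the CPSC has
# not disclosed its actual models or data sources.
from collections import Counter
from dataclasses import dataclass


HAZARD_TERMS = {
    "burn", "fire", "shock", "choking", "overheat", "laceration", "tip-over",
}


@dataclass
class Review:
    product_id: str
    text: str


def hazard_signal(review: Review) -> int:
    """Count hazard-related terms appearing in a review (a crude proxy for risk)."""
    words = {w.strip(".,!?").lower() for w in review.text.split()}
    return len(words & HAZARD_TERMS)


def flag_products(reviews: list[Review], min_mentions: int = 3) -> list[str]:
    """Return product IDs whose reviews mention hazard terms at least `min_mentions` times."""
    counts: Counter[str] = Counter()
    for r in reviews:
        counts[r.product_id] += hazard_signal(r)
    return [pid for pid, n in counts.items() if n >= min_mentions]


if __name__ == "__main__":
    sample = [
        Review("heater-123", "Unit started to overheat and caught fire after a week."),
        Review("heater-123", "Got a mild burn touching the casing."),
        Review("heater-123", "Smelled smoke, worried about a fire hazard."),
        Review("toy-456", "My kid loves it, no complaints."),
    ]
    print(flag_products(sample))  # ['heater-123']
```

An agency-scale system would presumably rely on trained language models rather than keyword matching, but the core idea, converting free-text reports into ranked signals for human review, is the same.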
Lorenze highlighted the launch of a closed-loop generative AI system that continuously learns from expert feedback. This model enables staff to focus on oversight while automating routine tasks. According to the CPSC, globalization and complex supply chains have made it increasingly difficult to identify and trace hazardous products—creating the need for this kind of technological agility.
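The agency likewise did not describe how the closed-loop system is built. One common pattern the description evokes is a human-in-the-loop classifier that is refit on expert corrections; the sketch below illustrates that pattern using an assumed scikit-learn text classifier and hypothetical data, not the CPSC’s actual model.

```python
# Hypothetical human-in-the-loop retraining loop: a lightweight classifier proposes
# hazard labels, experts correct them, and the model is refit on the corrected data.
# Nothing here reflects the CPSC's actual system; it only illustrates the pattern.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Seed training data (hypothetical): report text -> 1 if hazard, 0 otherwise.
texts = [
    "battery overheated and melted the charger",
    "sharp edge cut my hand while assembling",
    "works fine, very happy with the purchase",
    "arrived on time and looks great",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)


def review_and_retrain(new_text: str, expert_label: int) -> None:
    """Record an expert-verified label and refit the model on the expanded dataset."""
    predicted = int(model.predict([new_text])[0])
    if predicted != expert_label:
        print(f"Expert overrode model prediction ({predicted} -> {expert_label})")
    texts.append(new_text)
    labels.append(expert_label)
    model.fit(texts, labels)  # closed loop: the corrected example feeds the next fit


# Example: an analyst reviews a flagged report and confirms it describes a hazard.
review_and_retrain("smoke came out of the power adapter", expert_label=1)
print(model.predict(["plastic casing cracked and exposed wiring"]))
```

The point of such a loop is that every expert correction becomes training data, automating routine triage while staff attention shifts to oversight, which is the division of labor Lorenze described.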
Still, questions remain. Social media, while rich in user feedback, is prone to misinformation and bot activity, raising concerns about data reliability and false positives. The CPSC acknowledged these risks and said it will remain cognizant of them as it moves forward. Notably, the decision by several social media platforms to end fact-checking efforts earlier this year, combined with reports suggesting bots account for roughly 20% of content on certain platforms, will likely further complicate the effort.
Whether the agency will use AI-derived insights to open predictive, early-stage investigations of companies, or instead expect companies to respond to emerging risks identified through these models, remains unclear. What is certain, however, is that the CPSC is entering a new era of tech-forward consumer protection.