Reuters just reported on a study showing that major AI companies aren't meeting established global safety standards. That's a pretty significant finding given the money and resources being poured into AI development.
The research points to gaps between what companies promise on safety and what they actually implement. It comes at a time when regulators are still figuring out how to govern the space, which means corporate safety practices are effectively the main check in place.
What's interesting is the timing. Companies are shipping new models and capabilities fast, but the safety infrastructure apparently hasn't kept pace. Among the issues flagged: inadequate testing protocols, insufficient transparency, and gaps in risk management.
This will likely fuel ongoing conversations about whether self-regulation works.
https://www.reuters.com/business/ai-companies-safety-practices-fail-meet-global-standards-study-shows-2025-12-03/