While Big Tech companies scrambled to influence regulators behind closed doors, the European Union's AI Act quietly began reshaping how artificial intelligence operates across the continent.
The Act entered into force in August 2024, but its real teeth started showing up this year. February brought the initial wave of rules: bye-bye to unacceptable-risk AI systems like government social scoring tools. Companies also had to start training their employees on AI literacy. Because apparently, understanding what your technology actually does is now a radical concept.
August delivered the second punch with transparency requirements for general-purpose AI. Suddenly, tech giants had to document their models and publish summaries of the copyrighted material used in training. The horror of accountability struck Silicon Valley boardrooms everywhere.
Big Tech didn't take this lying down. Lobbying efforts pushed hard against stricter provisions, arguing they'd kill innovation and global competitiveness. The usual playbook. And honestly? Some adjustments followed. The European Commission published draft guidelines in July 2025, clarifying expectations after industry pushback reached fever pitch.
The phased approach gives companies breathing room. High-risk AI systems—think CV-scanning tools for hiring—get until August 2026 for full compliance. Some embedded systems even scored extensions until 2027. Generous, really.
Four risk categories define the landscape: unacceptable, high, limited, and minimal risk. The higher you climb, the tougher the rules get. Makes sense, except determining which category your AI falls into requires legal gymnastics most startups can't afford. The majority of AI systems across the EU actually fall into the minimal risk category, requiring no specific compliance measures.
The European AI Office and national authorities now oversee this regulatory circus, supported by an AI Board and Scientific Panel. They even created an AI Act Compliance Checker tool for smaller companies. How thoughtful.
Critics wonder whether softening certain provisions represents smart balance or regulatory backtracking. Big Tech's influence seems obvious: rules got clarified, timelines extended, compliance costs addressed. Meanwhile, the race toward AGI development continues accelerating globally, with some experts projecting arrival around 2030. The penalties for non-compliance remain severe, though, with fines reaching up to €35 million or 7% of global annual turnover, whichever is higher, for the worst violations. Whether this preserves innovation or waters down protection depends on your perspective.
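For the arithmetic-minded, that penalty cap works out as the greater of the two figures. Here's a minimal sketch of the calculation, assuming the "whichever is higher" reading of the worst-violation tier and a made-up turnover number:

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound on a fine for the most serious violations
    (prohibited AI practices): EUR 35 million or 7% of total
    worldwide annual turnover, whichever is higher."""
    flat_cap = 35_000_000
    turnover_cap = 0.07 * global_turnover_eur
    return max(flat_cap, turnover_cap)

# Hypothetical example: a firm with EUR 2 billion in annual turnover
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000 -> the 7% cap bites, not the flat one
```

In other words, the €35 million figure only matters for companies small enough that 7% of their turnover falls below it; for the giants doing the lobbying, the percentage is the number that stings.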
The Act aims to prevent discrimination and guarantee explainable AI decisions. Noble goals. But watching regulators dance between industry pressure and public safety concerns reveals the messy reality of governing emerging technology.
Barring those extensions, full applicability hits August 2026. Game on.

