EU AI Act to Significantly Impact SMEs: Key Compliance Steps by 2025
The EU AI Act, which introduces comprehensive regulation for the use of AI in businesses, is poised to significantly impact small and medium-sized enterprises (SMEs). SMEs, which increasingly rely on AI systems such as large language models and AI-integrated tools, must now assess and manage the risks associated with these technologies.
Under the EU AI Act, the AI systems in SMEs that fall most squarely within the regulatory framework are those categorised as high-risk: AI used for decision-making in critical areas such as recruitment, credit scoring, and law enforcement. Firms must meet transparency, documentation, and risk-management obligations by February 2025 to avoid heavy fines and oversight by authorities such as the Bundesnetzagentur.
To comply, companies should start by taking an inventory of their AI systems and determining each system's risk level. The AI Act defines four risk levels (unacceptable, high, limited, and minimal), with the high-risk and limited-risk categories carrying specific obligations such as documentation and transparency. For minimal-risk systems, staff training is usually sufficient, although even here companies must document their risk assessment process. High-risk systems require significant documentation, including training data and bias-mitigation measures, and may call for external consultation. Companies must also inform users when they are interacting with an AI, especially in voice-call systems. A simplified sketch of this inventory-and-classification step follows below.
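As a rough illustration of the inventory step, the following Python sketch models a minimal AI-system register organised around the Act's four risk tiers. The `AISystem` record and the `NEXT_STEPS` mapping are simplified assumptions for illustration; the concrete obligations attached to each tier must be taken from the Act's legal text, not from this example.

```python
from dataclasses import dataclass
from enum import Enum


# The AI Act's four risk tiers (simplified labels).
class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # e.g. recruitment, credit scoring
    LIMITED = "limited"            # transparency duties, e.g. chatbots
    MINIMAL = "minimal"            # e.g. spam filters


@dataclass
class AISystem:
    name: str
    purpose: str
    risk: RiskLevel


# Hypothetical mapping of risk tiers to first compliance actions;
# simplified for illustration, not the Act's actual legal wording.
NEXT_STEPS = {
    RiskLevel.UNACCEPTABLE: "Decommission: the practice is prohibited.",
    RiskLevel.HIGH: "Full documentation, risk management, bias mitigation; "
                    "consider external consultation.",
    RiskLevel.LIMITED: "Transparency: inform users they are interacting with an AI.",
    RiskLevel.MINIMAL: "Staff training; document the risk assessment.",
}


def compliance_report(inventory: list[AISystem]) -> None:
    """Print each registered system with its risk tier and next action."""
    for system in inventory:
        print(f"{system.name} ({system.purpose}) -> {system.risk.value} risk")
        print(f"  Next step: {NEXT_STEPS[system.risk]}")


if __name__ == "__main__":
    inventory = [
        AISystem("CV screener", "recruitment pre-selection", RiskLevel.HIGH),
        AISystem("Support chatbot", "customer FAQ", RiskLevel.LIMITED),
        AISystem("Spam filter", "email triage", RiskLevel.MINIMAL),
    ]
    compliance_report(inventory)
```

In practice such a register would also record who owns each system and when its classification was last reviewed, so the documentation obligation can be evidenced over time.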
The EU AI Act, while imposing strict requirements, also presents opportunities: it can increase trust in AI applications and establish clear market standards. SMEs should act promptly to identify and classify their AI systems and to meet the Act's requirements, avoiding potential penalties while building a robust AI governance framework.