First proposed in April 2021 and now in the final stages of the lawmaking process, the EU AI Act seeks to create an ecosystem of trust around artificial intelligence systems available on the EU market, using a risk-based approach where obligations are proportionate to the risk posed by the system. The aim of this strategy is to ensure that harms are minimised without stifling innovative AI development and use. Indeed, to support innovation from emerging players, the AI Act sets out specific considerations for small and medium-sized enterprises (SMEs) and start-ups, ranging from free, priority access to regulatory sandboxes to more lenient documentation requirements.
Before examining the considerations set out for SMEs under the AI Act, it is important to define what these enterprises are. The European Commission has recently revisited how SMEs are defined under Recommendation 2003/361/EC, which is referred to in the latest text of the AI Act. Here, an enterprise is any legal entity engaged in economic activity, including self-employed individuals and family businesses. Micro, small and medium-sized enterprises are those with fewer than 250 employees and an annual turnover of no more than €50 million and/or an annual balance sheet total of no more than €43 million. Within this bracket:

- Micro enterprises have fewer than 10 employees and an annual turnover or balance sheet total of no more than €2 million.
- Small enterprises have fewer than 50 employees and an annual turnover or balance sheet total of no more than €10 million.
- Medium-sized enterprises have fewer than 250 employees and an annual turnover of no more than €50 million and/or a balance sheet total of no more than €43 million.
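To make these thresholds concrete, the minimal sketch below classifies an enterprise against them. It is purely illustrative: the function name and inputs are our own, and it simplifies the Recommendation, which also considers factors such as partner and linked enterprises.

```python
def classify_enterprise(headcount: int, turnover_m: float, balance_m: float) -> str:
    """Illustrative classification using the Recommendation 2003/361/EC ceilings.

    Monetary figures are in millions of euros. The financial test is met if
    either the turnover ceiling or the balance sheet ceiling is respected.
    """
    def within(turnover_cap: float, balance_cap: float) -> bool:
        return turnover_m <= turnover_cap or balance_m <= balance_cap

    if headcount < 10 and within(2, 2):
        return "micro"
    if headcount < 50 and within(10, 10):
        return "small"
    if headcount < 250 and within(50, 43):
        return "medium"
    return "large"

print(classify_enterprise(40, 8.0, 9.0))     # -> small
print(classify_enterprise(200, 45.0, 60.0))  # -> medium (turnover ceiling met)
```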
While the AI Act does not exempt SMEs from compliance if they use or provide an AI system covered by the Act, it does, in Article 55, set out resources intended to help smaller enterprises comply, in the form of advice, financial support, and representation of their voices.
Furthermore, the Act provides an exemption for free and open-source AI components, provided that they are not put into service as a component of a high-risk system. The aim of this provision is to support the development and deployment of AI systems by SMEs, start-ups, and academics in particular.
Deployers of high-risk AI systems are required to carry out a fundamental rights impact assessment before putting such systems into use. This must include a detailed plan setting out the measures or tools that will mitigate the risks to fundamental rights identified by the assessment. Deployers must also notify national supervisory authorities and consult relevant stakeholders, as well as representatives of groups likely to be affected by the AI system in question, to collect the information necessary for the assessment. However, to preserve their resources, SMEs are exempt from this consultation requirement when carrying out impact assessments, although they should still aim to consult where possible.
Additionally, high-risk systems must be accompanied by technical documentation that demonstrates compliance with the requirements for high-risk systems outlined in Articles 8 to 15. This documentation can also be used by national supervisory authorities and notified bodies to assess the compliance of the systems. Annex IV sets out the minimum requirements for this technical documentation, but the rules are more lenient for SMEs and start-ups: any equivalent documentation that meets the same objectives can be submitted, provided that it is approved by the competent national authority. This leniency should make demonstrating compliance less burdensome for SMEs, giving them some flexibility while still ensuring that their systems adhere to the same standards as those used or provided by other, bigger players.
When drafting the rules, the Commission is required to consult the AI Office and its advisory forum, European standards organisations, and expert groups, as well as relevant stakeholders, including SMEs and start-ups. The advisory forum must also comprise a balanced range of stakeholders representing industry, start-ups, SMEs, civil society, and academia. Further, participation by SMEs and start-ups should be encouraged during the standards development process.
As well as ensuring that the views and interests of SMEs are represented, the latest version of the AI Act also requires direct communication with SMEs to support them with compliance. Specifically, member states are required to use existing channels, or establish new ones dedicated to communication with SMEs and start-ups, to provide consistent and uniform guidance and to respond to questions about implementing the requirements of the Act. National supervisory authorities can also provide start-ups and SMEs with guidance and advice on the implementation of the Act that reflects the advice of the Commission or the AI Office. Additionally, to support SMEs and start-ups with compliance, member states will be required to organise development opportunities to enhance digital skills and knowledge of how the Act applies.
Ensuring that the views of SMEs are represented during the development of the rules will be vital to making the requirements achievable for smaller players and to avoiding disproportionately burdensome or costly obligations that could stifle innovation. Instead, SMEs will be able to voice their concerns and will not face unnecessary barriers to innovation.
Under the AI Act, member states must establish regulatory sandboxes for the development and testing of AI systems in an environment under strict regulatory oversight before those systems are placed on the market or used in the EU. These sandboxes will allow authorities to better understand technical developments, improve their supervision, and provide guidance on compliance, while also allowing providers to test and develop AI systems before deploying them on the market, providing greater legal certainty. Where significant risks are identified in a sandbox, they should be mitigated immediately, or development and testing should be suspended until they are.
Although participation in the sandboxes is voluntary, it is encouraged, and SMEs and start-ups will be given free priority access to them provided they are eligible. To support access for SMEs that may have limited legal resources, the procedures, processes, and administrative requirements for applying to the sandboxes and selecting participants should be simple and easy to understand, and unambiguous instructions should be provided for participation in and withdrawal from the sandboxes. Participants, particularly SMEs and start-ups, will also benefit from guidance on the implementation of the Act, standardisation documents and certification, and other benefits such as Testing and Experimentation Facilities.
The requirements of the AI Act are not easy to navigate, especially with standardisation yet to take place. Doing so with limited legal resources, or limited resources for model testing, adds an extra layer of complexity that free, priority access to sandboxes can help to reduce. This will help smaller players remain competitive and compliant and reduce the likelihood of penalties.
Before high-risk systems can be used in the EU or placed on the EU market, they must first undergo conformity assessments. There are two routes for conducting these assessments: one internal, and one conducted externally by an authorised notified body. The Act requires the Commission to regularly assess the cost of compliance for SMEs and start-ups through transparent consultation and collaboration with member states. In particular, the interests and needs of SMEs and start-ups should be considered when setting fees for the third-party conformity assessments outlined in Article 43: fees should be proportionate to an enterprise's size and market share, and reduced where necessary to achieve this. The Commission must also report the findings of these regular assessments to the European Parliament and the Council as part of the evaluation of the Act under Article 84(2).
Reducing compliance costs where possible is an important step towards ensuring that the innovation of SMEs is not stifled by limited financial resources. Indeed, compliance costs for high-risk systems are estimated at €6,000 to €7,000, and conformity assessments at around €3,500 to €7,500, giving a total compliance cost of between €9,500 and €14,500 for each high-risk AI system. This is on top of the costs of establishing and maintaining a quality management system, which could reach €400,000, as well as further costs such as testing facilities. Overall, it is estimated that compliance with the AI Act could add an overhead of around 17% to spending on AI in the EU, although this is likely to be lower for those that already have adaptable practices in place.
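As a back-of-the-envelope illustration of how these figures combine per high-risk system, the short snippet below simply sums the estimated ranges cited above. It is a sketch for illustration only, not an official costing tool, and the variable names are our own.

```python
# Illustrative only: combines the per-system cost estimates cited above (EUR).
compliance_cost = (6_000, 7_000)   # estimated compliance cost per high-risk system
conformity_cost = (3_500, 7_500)   # estimated conformity assessment cost

low = compliance_cost[0] + conformity_cost[0]
high = compliance_cost[1] + conformity_cost[1]
print(f"Estimated total per high-risk AI system: EUR {low:,} to EUR {high:,}")
# -> Estimated total per high-risk AI system: EUR 9,500 to EUR 14,500
```

Note that this excludes the quality management system and testing costs mentioned above, which can dwarf the per-system figures.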
Thus, reducing conformity assessment fees for SMEs in proportion to their size and revenue and providing free access to regulatory sandboxes will ease some of the financial burden on SMEs, but they are still likely to have to invest significantly in compliance.
The progression of the AI Act signals an important step towards safer and fairer AI in the EU, but the legislation will also have profound implications at an operational level. It is clear that compliance will be costly and cannot happen overnight. While SMEs will benefit from additional initiatives and support, alignment with the Act will require significant resources. Starting the compliance journey sooner rather than later can spread the costs more evenly and avoid the drain on resources that last-minute efforts entail.
To find out how Holistic AI can help you navigate the requirements, schedule a call with our expert governance, risk and compliance team.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.