
Artificial Intelligence is no longer just a tech buzzword; it is a business reality. From optimising logistics to drafting contracts, AI tools are rapidly integrating into corporate workflows. However, this adoption outpaces legal clarity, creating a complex web of obligations under, in particular, the AI Act, the Data Act, the GDPR, intellectual property law and relevant sectoral legislation. For corporate counsel, navigating this landscape is a critical challenge. For instance, using an AI tool for CV screening can inadvertently trigger high-risk obligations and severe penalties in case of non-compliance. This article provides a pragmatic playbook for Belgian company lawyers to manage these risks and turn AI compliance into a competitive advantage.


1. The AI Act: A Primer for Corporate Counsel

As a regulation, the AI Act has direct effect in Belgium and establishes a risk-based legal framework. Obligations are imposed on various actors, but the primary responsibilities fall on 'providers' (who develop and market AI) and 'deployers' (who use AI in a professional capacity). Importantly, the Act has extraterritorial scope: it also covers companies established outside the EU that place AI systems on the EU market or whose systems' output is used in the EU.

The AI Act's obligations hinge on a risk-based classification, with four tiers for AI systems and a separate systemic-risk regime for the most capable general-purpose AI models:

  1. unacceptable risk: these systems pose a clear threat to fundamental rights and are banned. Examples include social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions), and manipulative techniques that exploit vulnerabilities;
  2. high risk: AI systems that can significantly impact safety or fundamental rights fall into this category. This includes AI used in critical infrastructure, recruitment, credit scoring, and medical devices. These systems are subject to stringent obligations and an ex ante conformity self-assessment. Providers of high-risk AI systems must, for instance, implement a risk management system, draw up technical documentation and register the system in an EU database. Deployers must in turn ensure data governance, human oversight and, for certain deployers and high-risk AI systems, perform a Fundamental Rights Impact Assessment (FRIA);
  3. systemic risk: this category applies to general-purpose AI (GPAI) models with high-impact capabilities, trained on vast datasets, whose failure could cause large-scale accidents or disrupt entire sectors of society. Due to their immense scale and widespread use, a single failure could harm public health, safety and fundamental rights. Providers of these models must conduct thorough model evaluations, ensure an increased level of cybersecurity protection and notify the Commission;
  4. limited risk: systems like chatbots or those generating deepfakes face specific transparency obligations. Providers and deployers must, depending on the case, ensure that users are informed that they are interacting with an AI tool or that content is AI-generated; and
  5. minimal risk: the vast majority of AI systems (e.g., spam filters, AI in video games) fall into this category, which is permitted without additional obligations under the AI Act, though voluntary codes of conduct are encouraged.

Providers of GPAI models without systemic risks, like those powering ChatGPT, must maintain technical documentation and provide summaries of training data.

Sanctions

The penalties under the AI Act are structured in a tiered system based on the severity of the violation. Engaging in prohibited practices (e.g. social scoring or biometric mass surveillance) can lead to fines of up to EUR 35 million or 7% of the global annual turnover, whichever is higher. Other significant violations, such as failure to meet the strict governance and transparency requirements for high-risk AI systems, trigger fines of up to EUR 15 million (3%). Providing misleading information to regulators can in turn result in penalties of up to EUR 7.5 million (1%).

However, the AI Act provides a protective “buffer” for small and medium-sized enterprises and start-ups: in their case, each fine is capped at the lower of the two possible amounts.

As most AI systems process personal data, these penalties can accumulate with GDPR fines. Severe violations of the GDPR, such as the processing of personal data without a valid legal basis, can lead to fines of up to EUR 20 million or 4% of the global turnover. Less severe violations (e.g. failure to maintain adequate records of processing activities) can result in penalties of up to EUR 10 million or 2% of the global turnover. Consequently, a single severe compliance failure may be sanctioned under both the AI Act and the GDPR, subjecting an organisation to aggregate fines that may exceed 10% of global revenue, which effectively makes this a boardroom topic.
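To make the arithmetic concrete, the sketch below is a purely illustrative Python example: the EUR 2 billion and EUR 50 million turnover figures are hypothetical, the fixed amounts and percentages are the statutory ceilings of Article 99 AI Act and Article 83 GDPR, and actual fines will always depend on the circumstances of each case. It shows how the "whichever is higher" rule for most companies and the "whichever is lower" cap for SMEs play out, and how AI Act and GDPR maxima can stack above 10% of turnover.

```python
# Illustrative only: simplified maximum-fine arithmetic under the AI Act and the GDPR.
# The fixed amounts and percentages are the statutory ceilings; actual fines depend on
# the circumstances of each individual case.

def ai_act_max_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float, is_sme: bool) -> float:
    """Maximum AI Act fine: the higher of the two caps for most companies,
    the lower of the two for SMEs and start-ups (Article 99(6) AI Act)."""
    pct_based = turnover_eur * pct_cap
    return min(fixed_cap_eur, pct_based) if is_sme else max(fixed_cap_eur, pct_based)

turnover = 2_000_000_000  # hypothetical EUR 2 billion global annual turnover

# Prohibited practice under the AI Act: up to EUR 35 million or 7%, whichever is higher.
ai_act_fine = ai_act_max_fine(turnover, 35_000_000, 0.07, is_sme=False)  # EUR 140 million

# Severe GDPR violation: up to EUR 20 million or 4%, whichever is higher.
gdpr_fine = max(20_000_000, turnover * 0.04)                             # EUR 80 million

total = ai_act_fine + gdpr_fine
print(f"Theoretical aggregate exposure: EUR {total:,.0f} ({total / turnover:.0%} of turnover)")
# -> Theoretical aggregate exposure: EUR 220,000,000 (11% of turnover)

# For a hypothetical SME with EUR 50 million turnover, the same prohibited-practice
# fine is capped at the lower of the two amounts (here 7% of turnover).
print(f"SME cap: EUR {ai_act_max_fine(50_000_000, 35_000_000, 0.07, is_sme=True):,.0f}")
# -> SME cap: EUR 3,500,000
```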


Timeline

The AI Act's phased implementation is already underway. The prohibitions on unacceptable-risk (and therefore prohibited) AI and the AI literacy obligations took effect in February 2025. Rules for GPAI models apply since August 2025, with the main body of the Act, including obligations for high-risk systems, becoming applicable in August 2026 and the remainder in August 2027, subject to certain exemptions and grandfathering provisions.

In more detail, the timeline looks as follows:

  1. 2 February 2025: the ban on prohibited practices and the AI literacy requirements apply;
  2. 2 August 2025: the rules on general-purpose AI models and penalties apply. Providers of such models placed on the market or put into service before this date must comply with the AI Act by 2 August 2027;
  3. 2 August 2026: most remaining provisions of the AI Act apply, except for obligations related to high-risk AI systems listed in Annex I (e.g. safety components of aircraft and medical devices). Obligations relating to high-risk systems listed in Annex III (e.g. AI used for CV screening) placed on the market before this date will apply only where the system undergoes significant design changes from that date onwards; and
  4. 2 August 2027: obligations relating to high-risk AI systems listed in Annex I apply. Where such systems are intended to be used by public authorities, compliance is required by 2 August 2030. AI systems that are components of large-scale IT systems listed in Annex X and placed on the market or put into service before 2 August 2027 must be brought into compliance by 31 December 2030.


Digital Omnibus Proposal

A recent 'Digital Omnibus Proposal' published by the European Commission in November 2025 suggests adjustments to reduce the regulatory burden, but pending legislative clarity, companies should proceed with compliance based on the current text of the AI Act.

Under this proposal:

  • high-risk obligations would be decoupled from a fixed start date and instead triggered by Commission-confirmed harmonised standards, with transitional periods of six months for Annex III systems (but no later than 2 December 2027) and twelve months for Annex I systems (but no later than 2 August 2028);
  • transparency obligations would benefit from a six-month grace period, with watermarking due by 2 February 2027 for systems placed on the market before 2 August 2026;
  • AI literacy obligations on companies (which already apply today) would be softened and shifted to the Commission and Member States;
  • processing of special categories of data would be allowed where strictly necessary for bias detection and correction, subject to strict safeguards;
  • controllers would be able to rely on legitimate interests to process personal data for the development and operation of an AI system, provided they complete a legitimate interest balancing test, implement appropriate safeguards and comply with any applicable legislation requiring (as the case may be) consent for the relevant processing;
  • the duty to register high-risk AI in an EU database would be removed for providers who classify Annex III-listed systems as not high-risk, and conformity assessments would be streamlined by leveraging already-designated product legislation bodies;
  • supervision would be centralised in the AI Office for very large online platforms (VLOPs), very large online search engines (VLOSEs) and systems built on a provider’s own general-purpose AI model; and
  • the beneficial SME regime under the AI Act would be extended to small mid-caps, and the AI Office would create a Union-level sandbox (in addition to national ones) and expand real-world testing to more categories of high-risk AI.


2. The GDPR Intersection: Key AI Challenges

While the GDPR's principles are familiar, AI introduces unique compliance complexities. Deploying AI systems that process personal data requires a renewed focus on several fronts (non-exhaustive summary):

  • data minimisation vs. big data: AI models often require vast datasets for training, creating a direct tension with the principle of data minimisation. A clear legal basis and rigorous data governance are essential;
  • purpose limitation: the adaptability of AI can lead to ‘function creep’, where data is used for purposes beyond its original scope. This must be prevented through strong technical and organisational measures; and
  • transparency and explainability: the ‘black box’ nature and complexity of some AI systems challenge the right to an explanation for automated decisions, as confirmed in the Dun & Bradstreet Austria CJEU ruling (C-203/22).

In practice, determining the roles of the parties will be key: controller or processor under the GDPR, and provider or deployer (in addition to the risk classification set out above) under the AI Act. The respective obligations need to be mapped out for the specific AI models/systems and use cases at hand.

An example of how the GDPR and the AI Act interrelate: under the AI Act, certain deployers of specific high-risk AI systems must perform a Fundamental Rights Impact Assessment (FRIA). An existing Data Protection Impact Assessment (DPIA), required under the GDPR where processing poses specific data protection risks, may be used as a basis for the FRIA (see also Article 27(4) of the AI Act), but will need to be elaborated to cover the non-data-protection aspects.


3. Intellectual Property: Infringement Risks and Ownership Concerns

The use of AI raises critical IP questions at both the input and output stages. Important IP considerations (e.g. eligibility for patent protection) also arise at the level of the AI tool itself, but these are not discussed below.

At the input stage, training GPAI models on vast, internet-sourced datasets often involves copying copyrighted works. While the EU's text and data mining (TDM) exception permits this for lawfully accessed works, it is crucial to respect any machine-readable opt-outs by rightsholders (outside of a scientific research setting). The AI Act reinforces this by requiring GPAI providers to publish summaries of their training data, empowering creators to enforce their rights.

However, a recent ruling of the Munich Regional Court in the GEMA v OpenAI case sounds an important note of caution: GPAI providers may rely on the TDM exception (subject to opt-out) for the temporary copies needed to analyse (protected) training data, but not for the reproduction of those works through memorisation in the model, as this falls outside the scope of the TDM exception.

Regarding AI output, the central question is one of authorship. Under Belgian law (Art. XI.165 Code of Economic Law), copyright protection requires an original work reflecting the author's personality. Since an AI tool cannot be considered a human author, purely machine-generated output is unlikely to qualify for copyright protection. Human intervention that demonstrates free and creative choices is necessary to establish authorship. Furthermore, companies must be wary of outputs that reproduce substantial parts of existing protected works, which could constitute infringement.


4. Your 6-Step Action Plan

1. Map Your AI Footprint and Risks

As a first step, it is essential that companies understand which legal obligations apply and ensure that they are adequately met. They should therefore:

  • map and diligence all AI tools they deploy or provide, classify them under the AI risk tiers, identify their roles (not only under the AI Act but also under the GDPR) and assess where these obligations are currently unmet (see the illustrative sketch after this list);
  • inventory the categories of data such AI tools process and identify privacy risks; and
  • map the origins of the (training) data and assess IP risks.
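As a purely illustrative sketch (the field names, categories and the CV-screening example below are our own assumptions, not a format prescribed by the AI Act), an entry in such an AI register could capture the classification, roles, data and open gaps in one place:

```python
# Illustrative only: one possible shape for an entry in a company-wide AI register.
# Field names, categories and example values are assumptions, not prescribed by the AI Act.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

class AiActRole(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"

class GdprRole(Enum):
    CONTROLLER = "controller"
    PROCESSOR = "processor"

@dataclass
class AiRegisterEntry:
    tool_name: str
    business_owner: str                                               # accountable business unit or person
    use_case: str
    risk_tier: RiskTier
    ai_act_role: AiActRole
    gdpr_role: GdprRole
    personal_data_categories: list[str] = field(default_factory=list)
    training_data_sources: list[str] = field(default_factory=list)    # relevant for IP due diligence
    open_compliance_gaps: list[str] = field(default_factory=list)     # obligations currently unmet

# Example: the CV-screening scenario mentioned in the introduction.
cv_screening = AiRegisterEntry(
    tool_name="Hypothetical CV screening tool",
    business_owner="HR",
    use_case="CV screening for recruitment",
    risk_tier=RiskTier.HIGH,              # recruitment is listed in Annex III
    ai_act_role=AiActRole.DEPLOYER,
    gdpr_role=GdprRole.CONTROLLER,
    personal_data_categories=["CV data", "contact details"],
    training_data_sources=["vendor-supplied model, provenance to be confirmed"],
    open_compliance_gaps=["human oversight procedure", "FRIA", "DPIA update"],
)
```

Whatever format is used, the register should be kept up to date and owned by the governance structure described in the next step.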

2. Establish Centralised AI Governance

Companies should also establish a clear and centralised governance structure. This ensures that the responsibility for regulatory compliance is coordinated, rather than scattered across IT, HR and business units. Moreover, it enables companies to control the introduction, modification and use of AI tools, while promptly addressing privacy and IP concerns. More specifically, companies are advised to:

  • establish a cross-functional AI committee that oversees compliance and risk management; and
  • ensure that a Data Protection Officer (DPO) is appointed and a data protection impact assessment (DPIA) performed when they engage in large-scale processing of personal data, or the processing of certain types of sensitive personal data.

3. Fortify Your External Legal Framework

As a third step, companies should update their external documentation to ensure it complies with all regulatory requirements. This documentation includes:

  • technical documentation relating to the AI tools of which they are a deployer or provider;
  • AI procurement contracts that clearly define responsibilities and compliance obligations;
  • a privacy policy that adheres to the GDPR requirements and informs data subjects that their personal data is processed through AI tools; and
  • license agreements ensuring that AI input does not infringe third party IP rights.

4. Implement Robust Internal Policies

Companies should not underestimate the key role that internal policies play in helping staff members to use AI tools responsibly. They reduce the risk of accidental misuse, data breaches and IP infringement. Companies should thus adopt internal guidelines and policies that address, inter alia:

  • the disadvantages and risks of AI tools, such as biases and “hallucinations”;
  • the IP risks associated with the use of AI tools;
  • the monitoring of AI regulatory compliance and the prevention of shadow AI tool use;
  • the importance of transparency when sharing AI-generated content;
  • the procedures for updating, retraining, or repurposing AI tools, as this may trigger new obligations; and
  • AI incident response plans and risk management protocols.

5. Embed AI Lifecycle Management

It is also important to note that AI risk management is an ongoing exercise. As a fifth step, we therefore recommend that companies establish processes that:

  • map and diligence AI tools not only at procurement but on an ongoing basis;
  • handle complaints and inquiries about AI use;
  • continuously monitor AI tool use and detect and address emerging risks;
  • maintain an open communication line between deployer and provider;
  • allow data subjects to exercise their rights under the GDPR; and
  • address third-party IP concerns and secure IP ownership and licensing rights.

6. Drive AI Literacy Across the Company

Finally, companies’ internal policies and guidelines should be complemented by targeted training and tabletop exercises, tailored to specific roles and responsibilities. This ensures that staff members understand how to use AI tools responsibly and reduces the risk of errors, unlawful processing or IP infringements. This remains key even if the AI literacy requirement is softened pursuant to the Digital Omnibus Proposal.

5. Conclusion: From Compliance to Competitive Advantage

The AI regulatory landscape is undeniably complex, but it is not unmanageable. For corporate counsel, a proactive and structured approach is paramount. Moving beyond a reactive stance on compliance allows the legal function to become a strategic enabler of secure and responsible AI innovation. This not only mitigates significant financial and reputational risk but also builds the trust necessary to harness AI's full potential.

Should you wish to discuss how to tailor this framework to and implement the same in your organisation, our team is ready to assist.