{"id":3012,"date":"2024-02-27T11:19:23","date_gmt":"2024-02-27T11:19:23","guid":{"rendered":"https:\/\/artificialintelligenceact.eu\/article\/52d\/"},"modified":"2024-08-08T12:20:05","modified_gmt":"2024-08-08T12:20:05","slug":"55","status":"publish","type":"article","link":"https:\/\/artificialintelligenceact.eu\/article\/55\/","title":{"rendered":"Article 55: Obligations for Providers of General-Purpose AI Models with Systemic Risk"},"content":{"rendered":"<p>1. In addition to the obligations listed in <a href=\"https:\/\/artificialintelligenceact.eu\/article\/53\/\">Articles 53<\/a> and <a href=\"https:\/\/artificialintelligenceact.eu\/article\/54\/\">54<\/a>, providers of general-purpose AI models with systemic risk shall:<\/p>\n<p style=\"padding-left: 40px;\">(a) perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks;<\/p>\n<p style=\"padding-left: 40px;\">(b) assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk;<\/p>\n<p style=\"padding-left: 40px;\">(c) keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them;<\/p>\n<p style=\"padding-left: 40px;\">(d) ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model.<\/p>\n<p>2. 
Providers of general-purpose AI models with systemic risk may rely on codes of practice within the meaning of <a href=\"https:\/\/artificialintelligenceact.eu\/article\/56\">Article 56<\/a> to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models with systemic risk who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission.<\/p>\n<p>3. Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in accordance with the confidentiality obligations set out in <a href=\"https:\/\/artificialintelligenceact.eu\/article\/78\">Article 78<\/a>.<\/p>\n","protected":false},"template":"","class_list":["post-3012","article","type-article","status-publish","hentry"],"meta_box":{"date_of_eif":"2 August 2025","eif_according_to":"Article 113(b)","eif_inherit_from":"Chapter V","item_summary":"<p>This article states that providers of general-purpose AI models with systemic risk must follow certain rules. They must evaluate their models using standard protocols, identify and reduce systemic risks, and report any serious incidents to the AI Office and national authorities. They must also ensure their AI models and infrastructure are secure. Providers can use codes of practice to show they are following these rules until a standard is published. If they don't follow an approved code or standard, they must show they are complying in another way. 
Any information obtained must be kept confidential.<\/p>\n","item_notes":"","item_order":"55","chapter-article_from":["3461"],"article-recital_to":["2673","2674"],"title-article_from":[]},"_links":{"self":[{"href":"https:\/\/artificialintelligenceact.eu\/wp-json\/wp\/v2\/article\/3012","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/artificialintelligenceact.eu\/wp-json\/wp\/v2\/article"}],"about":[{"href":"https:\/\/artificialintelligenceact.eu\/wp-json\/wp\/v2\/types\/article"}],"version-history":[{"count":3,"href":"https:\/\/artificialintelligenceact.eu\/wp-json\/wp\/v2\/article\/3012\/revisions"}],"predecessor-version":[{"id":4590,"href":"https:\/\/artificialintelligenceact.eu\/wp-json\/wp\/v2\/article\/3012\/revisions\/4590"}],"wp:attachment":[{"href":"https:\/\/artificialintelligenceact.eu\/wp-json\/wp\/v2\/media?parent=3012"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}