Algorithmic Explainability and the Sufficient-Disclosure Requirement under the European Patent Convention

Artificial intelligence and its subfield machine learning differ from traditional programming. For this reason, coupled with its potential benefits to society in many arenas, artificial intelligence has been articulated as one of the key priorities in the European Union. Characteristics specific to artificial intelligence, such as models with increased accuracy and generalisation power, may accentuate issues of algorithmic explainability that can defy patentability. Accordingly, the article focuses on the legal requirements related to the ‘sufficient disclosure’ criterion under the legal framework for patents as one facet of deciding on the patentability of the invention, and it addresses potential solutions for overcoming issues of algorithmic explainability. The author argues that solutions introducing a system involving deposit of the algorithm, training data, or both might not be as effective a mechanism for tackling those issues as implementing a recognised certification system instead.

Keywords:

algorithm; explainability; patent; sufficient disclosure

1. Introduction

Artificial intelligence, or AI, and its subfield machine learning (hereinafter ‘ML’) hold potential to bring vital benefits to society. *1 Since ML differs from traditional programming in the way in which the program is built, it may entail issues of algorithmic explainability that are absent in traditional programming. Namely, algorithmic explainability represents a constellation of issues connected with the difficulty of explaining how the output has been generated from the input data. *2

Algorithmic explainability may create tensions with regard to the ‘sufficient disclosure’ criterion of the European Patent Convention *3 (hereinafter ‘EPC’). Under Article 83 EPC, the invention must be described clearly and completely, such that a person skilled in the art can carry it out.

Although computer programs are excluded from patentability under Article 52(2)(c) and (3) EPC if claimed as such, the exclusion does not apply to creations involving software – which could comprise AI, treated as ‘computer-implemented inventions’ under the EPC – that demonstrate a ‘further technical effect’. However, to be eligible for a patent under the EPC, any such creation nevertheless has to comply not only with the ‘invention’ requirement but with other criteria as well, including the ‘sufficient disclosure’ aspect. *4

The sufficient-disclosure requirement in patent law was designed before the emergence of AI. Therefore, inventions involving unexplainable algorithms might, for instance, comply with the ‘invention’ requirement but not the ‘sufficient-disclosure’ criterion, or they may even fail to fulfil both and, hence, be denied patentability under the EPC. This might favour trade-secret protection, leaving general knowledge unenriched. Alternatively, these inventions could be rendered public for use by everyone. Neither of these options encourages the development of inventions involving sophisticated ML.

This paper focuses on addressing the challenges that stem from algorithmic explainability in relation to the sufficient-disclosure requirement pursuant to the EPC. It presents support for the argument that recognition of certification could be a preferable approach for achieving balance between the incentive to innovate and patentability, when compared to introducing solutions that involve the deposit of the algorithm, the deposit of training data, or both. Such recognition could help remedy the algorithmic-explainability issue and alleviate the burden of meeting the ‘sufficient disclosure’ criterion under the EPC.

The argument relies on legal methods – analytical, descriptive, comparative, and historical. Within three sections, with their various subsections, primary legal sources, secondary ones, and case law are referred to in order to substantiate the claim articulated by the hypothesis of the article.

The scope of the article is restricted to the EPC; therefore, considerations of the issues outlined in the article outside the jurisdiction of the EPC exceed the ambit of the paper. Likewise, analysis of those aspects of a creation that must exist if it is to be considered an ‘invention’ under the EPC exceeds the scope of the article.

2. Machine learning

ML aims to facilitate self-learning operation of computers by recognising data patterns, constructing interpretive models, and enabling non-programmed predictions without built-in instructions. *5 In other words, ML focuses on finding the right features to build the right models – namely, programs or algorithms, trained on data sets, that achieve the right tasks. It is in this respect that ML differs from traditional programming: in ML, the program is constituted from the statistical correlations that the algorithm derives between the input data and the respective outputs. In traditional programming, in contrast, the rules are explicitly determined by humans, so the output results from the input data in alignment with previously programmed rules and models. *6
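By way of a simplified, purely editorial illustration (the spam-flagging task, threshold, and data below are assumptions, not drawn from any source cited in this article), the contrast can be sketched in a few lines of Python: in the traditional program the rule is written by a human, whereas in the ML variant the rule is derived statistically from example data.

    # Traditional programming: a human states the rule explicitly.
    def is_spam_rule_based(num_links: int) -> bool:
        return num_links > 3  # the rule is fixed in advance by the programmer

    # Machine learning: the 'rule' is derived statistically from example data.
    from sklearn.linear_model import LogisticRegression

    X = [[0], [1], [2], [5], [7], [9]]  # input: number of links per message
    y = [0, 0, 0, 1, 1, 1]              # output labels observed in the data
    model = LogisticRegression().fit(X, y)  # the program is learned, not written

    print(is_spam_rule_based(5), model.predict([[5]])[0])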

Types of ML algorithms range from those with defined functions to models that deploy neural networks and deep learning to achieve abstraction with deeper correlations and associations amongst data. *7 More sophisticated ML models offer greater accuracy and generalisation, and these are more appropriate for processing data of a heterogeneous nature, such as genetic data. In this regard, ML has a significant role in various scientific fields, among them health care, in which it contributes to image analysis and diagnostics. *8

However, the more sophisticated the model is, the less comprehensible, explainable, and explicable it becomes. This is the so-called ‘black box’ phenomenon. *9 Not all ML algorithms present a ‘black box’ problem. The lack of algorithmic explicability may arise from several factors: (a) the sophistication of the model; (b) quantities of input data that are too large for a human to immediately comprehend; and (c) deficiencies in the model or data. *10
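The gap can be made concrete with a small, hypothetical Python comparison (the data set and model sizes are editorial assumptions): a shallow decision tree can be printed as human-readable rules, whereas a trained neural network exposes only matrices of learned weights that resist the same reading.

    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)

    # An interpretable model: its decision process reads as explicit rules.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree))  # if/else thresholds a human can follow

    # A 'black box': the same task, but the logic sits in ~2,800 weights.
    net = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=2000,
                        random_state=0).fit(X, y)
    print(net.coefs_[0].shape)  # (4, 50) - no narrative to read off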

Issues with algorithmic explainability also pose challenges to compliance with Article 83 EPC – for instance, for process-patent claims in diagnostics. *11 This increases the tension between the advantages of ML, especially neural networks, and the desire for monopoly rights under the EPC.

3. The requirement of sufficient disclosure under the EPC

3.1. A general overview of the ‘sufficient disclosure’ criterion

The criterion of ‘sufficient disclosure’ is, in general, one of the essential prerequisites that has to be fulfilled, alongside those related to other aspects of patentability under the EPC (for instance, the creation has to qualify as an ‘invention’ and be ‘non-obvious’, ‘novel’, and ‘commercially applicable’). Therefore, ‘sufficient disclosure’ is just as fundamental for patentability within the EPC framework as those other criteria. While a computer program per se is not patentable if claimed as such, creations involving computer programs, or computer-implemented inventions (not excluding those that involve AI), may nevertheless be considered patentable if they present a ‘further technical effect’. *12

For this article, the fulfilment of the ‘invention’ criterion is not analysed in detail with regard to creations involving AI. This is because that criterion is not the nub of the issue addressed here and because, in the practice of the European Patent Office (hereinafter ‘EPO’) analysed further below, it is not evaluated in isolation: each patent claim under the EPC is considered on the basis of all the criteria mentioned. As the case law examined below elucidates, the evaluation of a claim might, for instance, identify deficiencies not solely in the fulfilment of the ‘invention’ criterion but also in meeting others, including ‘sufficiency of disclosure’, just as there may be defects only in satisfying the requirement that the invention be ‘sufficiently disclosed’. In this regard, the satisfaction of each patentability element under the EPC is evaluated separately. In light of the specifics of AI, meeting the criterion of ‘sufficient disclosure’ might present particular difficulties and, hence, merits special attention.

Delineating the requirements of the ‘sufficient disclosure’ criterion under the EPC, Article 83 states that the application ‘shall disclose the invention in a manner sufficiently clear and complete for it to be carried out by a person skilled in the art’. Article 83 EPC is linked with Article 84, which stipulates that ‘the claims shall define the matter for which protection is sought. They shall be clear and concise and be supported by the description’. Further on, in Article 100(b), the EPC states the grounds for opposition aimed at revoking a patent; among those grounds is non-compliance with Article 83.

The level of sufficiency required of the disclosure depends on the kind of patent protection claimed, for what, and in what magnitude: the monopoly conferred by the patent, as circumscribed by the claims, should correspond to the respective technical contribution to general knowledge. *13 Specifically, claims under the EPC can be divided into those for a product (apparatus, substance); a process, such as manufacture or working processes; a use (for instance, means adapted to realise the relevant function or steps in the case of computer programs); and a ‘product by process’, with a new product being obtained by means of the new process. *14

Sufficiency is achieved if (a) the description allows one to obtain the product (in cases of product‑patent claims); (b) it enables one to conduct the process (in cases of process-patent claims); or (c) the invention can be used for previously unknown purposes, or the stated technical effect can be credibly achieved (in cases involving use-patent claims). *15

Three aims are stated for the description: (1) to inform of the steps for realising the invention (per Article 83 EPC), (2) to support the claims (under Article 84), and (3) to disclose the invention (under Article 52). *16

Rule 42 of the Implementing Regulations *17 stipulates requirements for the description, generally foreseeing disclosure in writing. The ratio of ‘sufficient disclosure’ is to convince of realisability, not to actually carry out the invention (for instance, building and training the ML algorithm). Realisation necessitates (a) plausibility (not certainty) of reaching the outcome (the solution for the technical problem), ascertained on the basis of the description and supporting materials *18 ; (b) completeness (the ability for realisation to be carried out without an undue burden – it might involve simple verification tests that do not require additional experimentation); and (c) reproducibility (the invention being able to be repeated at the statistically expected frequency). *19

The sufficiency of the disclosure may be derived from the ‘application’, inclusive of any supporting documents, such as drawings, tables, and others. In this regard, the wording of Article 83 EPC is broader than that of Article 8(2) of the Strasbourg Convention *20 , or its predecessor *21 , which requires only the ‘description’ to disclose the invention. The term ‘application’ has allowed the description to be extended and the deposit of micro-organisms to be incorporated (see Article 28 EPC). *22

Furthermore, the wording of the ‘sufficient disclosure’ criterion set forth in the EPC, and that of the preparatory documents of the EPC, does not directly require legal or moral justification or explanation of the invention apart from technical realisability, unless this is specifically claimed. Therefore, for the EPC, algorithmic scrutability (related to the complexity of an algorithm’s structure and to its decision-making process) and intuitiveness (the relevance of particular criteria to the output or a decision) in the sense of providing reasons for a particular outcome (legally or morally well-justifiable outcomes) are not the decisive factors in evaluating the sufficiency of disclosure. *23 For the patent scheme of the EPC to be satisfied, the pivotal component is the technical explanation of the decision-making process, unless a specific claim is made otherwise. For instance, in T 1153/02, *24 a patent claim was filed for a computerised medical-diagnostics system able to interact with a patient without medical intervention. The application was rejected because ‘the claimed method is neither necessary nor sufficient for achieving a quick, efficient and accurate diagnosis by direct interaction with the patient’.

In conclusion, the respective justification may become a part of the examination of sufficiency of disclosure if the claim explicitly mentions or entails an inextricable requirement of verifying the specific technical implementation or application. *25 For instance, technical plausibility is assessed by examining the corresponding therapeutic application – the efficacy of the invention in relation to purpose-limited medical-use claims. *26 Nonetheless, generally, moral and legal justification is evaluated under Article 53(a) EPC (the ‘ordre public and morality’ criterion) and Article 57 EPC (the ‘industrial application’ criterion).

3.2. A detailed picture of the criteria related to ‘sufficient disclosure’

In respect of ‘clear’ disclosure, sufficiency entails precisely outlining, within the lines of the claims, (a) all the crucial elements, (b) their function, (c) their internal links, and (d) the ultimate result (without ambiguity, vague expressions, undefined or generally unaccepted terms, or details buried in other information). All technical steps and proper testing methods needed for achieving the outcome must be reflected, without significant inconsistencies. *27 For instance, patent application PCT/EP2019/068722 *28 , for simulation of patients developing medical conditions in an AI-based setting, was initially rejected. The rejection was based also on the consideration that ‘the description merely talks about image-based and non-image information’ and that there was a lack of information ‘to establish an increased functionality suitable data set credibly’. It should be noted that the EPC also allows the description to be disclosed, alternatively, in publicly available documents providing clear reference, as supporting material. *29

For inventions involving ML, the disclosure depends on the invention. Namely, if the inventive contribution is in the algorithm, the algorithm should be disclosed, whereas if it lies only in the data, the algorithm does not need to be disclosed. Accordingly, the steps to construct the model, the training process, and the respective training data should be disclosed. The substantiation here is that disclosing only the decision process does not guarantee identical repetition. However, the inventor has discretion to judge the means and the quantity of data deemed necessary for realising the invention without undue burden. *30

In conclusion, there is currently no requirement to disclose the training data in the form of a library or the algorithm in the form of the source code unless the invention could not be sufficiently disclosed otherwise. Hence, the description and the supporting documents’ working examples should (if possible, in written form) (1) describe the invention and, (2) depending on the claims, include (a) the steps to construct and train the model *31 and those to obtain and select the data; (b) the architecture of the model; (c) the decision-making process or other relationships between inputs and outputs; (d) the sequence of steps applied *32 ; (e) the essential parameters, weights, and functions, with their mutual connections; (f) the type and quantity of data involved; and (g) the source of data *33 and other elements.
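Purely by way of editorial illustration – the data set, model, and hyperparameters below are assumptions, not drawn from any patent application or source cited in this article – a minimal Python working example might mirror items (a)–(g) roughly as follows:

    from sklearn.datasets import load_breast_cancer      # (g) source of the data
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # (f) type and quantity of data: 569 tabular records, 30 numeric features
    X, y = load_breast_cancer(return_X_y=True)

    # (a) steps to obtain and select the data: a fixed 80/20 train/test split
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # (b) architecture and (e) essential parameters: one hidden layer of 16
    # units, logistic activation, at most 1,000 training iterations
    model = MLPClassifier(hidden_layer_sizes=(16,), activation='logistic',
                          max_iter=1000, random_state=0)

    model.fit(X_train, y_train)  # (d) the sequence of training steps applied

    # (c) the input-output relationship, evidenced on held-out data
    print('held-out accuracy:', model.score(X_test, y_test))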

The criterion of ‘completeness’ requires that the description be scrupulous and disclose the underlying teaching of the invention entirely. *34 Namely, terms related to data processing that possess a technical component (for instance, ‘kernel’) should be outlined in detail and comprehensively reflect the system architecture, its internal mechanics, and the associated interaction. *35 Additionally, for inventions employing ML, a specific, appropriate ML model should be disclosed if claimed as such, with avoidance of such vague expressions as ‘artificial neural network’ and the like. *36 Analogously, suitable input, training, and testing data should be mentioned explicitly, again without unspecified indications such as ‘wide range of seekers for a healthcare’. *37

In cases of minor errors (such as inadequate definition of a parameter) that can be remedied via application of general knowledge or simple verification tests that do not amount to an undue burden, the risk of harm does not influence the completeness demonstrated, since patentability is not contingent upon readiness for production. *38 An undue burden is deemed to exist when the application foresees (a) reliance on chance; (b) the implementation of the functional features (in claims defined by way of functional features); (c) the conducting of ethically debatable and/or time-intensive tests (if the features could have been defined otherwise – for instance, for ascertaining whether there is a pharmaceutical effect or finding the technical solution to the problem); *39 and (d) determination of a suitable method for testing data sets in cases with a large number of potential candidates. *40 It should be concluded that the aspects mentioned here should be considered also in cases involving algorithmic-explainability issues.

As the foregoing analysis confirms, the requirement of ‘sufficient disclosure’ has to be met irrespective of the satisfaction of other conditions for patentability under the EPC – among them the requirements for the creation to qualify as an ‘invention’ and be ‘non-obvious’, ‘novel’, and ‘commercially applicable’. Hence, even though a creation involving AI might demonstrate a ‘further technical effect’ and qualify as an ‘invention’, compliance issues might still arise in relation to sufficiency of disclosure, owing to the specifics of AI. Possible approaches for overcoming these hurdles therefore deserve detailed analysis.

4. Potential solutions to tackle the algorithmic explainability issue under Article 83 EPC

4.1. Deposit of the algorithm

As a solution under Article 83 EPC, some scholars have suggested *41 the introduction of an algorithm-deposit system similar to the system applied for micro-organisms, including mechanisms under the Budapest Treaty. *42 It should be noted that inventors might not find this proposed solution a preferable way to tackle the issue, for the reasons explained below.

The deposit system for micro-organisms was developed to deal with the difficulty of describing, in words alone, a micro-organism that is not publicly available. If, for instance, an organism has been isolated from the soil, mutated, and further selected, a written description (of the strain itself and of the further-selection process) could not in itself guarantee reproducibility. *43 The canonical example involves cell lines, which, compared with prokaryotic cells, are more complex and visually, morphologically very similar. It would prove exceptionally cumbersome to describe a cell line such that a person skilled in the art could obtain it in practice. Nonetheless, this holds only where the cell line is not a combination of various cell lines or a substance that is not dependent on its properties, the end product, or the method of manufacture. That said, not every invention involving micro-organisms requires a deposit to satisfy Article 83 EPC. *44 A deposit merely supplements the written description; it is not a substitute for it. *45

On this basis, it can be concluded that the difficulty of describing the micro-organism lies in its morphological similarities with others, which cannot be comprehensively expressed in words without a tangible sample. Proceeding from this reasoning, it can be found that the deposit system for micro-organisms differs from the deposit system proposed for algorithms to tackle the issue with explainability. Namely, under the EPC, micro-organisms that have their origins in nature, without additional non-routine modifications, technical extraction, or production, are natural phenomena. *46

Thus, it should be concluded that the hurdles to describing a novel natural phenomenon (in this case, a particular micro-organism) lie either (a) in its randomness, which cannot be precisely described with reference to existing knowledge – for instance, there might be a lack of appropriate genetic-sequence data related to the functionality of an organism, as is evident in DNA coding with some antibodies *47 – or (b) in its visual, morphological similarities with other objects, including altered versions thereof. In other words, novel micro-organisms cannot be sufficiently described either because there is nothing tangible to compare them with or because there is so much to compare them with that one cannot precisely distinguish those in question from the rest.

In summary, the deposit of micro-organisms serves the purpose of ‘distinguish[ing] from others’. The deposit may also serve the purposes of trials, especially with regard to broad claims centred on particular inventive results. *48 In contrast, the proposed solution of depositing algorithms as a way to tackle the explainability issue seems to resemble partial substitution for the written description. Whilst an inventor in the case of micro-organisms can explain and describe the inventive and technical concepts that underlie the invention, merely reflecting them more clearly with the aid of the deposit, the issue with unexplainable algorithms is bound up with the very ability to explain to others the inventive and technical concepts behind the algorithm, not solely with the impossibility of reflecting a visible conceptual distinction from other algorithms. Hence, in the case of algorithms, the proposed deposit system would function not purely for visualisation but, rather, for constructing a major part of the substance of the description.

It should be observed that the proposed deposit system imposes an undue burden on a person skilled in the art with regard to realising the invention. It would initially require said expert to understand the working principles of the invention deposited, so as to be able to implement it repeatedly. The requirement of Article 83 EPC cannot be fulfilled if the written description is absent or contradicts the deposited material. In the scenario proposed, the written description does not wholly and correctly reflect the sample deposited. The same is true when the invention can be realised only upon multiple requests from the depositary or through know-how in excess of general public knowledge in the respective technical field. *49 The stance should be taken that the deposit system recommended may well not be a preferable solution, since the written description must still be intrinsic to the disclosure.

Furthermore, as noted above, the program in contexts of ML also comprises the correlation between the data and the output. *50 Hence, it should be found that deposit of the algorithm as such, without the written description, cannot provide sufficient guidance for a person skilled in the art in how to carry out the invention. Doing so would require, in addition, understanding of the logic underlying the program, the training and input data, and those correlations between them that are an essential part of the output. *51 In this regard, the deposit of the algorithm on its own does not provide the crucial information on the invention, since the data comprised could not be considered general knowledge and could not be guessed without an undue burden in every case. *52 In other words, in contrast to micro-organisms, merely displaying the algorithm does not automatically render it comprehensible. Hence, it must be concluded that algorithm deposits may not allow tackling the algorithmic-explainability issue and might not afford the preferable balance between the incentive to innovate and the EPC framework in the relevant cases.

4.2. Deposit of the training data

Another proposed solution is depositing the training data. The ratio behind this suggestion is that said mechanism should facilitate transparency of the output generation by serving as a partial substitute for the explanation in words, with the publicly accessible deposit being starting material analogous to sequences of biological materials. *53

It should be noted, firstly, that considerations similar to those mentioned in the previous subsection also pertain to the proposed deposit of the training data. In a nutshell, depositing only the training data would not entirely reveal the invention except when the invention lies solely in the training data, for the further reasons set out here. The ML model, as outlined before, comprises the correlation between the data and the output. *54 Therefore, a deposit of purely the training data would neither explain how the particular output has been generated nor, in consequence, suffice for meeting the requirement of disclosing the entire algorithm *55 in cases wherein the inventive step lies not in the training data but in the algorithm. Also, depositing only the training data, without actual input data, would not demonstrate how the invention would behave outside the testing environment and whether it would function across the entire range claimed. *56

Furthermore, the practice of the EPO does not require revealing all the training and input data; rather, one must precisely describe and specify the data that would be considered suitable for construction of the claimed invention. *57 Additionally, the training method chosen and the process should be described. *58 It should be concluded, then, that it is left to the discretion of the inventor whether to disclose the full list of data in the libraries, *59 with all the weight values and parameters; *60 to make reference to an existing relevant database *61 ; to include an indication of the appropriate data, such as ‘the invention can utilize data from repositories such as the Autism Genetic Resource Exchange’ *62 ; and/or to outline the basic features of the data, whether in such a form as ‘data records describing telecommunication network events’ *63 or otherwise.

In summary, a training-data deposit mechanism may not satisfy Article 83 EPC in cases wherein the inventive step extends beyond the training data and the general knowledge needed for building the product (as set forth in product claims). In other words, the expert would still need to determine the algorithmic components and the input data.

Even a combination of the two – depositing both the algorithm and the training data – would not reveal the input data or substitute for the necessity of a written description. Additionally, as outlined above, current EPO-related case law does not require disclosing the algorithm, training data, and input data in deposit form. In fact, the deposit might, similarly to inventions involving micro-organisms, *64 exert a chilling effect on its actual usage. Namely, inter alia, a deposit could provide too great a competitive advantage to others. *65

4.3. Certification

It can be proposed that certification might offer a solution that does address the algorithmic-explainability issue under Article 83 EPC. This solution could be an alternative to, for instance, (a) the decomposition of algorithms or the construction of model-agnostic interpreters, *66 which require additional resources, or (b) reliance on limited general knowledge for product patents (which may not reproduce the algorithmic logic *67 ).
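To give a concrete sense of what a ‘model-agnostic interpreter’ of the kind referred to in (a) can look like – the data set and model below are editorial assumptions, offered only as a simplified Python sketch – permutation importance probes any fitted model purely from the outside, by shuffling one input feature at a time and measuring the resulting drop in performance:

    from sklearn.datasets import load_wine
    from sklearn.inspection import permutation_importance
    from sklearn.neural_network import MLPClassifier

    X, y = load_wine(return_X_y=True)
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=0).fit(X, y)

    # No access to the model's internals is required: only its predictions
    # are used, which is what makes the interpreter 'model-agnostic'.
    result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
    print(result.importances_mean)  # per-feature effect on the model's score

As the article notes, such interpreters demand additional resources and yield only an approximation of the model’s logic, which is part of the motivation for the certification route explored below.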

To delineate: certification is currently used, for instance, for medical devices *68 to verify and approve the appropriateness of the device for its intended purpose. Medical devices involving AI are also certified in a manner that tackles algorithmic explainability. Thus, apart from compliance with standards, certification includes testing the device with a variety of testing data to, for example, examine its performance across the intended range and ascertain causality in a supervised environment. *69

Additionally, certification for certain-risk AI-based systems that are not yet placed on the market in the European Union (EU) but are intended to be has already been proposed by the so-called AI Act. *70 Intended for systems posing certain risks, the certification procedure involved could entail mandatory confidential disclosure of the source code and underlying data to competent authorities and provision of a suitable testing environment.

In this regard, the AI Act mutatis mutandis follows the certification mechanism implemented for medical devices as a precondition for certain risk-linked algorithms, and it expresses an intention to render this a part of the public order. It follows that those AI systems that both are intended for placement on the market in the EU and are to be categorised as AI systems posing a particular risk will be subject to an obligation of undergoing certification – and patent examination if a patent is desired.

Although criticism of the certification proposed as a mechanism under the AI Act *71 has emerged, it should be noted that this certification could serve as a starting point and could, mutatis mutandis, be adjusted and recognised within the patent framework, at least for algorithms with issues of explainability, to alleviate the problem of compliance with Article 83 EPC. One could look, for instance, at T 1164/11, *72 in which the EPO concluded that a patent might be granted also in cases in which it has been demonstrated convincingly and with examples that a surprising technical effect is achieved by means of the claimed device, even when the underlying scientifically sound substantiation is unknown and inexplicable. Certification may provide such a convincing demonstration. Nonetheless, the rest of the description still must reflect a realisable invention. *73

In conclusion, the EPO considers certification to be an appropriate, sufficient mechanism for providing evidence of a realisable invention. *74 Certification could also be deemed an objective and impartial approach to demonstrating the intended result, in contrast to statements by the inventor or by closely or permanently involved contributors, whose extensive knowledge of the invention might create subjectivity issues. *75 For instance, many aspects of the invention may have become so apparent to a person with intimate knowledge of it that a vague explanation results. Product, product-by-process, and use-patent claims could avail themselves of this mechanism, hence addressing such issues.

Furthermore, it should be noted that certification may, in fact, provide a crucial testing environment, probably extending to the form of a regulatory sandbox involving multiple actors, that could help prove the concept or provide preliminary verification of the sufficiency of the description. This method of certification for overcoming algorithmic explainability would not comprise purely a simulation; in addition, it would complement the initial vision of practical execution as imagined in the mind of an expert. *76 To this end, the certification, similarly to that proposed by the AI Act, could involve confidential disclosure of the training, testing, and input data used and of the source code, if this is agreed upon by the relevant parties and deemed necessary. The approach could prevent the disclosure of more information than is necessary for enabling a person skilled in the art to execute the invention, through evaluation – on a preliminary-examination basis and in a confidential environment – of what quantity of data the description should provide, and in which form. This approach could help keep the rest of the details of the invention protected, for instance, as a trade secret. Thereby, it should not confer too great a competitive advantage on others when compared to the scenario of depositing and disclosing the algorithm along with all the data involved. It should be envisioned that this suggested path could aid particularly in cases of process patents and use claims.

Furthermore, this approach could assist in ascertaining the complexity of the invention, since the EPO allows the ‘person skilled in the art’ to be a team of specialists. *77 With the mechanism proposed here, the inventor would know in advance whether, even with the involvement of a team of specialists, realisation of the invention would impose an undue burden, and would thus be able to amend the description accordingly.

The criterion of ‘sufficient disclosure’ does not expressis verbis require that the invention provide a fair outcome generated by means of the algorithm if the claim does not specifically indicate this. Nonetheless, justification for the outcome could be evaluated under Article 53(a) EPC, on the ordre public and morality criteria, and in line with Article 57 EPC (‘Industrial application’).

To summarise, the certification proposed under the AI Act overall foresees a more extensive evaluation than the patent examination. Given that that certification and the certification proposed here for patent-application purposes are congruent in many respects, the two could be combined, and in their unified form they could be conducted by a legitimate central body. This approach might reduce the administrative burden involved – a burden that could be large, since the language of the AI Act suggests that all AI systems that are intended to be placed on the market in the EU (many member states of which are EPC signatories *78 ) and embody the specified risk must undergo certification. *79 A unified certification scheme could provide either a single-purpose certificate or several official public certificates, as dictated by the aim, the content, and the evaluation requirements, especially since the EPO allows supporting an application with other, clearly referenced documents. *80

Finally, it should be noted that the certification for patent purposes is proposed as voluntary; therefore, it would not create an additional, unfavourable burden on inventors, especially in fields with existing certification systems in place (for instance, that of medical devices). Moreover, the suggested certification could help reduce the administrative burden for inventors and patent examiners alike, by facilitating rapid rectification of the deficiencies identified. Albeit designed especially to overcome the challenges with unexplainable algorithms, this certification could be applied voluntarily also by inventors in cases involving explainable algorithms.

5. Conclusions

As noted above, although computer programs per se are not patentable if claimed as such under the EPC, creations incorporating AI may be considered for patentability and classed as an ‘invention’ if they demonstrate a ‘further technical effect’. Under the EPC, all the facets of patentability – including ‘invention’, ‘non-obviousness’, ‘novelty’, ‘commercial applicability’, ‘sufficiency of disclosure’, and others – are evaluated separately. Therefore, creations involving AI might simultaneously display deficiencies not only in satisfaction of, for instance, the ‘invention’ requirement but also with regard to other aspects of patentability under the EPC. Vice versa, even though a creation involving AI might pass the ‘further technical effect’ threshold and qualify as an ‘invention’, that does not automatically mean that the other criteria for patentability under the EPC are met. AI may bring with it, in particular, issues connected with algorithmic explainability. For instance, in genetics, the nature of the associated data and the limited capacity of simpler ML models lead to tension with regard to the possibility of complying with Article 83 EPC. In light of the value AI brings for facilitating human prosperity, it is crucial to overcome the problem of algorithmic explainability so as to support incentives to patent inventions involving AI under the EPC and, through this, enrich general knowledge. Otherwise, patentability difficulties arising from algorithmic unexplainability may lead, for instance, to inventors’ opting instead for trade-secret protection. Ultimately, scientific progress could thus be impeded.

Although the criterion of ‘sufficient disclosure’ under Article 83 EPC leaves room for supporting the description with other documents, and even with a deposit in particular cases involving micro-organisms, the language does not foresee substitution for the written description. Therefore, it can be deemed that the solutions heretofore proposed to address algorithmic unexplainability – introducing the deposit of the algorithm, the training data, or both – might not be a preferable way to fulfil the requirements of Article 83 EPC from the standpoint of an inventor.

Considering that certification is known in other fields, proposed in the AI Act, and permitted under the EPC, it can be concluded that certification could be accepted as a voluntary approach primarily for overcoming difficulties with algorithmic explainability and patentability under the EPC as well. Criticism has been raised of the certification proposed by the AI Act and of certification, for example, for medical devices; however, since the certification proposed by the AI Act and what is recommended here are in many aspects closely aligned, it should be regarded that the certification suggested under the AI Act might be taken mutatis mutandis as a suitable starting point.

Upon the making of the appropriate adjustments, the certification proposed here could preferably be considered, in voluntary and centralised form, for patent purposes also. In this form, the proposed certification would not constitute an additional, non-preferred administrative burden on inventors – a factor that may be especially relevant for fields with existing certification systems, such as the medical-devices domain. In summary, it should be reiterated that the proposed mechanism is suggested as a voluntary instrument principally to overcome patentability hurdles that face inventions with algorithmic-explainability issues, while also being available for other inventions involving algorithms, at the discretion of the inventor.

Its legal implementation would not be prohibitively complex. The EPC allows the use of expert opinions, certificates as supporting documents (see Article 83), and evidence (see Article 117). Hence, the proposed certification would merely require recognition, rather than legal amendments to the EPC, and would not dilute the EPC framework.
