05.05.2023

unyer Digital / Tech and Data – The world and AI: ChatGPT – Challenges in France, Austria, Germany and Italy | Newsflash

unyer brings together Fidal, KWR, Luther and Pirola, law firms combining global expertise and local insight in France, Austria, Germany and Italy. The use of ChatGPT requires an integrated cross-border approach, especially in company groups, to reconcile a fast-moving business approach with the most pressing legal risks. This newsletter provides summary insights for all four countries, as well as further detail on country-specific practices and regulatory investigations, from Luther for Germany, Fidal for France, Pirola for Italy and KWR for Austria, combined with practical recommendations.

ChatGPT Scales Up AI-Related Issues

Legislation around AI has been developing for the past several years. The European Union has taken up this challenge by proposing various regulations, mainly through:

  • A proposed regulation in April 2021 (“AI Act”), which aims to harmonise the commissioning and use of AI systems in the European Union and prohibit certain practices;
  • A proposal for a directive in September 2022, whose objectives are: (i) to standardise the rules on access to information and to ease the burden of proof with regard to damage caused by AI systems, (ii) to establish broader protection for victims (whether individuals or companies) and (iii) to promote the AI sector by strengthening guarantees.

At Member State level, the authorities have also started to take their first actions:

France: The French data protection supervisory authority, the Commission Nationale de l’Informatique et des Libertés (CNIL), has taken up the subject and proposed dedicated AI documentation detailing, in particular, how the General Data Protection Regulation (“GDPR”) applies and is to be interpreted in the context of the provision of AI solutions. The publication of this documentation is part of a broader CNIL initiative on these issues, notably following the creation of a department dedicated to AI. In August 2022, the French Council of State indicated that it encouraged the reinforcement of the CNIL’s powers and the evolution of its role so that it would become the national supervisory authority responsible for the regulation of AI systems provided for by the AI Act. In other words, the CNIL will be strongly involved in the regulation of AI and is expected to take a diligent and educational approach both to the complaints filed against ChatGPT and in its general positions on AI.

The Case Study of ChatGPT

ChatGPT, a generative AI language model from OpenAI, has captured worldwide attention since late last year. Microsoft recently announced a USD 10 billion investment in OpenAI and has already integrated ChatGPT into its search engine Bing. ChatGPT is probably on its way to becoming a competitor to the Google search engine, which itself is powered by machine learning and natural language processing.

In the web-based application of ChatGPT, users can enter their questions or commands to the AI in a search field. Such requests to ChatGPT are called “prompts”. A prompt can, for example, be a request to prepare an email, write a newsletter, summarise a long text or translate a given text into another language. Based on the prompt, ChatGPT provides a corresponding output, which essentially consists of text. ChatGPT is, however, also able to write several lines of software code that users can copy and use for their own software development. If users are not satisfied with the output, ChatGPT offers a function to resubmit the request so that it can generate a different output.

OpenAI offers not only ChatGPT itself but also an API for developing AI applications with OpenAI’s technology. Businesses may, for example, develop a ChatGPT-based customer support chatbot for their own website, as sketched below.
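
To illustrate how such an integration works in practice, the following minimal sketch calls OpenAI’s chat completion endpoint using the openai Python package as it was available in early 2023; the API key placeholder, the model choice and the system prompt for the fictitious “ExampleCorp” are illustrative assumptions, not a production-ready implementation.

```python
# Minimal sketch of a customer-support chatbot built on OpenAI's API,
# using the "openai" Python package as available in early 2023.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

def answer_support_question(question: str) -> str:
    """Send a customer question to the chat completion endpoint."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a polite customer support assistant for ExampleCorp."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_support_question("How do I reset my password?"))
```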

ChatGPT can be used for private matters, but also in a business context: checking what information is available about a specific company; elaborating a strategy for a certain project; preparing a social media strategy; optimising internal processes, etc.

At the end of November 2022, OpenAI released ChatGPT based on its GPT-3.5 model. The underlying deep-learning model, which has around 175 billion parameters, was trained on a very large data set. This data came from various sources, for example, web scraping, books and Wikipedia. Since ChatGPT is not directly connected to the internet, it cannot access external information, only its own training data. The model was trained with the aim of predicting the next word in a sequence. The words that ChatGPT strings together are thus ultimately based on probabilities calculated by the deep-learning model.
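
As a purely illustrative toy example of this principle, the sketch below samples the next word from a hand-made probability distribution; the vocabulary and weights are invented for the illustration, whereas a real large language model computes such probabilities over tens of thousands of tokens using its learned parameters.

```python
# Toy illustration of next-word prediction: each candidate word gets a
# probability and the output is sampled accordingly. The candidates and
# weights below are invented purely for illustration.
import random

def predict_next_word(context: str) -> str:
    # Hypothetical distribution a model might assign after this context.
    candidates = {"contract": 0.40, "agreement": 0.30, "policy": 0.20, "banana": 0.10}
    return random.choices(list(candidates), weights=list(candidates.values()), k=1)[0]

sentence = "Please review the attached"
sentence += " " + predict_next_word(sentence)
print(sentence)  # e.g. "Please review the attached contract"
```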

In March 2023, OpenAI released its GPT-4 version based on a more advanced deep-learning model. GPT-4 is currently only accessible to ChatGPT Plus users paying for access. Interestingly, the free Bing search engine already uses GPT-4 to provide search results.

When a company uses ChatGPT, the following aspects must be taken into account:

I. ChatGPT and Data Protection

If information is provided in the prompts that directly or indirectly identifies a person, personal data is processed via ChatGPT.

The use of ChatGPT becomes particularly challenging from a data protection perspective if companies integrate it into their website or build their own applications based on the language model technology available through the API and offer them to their customers. In this case, corresponding agreements under data protection law, such as a data processing agreement (DPA) or a joint controller agreement, must be concluded with OpenAI. OpenAI offers a Data Processing Addendum for API users. However, according to OpenAI, it neither signs DPAs provided by users nor accepts changes to its own DPA. Moreover, companies need to consider that the conclusion of a DPA with OpenAI is by no means sufficient. Because data is transferred to the US, the EU Standard Contractual Clauses must also be concluded. Additionally, a data protection impact assessment has to be considered. Companies should pay attention to whether additional safeguards are described in the DPA and must conduct a transfer impact assessment in accordance with the requirements of the EU Standard Contractual Clauses. If companies intend to integrate ChatGPT into their HR processes (e.g., for the selection of applicants), it should be ensured that no automated decision-making in this context leads to unlawful data processing without human intervention (Art. 22 GDPR).

Company employees should be sensitised not to enter prompts into ChatGPT that contain personal data of customers, suppliers, business partners or work colleagues. If they do so, they could breach confidentiality obligations that arise, among other things, from the GDPR and that are regularly imposed on them by their employers or, where applicable, business partners.
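
One technical safeguard that companies might consider alongside such awareness training is a simple redaction step that strips obvious personal data from prompts before they leave the company. The sketch below is a minimal illustration; the regular expressions are deliberately simplistic assumptions and would not reliably catch all personal data, so they cannot replace organisational measures or legal review.

```python
# Minimal sketch of a client-side safeguard: redact obvious personal
# data (e-mail addresses, phone numbers) from a prompt before it is
# sent to an external AI service. The patterns are simplistic
# illustrations and deliberately incomplete.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
}

def redact_personal_data(prompt: str) -> str:
    """Replace matches of each pattern with a redaction marker."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact_personal_data(
    "Draft a reply to Ms Muster (maria.muster@example.com, +43 660 1234567)."
))
# -> "Draft a reply to Ms Muster ([EMAIL REDACTED], [PHONE REDACTED])."
```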

Austria: Under Austrian law, employees are already legally obliged to maintain confidentiality about personal data according to Sec. 6 of the Austrian Data Protection Act. Such personal data could be, for example, the telephone number of the person concerned in addition to their name. Violations of this obligation may be sanctioned with administrative fines and may lead to damages and consequences under employment law (e.g. dismissal).

According to the new version of OpenAI’s Terms and Conditions of 14 March 2023, the input and output data of API users will not be used for the further development of OpenAI’s large language model. This is different, however, when using ChatGPT itself. The input and output data will generally be used by OpenAI for the further development of ChatGPT unless the user fills out a form objecting to any further use of the input and output data. In the meantime, OpenAI has also included an option to object via a button in the settings section. Companies should be aware that it cannot be ruled out that personal data entered by their employees will appear as output to another user of ChatGPT. This poses the risk of a data protection breach.

Some data protection regulators have launched investigations against OpenAI or even temporarily banned ChatGPT in their countries. The latter was done by the Italian data protection supervisory authority, whereupon OpenAI blocked the service in Italy. Data protection supervisory authorities in Austria, France and Germany have already published initial legal assessments, while the European Data Protection Board has decided to launch a dedicated task force to foster cooperation and to exchange information on possible enforcement action by the national data protection authorities. While some authorities, like the Austrian Data Protection Authority, are currently only following the developments, others have already initiated action:

Germany: In Germany, the Data Protection Conference (DSK) is currently examining whether OpenAI with ChatGPT violates the provisions of the GDPR. The Federal Data Protection Commissioner Ulrich Kelber does not exclude the possibility of blocking ChatGPT in Germany. However, this cannot be decided by the Federal Data Protection Commissioner alone, but falls under the jurisdiction of the data protection authorities of the German federal states.

France: French media (Les Echos) revealed in early April that the CNIL has already received at least five complaints about ChatGPT, including one from a French MP. These complaints have led the CNIL to open an investigation into ChatGPT’s compliance with the GDPR.

According to media reports, the complaints are based on the following:

  • alleged lack of transparency in certain terms and conditions of use and an incomplete privacy policy;
  • alleged lack of fairness, because the information generated by ChatGPT is sometimes incorrect, which also raises questions about the scope of application of the principle of data accuracy (in other words, do the answers provided by ChatGPT concerning a natural person necessarily have to be completely accurate?);
  • alleged absence of a legal basis for the processing: can OpenAI rely on legitimate interest? (Even if legitimate interest could be retained, in our opinion it would still be necessary to prove that the processing does not harm the interests, rights and freedoms of the persons concerned, which it will be up to OpenAI to demonstrate.)

The CNIL has not yet made any communication following these complaints.

Italy: The Italian supervisory authority (Garante per la protezione dei dati personali), by orders of 30 March and 11 April 2023, imposed an immediate temporary limitation on the processing of Italian users’ data by OpenAI. The Italian SA highlighted that no information is provided to users and data subjects whose data are collected by OpenAI; more importantly, there appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies. Finally, the Italian SA emphasised in its order that the lack of any age verification mechanism exposes children to receiving responses that are absolutely inappropriate to their age and awareness, even though the service is allegedly addressed to users aged above 13 according to OpenAI’s terms of service.

OpenAI had to comply by 30 April with the measures concerning transparency, the rights of data subjects – including users and non-users – and the legal basis of the processing for algorithmic training relying on users’ data, namely:

  • Concerning the privacy notice, OpenAI must prepare and make available on its website a transparent privacy notice illustrating the methods and logic underlying the processing of data necessary for the operation of ChatGPT, as well as the rights of users and non-user data subjects.
  • For data subjects connecting from Italy, the privacy notice must be presented prior to completion of registration and, likewise before completing registration, they must be required to declare that they are of legal age (age verification).
  • Regarding the legal basis of the processing of users’ personal data for training algorithms, the Italian SA ordered OpenAI to eliminate any reference to the performance of a contract as the legal basis and to indicate instead, based on the principle of accountability, consent or legitimate interest as the prerequisite for using such data, without prejudice to the SA’s power to verify and investigate this choice subsequently.
  • Concerning the exercise of data subjects’ rights, tools must be made available to data subjects, including non-users, to request the rectification of personal data concerning them that was generated inaccurately by the service, or the erasure of such data if rectification is not technically possible. OpenAI must, moreover, allow non-user data subjects to exercise, in a simple and accessible way, the right to object to the processing of their personal data used for the training and development of algorithms, and recognise similar rights for users if it identifies legitimate interest as the legal basis of the processing.
  • Regarding age verification measures, the Italian SA ordered OpenAI to immediately implement an age gating system for the purpose of signing up to the service and to submit, by 31 May, a plan for implementing, by 30 September 2023, an age verification system to filter out users aged below 13 as well as users aged 13 to 18 for whom no consent has been given by the holders of parental authority.
  • Finally, OpenAI will have to conduct an information campaign by 15 May, through radio, TV, newspapers and the internet, in order to inform individuals about the use of their personal data for training algorithms.

On 28 April, the Italian SA gave a further update, explaining that it had received a letter from OpenAI describing the measures implemented in order to comply with the order issued by the SA on 11 April. OpenAI explained that it had expanded the information provided to European users and non-users, that it had amended and clarified several mechanisms, and that it had deployed appropriate solutions to enable users and non-users to exercise their rights.

Based on these improvements, OpenAI reinstated access to ChatGPT for Italian users, and the Italian SA agreed with this approach.

More specifically, OpenAI:

  • drafted and published, on its website, an information notice addressed to users and non-users, in Europe and elsewhere, describing which personal data are processed under which arrangements for training algorithms, and clarifying that everyone has the right to opt out from such processing;
  • expanded its privacy policy for users and made it accessible from the sign-up page prior to registration with the service;
  • granted all individuals in Europe, including non-users, the right to opt out from processing of their data for training of algorithms, also using an online, easily accessible form;
  • introduced a welcome back page in case of reinstatement of the service in Italy containing links to the new privacy policy and the information notice on the processing of personal data for training algorithms;
  • introduced mechanisms to enable data subjects to obtain the erasure of information that is considered inaccurate, explaining that, at present, it is technically impossible simply to correct inaccuracies;
  • clarified in the information notice for users that it would keep on processing certain personal data to enable the performance of its services on a contractual basis. However, it would process users’ personal data for training algorithms on the legal basis of its legitimate interest, without prejudice to users’ right to opt out from such processing;
  • implemented a form to enable all European users to opt out from the processing of their personal data and thus to filter out their chats and chat history from the data used for training algorithms;
  • added, in the welcome back page reserved for Italian registered users, a button for them to confirm that they are aged above 18 prior to gaining access to the service, or else that they are aged above 13 and have obtained consent from their parents or guardians for that purpose;
  • included the request to specify one’s birthdate on the service sign-up page to block access by users aged below 13 and to request confirmation of the consent given by parents or guardians for users aged between 13 and 18 (a simple sketch of such an age gate follows below).
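
For illustration only, the following minimal sketch shows the kind of birthdate-based gating logic described above, using the 13- and 18-year thresholds from the order; the function, its parameters and the flow are assumptions made for the example and do not reflect OpenAI’s actual implementation.

```python
# Minimal sketch of a birthdate-based age gate: block users under 13,
# require parental consent for users between 13 and 18. Illustrative
# only; not OpenAI's actual implementation.
from datetime import date
from typing import Optional

def age_gate(birthdate: date, parental_consent: bool,
             today: Optional[date] = None) -> bool:
    """Return True if the user may access the service."""
    today = today or date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    if age < 13:
        return False              # block sign-up entirely
    if age < 18:
        return parental_consent   # allow only with parental consent
    return True                   # adults may proceed

ref = date(2023, 5, 5)
print(age_gate(date(2012, 6, 1), parental_consent=True, today=ref))  # False: under 13
print(age_gate(date(2008, 6, 1), parental_consent=True, today=ref))  # True: 14, consent given
```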

The Italian SA welcomes the measures OpenAI implemented and calls upon the company to comply with the additional requests laid down in its 11 April order. This applies in particular to implementing an age verification system and planning and conducting an information campaign to inform Italians of what happened as well as of their right to opt out from the processing of their personal data for training algorithms.

In any case, the Italian SA declared it will carry on its fact-finding activities regarding OpenAI under the umbrella of the ad-hoc task force that was set up by the European Data Protection Board.

II. ChatGPT and Copyright

Copyright-protected works require a personal intellectual creation by the author. From an EU perspective, such a personal intellectual creation can only be based on human achievement (see, for example, Section 2 (2) of the German Copyright Act or various provisions in Articles L112-1 et seq. and L113-1 et seq. of the French Intellectual Property Code). In other words, results created by AI-based applications such as ChatGPT are generally not protected by copyright. ChatGPT can therefore not be the author of the output generated by the AI language model.

Italy: Under current Italian law, artificial intelligence systems have no legal personality and as such cannot be recognised as inventors, thus failing the conditions for establishing the patentability of a work. There is therefore the problem of identifying the author of a work generated by AI. To date, no rule governs this case, so it seems legitimate to conclude that an artificially generated work cannot find protection in copyright due to the lack of human contribution to the creative act.

Without a closer look at the training data, it is impossible to assess who actually owns the copyright to ChatGPT’s output. It is also unclear whether the users who initiate the text output with their prompt can be regarded as authors. Section 3 (a) of OpenAI’s Terms and Conditions of 14 March 2023 only states that OpenAI assigns to the user all its right, title and interest in and to the output (cf. openai.com/terms/). Which rights this includes in detail, however, remains open.

Germany: Whether ChatGPT violates copyrights when it uses copyrighted works as training data is also an issue. Under German copyright law, the reproduction of copyrighted works for text and data mining is possible only subject to specific conditions (Section 44b of the German Copyright Act). To the extent that the output generated by ChatGPT still contains even small, copyright-protected components of a work, their use by a ChatGPT user may infringe copyrights or related rights from a German perspective. This could lead to warnings from rights holders, e.g. when such output is published, while redress under OpenAI’s Terms and Conditions is largely excluded. With regard to the scraping of publicly accessible content, which may be used as training data by ChatGPT, companies should consider implementing text and data mining reservations.

France: Under French law, the violation of content protected by copyright or by the sui generis right of database producers may similarly be alleged. Indeed, the training of this type of conversational agent requires a large quantity of data, most of which originates from content scraped from the web. However, as stated in Article L335-3 of the Intellectual Property Code: “Any reproduction, representation or distribution, by any means whatsoever, of an intellectual work in violation of the author’s rights, as defined and regulated by law, is also an infringement of copyright.” With regard to databases, French law states in Article L342-1 of the Intellectual Property Code that “The producer of databases has the right to prohibit: 1° The extraction, by permanent or temporary transfer of the totality or of a qualitatively or quantitatively substantial part of the contents of a database to another medium, by any means and in any form whatsoever; 2° The re-use, by making available to the public, of the totality or of a qualitatively or quantitatively substantial part of the contents of the database, in whatever form.” As such, since it is not possible to verify the source of the content produced by ChatGPT, the question of infringement of rights may arise, including for the end users of this tool.

III. ChatGPT and the Protection of Trade and Business Secrets

The use of ChatGPT can also have an impact on the protection of trade and business secrets. For example, it cannot be ruled out that users may be tempted to mention trade and business secrets in their prompts, typically in order to increase the quality of the output. A user may, for instance, mention a trade secret to get inspiration for further product ideas or ideas for new services. Trade and business secrets disclosed in this way could fall into the wrong hands: OpenAI may use the prompt and output for the development or improvement of its services if the user has not actively opted out of such further use of input and output data, as indicated in OpenAI’s Terms and Conditions.

Companies should generally require that business-critical information is not disclosed in prompts to ChatGPT. Otherwise, this information could end up in the hands of competitors and undermine the company’s trade secret protection concept. Where appropriate security measures are not taken and implemented, such confidential information may no longer fall within the scope of EU countries’ trade secrets laws (a simple technical safeguard is sketched after the country overviews below). The local trade secrets protection rules in Austria, France, Germany and Italy and their consequences for the use of ChatGPT can be described as follows:

Germany: Under German law, disclosures of trade and business secrets in prompts could be considered a violation of Section 4 of the German Trade Secrets Act (Geschäftsgeheimnisgesetz). In case of disclosures of e.g. business partners’ trade secrets, those third parties could also assert certain claims against the employer of a ChatGPT user, even if the employer had no knowledge of such conduct of its employee (Section 12 German Trade Secrets Act).

France: The risk of disclosing a trade secret is one of the risks presented by the use of conversational agents such as ChatGPT. The French Commercial Code provides in its articles L151-4 and L151-5 that the disclosure of a secret is illegal when it is made without the consent of its legitimate holder or by a person who acts in violation of an obligation not to disclose the secret or to limit its use. But more importantly, tools such as ChatGPT may pose a risk to patent-pending inventions whose non-disclosure is a condition of patentability.

Italy: The Italian law on industrial property (Legislative Decree No. 30 of 10 February 2005) defines a trade secret as business information or technical-industrial and commercial know-how that is subject to the legitimate control of the holder and meets three fundamental requirements: the information is not generally known or readily accessible, it has economic value because it is secret, and the holder has adopted “measures to be considered reasonably adequate” to maintain its confidential character.

From the Italian law perspective, it is therefore necessary to assess the company’s conduct regarding the adoption of reasonably adequate measures to maintain the confidential nature of the information.

A company’s liability could result from an internal policy that fails to address the use of ChatGPT and the entry of internal industrial data, or from insufficient internal training to prevent such data entries. The liability of the individual operator can be established only where he or she, after receiving adequate training, has used ChatGPT without following the internal company policy that authorised the use of the chatbot tool. The protection of trade secrets remains crucial in an era in which artificial intelligence tools can pose a threat to data security. It is important to note that this protection is provided for in Art. 99 of the Italian Industrial Property Code, which states that protection is recognised “without prejudice to the regulation of unfair competition”. This allows additional competition-law protection, which also applies to information that does not constitute a trade secret pursuant to Art. 98 of the Italian Industrial Property Code but which is appropriated by professionally incorrect means.

Austria: Under Austrian law, prompts by employees via ChatGPT containing confidential information may constitute an unlawful disclosure of trade secrets under Sec. 26a et seq. of the Federal Act against Unfair Competition 1984 (UWG).
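
As a complement to such legal safeguards, and to the redaction of personal data sketched above, companies could screen prompts against an internal denylist of confidential terms before they reach an external AI service. The following minimal sketch assumes a hypothetical denylist of project codenames; a real deployment would need to maintain the list centrally and combine it with training and policy measures.

```python
# Minimal sketch of a trade-secret safeguard: block prompts containing
# terms from an internal denylist (e.g. project codenames) before they
# are sent to an external AI service. The entries are hypothetical.
DENYLIST = {"project falcon", "recipe x-17", "q3 acquisition target"}

def contains_confidential_term(prompt: str) -> bool:
    """Case-insensitive check of the prompt against the denylist."""
    lowered = prompt.lower()
    return any(term in lowered for term in DENYLIST)

prompt = "Summarise the status report on Project Falcon."
if contains_confidential_term(prompt):
    print("Blocked: prompt appears to contain confidential information.")
else:
    print("Prompt cleared for submission.")
```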

In Conclusion: Implementation of Protection Safeguards

The topic of AI will significantly shape 2023 and beyond. Numerous new AI applications may become as popular as ChatGPT (DALL-E or Midjourney for AI-generated images, Stable Diffusion for AI-generated art, Jasper for AI-generated text or Synthesia for AI-generated videos). Google has also recently announced plans to add AI features to its search engine later this year.

When using an AI based application, companies should pay attention to:

  • the types of rights providers of the AI applications grant themselves to the input or prompts,
  • the types of terms and conditions providers of AI applications have set up and their interaction with the applicable local laws,
  • the personal data protection safeguards which need to be implemented, including but not limited to confidentiality, implementation of the rights of data subjects, and data transfer agreements, EU Standard Contractual Clauses or Binding Corporate Rules to legitimise data transfers to unsafe third countries,
  • the required measures to protect trade secrets,
  • the commercial exploitation of the output where such commercial use is permitted, and
  • where applicable, a data protection impact assessment in cases where personal data is processed through these AI-based systems.

Policies and codes of conduct that provide guidance on the use of ChatGPT and other AI applications should be drafted and implemented for employees as well as any other stakeholders of a corporate or public entity.

Training should be provided on the use of ChatGPT and other AI applications to mitigate the risk exposure associated with the use of AI tools. For instance, using ChatGPT within a small pilot group with a defined “use case” to gain first experience with the tool could be a first step.

To this end, the general terms and conditions, the license agreements and the data protection declarations of the AI providers must be examined closely in each individual case.

Companies that develop AI applications or integrate AI applications into their own products and services should closely follow the developments of the European legislator on the regulation of AI. This includes, first and foremost, the planned regulation on artificial intelligence as well as the EU Commission’s proposals for a revision of the product liability directive and a new directive on AI liability.