The draft regulation of generative AI prepared by data protection authorities

"Let's protect our data with regulated generative AI!"

Update May 2023: a privacy and copyright-friendly prompt for text mining



The draft Regulation on Generative AI is a document prepared by data protection authorities to govern the use of generative artificial intelligence technologies. It aims to protect the fundamental rights of individuals while promoting innovation and the responsible use of generative AI. To that end, it seeks to establish clear and precise rules for the use of these technologies, to ensure transparency, and to guarantee that personal data is processed responsibly. It also aims to promote the use of generative AI for the common good.

Generative AI is a sub-field of artificial intelligence (AI) that involves the use of artificial neural networks to create data or content, such as images, music or text, autonomously. Unlike other types of AI that focus on analysing existing data, generative AI creates new data from a series of statistical models learned from a training data set.

The process by which generative AI works varies according to the techniques used. However, in general, artificial neural networks are trained to recognise patterns in existing data sets, such as images or text. These networks are then used to generate new data that resembles the data in the training set.

There are several approaches to generative AI, including recurrent neural networks (RNNs) and generative adversarial networks (GANs). RNNs are trained to predict the next value in a sequence of data and are then used to generate new sequences. GANs use two neural networks in tandem: a generative network that creates data and a discriminative network that evaluates the quality of the generated data. Both networks are trained simultaneously until the generative network can produce high-quality data.

In short, generative AI uses artificial neural networks to create new data that resembles that of a training set. The techniques used vary, but all involve the use of statistical models to generate new and original data.
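As a toy illustration of the idea that generative models produce new data from statistical patterns learned on a training set, the following sketch trains a character-level Markov chain. This is a simple statistical model, not a neural network, and the corpus and parameters are invented for the example:

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Learn a character-level statistical model: which characters
    tend to follow each context of `order` characters."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model, seed, length=40):
    """Generate new text that statistically resembles the training set."""
    order = len(seed)
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # unseen context: stop generating
            break
        out += random.choice(choices)
    return out

corpus = "the data protection authority protects the data of the people"
model = train(corpus, order=2)
print(generate(model, "th"))
```

The same principle, scaled up to neural networks trained on huge corpora, underlies the generative AI systems the draft regulation targets.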

What are the main challenges in implementing the proposed regulation of generative AI?

The main challenge for the implementation of the Generative AI regulation project is to define clear and precise guidelines for the development and use of generative AI technologies. It is essential to determine the limits of the use of generative AI technologies and to define rules and procedures for their development and use.

Another important challenge is to develop monitoring and control mechanisms to ensure that generative AI technologies are used responsibly and in accordance with established guidelines. It is also important to develop accountability mechanisms to ensure that companies and users are held responsible for their actions and decisions.

Finally, it is essential to develop data protection mechanisms to ensure that personal data and sensitive information are protected and not used for unauthorised purposes. It is also important to develop intellectual property rights protection mechanisms to ensure that generative AI technologies are not used to infringe the intellectual property rights of others.

What are the advantages and disadvantages of the proposed regulation of generative AI for businesses?

The benefits of the proposed regulation of generative AI for companies are numerous. Firstly, it would allow them to better understand and control their generative AI systems, and thus to better manage their risks and liabilities. In addition, it would give them greater assurance that their generative AI systems comply with applicable laws and regulations. Finally, it would allow them to better protect their data and systems against cyber attacks and privacy breaches.

However, the proposed regulation of generative AI also has drawbacks for companies. Firstly, it may result in additional costs and lost time, as companies will have to set up compliance systems and procedures to ensure that their generative AI systems comply with applicable laws and regulations. In addition, it may lead to a loss of flexibility and freedom, as companies will have to operate within the constraints of those rules.

How could the proposed regulation of generative AI help protect personal data?

The proposed regulation of generative AI could help protect personal data by placing restrictions on how companies can use personal data. For example, the draft could require companies to obtain explicit consent from users before collecting and using their personal data. In addition, the draft could impose restrictions on how companies can store and share personal data. Finally, the draft could require companies to put in place security measures to protect personal data against loss, misuse and unauthorised disclosure. By imposing these restrictions, the draft regulation of generative AI could help protect users' personal data.


The regulation of generative AI prepared by data protection authorities is an important step towards the protection of data and the rights of individuals. It offers additional protection against the misuse of personal data and artificial intelligence technologies, and provides additional safeguards for consumers and businesses that use them. It marks progress towards a more responsible and safer use of artificial intelligence.

Act now to protect your privacy and personal data! The draft regulation of generative AI prepared by the data protection authorities is an important step to ensure data security and privacy. To learn more about this project and how you can support it, click here.

Text mining and artificial intelligence for text generation: respecting copyright and data protection

Text mining is an analytical method for extracting relevant and useful information from large amounts of textual data. Artificial intelligence (AI) plays a key role in this process, using automatic natural language processing (NLP) and machine learning techniques to identify patterns, trends and relationships in text data.
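As a minimal illustration of the kind of pattern extraction that text mining performs, the sketch below counts the most frequent terms across a small document collection. The documents, stopword list and helper name are invented for the example; real text-mining pipelines use far richer NLP techniques:

```python
import re
from collections import Counter

def top_terms(documents, k=3,
              stopwords=frozenset({"the", "of", "and", "to", "in"})):
    """Count word frequencies across documents, ignoring common stopwords."""
    counts = Counter()
    for doc in documents:
        for word in re.findall(r"[a-z']+", doc.lower()):
            if word not in stopwords:
                counts[word] += 1
    return counts.most_common(k)

docs = [
    "Generative AI creates new text from patterns in data.",
    "Text mining extracts patterns from large text collections.",
]
print(top_terms(docs))
```

Even this trivial frequency count surfaces the recurring themes ("text", "patterns") across the collection, which is the starting point of most text-mining workflows.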

What conditions must the prompt meet in order not to infringe copyright and data protection law?

Here is some general advice on respecting copyright and data protection rights when using a prompt. To avoid infringing these rights, ensure that you comply with the following conditions:

1. Do not use copyrighted content without permission: Avoid including text excerpts, images, videos, music or any other content that is protected by copyright without first obtaining permission from the author or rights holder.

2. Cite sources: If you use information or extracts from other sources, it is important to cite them correctly to acknowledge the author's original work.

3. Create original content: Wherever possible, try to create original and unique content that does not directly copy the work of others.

4. Do not disclose personal information: To respect the right to personal data protection, do not ask for sensitive or personal data in the prompt and avoid including information that could identify individuals without their consent (names, addresses, telephone numbers, e-mail addresses, etc.).

5. Respect privacy: Do not share confidential information or details of other people's private lives without their explicit consent.

6. Be aware of applicable laws: Copyright and privacy laws may vary from country to country. It is important to familiarise yourself with the laws applicable in your region or country and to comply with them.

By following this advice, you can reduce the risk of infringing copyright and data protection law when using a prompt. However, for specific legal advice, it is always recommended to consult a lawyer specialising in these areas.
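Point 4 above (not disclosing personal information) can be partially automated by screening prompts before they are sent to a generative AI service. The sketch below is a deliberately simplistic illustration with invented placeholder patterns; real PII detection requires much more sophisticated tooling and legal review:

```python
import re

# Illustrative patterns only -- real PII detection needs far more care.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d .-]{7,}\d"),
}

def redact(prompt):
    """Replace e-mail addresses and phone numbers with placeholders
    before the prompt leaves the user's machine."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} redacted]", prompt)
    return prompt

print(redact("Contact Jean at jean@example.com or +33 1 23 45 67 89."))
```

Such a filter reduces, but does not eliminate, the risk of personal data ending up in a prompt; names and indirect identifiers in free text are much harder to catch.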

Profiling regulation - The IT lawyer in Paris answers

What the regulation says about the profiling of individuals in computer processing: the opinion of the IT lawyer in Paris


Personal data protection regulations, such as the European Union's General Data Protection Regulation (GDPR), strictly regulate the profiling of individuals in IT processing.

According to Article 4 of the GDPR, profiling is defined as "any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements".

Profiling is only allowed in certain specific circumstances, and the data subject must be informed and must have given his or her explicit consent for his or her personal data to be used in this way. In addition, individuals have the right to object at any time to the profiling of their data.

The GDPR also requires organisations to take steps to ensure the transparency, security and accuracy of data used for profiling, as well as to protect the fundamental rights and freedoms of data subjects.

Some remarks by the lawyer specialising in IT law in Paris on the rules governing the profiling of individuals in data processing, in particular under the GDPR (General Data Protection Regulation), which applies to the member countries of the European Union.


Profiling is defined by the GDPR as any form of automated processing of personal data that uses such data to evaluate certain personal aspects of a natural person, in particular to analyse or predict factors such as preferences, interests, financial situation or behaviour.


The GDPR provides a framework for profiling to protect the rights and freedoms of data subjects, particularly in relation to automated decisions with legal or similar effects. Here are some key points to consider:


  1. Consent: Profiling generally requires the consent of the data subject. Individuals must be informed of the existence of profiling and its potential consequences.
  2. Right to object: Individuals have the right to object to profiling when it is used for direct marketing.
  3. Automated decisions: Individuals have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning or significantly affecting them.
  4. Limitation of data: Data processing must be limited to the data strictly necessary to achieve the purposes of the processing.
  5. Transparency: Controllers must provide clear and accessible information on profiling procedures and the criteria used to make automated decisions.
  6. Impact assessment: For high-risk processing operations, such as large-scale profiling, a data protection impact assessment (DPIA) may be required.
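The data-limitation principle (point 4 above) can be illustrated with a short sketch: before processing, each record is stripped down to a whitelist of fields strictly necessary for the purpose. The field names and the whitelist are hypothetical examples:

```python
# Hypothetical whitelist: only the fields strictly necessary
# for the stated purpose of the processing.
ALLOWED_FIELDS = {"customer_id", "postcode", "contract_type"}

def minimise(record, allowed=ALLOWED_FIELDS):
    """Keep only the whitelisted fields (GDPR data-minimisation principle)."""
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "customer_id": 42,
    "postcode": "75001",
    "contract_type": "mobile",
    "religion": "unknown",      # special-category data: must not be kept
    "browsing_history": [],     # not necessary for the purpose
}
print(minimise(raw))
```

A whitelist (rather than a blacklist) is the safer design choice here: any field not expressly justified is dropped by default.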


It is important to note that regulations may vary depending on the jurisdiction and context. Consult Pierre de Roquefeuil for legal advice specific to your situation.


Datenschutzbehörde, GZ: D124.3816, Registrar: 2023-0.193.268


The Austrian Data Protection Authority (DPA) ruled that the vast majority of personal data collected by the CRIF credit bureau was illegal and should be deleted. 


CRIF collected the addresses, dates of birth and names of almost all Austrians in order to calculate their "creditworthiness", without consent or any other legal basis. Most of the basic data used by CRIF to calculate these "solvency values" comes from the address publisher AZ Direct (which belongs to the German Bertelsmann Group).


AZ Direct is only allowed to pass on this data for marketing purposes and not for the calculation of the credit rating. 


These credit ratings also have real impacts, explained Max Schrems: "Millions of people in Austria are affected by this. Customers do not receive a mobile phone contract or an electricity contract if their score is too low. One might have to pay higher loan payments if the bank uses this score. We believe that data should only be collected on clear defaulters, not on the whole population." noyb expects CRIF to appeal the decision, as it is a blow to its business model.


CJEU, Opinion of the Advocate General in Case C-634/21 SCHUFA Holding and Others (Scoring) and in Joined Cases C-26/22 and C-64/22 SCHUFA Holding and Others (Release of outstanding debts). Advocate General Pikamäe: the automated establishment of a probability of a person's ability to repay a loan constitutes profiling under the GDPR.


Case C-634/21 concerns a dispute between a citizen and the Land Hessen, represented by the Commissioner for Data Protection and Freedom of Information of the Land Hessen ('HBDI'), concerning the protection of personal data. In the course of its business of providing its customers with information about the creditworthiness of third parties, SCHUFA Holding AG ('SCHUFA'), a company governed by private law, provided a credit institution with a score for the citizen in question, which was used as the basis for refusing the credit he had applied for.

The citizen then asked SCHUFA to delete the relevant record and to give him access to the corresponding data. SCHUFA, however, only informed him of the relevant score and, in general terms, of the principles underlying the calculation method, without disclosing the specific data taken into account or the weight attributed to them, arguing that the calculation method is covered by business confidentiality.

Since the citizen concerned argues that SCHUFA's refusal is contrary to the data protection regime, the Court of Justice is called upon by the Wiesbaden Administrative Court to rule on the restrictions which the General Data Protection Regulation ('GDPR') imposes on the economic activity of credit information agencies in the financial sector, in particular in the management of data, and on the weight to be given to business confidentiality. The Court will also have to clarify the scope of the regulatory powers that certain provisions of the GDPR confer on the national legislator by way of derogation from the general objective of harmonisation pursued by that regulation.


In his Opinion, Advocate General Priit Pikamäe states, first, that the GDPR establishes a "right" of the data subject not to be subject to a decision based solely on automated processing, including profiling. The Advocate General then finds that the conditions for that right are met, since:

- the procedure at issue constitutes "profiling";
- the decision produces legal effects concerning the data subject or similarly significantly affects him; and
- the decision can be considered to be based exclusively on automated processing.

The provision of the GDPR providing for that right (Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC, OJ 2016 L 119, p. 1) is therefore applicable in circumstances such as those in the main proceedings.


Data transfer: the necessary assessment of foreign legislation?

Update November 2, 2022


The European Data Protection Board (EDPB) provides its framework for compliance with the GDPR in the event of data transfer outside the European Union.

Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data, Version 2.0, adopted on 18 June 2021.

It was expected that this framework would ease the contractual formalities for companies (Binding Corporate Rules or Standard Contractual Clauses) regarding data transfers outside the European Union.

However, it follows from this framework that a careful examination of foreign legislation remains necessary, as required by the Schrems II judgment, as soon as a territory is identified as uncertain by the European authorities, unless the transfers fall within the derogations provided for in Article 49 of the GDPR.

Indeed, a sovereign state may in any case access data upon specific request: only a general request for access to data could be challenged on principle.

As regards data transfers to the United States, there is still no adequacy decision by the European Commission, following the so-called Schrems I judgment of 6 October 2015 (C-362/14) (invalidation of Safe Harbour) and Schrems II judgment of 16 July 2020 (C-311/18) (invalidation of Privacy Shield). The US legislation reflects a conception of privacy centred on the protection of the American citizen, not including foreigners, which is not the universalist conception of the European Union.

The Fourth Amendment to the United States Constitution provides: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

Despite efforts at approximation (see Joe Biden's recent executive order of 7 October 2022, which does provide for a delegate of the executive branch and even an independent court, but whose opinions are not binding), US law still provides for mass surveillance and the absence of effective remedies for the rights of data subjects.

It is therefore primarily up to the data exporter, in a perilous assessment exercise, to check foreign legislation. The Committee provides a list of data sources in Annex III of its recommendations.

The new Standard Contractual Clauses ("SCCs"), adopted by the European Commission on June 4, 2021, published on June 7, 2021 and in force since June 27, 2021 (Implementing Decision (EU) 2021/914, CELEX:32021D0914), do not exempt exporters from this exercise.

The Doctolib interim decision may give some clues as to how to secure a data transfer: location and encryption in France, obligation for the European subsidiary to contest the foreign "general" request, low sensitivity data, short data retention period: 

Consult a lawyer specializing in computer law

Data transfer to a foreign cloud: the Doctolib decision

CE, ord. ref., March 12, 2021, Doctolib, req. No. 450163 – interim decision

In its Doctolib decision, the Council of State recognizes the incompatibility of American law with the protection of personal data in the European Union

Concerning the United States, the Schrems II judgment (CJEU July 16, 2020, case C-311/18) invalidated the Privacy Shield, equivalent to an adequacy decision, which allowed companies submitting to it to export personal data to the United States. American law authorizes public authorities to access personal data without providing the persons concerned with effective remedies.

The Doctolib company, which offers an online medical appointment service, hosts its data on the servers of a subsidiary of Amazon Web Services (AWS), a company incorporated under US law. As part of the vaccination campaign against covid-19, Doctolib has been appointed by the Ministry of Solidarity and Health to manage the related appointments.

The judge in chambers of the Council of State was seized of a request for suspension of this partnership insofar as it would disregard the general regulation on data protection (EU) 2016/679 of April 27, 2016 (RGPD).

The judge rejected the request, noting the following:

AWS Sarl, the Luxembourg subsidiary of AWS Inc., is certified as a "health data host" (French Public Health Code, art. L. 1111-8). The data is hosted in centres located in France and Germany, and the contract does not provide for any transfer of data to the United States for technical reasons. The applicants argued that, since AWS Sarl is the subsidiary of a US company, it may nevertheless be subject to an access request by US public authorities.

Data in question. – The Council of State notes that "the disputed data includes personal identification data and appointment data but no health data concerning any medical grounds for eligibility for vaccination", as people simply have to attest to their eligibility with a sworn statement.

The retention period. – The data "are deleted at the end of a period of three months at the latest from the date of making an appointment, and each person concerned, having created an account on the platform for the purposes of vaccination, can delete it directly online".
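The short retention period noted by the judge can be sketched as a simple purge routine. For illustration only: "three months" is approximated here as 90 days, and the record structure is invented:

```python
from datetime import date, timedelta

# Assumption for this sketch: "three months at the latest" ~ 90 days.
RETENTION = timedelta(days=90)

def purge(appointments, today):
    """Drop appointment records older than the retention period,
    mirroring the deletion obligation noted in the decision."""
    return [a for a in appointments if today - a["booked_on"] <= RETENTION]

appointments = [
    {"name": "A", "booked_on": date(2021, 1, 4)},   # older than 90 days
    {"name": "B", "booked_on": date(2021, 3, 1)},   # within the period
]
print(purge(appointments, today=date(2021, 4, 15)))
```

Running the purge on a schedule (rather than on demand) is what makes a contractual retention promise verifiable in practice.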

Requests for access by US public authorities. – In order to secure their relations, the contract stipulates that AWS must contest "any request that is general or does not comply with European regulations". This measure is accompanied by a "mechanism for securing data hosted by AWS through an encryption procedure based on a trusted third party located in France, in order to prevent the reading of the data by third parties".

> the level of protection thus put in place by the parties “cannot be regarded as manifestly insufficient with regard to the risk of violation of the GDPR”.


The scope of the decision is relative in that it is a provisional interim ruling, which only sanctions "manifestly unlawful" infringements, and could therefore be revisited on the merits. To be continued.

CNIL, delib. n° 2020-044 of Apr. 20, 2020 and press release of Oct. 14, 2020
CE, ord. ref., 13 Oct. 2020, n° 444937