A privacy investigation by Canadian authorities has concluded that OpenAI's training of ChatGPT violated both federal and provincial privacy laws. The probe, conducted by the Office of the Privacy Commissioner of Canada (OPC) in coordination with provincial counterparts, found that the company collected, used, and disclosed personal information without proper consent, and failed to meet transparency obligations under the Personal Information Protection and Electronic Documents Act (PIPEDA) and similar provincial statutes.
What the probe found
The investigation determined that OpenAI's large language model training process involved scraping personal data from publicly available sources — including social media profiles, forum posts, and other online content — without obtaining meaningful consent from individuals. The OPC stated that this practice contravenes the core principle of consent under Canadian privacy law, which requires organizations to obtain permission before collecting, using, or disclosing personal information.
Additionally, the probe found that OpenAI did not provide adequate transparency about how personal data was being used for model training. The company's privacy policy and public statements were deemed insufficient to inform individuals about the scope and nature of data collection, processing, and retention.
Specific violations
The OPC identified several breaches:
- Failure to obtain valid consent for the collection and use of personal information
- Lack of transparency regarding data collection practices
- Inadequate safeguards to prevent unauthorized access or misuse of personal data
- Non-compliance with data retention and deletion requirements
The investigation also highlighted that OpenAI's reliance on a "legitimate interest" justification — common in some jurisdictions — does not hold under Canadian law, where meaningful consent remains the baseline requirement for collecting, using, or disclosing personal information.
Regulatory response
The OPC has issued a series of compliance recommendations to OpenAI, including:
- Implementing a mechanism for individuals to withdraw consent for their data being used in training
- Providing clear, accessible information about data collection and processing practices
- Establishing a process for individuals to request deletion of their personal information from training datasets
- Conducting a privacy impact assessment for any future model training
OpenAI has been given a deadline to respond to these recommendations. Failure to comply could result in enforcement actions, including fines or orders to cease processing personal data.
Broader implications
This ruling adds to a growing list of regulatory actions against AI companies worldwide. Similar investigations are underway in the European Union under the GDPR and in several U.S. states. The Canadian decision is notable because it was reached under both federal and provincial laws, a dual regulatory framework that other jurisdictions may look to as a reference.
For organizations using ChatGPT or similar AI tools, the ruling serves as a reminder to review their own data handling practices, particularly when integrating AI services that may process personal information. Companies should ensure they have clear consent mechanisms, transparent privacy policies, and robust data governance frameworks in place.
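As one illustrative data-minimization step, an integration could redact obvious identifiers before text ever reaches an external AI service. This is a deliberately simplified sketch — the patterns and function names are assumptions for illustration, not anything drawn from the ruling, and regex redaction alone is nowhere near a full compliance program:

```python
import re

# Hypothetical pre-processing step: strip obvious personal identifiers
# before a prompt is sent to an external AI service. Real data governance
# also requires consent records, retention policies, impact assessments, etc.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_personal_info(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or +1 416-555-0199."
print(redact_personal_info(prompt))
# → Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```

A step like this reduces what an external service can collect in the first place, which is the spirit of the OPC's consent and safeguard findings, but it does not substitute for obtaining consent or for the governance measures the recommendations describe.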
Bottom line
Canadian privacy authorities have ruled that OpenAI's ChatGPT training violated federal and provincial privacy laws, primarily due to inadequate consent and transparency. The decision sets a precedent for how AI model training must comply with existing privacy regulations, and may influence future enforcement actions in other jurisdictions.