Privacy commissioners’ report on OpenAI emphasizes Calgarians’ need for better online safety


As cybercrime continues to rise in Calgary, a joint federal-provincial inquiry into OpenAI, the generative artificial intelligence company, found that certain versions of its popular software, ChatGPT, violated several Canadian privacy laws.

Conducted through the combined efforts of the Office of the Privacy Commissioner of Canada and authorities in Alberta, British Columbia, and Quebec, the May 6 report investigated the extent to which the program collected, used, and disclosed citizens’ personal information.

Regulators found that two versions of the software, GPT-3.5 and GPT-4, released less than a year after ChatGPT’s November 2022 launch, failed to adequately obtain user consent upon gathering “vast amounts” of data.

The investigation uncovered that many users did not understand that their conversations were being used to train the AI model, and that some of the information collected from social media and online discussion forums contained sensitive details, including those of underage patrons.

From this, the document determined that much of the information was later regurgitated, without consent and often inaccurately, as opinion presented under the guise of fact, thereby spreading misinformation and infringing on users’ right to privacy.

These findings prompted the regulators to recommend changes to OpenAI’s operations. The identified issues have since been marked as resolved, subject to the company updating its privacy policy to improve user consent and the accuracy of its outputs.

Regardless, Alberta Information and Privacy Commissioner Diane McLeod said that, even with these improvements, the software’s actions remain discouraging, especially considering that the platform was launched long after the province’s private-sector privacy law was enacted in 2004.

“From the Alberta perspective, I want to note first that it is unfortunate and disappointing that technology companies have moved ahead so quickly with new developments and innovations, without first ensuring that they are adhering to privacy legislation,” said McLeod in a news release from the Government of Alberta.  

“Our investigation found that OpenAI did not appear to turn its mind adequately to privacy compliance in its development and deployment of ChatGPT, which is very troubling.”

The report concluded by recognizing that, beyond the primary concerns of privacy and accuracy, software like ChatGPT poses a third problem: helping “malicious actors” carry out cybersecurity attacks, a problem that, according to a local expert, stronger policies alone don’t always solve.

Updating policies merely a partial solution to cybercrime

According to the Calgary Police Service’s (CPS) most recent annual report, fraud in all forms, including cybercrime — any illegal activity carried out using computers and the Internet — increased by 12 per cent from the previous year.

Retired CPS officer Kathy Macdonald said she remembers the early days. Before completing her 25-year career in the mid-2010s, she worked in the crime prevention unit, where she said her interest in cybersecurity began.

“In the early 2000s, people were just starting to buy computers and get access to the Internet,” said Macdonald. 

“At least a few times a week, we were dealing with all sorts of new scams, cons, exploits, and hacks, and viruses that were happening on the Internet.”

This experience taught her many valuable lessons and kick-started her passion for cybercrime prevention. As the founder of Global Cyber Security Courses Inc. and an instructor at the Canadian Criminal Justice Academy, she has shifted her career to teaching what she has learned to aspiring officers.

Macdonald said that a pattern from the early days of personal technology persists today. It starts, she said, when new software is created — often marketed as something that will “make our lives happier, healthier, and safer” — and adopted by consumers who are unaware of its potential for abuse by bad actors.

“There’s a whole other group of people that test the technology, looking for vulnerabilities, and they exploit it when they can,” she said. 

Whether these cybercrimes are personal, financial, or political in nature, she said that the companies behind the software used by fraudsters have time and again faced pressure to change their policies, which usually prompts the government to do the same. 

Such was the case in 2015 when Alberta’s Education Act was amended to require all students to “refrain from, report, and not tolerate bullying or bullying behaviours towards others in the school…or by electronic means.”

Macdonald cited this amendment as an example of a policy change that, in her view, has not made a significant difference to the persistence of cyberbullying. Sometimes, she said, lawsuits follow policy changes — like one from earlier this month, which found Meta liable for social media addiction.

Whether or not lawsuits eventually arise over OpenAI and personal privacy concerns, Macdonald said, the pattern she has identified suggests they are unlikely to deter AI-driven cybercrime.

“When you talk about AI, you talk about ChatGPT, and it’s going to be exactly the same thing over and over again,” she said. 

“The fraudsters, they’ve embraced it.”

What makes AI-driven cybercrimes more invasive, Macdonald explained, is fraudsters’ ability to use the platform to personalize their attacks by including sensitive details an individual has shared with a chatbot.

“Anybody that has criminal intent, or that has the intent to financially gain or take revenge on us or abuse us in some way, they really don’t care about the patches and the policy and the police warnings and the laws and such,” she said. 

“They have embraced AI, and it has enabled them to craft and tailor very believable tactics and techniques against us, and really specifically they target our state of mind.”

To combat this, Macdonald emphasized the importance of the public taking online security into their own hands. She summarized her recommendations as trusting your intuition and being rational, skeptical, and kind. 

“AI doesn’t have emotions, and so being kind is very much a human emotion, and it’s got some morality there,” she said. 

“When you see something that’s floating around the Internet, or you see a photo that looks like it’s been morphed, or you hear a voice that’s been cloned — some sort of AI that’s been used to deceive and manipulate — tell that person. Find them and tell them, even if you don’t know them. Take the extra time.”

“That’s being kind on the Internet in this day and age.”
