How prepared are we for deepfakes? Researchers call for shift in AI to protect women

While laws already exist to protect people's privacy and reputation, one lawyer says deepfake cases fall into a legal grey zone. But the bigger issue is preventing them in the first place.

Deepfakes a 'great example of not having women in the decision-making process,' researcher says

A photo created using AI image generator Midjourney. Radio-Canada's Décrypteurs used the image to test out one of the sites that, without consent, creates a nude image by clicking the déshabiller (undress) button. (Midjourney)

In the picture, a blond woman in a bikini stands on the beach. A line then flashes across the screen, exposing her nude figure.

"Use undress AI to deepnude girl for free!" reads the description on the site.

Although it says consent is required, it only takes a few clicks to upload an image and see the person in it undressed.

Since last summer, sites with publicly available AI image tools have multiplied and gained millions of views, and AI-doctored photos of underage girls have already been shared by high school students in London, Ont., and Winnipeg. No charges have been laid in either case.

But abuse of the technology has been prosecuted in Quebec. Last year, a man from Sherbrooke in the Eastern Townships was sentenced to three years in prison for creating at least seven deepfake videos depicting child pornography.

Quebec, like the rest of the country, may not be prepared to deal with this ascendant AI technology, according to intellectual property lawyer Gaspard Petit.

And as Ottawa plays catch-up in regulating harmful content on the internet, researchers are calling for greater diversity and transparency to stop women from being targeted by the technology without their consent.

Petit says he has been taking a closer look at the development of AI technology as it continues to evolve.

"I think there's a general consensus that in Quebec, we're not quite preparedin Canada as a whole," he said.

According to Petit, protections in the Quebec charter and laws already exist to protect people's privacy and reputation.

He says nude deepfake cases can fall into a legal grey zone where it's not always clear if it's possible to criminally prosecute a person who produces or distributes them, something he says Canadian legislators are debating how to improve.

One problem, Petit says, is that the onus falls on the victim to prove they have been harmed and to identify who is responsible, and then, if they have the means, to sue.

But he says the bigger issue is preventing the creation or distribution of the images in the first place.

Fixing the gender disparity

Dongyan Lin, a researcher at Montreal-based artificial intelligence institute MILA, studies the link between neuroscience and AI. She says these deepfakes are a "great example of not having women in the decision-making process."

As a result, she says there are blind spots at these companies in thinking about how the technology would be used "once it's massively commercialized."

Affecting Machines, a project developed at Concordia University, tries to bridge the gender gap in AI and STEM by promoting the work of women in the field.

Lindsay Rogers, knowledge mobilization advisor at Concordia's Applied AI Institute, is one of the people involved in the project.

"Gender diversity is really fundamental for having AI systems that are representative of the populations that use them," she said.

"It's not just about the numbers like AI [labs] hiring more women or non-binary folks in a room, it's about creating a culture and an atmosphere where they can succeed and do well and become valued members of the team," she said, putting the percentage of women working in the field in tech at around a quarter, barely creeping up in the past two decades.

Ethics training, other solutions

Along with stricter regulations and public hearings on AI use, Lin says mandatory ethics training would help AI developers gain a broader understanding of how the technology could be used by the public.

Banning sites that use deepfake technology is also an option, but experts like Sasha Luccioni, a Montreal-based research scientist at AI company Hugging Face, point to tools that allow users to skirt bans in the country where they're based.

Other technical solutions, like making images unusable by AI models, are also on the table, but none of these solutions address the problem at its core, says Luccioni.

The root of the problem, she says, is how people decide to use the available technology, including using it to objectify women's bodies.

For that problem, she says the solution is educating the public and raising awareness.

With files from Jeff Yates, Alexis De Lancer, Caitlyn Gowriluk and The Canadian Press