Multinational firm loses $26 million in deepfake video call scam; YouTube videos used: SCMP



Hong Kong police said Sunday that scammers defrauded a multinational corporation of nearly $26 million by impersonating senior executives using deepfake technology, one of the first cases of its kind in the city.

Law enforcement agencies are scrambling to keep up with generative artificial intelligence, which experts say is ripe for misinformation and abuse, such as deepfake videos showing people mouthing things they never said.

Police told AFP that an employee of a company in the Chinese finance hub “received video conference calls from people posing as senior officials of the company requesting to transfer money to designated bank accounts”.

Police received a report of the incident on January 29, at which time some HK$200 million ($26 million) had already been lost through 15 transfers.

“Investigations are still ongoing and no one has been arrested so far,” the police said, without disclosing the name of the company.


The victim worked in the finance department and the scammers impersonated the firm's UK-based chief financial officer, Hong Kong media reports said.

Acting Senior Superintendent Baron Chan said several people took part in the video conference call, but everyone except the victim was fake.

“Scammers found publicly available video and audio of their impersonation targets on YouTube, then used deepfake technology to imitate their voices… to lure the victim into following their instructions,” Chan told reporters.

The deepfake videos were pre-recorded and did not involve dialogue or interaction with the victim.

What to know about how lawmakers are addressing deepfakes like those that victimized Taylor Swift

(AP Entertainment)

Even before sexually explicit and abusive deepfake images of Taylor Swift went viral over the past few days, state lawmakers in the US had been looking for ways to ban such nonconsensual images of adults and children.

But in this Taylor-centric era, the issue is getting a lot more attention since she's been targeted by deepfakes, computer-generated images made to look real using artificial intelligence.

Here's what states have done and what they're considering.

Where deepfakes show up

Artificial intelligence went mainstream last year like never before, allowing people to create more realistic deepfakes. Now they appear frequently online in many forms.

There's pornography: exploiting the likenesses of celebrities like Swift to create fake compromising images.

There's music — a song that sounds like Drake and The Weeknd performing together has gotten millions of clicks on streaming services — but it's not the artists. The song has been removed from the platforms.

And there are political dirty tricks this election year: before January's presidential primary, some New Hampshire voters reported receiving robocalls with a voice impersonating President Joe Biden, telling them not to bother casting ballots. The state attorney general's office is investigating.

But nonconsensual explicit images using the likenesses of non-famous people, including minors, are a far more common problem.

What the states have done so far

Deepfakes are just one area of the complex field of AI that lawmakers are trying to figure out how to handle.

At least 10 states have already enacted deepfake-related laws. More measures are under consideration this year in legislatures across the country.

Georgia, Hawaii, Texas and Virginia have laws on the books criminalizing non-consensual deepfake porn.

California and Illinois have given victims the right to sue creators of images using their likenesses.

Minnesota and New York do both. A Minnesota law also targets the use of deepfakes in politics.

Are there tech solutions?

University at Buffalo computer science professor Siwei Lyu said several approaches are being worked on, none of which are perfect.

One is deepfake detection algorithms, which can be used to flag deepfakes in places like social media platforms.

Another, which Lyu said is in development but not yet widely used, involves embedding codes in the content people upload that would signal if it is reused in an AI creation.

And the third approach is to require companies offering AI tools to include digital watermarks to identify content produced with their applications.

He said it makes sense to hold those companies accountable for how people use their tools, and that companies can enforce user agreements against creating problematic deepfakes.
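To make the watermarking idea concrete, here is a minimal, hypothetical sketch: it hides a short provenance tag in the least significant bits of an image's pixels so a checker can later flag the file as AI-generated. This is a toy illustration of the general concept, not Lyu's method or any vendor's actual scheme; the tag value, function names, and the LSB approach are all assumptions chosen for brevity.

```python
# Toy illustration of content watermarking: embed a provenance tag in an
# image's least-significant bits, then recover it later. Purely hypothetical;
# production watermark schemes are far more robust than this.
import numpy as np

TAG = b"AI-GENERATED"  # hypothetical provenance marker

def embed_tag(pixels: np.ndarray, tag: bytes = TAG) -> np.ndarray:
    """Write each bit of `tag` into the lowest bit of successive pixel values."""
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    flat = pixels.flatten()  # flatten() returns a copy, so the input is untouched
    if bits.size > flat.size:
        raise ValueError("image too small to hold the tag")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # clear LSB, set tag bit
    return flat.reshape(pixels.shape)

def read_tag(pixels: np.ndarray, length: int = len(TAG)) -> bytes:
    """Recover `length` bytes from the lowest bits of successive pixel values."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits.astype(np.uint8)).tobytes()

# Usage: mark a synthetic image, then check it.
image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
marked = embed_tag(image)
print(read_tag(marked) == TAG)  # True: the checker flags the content
```

A simple mark like this is trivially destroyed by re-encoding, resizing, or cropping, which is part of why Lyu describes every current approach as imperfect.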

What should the law contain?

A model law proposed by the American Legislative Exchange Council addresses sexually explicit deepfakes, not politics. The conservative and pro-business policy group is urging states to do two things: criminalize the possession and distribution of deepfakes depicting minors in sexual acts, and allow victims to sue people who distribute non-consensual deepfakes showing sexual behavior.

Jake Morabito, who directs the communications and technology task force for ALEC, said he would recommend that lawmakers “start with a small, targeted solution that addresses an obvious problem.” He cautioned that lawmakers should not target the technology used to create deepfakes, which could shut down innovation with other important uses.

Todd Helmus, a behavioral scientist at RAND, a nonpartisan think tank, said leaving enforcement to people who file lawsuits isn't enough. Litigation requires resources, he said, and the result may not be worth it. “It's not worth suing someone who doesn't have money to give you,” he said.

Helmus called for guardrails throughout the system and said government involvement would probably be needed to make them work.

He said there should be efforts to keep deepfakes from being created in the first place on OpenAI and other platforms that can be used to make realistic-looking content; social media companies should deploy better systems to keep them from spreading; and there should be legal consequences for those who create them anyway.

Jenna Leventoff, a First Amendment attorney at the ACLU, said that while deepfakes cause harm, free speech protections also apply to them, and lawmakers should make sure they don't go beyond existing exceptions to free speech, such as defamation, fraud and obscenity, as they try to regulate the emerging technology.

Last week, White House Press Secretary Karine Jean-Pierre addressed the issue, saying social media companies should create and enforce their own rules to prevent the spread of false information and images like Swift's.

What is being proposed?

In January, a bipartisan group of members of Congress introduced federal legislation that would give people a property right over their own likeness and voice, along with the ability to sue those who use deepfakes to falsely depict them in any way.

Many states are considering some form of deepfake legislation in their sessions this year. They are being introduced by Democrats, Republicans and a bipartisan coalition of lawmakers.

A bill gaining traction in GOP-dominated Indiana would make it a crime to distribute or create sexually explicit depictions of a person without their consent. It was unanimously approved in the House in January.

A similar measure introduced this week in Missouri is dubbed “The Taylor Swift Act.” And another cleared the Senate this week in South Dakota, where Attorney General Marty Jackley turned some investigations over to federal authorities because the state lacks the AI-related laws necessary to file charges.

“When you go on someone's Facebook page, steal the image of their child and put it into pornography, there's no First Amendment right to do that,” Jackley said.

What can a person do?

For anyone with an online presence, it can be difficult to avoid becoming a deepfake victim.

But RAND's Helmus said people who find they've been targeted can ask the social media platform where the images were shared to remove them; notify police if they live somewhere deepfakes are a crime; tell school or university officials if the suspected offender is a student; and seek mental health help when needed.



