It seems that the economic damage caused by deepfakes is becoming enormous.
An SBBIT article, "Deepfakes' dangerous gains in accuracy: damage to exceed 6 trillion yen by 2027," makes the following points:
Article Contents
Improving Deepfake Accuracy
- Deepfake technology is developing rapidly due to advances in AI.
- Early deepfakes were low quality and clearly fake.
- From 2018 to 2019, AI image generation technology improved.
- The advent of GAN has significantly improved image quality.
- From 2020 onwards, Transformer-based techniques have improved consistency in long-form videos.
- Deepfake content increased 3,000% in 2023 compared to the previous year.
Damage caused by the abuse of deepfakes
- Deloitte estimates that fraud losses will rise from $12.3 billion in 2023 to $40 billion in 2027.
- The average annual growth rate is 32%, meaning that the amount of damage has more than tripled in four years.
- New generative AI tools make it cheap to create deepfakes.
- In particular, the financial services industry is increasingly being targeted.
- Deepfake incidents in the fintech industry increased by 700% in 2023.
- Contact-centre fraud using audio deepfakes costs companies around $5 billion per year.
- In 2024, deepfake-related incidents are expected to increase by 60% from the previous year, reaching 150,000 cases worldwide.
- Concerns include non-consensual sexual content and falsified identity documents.
- A shadowy industry has formed where fraudulent software is sold on the dark web.
Actual cases of deepfake fraud
- Deepfake scams targeting corporate executives are on the rise.
- A WhatsApp scam targeted the CEO of WPP, the world's largest advertising group (case study).
- An executive-impersonation case in Hong Kong resulted in losses of tens of millions of dollars (case study).
- It has been reported that cyber attacks using AI are on the rise.
Deepfakes and other AI-based cyber attacks
- According to research by Ivanti, many companies report an increase in cyber attacks leveraging AI.
- AI-driven cyber attacks are expected to become more prevalent in the future.
- The most feared threats include phishing (45%), attacks targeting software vulnerabilities (38%), ransomware attacks (37%), and attacks targeting API vulnerabilities (34%).
Current status of deepfake countermeasures
- Banks and other financial institutions are introducing fraud detection systems that use AI and machine learning.
- JPMorgan uses large language models to detect email fraud.
- Mastercard has developed a "Decision Intelligence" tool that predicts the legitimacy of a transaction.
- Existing risk management frameworks may not be able to keep up with new AI technologies.
Society-wide efforts to combat deepfakes
- It has been pointed out that deepfakes are becoming difficult to distinguish with the naked eye.
- OpenAI offers a tool that detects deepfakes produced by its own AI, but since deepfakes are rarely created with a single tool, the effectiveness of such a tool is limited.
- The C2PA Initiative is developing a standard to show the production process of AI-generated content in a format similar to food ingredient labels.
- UK Government Launches "Deepfake Detection Challenge".
- Public awareness activities are being carried out.
Thoughts from an identity perspective
Generative AI has a wide range of impacts on identity, and deepfakes are just one aspect.
As countermeasures against deepfakes, the following will be necessary:
- Caller authentication
  - Rather than humans judging by voice or facial images, the originator of a request must be strongly authenticated before any important transaction (a technical measure)
  - Organizational measures to ensure this is actually done
- Promoting digitalization to counter forged identity documents
- Clarification of the nature of the information being disseminated
- Personnel measures to put all of the above into practice
Caller Authentication
As an example of caller authentication, requests made by phone or video call could always be authenticated using CIBA (Client Initiated Backchannel Authentication): a push notification is sent to the pre-registered device of the person being impersonated, and the request is acted on only after that person authenticates.
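The CIBA-style flow above can be sketched as follows. This is a minimal in-memory simulation, not a real authorization server: in an actual deployment the first step is an HTTPS POST to the OpenID Provider's backchannel authentication endpoint, and polling happens against its token endpoint. All names and the transfer scenario are illustrative assumptions.

```python
import secrets

# Hypothetical in-memory authorization-server state, for illustration only.
PENDING = {}

def backchannel_authn_request(login_hint: str, binding_message: str) -> dict:
    """Simulate a CIBA backchannel authentication request.

    Records the request and hands back an auth_req_id, as the spec does.
    """
    auth_req_id = secrets.token_urlsafe(16)
    PENDING[auth_req_id] = {
        "login_hint": login_hint,            # identifies the person to authenticate
        "binding_message": binding_message,  # shown on their device: what they approve
        "approved": False,
    }
    return {"auth_req_id": auth_req_id, "expires_in": 120, "interval": 5}

def user_approves_on_device(auth_req_id: str) -> None:
    """The impersonated person confirms the request on their registered device."""
    PENDING[auth_req_id]["approved"] = True

def poll_token(auth_req_id: str) -> dict:
    """Poll for the result; 'authorization_pending' until the user approves."""
    req = PENDING[auth_req_id]
    if not req["approved"]:
        return {"error": "authorization_pending"}
    return {"access_token": secrets.token_urlsafe(16), "token_type": "Bearer"}

# A wire-transfer request arrives "from the CEO" over a video call. Before
# acting, authenticate against the CEO's pre-registered device, not the voice.
resp = backchannel_authn_request(
    login_hint="ceo@example.com",
    binding_message="Approve wire transfer #4711 of $25,000?",
)
assert poll_token(resp["auth_req_id"])["error"] == "authorization_pending"
user_approves_on_device(resp["auth_req_id"])  # only the real CEO can do this
token = poll_token(resp["auth_req_id"])
```

The key property is that a convincing voice or face gives the attacker nothing: approval must come from a device that only the genuine person controls.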
On the other hand, it is also important to guarantee that the person who receives such a request will not be penalized for insisting on verification. A typical tactic of these scammers is to pressure the person on the phone if they hesitate: "The survival of the company depends on it. If you don't do it now, you'll be fired." The person must be protected from this kind of pressure. This is difficult to achieve with technical measures alone; organizational measures, such as company regulations, will also be required.
In addition, to prevent deepfakes from being used to forge identity documents, it would be effective to switch to documents with digital signatures. Fortunately, in Japan we have access to public personal authentication (JPKI) and the Digital Agency's digital authentication app, so I think we will need to rely on these to provide a high level of identity verification.
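The point about signed documents can be shown with a minimal sketch. Note an important simplification: the Python standard library has no public-key cryptography, so an HMAC stands in for the issuer's signature here; a real scheme like JPKI uses asymmetric signatures and a certificate chain, so verifiers do not hold the issuer's secret. All names and attributes are made up for illustration.

```python
import hashlib
import hmac
import json

# Hypothetical issuer key. In a real digitally signed ID this would be the
# issuer's *private* key, and verification would use the public key.
ISSUER_KEY = b"demo-issuer-secret"

def issue_document(attributes: dict) -> dict:
    """Issuer signs a canonical encoding of the document's attributes."""
    payload = json.dumps(attributes, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"attributes": attributes, "signature": sig}

def verify_document(doc: dict) -> bool:
    """Recompute the signature over the attributes and compare."""
    payload = json.dumps(doc["attributes"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(doc["signature"], expected)

doc = issue_document({"name": "Taro Yamada", "dob": "1990-01-01"})
assert verify_document(doc)

# A deepfake can alter a photo of a document, but any change to the signed
# attributes breaks the signature check.
doc["attributes"]["name"] = "Forged Name"
assert not verify_document(doc)
```

This is why digitization helps: a visually perfect forgery of a paper document is worthless once verification is cryptographic rather than visual.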
Clarification of the nature of the information being disseminated
From an identity perspective, I think the key here is both how the information was generated and who the source of the information is. This is very important for protecting the integrity of your identity. For example, what if someone made a video of you committing a crime, or non-consensual sexual content depicting you, and spread it? If people believed it, it would change how others perceive you and destroy your credibility.
C2PA and Originator Profile address this. They show whether a video or image was generated by AI, who the sender is, and so on. However, they require some caution in terms of freedom of speech.
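The "ingredient label" idea can be sketched as below. This is purely illustrative: real C2PA manifests are signed binary structures embedded in the media file itself, with X.509 certificate chains, not a loose JSON-style dict; the generator name and actions here are invented examples.

```python
import hashlib

def make_manifest(content: bytes, generator: str, actions: list) -> dict:
    """Attach an 'ingredient label' to content: what produced it, how it
    was edited, and a hash binding the label to these exact bytes."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # e.g. a hypothetical "ai-image-model-v1"
        "actions": actions,      # edit history, e.g. ["created", "resized"]
    }

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the content still matches the hash in its manifest."""
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]

image = b"...image bytes..."
m = make_manifest(image, generator="ai-image-model-v1",
                  actions=["created", "resized"])
assert verify_manifest(image, m)
assert not verify_manifest(image + b"tampered", m)
```

The hash binding is what makes the label more than a caption: the provenance claims travel with the exact bytes, and any edit that is not recorded breaks verification.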
C2PA and Originator Profile (OP) are technologies that improve the trustworthiness of digital content, but they can have different impacts on free speech.
C2PA and Free Speech
C2PA is a technology for verifying the origin and editing history of digital content, and is intended to prevent the spread of fake news and deep fakes. However, if this technology is misused, it could lead to restrictions on freedom of speech. For example, there are concerns that the C2PA system could be used to identify journalists and allow governments to restrict speech. It is also possible that content tracking by C2PA could be used to enforce certain laws.
Originator Profiles and Free Speech
Originator Profile is a technology to verify the authenticity and trustworthiness of originators of web content. The purpose of this is to prevent misinformation and ad fraud, but identifying originators' identities can lead to loss of anonymity and restrictions on freedom of speech. In particular, if originator information is used inappropriately, it can encourage self-censorship.
Impact on freedom of speech
- Privacy concerns: Both technologies raise concerns about privacy violations, as they collect and manage information about users, which could make it more difficult for users to express their opinions freely.
- Risk of misuse: If the technology is misused by governments and other powerful actors, there is a risk that freedom of speech could be restricted, particularly by targeting journalists and activists.
- Technology Transparency and Accountability: We need transparency about how these technologies are used and how data is managed. Without proper accountability, freedom of speech could be threatened.
These technologies are important for increasing the trust of digital content, but they require careful consideration in how they are used and managed to protect free speech.
Personnel measures
The last item, personnel measures, is also very important. Even if technical measures are implemented, they are meaningless if people do not use them. However, this is quite difficult: an organization can impose training and penalties on its employees and other members, but it is hard to do the same for the general public. I think this remains an open issue.
Conclusion
While the capabilities of attack tools evolve exponentially, human skills do not, so it is impossible to counter them with skills alone, without the support of technology. Therefore, it is necessary to strongly promote technical countermeasures.
On the other hand, when it comes to social communication, the relationship with freedom of speech is also important. Therefore, it is important not to overdo it. Also, it is necessary to be aware of the difficulty of human countermeasures.
It is essential to take all these factors into consideration and implement measures in a balanced manner.