There is growing consensus on how to address the challenge of AI-generated deepfakes in media and business. Earlier this year, Google announced that it was joining the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member; other organisations in the C2PA include OpenAI, Adobe, Microsoft, AWS and the RIAA. With growing concern about AI misinformation and deepfakes, IT professionals will want to pay close attention to the work of this body, and particularly to Content Credentials, as the industry formalises standards governing how visual and video data is managed.
What are Content Credentials?
Content Credentials are a form of digital metadata that creators can attach to their content to ensure proper recognition and promote transparency. This tamper-evident metadata includes information about the creator and the creative process, and it is embedded directly into the content at the time of export or download. Thanks to the weight of the companies behind the concept, Content Credentials represent the best chance yet of a globally standardised, agreed-on way of labelling content.
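The tamper-evident idea can be illustrated with a short sketch. This is a deliberately simplified stand-in, not the real scheme: actual Content Credentials are defined by the C2PA specification and rely on X.509 certificate chains rather than the shared HMAC key assumed here. What the sketch demonstrates is the core property: the signature binds the metadata to an exact hash of the content, so any change to either the content or the metadata invalidates it.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only. Real Content
# Credentials use X.509 certificate chains, not a shared secret.
SIGNING_KEY = b"creator-private-key-placeholder"

def attach_credentials(content: bytes, creator: str) -> dict:
    """Build a manifest whose signature binds metadata to the content.

    A sketch of the tamper-evident idea only; the actual manifest
    format is defined by the C2PA specification.
    """
    manifest = {
        "creator": creator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(
        SIGNING_KEY, payload, hashlib.sha256
    ).hexdigest()
    return manifest

def verify_credentials(content: bytes, manifest: dict) -> bool:
    """Return True only if neither the content nor its metadata changed."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    # Any edit to the content bytes changes the hash and fails here.
    if hashlib.sha256(content).hexdigest() != claimed.get("content_sha256"):
        return False
    # Any edit to the metadata changes the payload and fails here.
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image = b"\x89PNG...original pixel data"
manifest = attach_credentials(image, "Jane Photographer")
print(verify_credentials(image, manifest))            # True
print(verify_credentials(image + b"edit", manifest))  # False
```

Because verification needs only the content bytes and the manifest, any platform along the distribution chain can independently check that what it received is what the creator exported.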
SEE: Adobe Adds Firefly and Content Credentials to Bug Bounty Program
Content Credentials offer several benefits. They will help to build credibility and trust with audiences by providing more information about the creator and the creative process, and that transparency can aid in combating misinformation and disinformation online. By attaching identity and contact information to their work, creators can make it easier for others to find and connect with them, enhancing their visibility and recognition. Equally, it will become easier to identify and de-platform or remove content that isn’t legitimate.
Deepfakes are a challenge that Australia is struggling to grapple with
Australia, like much of the rest of the world, is struggling with a massive acceleration of deepfake fraud. Sumsub’s third annual Identity Fraud Report found a 1,530% surge in deepfakes in Australia over the past year and noted that the sophistication of these was also increasing.
The situation has become so concerning that the government has recently announced a strategy to counter specific forms of deepfake abuse and then establish pathways for treating deepfakes like any other form of illegal content.
Deepfakes are particularly potent sources of disinformation because the eye can be tricked so quickly. Research suggests that the brain can identify an image in as little as 13 milliseconds, far less time than it takes to scrutinise an image and determine its validity. In other words, deepfakes are such a risk because they can have their intended impact on a person before they can be analysed and dismissed.
SEE: AI Deepfakes Rising as Risk for APAC Organisations
For example, Australia’s leading science body, the CSIRO, has published guidance on “how to spot a deepfake,” and even that guidance requires extensive analysis.
“If it’s a video, you can check if the audio is properly synced to the lip movement. Do the words match the mouth? Other things to check for are unnatural blinking or flickering around the eyes, odd lighting or shadows, and facial expressions that don’t match the emotional tone of the speech,” CSIRO expert Dr. Kristen Moore said in the guidance.
So, as useful as that advice is, equipping the targets of deepfakes to identify them isn’t going to be enough to prevent deepfakes from wreaking havoc across society.
Government and the private sector need to come together to combat deepfakes
The government making deepfakes illegal is a positive step in protecting those who would be victimised by them. However, it will fall to the IT industry to develop the means of identifying and managing this content.
There are already high-profile cases of major business figures like Dick Smith and Gina Rinehart “demanding” that organisations such as Meta be more proactive in preventing AI scams, after their likenesses were used in deepfakes.
As noted by the Australian eSafety Commissioner, the “development of innovations to help identify deepfakes is not yet keeping pace with the technology itself.” For its part, the Australian government has committed to combating deepfakes by:
- Raising awareness about deepfakes so Australians are provided with a reasoned and evidence-based overview of the issue and are well-informed about options available to them.
- Supporting people who have been targeted through a complaint reporting system. Any Australian whose photo or video has been digitally altered and shared online can contact eSafety for help to have it removed.
- Preventing harm through developing educational content about deepfakes, so Australians can critically assess online content and more confidently navigate the online world.
- Supporting industry through our Safety by Design initiative, which helps companies and organisations to embed safety into their products and services.
- Supporting industry efforts to reduce or limit the redistribution of harmful deepfakes by encouraging companies to develop: policies, terms of service and community standards on deepfakes; screening and removal policies to manage abusive and illegal deepfakes; and methods to identify and flag deepfakes in their communities.
Ultimately, for this vision to succeed, there needs to be support from industry, particularly from the organisations providing the technology and investing most deeply in AI. This is where Content Credentials come in.
Steps to take to help combat deepfakes
Content Credentials are the best chance of forming standards that will combat deepfakes. Because the approach is industry-driven and backed by the heaviest hitters in the content industries, illegitimate content can be flagged across the vast bulk of the internet, much as malware-laden websites are flagged until they become effectively unfindable on search engines.
For this reason, IT professionals and others working with AI for content generation will want to understand Content Credentials in the same way that web developers understand security, SEO and the standards expected to protect content from being flagged. Steps they should be taking include:
- Implementing Content Credentials: First and foremost, IT pros need to make sure their organisation actively adopts and integrates Content Credentials into workflows to ensure content authenticity and traceability.
- Advocating for transparency: Both internally and externally, with partners and customers, advocate for organisations to be transparent about their use of AI and to adopt ethical practices in content creation and distribution.
- Supporting regulation: Engage with industry bodies and government agencies to shape policies and regulations that address the challenges posed by deepfakes. This includes participating in the various inquiries the government will run on AI to help shape policy.
- Collaborating: Work with other professionals and organisations to develop standardised practices and tools for identifying and mitigating the risks associated with deepfakes.
- Preparing response strategies: Have a plan in place for when deepfakes are detected, including steps to mitigate damage and communicate with stakeholders.
- Leveraging community resources: Finally, utilise resources from cybersecurity communities and governmental bodies like the eSafety Commissioner to stay updated and prepared.
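The first and last steps above can be sketched as a simple ingestion policy for inbound content. The Asset fields and the triage rules here are assumptions for illustration, not part of any C2PA specification; each organisation would tune its own policy.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """Minimal stand-in for a piece of content entering a workflow."""
    name: str
    has_credentials: bool      # were Content Credentials found at all?
    credentials_valid: bool    # did they verify against the content?

def triage(asset: Asset) -> str:
    """Route inbound content based on its Content Credentials.

    Hypothetical policy: verified credentials pass through, failed
    verification is blocked as likely tampering, and absent
    credentials go to human review rather than automatic rejection,
    since much legitimate content is not yet credentialed.
    """
    if asset.has_credentials and asset.credentials_valid:
        return "publish"
    if asset.has_credentials:
        return "block"
    return "manual-review"

print(triage(Asset("photo.jpg", True, True)))    # publish
print(triage(Asset("clip.mp4", True, False)))    # block
print(triage(Asset("scan.png", False, False)))   # manual-review
```

The "manual-review" branch is the pragmatic middle ground while adoption is incomplete: rejecting all uncredentialed content outright would block most of today's legitimate media.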
Without a doubt, deepfakes will be one of the most significant challenges the tech industry and IT pros need to answer. Content Credentials offer an excellent starting point for the industry to coalesce around.