AISG Launches “Prize Challenge” to Curate Ideas and AI Models to Combat Fake Media

The “Trusted Media Challenge” is a five-month competition that invites the AI community to design and test AI models and solutions that can reliably detect audiovisual fake media, in which the video, the audio, or both may have been modified. The initiative – targeted at AI enthusiasts and researchers from around the world – also aims to strengthen Singapore’s position as a global AI hub by incentivising the involvement of international contributors and sourcing innovative ideas globally.

Participants in the Challenge will have access to datasets of real and fake videos with audio. The Challenge is conducted in partnership with Mediacorp’s CNA and Singapore Press Holdings’ The Straits Times, which have provided about 800 real video clips, including news footage and interviews. In addition, custom videos were recorded with consenting actors. In total, there are approximately 4,000 real clips and 8,000 fake video clips for participants to train and test their models on.

The Challenge is open to researchers and industry professionals from around the globe, as well as anyone interested or experienced in machine learning, deep learning, or computer vision, especially media forensics. Participants need to build AI models that estimate the probability that any given video is fake.
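To illustrate the required output format, a submitted model must map a video clip to a probability in [0, 1]. The sketch below is purely hypothetical – the feature names, weights, and logistic form are invented for illustration and are not part of the Challenge specification:

```python
import math

def fake_probability(features, weights, bias):
    """Return a probability in [0, 1] that a clip is fake (logistic model)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical per-clip feature scores, e.g. blink-rate anomaly,
# lip-sync error, audio-visual mismatch (all invented for illustration).
clip_features = [0.8, 0.3, 0.9]
weights = [2.0, 1.5, 2.5]
bias = -3.0

p = fake_probability(clip_features, weights, bias)
print(f"Estimated probability clip is fake: {p:.3f}")
```

In practice, entrants would likely use deep video and audio networks rather than hand-crafted features; the point here is only that the model’s final output is a single fake-probability per clip, which is what the Challenge Platform scores.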

From today until 15 December 2021, participating teams can submit their solutions – code and models – via the Challenge Platform provided by AI Singapore. The platform will automatically rank submissions on a leaderboard.

The winner of the Challenge stands to earn prize money of S$100,000 and a start-up grant of S$300,000 to develop the solution further, using Singapore as the development base. Prizes and start-up grants will be awarded to the top three winners, with total prize money amounting to S$700,000 (about US$500,000).

Fake media technology, or deepfake tech, is becoming mainstream, delivering benefits while also posing a variety of threats. The technology has allowed movie producers to alter videos and dialogue without expensive reshoots, facilitated professional training, and protected the identities of those being persecuted, among other applications. At the other end of the spectrum, deepfakes are used to sow mistrust and enable scams, making them an existential threat to societies today. If left unchecked, fake media risks becoming a serious national security concern.


Trusted Media Challenge Timelines
The Trusted Media Challenge opens on 15 July 2021. Interested participants can obtain full details and training data via the Challenge Platform.

The Challenge is divided into two phases: Phase 1 will last four months, and the top teams from Phase 1 will advance to Phase 2. Each team’s best submission will be scored and shown on the leaderboard, and prize money will be awarded based on the final ranking in Phase 2.

The announcement of the top three winners is expected to take place in January 2022.

“Technology is being used to create increasingly realistic deepfakes. To identify and counter this manipulation, verification tools are being developed but they are still in the nascent stages. We are in a race between those who want to use deepfake technology for nefarious purposes and those who want to create AI-based tools to counter them. With this as context, we designed the Trusted Media Challenge to provide a platform for AI experts to design and improve machine learning models to help organisations and individuals reliably identify media that has been manipulated, in the near future.”
Professor Ho Teck Hua, Executive Chairman, AISG
“With deepfake technology becoming more sophisticated and available, it has become easier to create fake content that is difficult for the human eye to differentiate. Maliciously doctored content can lead to public misinformation and social fissures, if left unchecked. CNA is excited to partner in the Trusted Media Challenge, collaborating in continued efforts to combat this impending threat, in our mission to provide timely and accurate news to Singapore and the region.”
Mr Willy Tan, Lead, AI Strategy & Solutions – News Group, Mediacorp
“Fake news is polluting the media landscape, and as it proliferates, it makes it harder for audiences to sift out the truth. This undermines our society’s ability to engage in meaningful discussions on the big issues of the day. Media organisations have a role to play in helping people grapple with this, and should employ all the technologies and tools available to do so. This AI challenge is one way to do so, and we are happy to be able to support this effort.”
Mr Warren Fernandez, Editor of The Straits Times and Editor-in-Chief of Singapore Press Holdings’ English/Malay/Tamil Media Group