The most exciting AI research projects of 2020

At Twine we’re fortunate to work with some of the biggest artificial intelligence research institutions out there.

Every day, freelancers in the Twine community are helping to make the world a better place by providing video research material to researchers at the cutting edge of AI.

In this article we’re going to take you through some of the other exciting AI research projects going on in the world right now.

FakeCatcher

You might have heard of the concept of ‘deepfakes’, a form of synthetic media in which a person in an existing image or video is replaced with someone else’s likeness, as it’s been all over the media in 2020.

Although deepfakes are mostly just harmless fun, using them for political purposes or to spread misinformation can have much more sinister consequences.

To fight this threat, researchers from Binghamton University’s Thomas J. Watson College of Engineering and Applied Science have teamed up with Intel Corp. to develop a tool called FakeCatcher.

FakeCatcher works by analyzing the subtle differences in skin color caused by the human heartbeat.

Incredible, right?

This technique, known as photoplethysmography (PPG), boasts an accuracy rate above 90%. It's the same technique used in Apple Watches and wearable fitness trackers to measure your heartbeat during exercise.

Ilke Demir, a senior research scientist at Intel, said in an interview:

“We extract several PPG signals from different parts of the face and look at the spatial and temporal consistency of those signals. In deepfakes, there is no consistency for heartbeats and there is no pulse information. For real videos, the blood flow in someone’s left cheek and right cheek—to oversimplify it—agree that they have the same pulse.”
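
To give a rough feel for the idea (this is not Intel and Binghamton's actual implementation), here's a minimal Python sketch: it treats the average green-channel intensity of two facial regions as a crude PPG proxy and checks whether the two signals pulse together. The region coordinates, smoothing window and correlation threshold are all illustrative assumptions.

```python
# Illustrative sketch only -- not Intel/Binghamton's FakeCatcher.
# Assumes the face stays roughly centred, so the two "cheek" regions
# below are hard-coded approximations.
import cv2
import numpy as np

def region_signal(frames, y0, y1, x0, x1):
    """Mean green-channel intensity of a region over time (a crude PPG proxy)."""
    return np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])

def looks_real(video_path, threshold=0.5):
    cap = cv2.VideoCapture(video_path)
    frames = []
    ok, frame = cap.read()
    while ok:
        frames.append(frame)
        ok, frame = cap.read()
    cap.release()
    if len(frames) < 30:
        return False  # not enough frames to estimate a pulse

    h, w, _ = frames[0].shape
    # Two illustrative regions standing in for the left and right cheeks.
    left = region_signal(frames, h // 2, 3 * h // 4, w // 4, w // 2)
    right = region_signal(frames, h // 2, 3 * h // 4, w // 2, 3 * w // 4)

    # Remove slow lighting drift so only the pulse-like variation remains.
    left = left - np.convolve(left, np.ones(15) / 15, mode="same")
    right = right - np.convolve(right, np.ones(15) / 15, mode="same")

    # Real faces: both regions pulse together, so their correlation is high.
    corr = np.corrcoef(left, right)[0, 1]
    return corr > threshold
```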

The potential impact of FakeCatcher is huge and we’re really excited to see how it helps in the fight against fake media.

Deblur your videos in one click with Adobe’s Project Sharp Shots

At their MAX user conference this year, Adobe dropped a bombshell with the announcement of Project Sharp Shots.

Powered by Adobe’s Sensei AI platform, Sharp Shots is a research project that uses AI to deblur videos with a single click, whether the blur comes from a shaky camera or fast-moving subjects.

In the event demos, the impact of the AI was shown via videos of a woman playing the ukulele and a fast-moving motorcycle.

Shubhi Gupta, the engineer behind the project, told TechCrunch: “there’s no parameter tuning and adjustment like we used to do in our traditional methods. This one is just a one-click thing. It’s not magic. This is simple deep learning and AI working in the background, extracting each frame, deblurring it and producing high-quality deblurred photos and videos.”
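
Adobe hasn’t published how Sharp Shots works under the hood, but the quote describes a familiar pipeline: split the video into frames, run each frame through a deblurring model, then reassemble the result. Here’s a minimal sketch of that pipeline in Python with OpenCV, where deblur_frame is only a placeholder (simple unsharp masking) standing in for a trained network.

```python
# Pipeline sketch only -- Adobe's Sharp Shots model is not public.
# `deblur_frame` is a stand-in for a learned deblurring network.
import cv2

def deblur_frame(frame):
    """Placeholder: a real system would run a trained deblurring network here.
    Simple unsharp masking is used just to keep the sketch runnable."""
    blurred = cv2.GaussianBlur(frame, (0, 0), sigmaX=3)
    return cv2.addWeighted(frame, 1.5, blurred, -0.5, 0)

def deblur_video(src_path, dst_path):
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

    ok, frame = cap.read()
    while ok:
        out.write(deblur_frame(frame))  # extract, deblur, re-encode each frame
        ok, frame = cap.read()

    cap.release()
    out.release()
```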

Multi-modal Video Content Analysis for Content Recommendation

What if AI could recommend new videos to you not just based on metadata but on the actual content of the video itself?

The Visual Display Intelligence Lab has found an answer with multi-modal AI video content analysis.

They’ve built an AI agent that processes data in multiple modalities (e.g., video, images, audio, language) and learns how to recommend new multimedia content to a user. The agent analyses the characteristics of each candidate video, as well as the videos in a user’s viewing history, and recommends new videos of interest based on how closely those characteristics align.
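
The lab hasn’t published full details of the architecture, but the core idea of matching videos on the alignment of their characteristics can be sketched as per-modality embeddings fused into a single vector per video, with cosine similarity between a user’s history and candidate videos. Everything below (the fusion rule, the random toy embeddings, the names) is an illustrative assumption, not their model.

```python
# Illustrative sketch -- not the Visual Display Intelligence Lab's model.
# Assumes each modality of a video has already been encoded into a vector
# (e.g. by off-the-shelf visual, audio and language encoders).
import numpy as np

def fuse_modalities(embeddings):
    """Combine per-modality vectors (video, audio, text, ...) into one unit vector."""
    stacked = np.stack(list(embeddings.values()))
    fused = stacked.mean(axis=0)              # simple average fusion
    return fused / np.linalg.norm(fused)

def recommend(history, candidates, top_k=5):
    """Rank candidate videos by cosine similarity to the user's history profile."""
    profile = fuse_modalities({i: v for i, v in enumerate(history)})
    scores = {name: float(vec @ profile) for name, vec in candidates.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy usage: 3 watched videos and 4 candidates with random 128-d "embeddings".
rng = np.random.default_rng(0)
history = [fuse_modalities({"video": rng.normal(size=128),
                            "audio": rng.normal(size=128),
                            "text": rng.normal(size=128)}) for _ in range(3)]
candidates = {f"video_{i}": fuse_modalities({"video": rng.normal(size=128),
                                             "audio": rng.normal(size=128),
                                             "text": rng.normal(size=128)})
              for i in range(4)}
print(recommend(history, candidates, top_k=2))
```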

Their architecture vastly outperforms current state-of-the-art models, and we’re excited to see what impact it will have on the future of content recommendation!

InnoBrain for Artificial Creativity

What if we told you that there was an artificial creative brain in development that can generate new ideas to drive humanity forward? It’s science fiction, surely? Not at AI Singapore.

They’ve developed a first-of-its-kind computational system called InnoBrain, which is capable of “artificial creativity”: creative artificial intelligence that ideates new technology-based design concepts for innovation.

AI Singapore describes InnoBrain as mimicking ‘a human brain via the synthesis of an artificial “memory” that stores and organizes the world’s historical and growing data on technologies, and an artificial “mind” that responds to external stimuli to retrieve and combine prior knowledge into new concepts’.
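
AI Singapore hasn’t released InnoBrain’s internals, but the “memory plus mind” description maps loosely onto a retrieve-then-combine pattern: store representations of known technologies, retrieve the ones most relevant to a stimulus, and combine them into candidate concepts. The toy sketch below illustrates only that pattern; the data and similarity measure are made up.

```python
# Loose illustration of a retrieve-then-combine pattern -- not InnoBrain itself.
# "Memory": vectors for known technologies; "mind": retrieval + combination.
from itertools import combinations

import numpy as np

MEMORY = {                       # toy "artificial memory" of prior technologies
    "solar cell": np.array([0.9, 0.1, 0.0]),
    "window glass": np.array([0.1, 0.9, 0.1]),
    "battery": np.array([0.8, 0.0, 0.3]),
    "drone": np.array([0.2, 0.3, 0.9]),
}

def retrieve(stimulus, k=3):
    """Pull the k memories most similar (cosine) to the external stimulus vector."""
    scores = {name: float(vec @ stimulus) / (np.linalg.norm(vec) * np.linalg.norm(stimulus))
              for name, vec in MEMORY.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def ideate(stimulus):
    """Combine retrieved prior knowledge into new candidate concepts."""
    retrieved = retrieve(stimulus)
    return [f"{a} + {b}" for a, b in combinations(retrieved, 2)]

# Stimulus vector loosely standing for "energy-related building product".
print(ideate(np.array([0.7, 0.8, 0.1])))
```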

Mind-blowing!

Jianxi Luo, the principal researcher behind the project, described the goal as ‘augmenting creativity in engineering design and problem solving in diverse engineering domains as well as technology-related planning or management’.

This means that ‘even a novice engineer or analyst without extensive knowledge and creative thinking skills can be empowered to work more creatively on technology-related tasks that would otherwise require knowledgeable experts, creative mindsets or gut feeling and serendipity’.

We can’t wait to see the impact InnoBrain has on research and innovation around the world.

ECCV 2020 Pose Research

What if you could visualise the position and pose of people in videos where only a portion of their body is visible in the shot?

Researchers at the University of Michigan have trained neural network models to do exactly that. Their breakthrough has opened up a vast array of new possibilities for AI video analysis – even enabling machines to learn the meaning behind people’s poses and their interactions with their environment.

Their research pushes the boundaries of a field of study called human pose estimation.
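
The Michigan models themselves aren’t packaged for a quick snippet, but you can get a feel for pose estimation on partially visible people with an off-the-shelf tool. The sketch below uses MediaPipe’s publicly available pose estimator (unrelated to this research) and keeps only the joints it is confident are actually in frame; the visibility threshold is an arbitrary choice.

```python
# Generic pose-estimation sketch using MediaPipe (a public library),
# not the University of Michigan models described above.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def visible_keypoints(image_path, min_visibility=0.5):
    """Return (joint name, x, y) for joints the estimator believes are in frame."""
    image = cv2.imread(image_path)
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    if results.pose_landmarks is None:
        return []  # no person detected

    h, w, _ = image.shape
    keypoints = []
    for idx, lm in enumerate(results.pose_landmarks.landmark):
        # Skip joints the model thinks are occluded or outside the shot.
        if lm.visibility >= min_visibility:
            keypoints.append((mp_pose.PoseLandmark(idx).name, lm.x * w, lm.y * h))
    return keypoints

# e.g. visible_keypoints("half_body_shot.jpg") -> upper-body joints only
```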

Prepare yourself for profound changes to video editing software as creatives use this technology to enable genuinely complex interactions between characters and environments within film – exciting stuff!

Are there any projects we’ve missed?

Let us know in the comments below.

Twine

Twine's platform curates the best quality creative freelancers to grow your business, saving time and money whilst ensuring quality results on your projects.