Estonia-based Sentinel, which is developing a detection platform for identifying synthetic media (aka deepfakes), has closed a $1.35 million seed round from a clutch of experienced angel investors – including Jaan Tallinn (Skype), Taavet Hinrikus (TransferWise), Ragnar Sass & Martin Henk (Pipedrive) – and Baltics early stage VC firm, United Angels VC.

The challenge of building tools to detect deepfakes has been likened to an arms race – most recently by tech giant Microsoft, which earlier this month launched a detector tool in the hopes of helping pick up disinformation targeted at November’s US election. “The fact that [deepfakes are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology,” it warned, before suggesting there’s still short-term value in trying to debunk malicious fakes with “advanced detection technologies”.

Sentinel co-founder and CEO, Johannes Tammekänd, agrees on the arms race point – which is why its approach to this goal-post-shifting problem entails offering multiple layers of defence, following a cyber security-style template. He says rival tools – name-checking Microsoft’s detector and another competitor, Deeptrace, aka Sensity – are, by contrast, only relying on “one fancy neural network that tries to detect flaws”, as he puts it.

“Our approach is we believe it’s impossible to detect all deepfakes with only one detection method,” he tells TechCrunch. “We have multiple layers of defence so that if one layer gets breached then there’s a high probability that the attacker will get detected in the next layer.”

Tammekänd says Sentinel’s platform uses four layers of deepfake defence at this stage: an initial layer based on hashing known examples of in-the-wild deepfakes to check against (which he says is scalable to “social media platform” level); a second layer made up of a machine learning model that parses metadata for manipulation; a third that checks for audio alterations, searching for synthesized voices etc; and lastly a technology that analyzes faces “frame by frame” to look for signs of visual manipulation.

“We take input from all of those detection layers and then we fuse the outputs together [as an overall score] to have the highest degree of certainty,” he says.
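Sentinel hasn’t published implementation details, but the layered approach Tammekänd describes can be sketched roughly as a pipeline that runs each detector independently and fuses their scores. In the minimal sketch below, the layer names, per-layer scoring stubs, weights and decision threshold are all illustrative assumptions, not the company’s actual system.

```python
# Illustrative sketch only: a multi-layer deepfake check that fuses per-layer
# scores into one overall confidence value. Layer logic, weights and the
# decision threshold are invented for this example.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class DetectionLayer:
    name: str
    score_fn: Callable[[bytes], float]  # returns 0.0 (authentic) .. 1.0 (fake)
    weight: float


def analyze(video: bytes, layers: List[DetectionLayer], threshold: float = 0.5) -> Dict:
    """Run every layer and combine results into a weighted overall score."""
    per_layer = {layer.name: layer.score_fn(video) for layer in layers}
    total_weight = sum(layer.weight for layer in layers)
    overall = sum(per_layer[layer.name] * layer.weight for layer in layers) / total_weight
    return {
        "per_layer": per_layer,
        "overall_score": overall,
        "likely_deepfake": overall >= threshold,
    }


# Hypothetical stand-ins for the four layers described above:
layers = [
    DetectionLayer("known_fake_hash_match", lambda v: 0.0, weight=2.0),
    DetectionLayer("metadata_analysis", lambda v: 0.2, weight=1.0),
    DetectionLayer("audio_synthesis_check", lambda v: 0.1, weight=1.0),
    DetectionLayer("frame_by_frame_face_check", lambda v: 0.3, weight=1.5),
]

print(analyze(b"raw video bytes here", layers))
```

The point of the design, per Tammekänd’s framing, is that a fake which slips past one detector still has to survive every other layer before the fused score comes out looking authentic.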

“We have already reached the point where somebody can’t say with 100% certainty if a video is a deepfake or not. Unless the video is somehow ‘cryptographically’ verified … or unless somebody has the original video from multiple angles etc,” he adds.
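One simple form of the “cryptographic” verification Tammekänd alludes to is a content hash published (ideally signed) by the original source at capture time, which anyone can later check a copy against. The snippet below is a minimal sketch of that idea under those assumptions, not a description of Sentinel’s method.

```python
# Minimal sketch of provenance checking via a content hash: if the publisher
# releases a SHA-256 digest of the original file, anyone can verify that a
# copy hasn't been altered. Illustration only, not Sentinel's approach.
import hashlib


def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()


def matches_published_digest(path: str, published_hex: str) -> bool:
    """True if the local file hashes to the digest the source published."""
    return sha256_of_file(path) == published_hex.lower()


# Usage (hypothetical digest value):
# print(matches_published_digest("clip.mp4", "9f86d081884c7d65..."))
```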

Tammekänd also highlights the importance of data in the deepfake arms race – over and above any particular technique. Sentinel’s boast on this front is that it has gathered the “largest” database of in-the-wild deepfakes to train its algorithms on.

It has an internal verification team working on data acquisition, applying its own detection system to suspected media, with three human verification experts who “all have to agree” in order for it to verify the most advanced natural deepfakes.
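That “all have to agree” rule amounts to a unanimity gate on human labels before a sample is trusted. A toy sketch of such a gate is shown below; the reviewer count and label values are assumptions for illustration.

```python
# Toy sketch of a unanimous-agreement gate for labelling suspected deepfakes:
# a sample is only accepted if every reviewer returns the same verdict.
from typing import List, Optional


def unanimous_verdict(labels: List[str]) -> Optional[str]:
    """Return the agreed label, or None if reviewers disagree (or none voted)."""
    if labels and all(label == labels[0] for label in labels):
        return labels[0]
    return None


print(unanimous_verdict(["deepfake", "deepfake", "deepfake"]))   # -> "deepfake"
print(unanimous_verdict(["deepfake", "authentic", "deepfake"]))  # -> None
```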

“Every day we’re downloading deepfakes from all the major social platforms – YouTube, Facebook, Instagram, TikTok, then there are Asian ones, Russian ones, also porn sites as well,” he says.

“If you train a deepfake model based on, let’s say, Facebook data-sets then it doesn’t really generalize – it can detect deepfakes like itself but it doesn’t generalize well to deepfakes in the wild. That’s why the detection is really 80% the data engine.”

Not that Sentinel can always be sure. Tammekänd gives the example of a short video released by Chinese state media of a poet who it was feared had been killed by the military – in which he appeared to say he was alive and well and told people not to worry.

“Although our algorithms show that, with a very high degree of certainty, it is not manipulated – and most likely the person was just coerced – we can’t say with 100% certainty that the video is not a deepfake,” he says.

Sentinel’s founders, who are ex NATO, Monese and the UK Royal Navy, actually began working on a very different startup idea back in 2018 – called Sidekik – building a Black Mirror-esque tech which ingested comms data to create a ‘digital clone’ of a person in the form of a tonally similar chatbot (or audiobot).

The idea was that people could use this virtual double to hand off basic admin-style tasks. But Tammekänd says they became concerned about the potential for abuse – hence the pivot to deepfake detection.

They’re targeting their technology at governments, international media outlets and defence agencies – with early customers, since the launch of their subscription service in Q2 this year, including the European Union External Action Service and the Estonian Government.

Their stated aim is to help protect democracies from disinformation campaigns and other malicious information ops. So that means they’re being very careful about who gets access to their tech. “We have a really heavy vetting process,” he notes. “For instance we work only with NATO allies.”

“We have had requests from Saudi Arabia and China but obviously that is a no-go from our side,” Tammekänd adds.

A recent study the startup conducted suggests exponential growth of deepfakes in the wild (i.e. found anywhere online) – with more than 145,000 examples identified so far in 2020, indicating ninefold year-on-year growth.

Tools to create deepfakes are certainly getting more accessible. And while plenty are, at face value, designed to offer harmless fun/entertainment – such as the likes of selfie-shifting app Reface – it’s clear that without thoughtful controls (including deepfake detection systems) the synthesized content they enable could be misappropriated to manipulate unsuspecting viewers.

Scaling up deepfake detection technology to the level of media sharing happening across social media platforms today is one major challenge Tammekänd discusses.

“Facebook or Google could scale up [their own deepfake detection] but it would cost so much today that they would have to put in a lot of resources and their revenue would obviously fall dramatically – so it’s fundamentally a double standard; what are the business incentives?” he suggests.

There is also the threat posed by very sophisticated, very well funded attackers – creating what he describes as “deepfake zero day” targeted attacks (perhaps state actors, most likely pursuing a very high value target).

“Essentially it is the same thing as in cyber security,” he says. “Basically you can mitigate [the vast majority] of the deepfakes, if the organizational incentives are right. You can do that. There will always be those deepfakes which can be created as zero days by sophisticated adversaries. And nobody today has a good method or, let’s say, technique for how to detect those.

“The only known technique is the layered defence – and hope that one of those defence layers will pick it up.”

Sentinel co-founders, Kaspar Peterson (left) & Johannes Tammekänd (right). Image Credit: Sentinel

It’s certainly getting cheaper and easier for any Internet user to make and distribute plausible fakes. And as the dangers posed by deepfakes rise up business and political agendas – the European Union is preparing a Democracy Action Plan to counter disinformation threats, for example – Sentinel is positioning itself to sell not just deepfake detection but bespoke consultancy services, powered by learnings extracted from its deepfake data-set.

“We have a whole product – meaning we don’t just offer a ‘black box’ but also provide prediction explainability, training data statistics in order to mitigate bias, matching against already known deepfakes, and threat modelling for our clients through consulting,” the startup tells us. “Those key factors have made us the choice of customers so far.”

Asked what he sees as the biggest risks deepfakes pose to Western society, Tammekänd says, in the short term, the major worry is election interference.

“One possibility is that during the election – or a day or two before – imagine Joe Biden saying ‘I have cancer, don’t vote for me’. That video goes viral,” he suggests, sketching one near-term risk.

“The technology’s already there,” he adds, noting a recent call with a data scientist from one of the consumer deepfake apps who told him they’d been contacted by various security organizations concerned about just such a risk.

“From a technical perspective it could definitely be done … and once it goes viral, for people seeing is believing,” he adds. “If you look at the ‘cheap fakes’ that have already had a huge impact, a deepfake doesn’t have to be perfect, really, it just has to be believable in a good context – so there are a lot of voters who can fall for that.”

Longer term, he argues the risk is really massive: People could lose trust in digital media, period.

“It’s not only about videos, it can be images, it can be voice. And actually we’re already seeing the convergence of them,” he says. “So what you can actually simulate are entire events … that I might watch on social media and all the different channels.

“So we will only trust digital media that is verified, basically – that has some method of verification behind it.”

Another, even more dystopian, AI-distorted future is that people will no longer care what’s real or not online – they’ll just believe whatever manipulated media caters to their existing prejudices. (And given how many people have fallen down strange conspiracy rabbit holes seeded by a few textual ideas posted online, that seems all too possible.)

“Eventually people don’t care. Which is a really dangerous premise,” he suggests. “There’s a lot of talk about where are the ‘nuclear bombs’ of deepfakes? Let’s say it’s just a matter of time before a deepfake of a politician comes out that will do massive damage but … I don’t think that’s the biggest systematic risk here.

“The biggest systematic risk is, if you look from the perspective of history, what has happened is information production has become cheaper and easier and sharing has become faster. So everything from Gutenberg’s printing press, TV, radio, social media, the Internet. What’s happening now is the information that we consume on the Internet doesn’t have to be produced by another human – and thanks to algorithms you can do it on a binary time-scale, at mass scale and in a hyper-personalized way. So that’s the biggest systematic risk. We basically won’t know what is reality anymore online. What is human and what is not human.”

The potential consequences of such a scenario are myriad – from social division on steroids, and still more confusion and chaos fuelling rising anarchy and violent individualism, to, perhaps, a mass switching off, if large swathes of the mainstream simply decide to stop listening to the Internet because so much online content is nonsense.

From there things might even come full circle – back to people “reading more trusted sources again”, as Tammekänd suggests. With so much at shapeshifting stake, one thing looks like a safe bet: Smart, data-driven tools that help people navigate an ever more chameleonic and questionable media landscape will be in demand.

TechCrunch’s Steve O’Hear contributed to this report
