Globant’s Bill Bronske on fakes, deepfakes and the Internet’s truth crisis 

By Jerry Bowles, November 12, 2019
Who do you trust when you can no longer believe your own eyes and ears?


The internet is in the midst of a truth crisis that presents an existential threat to brands and governments alike. The unrelenting stream of fake or misleading information on social media platforms, combined with unprecedented efforts in some political quarters to label legitimate news “fake” news, has left ordinary citizens confused and wondering who or what to believe.

A series of high-tech disasters, including the Facebook Russian misinformation campaign, Cambridge Analytica, ransomware attacks, the misuse of private data by Google and Facebook, the Equifax hack and dozens of other scandals, has shaken the faith of millions of users who eagerly adopted new technologies in the naïve belief that their trust would not be abused.

Add to the mix the creepy new AI-enabled technology called deepfakes, which allows bad actors to create videos and audio soundbites that appear to show people saying and doing things they never said or did, and the concern grows greater still. The rise of deepfakes is forcing us to rethink our long-held belief that video and audio are reliable records of reality.

I spoke about these issues with Bill Bronske, Senior Solutions Architect and AI Studio expert at Globant, a digital transformation technology services company that works with Google, Southwest Airlines, EA and a number of other major Fortune 500 brands.

The company, which has about 10,000 employees in 17 countries, is organized into "Studios" that provide tailored solutions across a broad range of emerging technologies, including blockchain, artificial intelligence, the internet of things, cybersecurity and gaming.

Founded by four amigos in Buenos Aires in 2003, Globant was the first Latin American software company to launch an IPO on the NYSE.  Said Bronske:  

I think we're at a time in history when we have a climate in which advertising, marketing, political and influencer campaigns and so on have turned towards crafting their own narratives. A side effect is that now there's a public awareness around the effects of what we're calling synthetic media, or manipulated media, and people are wondering whether they can believe what they’re seeing and hearing. The public has grown a little weary of hearing narratives that later turn out to be false.

This has led to a situation in which we are skeptical not only about the lie but also about the truth. Without truth, there is no trust. And without trust, there are no sales or votes. That is extremely bad news for brands and politicians alike.

Deepfakes have added vastly more uncertainty to the already volatile contest over whose reality prevails. They are dangerous because, if undetected, they could theoretically change the outcome of a trial, ruin an IPO, sink a political candidate, even start a war. Bronske said:

They are a combination of two powerful forces coming together. The first is higher quality video composition tools which are now more widely available than ever before. The second is the fact that we've become trained and hooked on sound bites. Entire narratives can be developed using a single image or a couple of seconds of audio.  

This collision plays to our collective vulnerability to manipulation. We may understand that context can be missing, text can be misquoted and photos can be photoshopped; still, we all tend to believe what we can see or hear. We don't immediately recognize that video and audio can be just as easily synthesized. I see this as the next evolution of attempts to craft narratives that betray reality.

Not all deepfakes are necessarily bad, Bronske said, as long as the person whose likeness is forged, or that person's estate, has fully agreed to participate and copyrights have been respected. As an example, he pointed out that a few months ago the soccer superstar David Beckham appeared in a video ad for Malaria No More, a U.K. charity, in which he appears to urge people to join the fight against malaria in nine languages. As good as Beckham is, he doesn’t speak nine languages.

Because deepfakes are created using generative adversarial networks (GANs), a class of machine learning models that pits two neural networks against each other, they get better and harder to detect over time. Bronske says that detecting them will require complex processes and tooling similar to those used for the electronic signing of legal documents.
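To make the adversarial idea concrete, here is a minimal, purely illustrative sketch, nothing like the scale of a real deepfake system and not anything Globant has described: a tiny "generator" with two parameters learns to mimic a one-dimensional Gaussian, while a logistic "discriminator" tries to tell its samples from the real thing. Every name and number below is invented for illustration.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(0)

# "Real" data: samples from a 1-D Gaussian the generator must learn to mimic.
REAL_MEAN, REAL_STD = 4.0, 1.25

# Generator G(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c):
# each is just a pair of scalars, the smallest possible adversarial pair.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.03, 64

for step in range(3000):
    # --- discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(size=batch)
    fake = a * z + b
    s_real = sigmoid(w * real + c)
    s_fake = sigmoid(w * fake + c)
    w += lr * np.mean((1 - s_real) * real - s_fake * fake)
    c += lr * np.mean((1 - s_real) - s_fake)

    # --- generator step: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(size=batch)
    fake = a * z + b
    s_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - s_fake) * w * z)
    b += lr * np.mean((1 - s_fake) * w)

z = rng.normal(size=10000)
samples = a * z + b
print(f"generated mean ~ {samples.mean():.2f} (target {REAL_MEAN})")
```

The point of the loop is the arms race Bronske describes: every gradient step that makes the discriminator better at spotting fakes also hands the generator a better training signal, so over time the fakes drift toward the real distribution and become harder to detect.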

Politicians have begun to take notice. A bipartisan bill now pending in the House of Representatives would do three things: require companies and researchers who create tools that can be used to make deepfakes to automatically add watermarks to forged creations; require social media companies to build better manipulation detection directly into their platforms; and create sanctions, such as fines or even jail time, for creating malicious deepfakes that harm individuals or threaten national security.

Bronske said that the best protection against deepfakes and other synthetic media is to build your digital transformation efforts around the principles of fairness, accountability, transparency and social good.

The focus of our company for the last several years has been around transforming organizations from the inside out. A key driver for that is a people-focused cultural transformation that goes from the newest junior member of the organization all the way to the top and enables people to perform at the optimum level. We know that that's what drives transformation. That's what drives change. That’s what builds and preserves trust.   

My take

Deepfakes are simply the latest and slickest manifestation of a much bigger problem: the serious, ultimately crippling, erosion of trust. Social media platforms are accelerating that disintegration by allowing lies to spread at warp speed while washing their hands of any responsibility or accountability. Authoritarian leaders have discovered that “fake” news is more powerful than straightforward censorship. Democratic governments need, at a minimum, to invest in developing technology to detect and clearly label deepfakes, and to require online platforms to do the same.