
Meeting the challenges of deepfakes and data privacy

Are the tools needed to create deepfakes evolving faster than the technology to combat them? What can be done?

Deepfakes are made possible by public research and freely accessible papers. In addition, ready-made technologies are publicly available through open-source software repositories. Running that software is a relatively simple task that requires only a few skills, a good PC and some spare time. The instructions provided with the software are often step-by-step procedures. Several pre-canned tools are available for almost any operating system, including mobile, and their number keeps growing thanks to that massive quantity of open-source material: it's just a matter of creating a good GUI to wrap a cocktail of Python scripts.

Big tech companies are trying hard to tackle deepfakes automatically, and in some cases are joining forces, but the variety of software, neural networks and new variants created by combining the two makes this challenge harder and harder. So far, the only obstacle to a real threat (e.g. a real-time deepfake or deepvoice during a call) has been the huge amount of computational power needed to make a credible deepfake, together with the difficulty of crafting good source-person and target-person models. This task requires a huge number of GPUs, and there is no evidence that anyone has actually achieved a real-time deepfake. But quantum computing could be the real game changer, for both the creation and the detection of deepfake scams.

What does a human-focused, empathetic security organisation look like?

In the triad of Technology, Processes and People, most organizations tend to prioritize technology. But if technology were the solution, we wouldn't still be facing threats. The truth is that humans cannot be replaced by a CPU or an algorithm, yet our brains remain limited by capacity, biases and experience. For this reason we still need to rely on technology to filter billions of data points and present them in a human-readable format. And we still need processes so that everyone on the response team is prepared for an alert call arriving overnight. A human-focused organization relies on cognitivism to enable people to respond properly: this is empathetic security, made for humans.

What challenges do data privacy and customer protection laws bring in terms of cyber security?

Looking back 10 years, having had the opportunity to meet C-levels in every vertical as well as policy makers, I noticed that security and privacy were often not a priority: underestimated and squeezed by budget constraints. Over the last five to seven years, policy makers have become increasingly cyber-aware and now give data privacy and customer protection the importance they deserve: NIS, GDPR, PSD2 and other relevant regulations are just a few examples of this new sensitivity. Again, if a single technology, law or framework were decisive, we wouldn't see so many headlines in the newspapers every day. But everything that increases awareness, automation, obligations and, where needed, sanctions helps to raise the bar.

 

If you are interested in hearing Fabrizio's insights on deepfake and deepvoice, join our 5th Annual Cyber Security Summit on 7th and 8th September in Berlin.


Fabrizio SAVIANO is Chief Information Security Officer at ING. Raised in Tuscany, Fabrizio first moved to London and then joined the Internet Police. Assigned to the Intrusion Squad in Milan, he began his career as a police officer while also teaching at a university in Lugano.

After 8 years he left the Police to build his career across the 8 security domains, which he now teaches as a CISSP trainer. After a long journey as a BT Security specialist and external CISO at the Municipality of Milan, ING Bank appointed him CISO of its Italian branch. He is also a house music producer under the name JL & Afterman, a speaker on IoT Security and a teacher in Crisis Communication.
