DeepFake
You can download an app on your phone and change voices and faces in videos. It seems like harmless fun on the surface, and for most people it is. But can it be dangerous? Yes, and the implications are tremendous as we are creating more data every day than we ever have before.
DeepFake is a weaponized form of social media and also the topic of this thought-provoking webinar. We shared our thoughts and ideas on the consequences of the technology, how to stay safe, and whether it is possible to regulate it.
It’s only a matter of time before the consequences show up.
How did you feel about social media in 2007 vs. today? Have you learned anything? Social media is a great tool to network, communicate, and share, but it can also have adverse effects. People seem too eager to use social media, AI, and video editing apps without considering the consequences.
Remember JibJab? It was one of the first platforms that let you stick the heads of your friends and family members onto funny dancing bodies and send them as greeting cards. And yes, it sure was fun. But technology, and what can be done with data, has changed, and it could be dangerous.
Technology is here to stay. So how can we protect ourselves?
It is virtually impossible to keep up with and regulate everything when technology advances by the minute; at the same time, if we don’t do something soon, it will be too late. What can we do as a society to think through how DeepFake is affecting us? What guardrails can we put up, and how can we bring in legislation to help regulate this?
It used to take 10 to 15 years for new technology to come out. As a society, we had time to adapt. We don’t have that luxury now, and many see societal collapse in our future if we don’t take the time to adapt and create regulations.
But I Googled it... It has to be true!
We are living in the age of misinformation as analog humans in a digital world. We see video evidence and think it’s real, but how can we trust it? Has it been edited? Have we seen the whole thing? People don’t know how to verify. Can we even truly trust our sources anymore? If five people searched the same term on Google, they would get five different answers.
So many people aren’t looking beyond what their eyes tell them. They are still trusting the first thing they see on social media, in the news, and even in memes. We are also losing the desire to be around people who think differently than we do in order to gain perspective.
“Seeing is believing” comes from a 17th-century English clergyman, and the full quote is, “Seeing is believing, but feeling is the truth.” In other words: I can see the fire is hot, but I don’t believe it unless it burns me. Maybe we should use more than our sight to believe or authenticate something; we could go by feeling. We have to ask ourselves, “Is this a scam?” and always remember, “If it’s too good to be true, it probably is.”
Do we own our own face?
There is an account on TikTok run by a woman who looks exactly like Charli D’Amelio and dances like Charli. Her bio even says that she looks like Charli. Her name is Julia, and she is promoting a clothing brand. She has millions of views, and people on TikTok are claiming that Julia is using deepfakes and filters. It’s hard to tell!
If someone is using your face, can you claim it’s yours? Where and when does it cross the line, given that it isn’t actually you doing whatever was captured in the video? What kind of issue would be considered criminal, and what would be civil? It all depends on what information is being used and how it is being used.
Let’s say you create a video of your boss with a woman who is not his wife and send it to his wife. That is a good example of a civil case. The same scenario could turn into a criminal case if your boss is running for office and the video is leaked.
Facebook and sites like it rely on your actions and information to make money with advertisers by fine-tuning what they show you. How can you put legal guardrails around that? There are so many different types of data that each will need to be considered in its own way.
Using DeepFake to do good.
You can use DeepFake for anonymizing, and that’s a good thing. Now you can use a generically designed human face in a video to protect someone’s privacy. Check out the This Person Does Not Exist website! The images look like photos of real people, but those people don’t exist at all!
Albert Einstein may be back soon, and he will be teaching physics. The combination of DeepFake + voice cloning + an AI bot (that has analyzed every bit of his information and research) + remote schooling could allow kids to learn directly from historical figures.
We are entering a new era. Will movie producers skip real actors and use deepfakes? Will actors license their faces, and can they even do that? Should we treat filters differently than deepfakes? All of this is happening!
Technology can save us from itself.
Technology can sometimes save us from the issues it creates, and it can innovate and support the regulation of data. Perhaps the future is “truth technology”: identifying false information.
For example, some financial institutions are using technology not only to verify their customers but also to let their customers know they are doing business with the legitimate bank. It’s like a two-way digital handshake.
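To make that concrete, here is a minimal sketch of what such a two-way handshake could look like. It is not any particular bank’s implementation, and the shared key and function names are purely illustrative: both sides prove they hold a shared secret by answering the other’s random challenge, so the customer verifies the bank just as the bank verifies the customer.

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret, e.g. established when the customer enrolled.
SHARED_KEY = secrets.token_bytes(32)

def answer_challenge(key: bytes, challenge: bytes) -> bytes:
    """Prove possession of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# The customer challenges the bank...
customer_challenge = secrets.token_bytes(16)
bank_response = answer_challenge(SHARED_KEY, customer_challenge)

# ...and the bank challenges the customer.
bank_challenge = secrets.token_bytes(16)
customer_response = answer_challenge(SHARED_KEY, bank_challenge)

# Each side verifies the other's answer in constant time.
bank_is_legit = hmac.compare_digest(
    bank_response, answer_challenge(SHARED_KEY, customer_challenge))
customer_is_legit = hmac.compare_digest(
    customer_response, answer_challenge(SHARED_KEY, bank_challenge))

print(bank_is_legit, customer_is_legit)  # True True only when both hold the key
```

An impostor site that never received the shared key cannot answer the customer’s challenge correctly, which is the point of verifying in both directions.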
Find out what data gets injected into databases and put some controls in place. Make it clear what data can be used and how it can be used. Utilize a voluntary submission process and allow the owner of the data to control it.
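One way to picture the “owner of the data controls it” idea is a record that carries its own consent flags, which any system must check before using the data. This is only a hypothetical sketch; the field names and purposes below are made up for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Hypothetical record where the owner states what each piece of data may be used for."""
    owner: str
    data: dict                                       # e.g. {"face_photo": "photo.jpg"}
    allowed_uses: set = field(default_factory=set)   # e.g. {"identity_verification"}

    def may_use(self, purpose: str) -> bool:
        """Check the owner's consent before any processing."""
        return purpose in self.allowed_uses

record = ConsentRecord(
    owner="jane@example.com",
    data={"face_photo": "photo.jpg"},
    allowed_uses={"identity_verification"},
)

print(record.may_use("identity_verification"))  # True: the owner opted in
print(record.may_use("ad_targeting"))           # False: never consented
```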
So what can YOU do?
The best thing that you can do is to educate yourself. Treat DeepFake as misinformation. The internet is like a used car salesman. It has an agenda.
- Know your biases and try to find conflicting statements.
- You know how we tell you that if you see any content online that stirs up your emotions, you should take a minute, come back down to Earth, and then take action? The same is true for videos you are absolutely sure are genuine.
- Don’t share something before you fact-check it. Let’s not create a bigger problem.
- Do your best to fact-check.
Hosted by
- Wizer’s hacker, Chris Roberts!
Thank you to our esteemed panelists:
- Justin Daniels - Legal & Business Advisor ➥ Cybersecurity Subject Matter Expert ➥ TEDx and Keynote Speaker ➥ Blockchain/AI Advisor ➥ Commercial UAS (Drone) Pilot
- Arti Arora Raman - Founder and CEO at Titaniam, Inc.
- Jeff Man - Information Security Evangelist at Online Business Systems
- Gabriel Friedlander - Wizer Founder & CEO
Gabriel Friedlander
Gabriel Friedlander is the Founder & CEO of Wizer, whose mission is to make security awareness a basic life skill for everyone. Wizer has been growing rapidly since it was founded in 2019 and now serves 20K+ organizations across 50 countries. Before founding Wizer, Gabriel was the co-founder of ObserveIT (acquired by Proofpoint). With over a decade of experience studying human behavior, he is a prolific content creator on social media, focusing on online safety to elevate public understanding of digital risks. His engaging 1-minute videos have captured the attention of millions worldwide, going viral for their impactful messages.