A recent phenomenon on the internet is the “Deep Fake.” Deep Fakes are videos that use machine learning to replace the face and/or voice of a person in a video, making it seem either that they are someone else or that they said something they never actually said. The technique first took hold in the porn community, with the seemingly harmless gag of pasting someone else’s face (a celebrity’s, say) onto a porn actress, with the intent of making it look like the celebrity (say, the star of a major movie franchise) was the one on screen. And while that alone is far from harmless, there is an even more sinister way this tech can be used.
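For the curious, the core trick behind most face-swap deepfakes is surprisingly simple to sketch: two autoencoders share one encoder but keep separate decoders, one per identity. Encode a face of person A, then decode it with person B’s decoder, and out comes a “swap.” Below is a toy numpy illustration of that shared-encoder idea only — the dimensions, random “face” vectors, and linear training loop are illustrative stand-ins, nothing like a real deepfake pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, latent = 16, 4  # toy sizes; real systems use deep conv nets on image crops

# Stand-in "face" datasets for two identities (random vectors, not real images)
faces_a = rng.normal(size=(200, dim))
faces_b = rng.normal(size=(200, dim)) + 2.0

E = rng.normal(scale=0.1, size=(latent, dim))   # shared encoder
Da = rng.normal(scale=0.1, size=(dim, latent))  # decoder for identity A
Db = rng.normal(scale=0.1, size=(dim, latent))  # decoder for identity B

lr = 0.01
for step in range(500):
    # Train each decoder to reconstruct its own identity through the SHARED encoder
    for X, D in ((faces_a, Da), (faces_b, Db)):
        Z = X @ E.T            # encode
        err = Z @ D.T - X      # reconstruction error
        gD = err.T @ Z / len(X)
        gE = (err @ D).T @ X / len(X)
        D -= lr * gD
        E -= lr * gE

# The "swap": encode a face of A, but decode it with B's decoder
fake = (faces_a[0] @ E.T) @ Db.T
print(fake.shape)
```

Because the encoder is forced to describe both identities in one shared latent space, swapping decoders maps A’s expression and pose onto B’s appearance — that shared-bottleneck design, scaled up to deep networks and real video frames, is what the original deepfake tools popularized.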
Fake news.
It’s already hard enough to tell whether that clever meme or email spouting “facts” about someone is accurate, and we already have to fact-check the simplest of viral articles. But until now, if there was video evidence of someone saying something, we could be pretty sure they said it. That time, unfortunately, has passed.
And while, at this exact moment, the technique is still a bit beyond what the average Joe can manage, the tools and apps built around this tech are streamlining the process and making it dramatically easier every day.
So what do we do? Is there anything we can do?
And is this something we need to worry about, or just another writer flying off the handle about some little anecdote of no consequence?