Deepfake and data protection
Recently we have begun receiving consultations about a new technological phenomenon known as deepfake: Artificial Intelligence techniques are used to alter videos and images so that the people who appear seem to be different from those originally recorded, or to put words into people's mouths that they never actually said. As the technology develops, it becomes increasingly difficult to tell whether a video is real or not.
Needless to say, this technology adds an extra risk to those already inherent in publishing our own images on the Internet, because anyone could make us say things we have never said, or simply take our image and “embed” it in recorded situations we have never actually experienced.
How can we protect ourselves from the deepfake phenomenon?
As always, remember that making pictures or videos accessible to everyone on the Internet is itself a risk, and this risk is multiplied by the fact that it is almost impossible to know whether a third party is making use of them. We are still far from real solutions for knowing how our image is being used.
Until that arrives, there are some options if we find our image in videos of situations we have never experienced (or even ones we have lived through, if our privacy is at risk):
- Report the video or image through the internal channels of the social network or website where it is published.
- The Spanish Data Protection Authority has had, for some time now, a priority channel for reporting videos or images of sexual or violent content.
- If the video or image attacks your honour or privacy, remember that this may constitute a crime, so you can always report it to the police.
Deepfake: another challenge and risk to our privacy and intimacy. Let's stay alert.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license.