Deepfake and data protection
Recently, we have started receiving consultations on a new technological phenomenon known as deepfake, in which Artificial Intelligence techniques are used to alter videos and images so that the people appearing in them are different from those who originally appeared, or to put words into people's mouths that they never said. As the technology develops, it is becoming increasingly difficult to tell whether or not a video is real.
Needless to say, this technology may add to the risk intrinsic to sharing our own images on the Internet, because anyone could make us say things that we have never said, or simply take our image and "embed" it in situations that we have never actually experienced.
How can we protect ourselves from the deepfake phenomenon?
As always, you must remember that making pictures or videos accessible to everyone on the Internet is a risk in itself, and this risk is multiplied by the fact that it is almost impossible to know whether a third party is using them. We are still far from having real tools to know how our image is being used.
Until that day arrives, there are some measures we can take if we detect our image in videos showing situations we have never experienced (or that we have experienced, but whose publication puts our privacy at risk):
- Report the video or image through the internal channels of the social network or web site where it is published.
- The Spanish Data Protection Authority has had a priority channel for some time now to report videos or images of sexual or violent content.
- If the video or image adversely affects your honour or privacy, remember that this may constitute a crime that can be reported to the police.
Deepfake: another challenge and risk to our privacy and intimacy. We must stay alert.