{"id":14648,"date":"2020-03-11T11:57:30","date_gmt":"2020-03-11T10:57:30","guid":{"rendered":"https:\/\/www.rosello-mallol.com\/deepfake-i-la-proteccio-de-dades\/"},"modified":"2021-02-15T12:00:58","modified_gmt":"2021-02-15T11:00:58","slug":"deepfake-and-data-protection","status":"publish","type":"post","link":"https:\/\/www.rosello-mallol.com\/en\/deepfake-and-data-protection\/","title":{"rendered":"Deepfake and data protection"},"content":{"rendered":"\n

Recently, we have started receiving consultations on a new technological phenomenon known as deepfake, in which Artificial Intelligence techniques are used to alter videos and images so that the people appearing in them are different from those who originally appeared, or so that words are put into their mouths that they never said. As the technology develops, it is becoming increasingly difficult to tell whether or not a video is real.

Needless to say, this technology adds to the risk already inherent in publishing our own images on the Internet, because anyone could make us appear to say things we have never said, or simply take our image and “embed” it in situations we have never actually experienced.

How can we protect ourselves from the Deepfake phenomenon?

As always, you must remember that making pictures or videos accessible to everyone on the Internet is a risk in itself, and this risk is multiplied by the fact that it is almost impossible to know whether a third party is using them. We are still far from having real solutions that tell us how our image is being used.
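By way of illustration only, one rough way to check whether a known photograph of ours reappears more or less unchanged in an image found online is to compare perceptual hashes. The sketch below uses the Python Pillow and imagehash libraries with hypothetical file names; it is not a deepfake detector, and heavily manipulated images will usually evade such a simple check, which is precisely why detection remains an open problem.

```python
# Minimal sketch: compare a known photo of ourselves against an image found
# online using perceptual hashing. File names are hypothetical examples.
# Note: this only flags near-identical copies; heavily manipulated
# (deepfaked) images will usually not be caught by this kind of check.
from PIL import Image
import imagehash

known_hash = imagehash.phash(Image.open("my_photo.jpg"))
suspect_hash = imagehash.phash(Image.open("image_found_online.jpg"))

# Hamming distance between the two hashes: small values suggest the
# suspect image is a (near-)copy of our original photograph.
distance = known_hash - suspect_hash
if distance <= 10:
    print(f"Likely reuse of the original image (distance {distance}).")
else:
    print(f"No obvious match (distance {distance}).")
```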

Until that day arrives, there are some solutions that can be used if we detect our image in videos showing situations we have never experienced (or that we have experienced, but where our privacy is at risk).