{"id":18443,"date":"2023-06-19T09:26:50","date_gmt":"2023-06-19T08:26:50","guid":{"rendered":"https:\/\/www.rosello-mallol.com\/?p=18443"},"modified":"2023-06-19T09:26:56","modified_gmt":"2023-06-19T08:26:56","slug":"artificial-intelligence-and-privacy","status":"publish","type":"post","link":"https:\/\/www.rosello-mallol.com\/en\/artificial-intelligence-and-privacy\/","title":{"rendered":"Artificial Intelligence and privacy"},"content":{"rendered":"\n

Amidst the public debate on the virtues and risks of tools like the famous ChatGPT, the European Union has debated and approved the first regulations<\/a><\/strong> on artificial intelligence and privacy<\/strong>, with the aim of regulating this disruptive technology and its mass use.<\/p>\n\n\n\n

Among the risks of this new technology are undoubtedly those related to people's privacy, since AI feeds on the processing of large amounts of data<\/strong> (including personal data) obtained from many different sources, such as content published on social media, posts or any other material whose use involves data processing, of which the data subject is sometimes not even aware. <\/p>\n\n\n\n

AI systems are constantly fed by the information they consume; the collection and use of that information is therefore an inseparable part of the system itself.<\/p>\n\n\n\n

Artificial Intelligence and privacy: what are the risks?<\/h2>\n\n\n\n

In terms of privacy, the risks, which are identified below, can be grouped into three areas:<\/p>\n\n\n\n

    \n
  1. Security risks.<\/span><\/li>\n<\/ol>\n\n\n\n

Indeed, AI systems feed on the constant, mass collection of data and information (including personal data). The more data is collected and analysed, the greater the risk that an attack on an AI system could cause uncontrollable damage to data subjects.<\/p>\n\n\n\n

      \n
2. Risks of discrimination against people.<\/span><\/li>\n<\/ol>\n\n\n\n

Unquestionably, if the system makes decisions based on the data it collects (for example, by analysing candidates' social media profiles in a recruitment process), discriminatory situations may arise<\/strong>, compounded by the problem that the candidates may not even be aware that their data is being processed.<\/p>\n\n\n\n

        \n
3. Risks associated with unauthorised use.<\/span><\/li>\n<\/ol>\n\n\n\n

Advances in practices such as deepfakes, which we discussed here<\/strong><\/a> more than three years ago, can drag people into undesirable situations<\/strong>, such as fake news, with consequences that can seriously affect their lives. Obviously, the use of this type of technology involves the unauthorised use of people's images.<\/strong><\/p>\n\n\n\n

Those, in a few human-written lines, are the main risks. Let us now read what AI itself says about privacy risks:<\/p>\n\n\n

        \n
        \"\"<\/figure><\/div>\n\n\n

There is no doubt that this is, once again, a revolutionary technology destined for mass use<\/strong>, generating new privacy risks that ultimately come down to controlling who uses our personal information and for what purpose.<\/p>\n\n\n\n

        What measures does the new regulation contemplate to minimise privacy risks?<\/strong><\/h2>\n\n\n\n

From the outset, the European Union&#8217;s AI regulations directly provide for a series of forbidden practices<\/strong> and, more importantly, they apply both to development companies established in the EU<\/strong> and to non-European companies whose practices have an impact on European citizens<\/strong> (somewhat along the lines of the GDPR model).<\/p>\n\n\n\n

        The forbidden practices include:<\/p>\n\n\n\n

          \n
        • AI tools that deploy subliminal techniques<\/strong> beyond a person’s consciousness to distort their behaviour<\/strong> in a way that causes or may cause physical or psychological harm to that person or another person.<\/li>\n<\/ul>\n\n\n\n
            \n
• An AI system that exploits any of the vulnerabilities of a specific group of people<\/strong> because of their age or a physical or mental disability, to materially distort the behaviour of a person<\/strong> belonging to that group, in a way that causes or is likely to cause that person (or another) physical or psychological harm.<\/li>\n<\/ul>\n\n\n\n
              \n
            • The use of AI systems by public authorities for the assessment or classification of the trustworthiness of natural persons over a given period of time based on their social or personal behaviour.<\/li>\n<\/ul>\n\n\n\n
                \n
• The use of &#8220;real-time&#8221; remote biometric identification systems in publicly accessible spaces for law enforcement purposes, unless one of the exceptions provided applies.<\/li>\n<\/ul>\n\n\n\n

The proposal is also based on the principle of risk management. Any project involving the use of AI will therefore require a prior assessment of the impact it may have on the processing of personal data<\/strong>, which will also involve applying principles such as privacy by design and by default.<\/strong><\/p>\n\n\n\n

The planned measures affect both companies developing AI products<\/strong> and those that do not develop them but use this type of product<\/strong>, for example in customer service chats or similar solutions implemented in their businesses.<\/p>\n\n\n\n

In short, this is the first serious legislative attempt to regulate the use of AI. We must remain attentive to how this new regulation is implemented and to the impact, difficult to foresee right now, that AI may have on privacy and the protection of personal data.<\/p>\n\n\n\n

As always, if you need more information about artificial intelligence and privacy, don&#8217;t hesitate to contact us!<\/p>\n\n\n\n

                \n
                \n

                <\/p>

                  <\/ul><\/div>\n
                  \n
                  \n\n\n\n\n\n\n\n<\/div>\n