7 July 2021
Artificial intelligence technology carries a potential never seen before, but at the same time the threat of interfering with human rights. Protecting citizens’ data, safeguarding transparency and ensuring security are key factors linked to this new technology, but its potential is not exhausted there, nor are the impacts it may have on human rights. Today we have the honour of laying the foundations of AI’s structure, which has the potential to shape a bright and flourishing future. At the same time, we have a duty to create the ground for sustainable development, a structure built in the light of fundamental human rights.
This brief study, without claiming to be exhaustive, aims to stimulate the discussion on Artificial Intelligence (AI) and how this new technology can be implemented taking into account the existing framework of fundamental rights. In this respect, the human rights context provides an interesting perspective from which to address the topic. In the light of the legal framework, especially the European one, we briefly consider the peculiar characteristics of this new technology and how its effects can be interpreted in the wider context of ethics.
Briefly addressing a specific example, such as the Italian case, is also useful to stimulate a constructive discussion on the prospects for integrating this technology within the EU’s Member States. At the same time, liberal perspectives are taken into account to consider the applicability of general principles such as personal freedom and privacy to a specific context.
The human rights achievements of the last half century are unparalleled in the history of mankind. Moreover, it can be said without much argument that liberalism has been the cornerstone of this normative development in favour of individual rights, from the birth of liberal thought in philosophy to its application in the liberal democracies in which we are fortunate to live.
Although much work remains to harmonise the principles of law throughout humanity, human rights can be counted among the greatest achievements in the history of mankind.
Among the challenges to be considered when discussing this issue today, an important role is played by new technologies. As in the past, every new technological vanguard, produced by the excellence of human knowledge, needs to be understood and discussed in the light of those same rights just mentioned. But while most technologies can be produced, dismantled and controlled, when it comes to intelligent machines the matter becomes crucial, and questions arise: how, by whom and to what extent can the application of human rights in the field of Artificial Intelligence be ensured?
The issue is currently being analysed from every angle: by computer programmers and lawyers, policy makers and academic experts. The question this research is trying to answer is to what extent this technological advancement can be considered ethical. First of all, it should be remembered that the question does not apply only to artificial intelligence, and that whether or not a technology is “neutral” is of great importance in understanding how to position our thinking towards it.[1] Secondly, in general terms, AI can be defined as “A discipline concerned with the building of computer programs that perform tasks requiring intelligence when done by humans […]”.[2] Thirdly, one of the characteristics of this technology is the possibility of “learning” from a huge amount of data (big data)[3], which is processed automatically, with performance improving over time.[4] Finally, one should not think of AI as a single program, an algorithm, an abstract entity, or a supercomputer powerful enough to threaten the safety of certain people. Artificial Intelligence is in fact the set of technologies being implemented in the digital processes we have recently adopted, characterised by the ability to acquire a certain level of independence in the choices that the machine in which it is implemented can perform.[5]
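The “learning” property just described can be illustrated with a deliberately simplified, hypothetical sketch (not drawn from any system discussed in this article): a toy linear model whose predictions improve as it repeatedly processes examples, in miniature what AI systems do with big data at scale. All names and numbers below are illustrative.

```python
# A toy illustration of "learning from data": a linear model y ≈ w*x + b
# whose parameters improve with each pass over the examples.

def train_step(w, b, x, y, lr=0.01):
    """One gradient-descent update on a single (x, y) example."""
    err = (w * x + b) - y          # prediction error on this example
    return w - lr * err * x, b - lr * err

def train(data, epochs=500):
    """Repeated passes over the data gradually refine the parameters."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            w, b = train_step(w, b, x, y)
    return w, b

# Synthetic "big data" in miniature: samples of the hidden rule y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]
w, b = train(data)
print(round(w, 2), round(b, 2))    # converges close to the true values 2 and 1
```

The point of the sketch is that no rule “y = 2x + 1” is ever written into the program: the behaviour is acquired from the data, which is precisely why the quality and provenance of that data matter so much in the human rights discussion that follows.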
In the light of the above, and especially considering that the growth potential of these “intelligent technologies” is theoretically unlimited, one can understand the multidirectional approaches to this technology. A purely philosophical-empirical conversation would perhaps conclude that the further development of these technologies should be discouraged, as it might see replicated in these new forms of intelligence the unfathomability of human nature and the impossibility of stating what is good and what is evil. If, on the other hand, the issue is analysed from a purely technical perspective, the implementation of this technology undoubtedly poses risks to the internet’s end users, since its use by a hostile actor in cyberspace would make the protection of strategic infrastructures difficult. It is therefore necessary to act preventively to identify possible malicious uses of AI.[6]
Using one metric rather than another may serve a different purpose and give different results, but what matters above all is to ensure the protection of individuals’ rights, such as personal freedom, privacy and dignity. It is useful, then, to think in terms of human rights when considering the role of AI in future society. Consequently, it is logical to assimilate the question of AI to the question of ethics, and to ask whether this technology can be ethical on the basis of those criteria commonly accepted as fundamental human rights. It is not easy to establish such an ethical framework for a technology.[7] But human rights can help, as they have the characteristic of being binding for everyone, and could help us understand how the use of ubiquitous technology can affect our societies.[8] From this point of view, the reasoning takes a rather treacherous turn: in view of these fundamental rights, how can we ensure that an artificial intelligence (or a machine equipped with such intelligence) follows these rules?
Although this is not the place to draw up an exhaustive list of the possible implications of this technology, it is important to keep in mind the pervasiveness[9] of AI technologies in the light of the digital era unfolding in the world. This means that, potentially, every individual can be reached and affected by the consequences of a misuse of AI, theoretically exposing everyone to the same risks.[10] What has been said so far carries even greater weight if we think of the non-human character implied by the use of a machine capable of performing tasks once entrusted to humans.
If we think about the world of justice, for instance, Artificial Intelligence could on the one hand be implemented to improve (or automate) some repetitive bureaucratic and administrative practices, significantly shortening trial times and helping digitalisation processes.[11] At the same time, the implementation of software that evaluates a prisoner’s likelihood of recidivism raises important ethical issues. Can a fair trial be guaranteed?[12]
Another example to consider is that of personal data. As has been said, “Big Data” is for the most part the “fuel” of AI-based systems. This vast amount of information can be processed in countless ways: for example, data can be used to understand users’ behaviour in hospital emergency centres, and later to provide specific training for health professionals to improve responsiveness in crisis management. At the same time, geo-localisation data can be shared and exploited for purely commercial purposes, to predict a person’s behaviour in advance based on his or her past experience, with the ability to access personal information (gender, age, social status) with extreme accuracy. Such a purpose could amount to a breach of personal data and personal privacy and, even more, to a violation of the dignity of the individual, who risks being transformed from a person into a computational object (a datum).[13] Does this interfere with the protection of individual freedom?[14]
Data economics, however, is not only about personal data, and in the wider context of the functioning of artificial intelligence technologies one has to take into account how important it is to foster this market in the wider perspective of a European digital single market.[15]
Politicians give different answers to these questions, and the positions of the political groups take on different facets. Certainly – especially since the main discussion should be narrowed around ethics and individual rights – liberalism seems helpful in understanding how fundamental it is to ensure a broad context in which the rights of the individual are guaranteed.[16] This is true despite the peculiarities of the technology itself. In fact, it is not the technology that is dangerous or discriminatory, but the use that is made of it, and it is therefore the responsibility of politics to ensure its thoughtful application. A “strict human verification and due process” of what machines do is therefore essential, especially in fields of general interest such as the public sector.[17]
Beyond these risks, the implementation of Artificial Intelligence technologies can bring enormous benefits, especially if applied to the digitisation of services provided by public institutions, thus having a huge impact on our societies.[18] Member States, in the light of a shared ethical framework at Union level, are implementing their own strategies for approaching Artificial Intelligence, which should be ethical and focused on the well-being of the individual. This does not seem to preclude a competitively advantageous application; rather, ensuring an ethical framework for the technology should be conducive to greater (and more trustworthy) results.[19] In any case, the use of data for this technology must follow the principle of protection of the individual, which remains the priority in a regulatory context that is still being defined.
As anticipated, the Italian case offers an interesting glimpse of the reality that a dynamic and highly distinctive country can present in terms of the application of this technology. While the national strategy for AI implementation is still under development, and the impact of this technology could potentially result in a revolution in some specific contexts (mainly in the public administration)[20], some issues common to other contexts within the Union can be identified to describe how the implementation of AI must take numerous factors into account.
As mentioned, the impact of AI in many cases, such as the Italian one, could be fundamental to “streamline” many bureaucratic processes, especially with regard to the timing of certain administrative procedures.[21] In the Italian strategy for AI this concept is broadly described, and the whole strategy aims at fostering the implementation of this technology in a broad sense, taking into account not only AI “per se” but the whole set of evolving processes bound to the ecosystem to be built around this new technology.[22] At the same time, however, there appears to be a certain inclination to verticalise decision processes regarding the use of AI, which seem to remain mainly in the hands of the Prime Minister.[23] This offers guarantees in democratic terms, but also raises doubts, as implementation is strongly linked to the political cycle, and some instruments of guarantee are therefore needed to protect citizens’ rights.[24]
Moreover, to evaluate the impact of IT and AI advancements in Italy, different aspects must be taken into account, such as the general diffusion of digital services, big data and e-commerce, which, as the Commission’s indexes show, are below the European average.[25] Finally, not to be forgotten is the geographical peculiarity of a peninsula crossed by a mountain range that divides the country in two[26], with two big islands. All this creates a certain disparity in the conditions of access to the abovementioned services (and, more generally, to the internet) between urban and rural areas, cities and very small villages. It also means that huge investments in infrastructure are continuously needed to adapt existing networks to new technologies (such as Artificial Intelligence).
Without claiming to be exhaustive, these examples give an idea of how, for the most part, the question must be framed in terms of the “correct use” of this technology, and not so much in terms of “new frontiers of law”. Technically, it is in fact difficult to establish that an algorithm is right or correct. Likewise, it is difficult to identify an exact yardstick by which to judge the ethicality of a technological process (although, as we have seen, human rights are a good starting point). In other words, it is not a matter of creating new laws to meet every “disruptive” technological advancement, but rather of ensuring that artificial intelligence respects human rights and is implemented and used in a fair and non-harmful way within our societies, at every stage of its use.[27] The need expressed by the European Commission for Artificial Intelligence strategies aligned with a technology that is “trustworthy” and “human-centred” lies at the heart of the concept of Artificial Intelligence in Europe.[28]
Given the increasing use of this technology, it is clear that “new challenges [exist] to traditional ethical frameworks due to the implicit and explicit assumptions made by these systems, and the potentially unpredictable interactions and outcomes that occur when these systems are deployed in human contexts”[29]. The human context is therefore important, as is the question of how AI is connected to it. Constant monitoring is necessary, as is increasing education about this phenomenon, in order to make end users aware of the risks and to promote the correct use of AI-based systems.
Certainly, artificial intelligence poses technological, social and moral questions for which continuous dialogue is necessary to contribute to a correct implementation of this remarkable technology. Liberalism, with its attention to the protection of the individual, once again proves to be the key to understanding the future and the relationship we will have with intelligent, thinking machines.
[1] R. J. Whelchel, “Is Technology Neutral?”, in IEEE Technology and Society Magazine, vol. 5, no. 4, pp. 3-8 (Dec. 1986)
[2] J. Daintith, E. Wright, A Dictionary of Computing, Oxford University Press, Oxford (2008)
[3] “Over the past two decades, there is a tremendous growth in data. This trend can be observed in almost every field. […] In the mid-2000s, the emergence of social media, cloud computing, and processing power (through multi-core processors and GPUs) contributed to the rise of big data […] Making sense out of the vast data can help the organization in informed decision-making and provide competitive advantage” – A. Bhadani, D. Jothimani, “Big data: Challenges, opportunities and realities”, in: Singh, M.K., & Kumar, D.G. (Eds.), Effective Big Data Management and Opportunities for Implementation (pp. 1-24), Pennsylvania, USA, IGI Global
[4] T. Saloky, J. Šeminský. “Artificial Intelligence and Machine Learning”, Studies in health technology and informatics 261 (2019)
[6] G. C. Allen, D. Amodei, H. Anderson, & others., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”, Future of Humanity Institute, University of Oxford, Centre for the Study of Existential Risk, University of Cambridge, Center for a New American Security, Electronic Frontier Foundation, OpenAI (2018)
[7] A good example comes from the European institutions, with the publication of the “AI Ethics Guidelines” presented by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG), once again putting Europe at the forefront of the discussion about new technology and European citizens’ rights. See: European Commission, “Draft Ethics Guidelines for Trustworthy AI”, High-Level Expert Group on Artificial Intelligence (2018)
[8] The rights the text refers to are those contained in the main accepted declarations, such as the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), the International Covenant on Economic, Social and Cultural Rights (ICESCR), and the EU Charter of Fundamental Rights.
[9] Deloitte (sponsor content), “AI Is Not Just Getting Better; It’s Becoming More Pervasive”, Harvard Business Review (2019)
[10] Another important question, no less relevant but which cannot be analysed given the brevity of this document, is the use of this technology as an offensive tool in military contexts, where autonomous weapon systems are implemented. For further information, see: P. Scharre, M. C. Horowitz, “An Introduction to Autonomy in Weapon Systems”, Center for a New American Security Working Paper (2015)
[11] This type of implementation has been proposed in the Italian case, with regard to the possibility of lightening the burden of bureaucratic processes (www.agid.gov.it). These implementations are being adopted especially for public services and digital ID. The use of AI as an option to manage the Public Administration is also shared by the Italian strategy for artificial intelligence of the Ministry of Economic Development (MISE). See: MISE’s Expert Group, “Proposal for an Italian AI Strategy”, MISE, Rome (2018)
[12] “All persons shall be equal before the courts and tribunals”, International Covenant on Civil and Political Rights (ICCPR), Art. 14 (1976)
[13] Ibid. European Commission (2018); p. 7
[15] ALDE Party, “Artificial Intelligence, made in Europe”, OpEd (2020)
[16] Renew Europe, “Renew Europe position on Artificial Intelligence” (2020)
[17] Ibid. Renew (2020); p. 7
[18] European Commission, “Artificial Intelligence, real benefits” (2018)
[19] S. Larsson et al., “Human-centred AI in the EU”, Fores-ELF, ISBN: 978-91-87379-81-9 (2020); pp. 14-44
[20] F. Cappelletti, “AI policy in Italy: Comprehensive focus on core infrastructural robustness and humanistic values”, in “AI in EU”, Fores-ELF (2020); in S. Larsson et al., “Human-centred AI in the EU”, Fores-ELF, ISBN: 978-91-87379-81-9 (2020); pp. 158-177
[21] Much will depend on not simply recreating digitally what used to happen on the shelves of paper archives: we must aim to take full advantage of the new technologies offered by digital media, improving, and not just replacing, existing procedures.
[22] Ibid. MISE (2018); parts 2-4
[23] Ibid. MISE (2018); pp. 74-82
[24] Ibid. F. Cappelletti (2020); pp. 176-177
[25] The Digital Economy and Society Index (DESI). Source: European Commission, https://ec.europa.eu/digital-single-market/en/desi
[26] The mountain chain of the Apennines runs the entire length of the peninsula from North to South along a central axis, creating areas where, for a long time, cable internet access was limited by purely physical factors. Today, fibre technology is widespread, as is mobile network coverage across most of the territory. Still relevant, and to be considered, is the different level of digitalisation between the North and the South of the country.
[27] Ibid. European Commission (2018); pp. 8-10
[28] Ibid. European Commission (2018)
[29] AI Now, Event Summary, “The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term”, the White House and New York University’s Information Law Institute (2016); p. 18
Blog Post by:
Francesco Cappelletti, Research Fellow at the ELF Policy and Research Unit. He began his studies in music and theatre, graduating in Violin at the Conservatory of Florence. He holds two MAs in International Relations, from the University of Florence and the Moscow State Institute of International Relations (MGIMO). He focuses his research on digitalisation and security studies.
Marco Mariani graduated with full marks in Law, Political Science, Administration Sciences, and Social Sciences for non-profit organisations and international cooperation. He also attended postgraduate courses at several other universities (Bocconi, Luiss, La Sapienza, European University Institute). He was a researcher at the “Centro Nazionale di Studi e Ricerche sulle Autonomie Locali” and a visiting lecturer at the University of Florence. Currently he is a lecturer in Public Utilities Law at the e-Campus telematic university.
He is a founding partner of Catte Mariani law firm, based in Florence and Rome. He is licensed to practice before the higher courts, and he mainly deals with issues concerning Public utilities law and public procurement, administrative law, law and economics.
He is the author of numerous publications (including two monographs and 23 anthologies) on Public Utilities Law, Local Authorities Law, Administrative Law, Public Procurement Law, Planning Law, Bankruptcy Law, Consumer Law and Corporate Law. On the same subjects he has been a lecturer and speaker at many courses and conferences. He currently serves as Board Member of the European Liberal Forum and European Affairs Director of Fondazione Luigi Einaudi.
—
Published by the European Liberal Forum. The opinions expressed in this publication are those of the author(s) and do not necessarily represent those of the European Liberal Forum.