European Approach to Artificial Intelligence: Ethics Is Key

The socio-economic, legal and ethical impacts of AI must be carefully addressed, says the European Commission.

Oliver Schonschek, Security Now News Analyst

August 13, 2019

Artificial intelligence (AI) has become an area of strategic importance in the European Union (EU). However, the socio-economic, legal and ethical impacts of AI must be carefully addressed, says the European Commission. Ethics guidelines are proving key to the European approach, and more structure is on the way: AI certifications covering privacy, security and social impact.

There is strong global competition in AI among the US, China and Europe, says the European Commission's Science and Knowledge Service: "The US leads for now but China is catching up fast and aims to lead by 2030. For the EU, it is not so much a question of winning or losing a race but of finding the way of embracing the opportunities offered by AI in a way that is human-centred, ethical, secure, and true to our core values."

The European Union wants to embrace the opportunities afforded by AI, but not without critical scrutiny. The black-box characteristics of most leading AI techniques make them opaque even to specialists, states the EU Science Hub. The EU should address the shortcomings of AI and work towards strong evaluation strategies, transparent and reliable systems, and good human-AI interaction. Ethical and secure-by-design algorithms are crucial to building trust in this disruptive technology, according to the authors of the EU study "Artificial Intelligence: A European Perspective."

The European approach to Artificial Intelligence is based on three pillars:

  • Staying ahead of technological developments and encouraging uptake by the public and private sectors

  • Preparing for the socio-economic changes brought about by AI

  • Ensuring an appropriate ethical and legal framework

The latter two pillars are soft factors, and they are seen as the main differences between the European approach and AI development in other countries and regions.

Because AI applications may raise new ethical and legal questions related to liability or the fairness of decision-making, the European Commission has appointed 52 experts to a High-Level Expert Group on Artificial Intelligence (AI HLEG), comprising representatives from academia, civil society and industry. The AI HLEG delivered the "Ethics Guidelines for Trustworthy AI."

These guidelines list seven key requirements that AI systems should meet in order to be trustworthy:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.

  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all lifecycle phases of AI systems.

  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.

  • Transparency: The traceability of AI systems should be ensured.

  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.

  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.

  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

Commission Vice-President for the Digital Single Market Andrus Ansip said: “Step by step, we are setting up the right environment for Europe to make the most of what artificial intelligence can offer. Data, supercomputers and bold investment are essential for developing artificial intelligence, along with a broad public discussion combined with the respect of ethical principles for its take-up. As always with the use of technologies, trust is a must.”

Carlos Moedas, Commissioner in charge of Research, Science and Innovation, added: “Artificial intelligence has developed rapidly from a digital technology for insiders to a very dynamic key enabling technology with market-creating potential. And yet, how do we back these technological changes with a firm ethical position? It boils down to the question of what society we want to live in. Today's statement lays the groundwork for our reply.”

Commissioner for Digital Economy and Society Mariya Gabriel said: "To reap all the benefits of artificial intelligence the technology must always be used in the citizens' interest and respect the highest ethical standards, promote European values and uphold fundamental rights. That is why we are constantly in dialogue with key stakeholders, including researchers, providers, implementers and users of this technology."

“Compared to other regions like the US or China, Europe still needs to walk an extra mile to catch up with the development of artificial intelligence, especially in its real-world implementation. However, European digital SMEs produce AI solutions that are trusted by consumers: they offer more security, more privacy and higher quality,” commented Dr. Oliver Grün, President of DIGITAL SME, the largest network of small and medium-sized ICT enterprises in Europe. The European data protection authorities have already set out how AI should be assessed under data protection law in the "Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679" (WP251rev.01, https://ec.europa.eu/newsroom/article29/document.cfm?action=display&doc_id=49826).

On the other hand, the research institute Center for Data Innovation (https://www.datainnovation.org) says that the EU needs to reform the GDPR to remain competitive in the algorithmic economy: “The General Data Protection Regulation (GDPR), while establishing a needed EU-wide privacy framework, will unfortunately inhibit the development and use of AI in Europe, putting firms in the EU at a competitive disadvantage to their North American and Asian competitors. The GDPR's requirement for organizations to obtain user consent to process data, while perhaps viable, yet expensive, for the Internet economy, and a growing drag on the data-driven economy, will prove exceptionally detrimental to the emerging algorithmic economy.” However, user surveys in Germany show that 53% of respondents consider technologies developed in Europe to be more trustworthy in terms of privacy and security than those originating from the US or China; only 29% disagree. Trust, and mistrust, clearly play a role in the adoption of AI solutions.

In a project led by the German Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, with the participation of Germany's Federal Office for Information Security (BSI), an interdisciplinary team of scientists from the Universities of Bonn and Cologne is drawing up an inspection catalog for the certification of AI applications. The team has now published a white paper presenting the philosophical, ethical, legal and technological issues involved.

According to the white paper, it must be determined during the initial design process whether the application is ethically and legally permissible -- and, if so, which checks and controls must be formulated to govern it. One necessary criterion is that using the application must leave people's ability to make moral decisions intact, as if they still had the option to decline the use of AI, and must not curtail their rights and freedoms in any way.

Transparency is another important criterion: the experts emphasize that information on the correct use of the application should be readily available, and that the results the AI produces must be fully interpretable, traceable and reproducible by the user. Conflicting interests, such as transparency and the non-disclosure of trade secrets, must be balanced against one another. The plan is to publish an initial version of the inspection catalog by the beginning of 2020 and then begin certifying AI applications. This development could be of great interest to AI solution providers trying to enter the promising European market.

In a recent study conducted by the auditing and consulting firm EY on behalf of Microsoft Germany, 86% of the German companies surveyed said that artificial intelligence will have a very strong or strong influence on their industry over the next five years.

However, most German companies say that, from a business perspective, the technology also carries risks. Topping the list, 63% of German firms cite regulatory requirements as a major issue; for many, the country's guidelines for the use of AI are still too unclear. In a clear indication of apprehension, 54% even fear that they could lose control of the AI and that it could become independent.

— Oliver Schonschek, News Analyst, Security Now
