
Full certification for trustworthy AI

The testing and certification of artificial intelligence takes place where the right know-how is at home – in Styria.

Something remarkable is happening in the world of artificial intelligence – and it is not what some people anxiously fear. Quite the contrary: a new strategic partnership between Austrian experts in the reliable use of artificial intelligence has turned Styria into a global pioneer in testing and certifying AI. It is here that a new initiative was created to develop efficient and independent testing procedures and technologies for AI systems.


Those involved are the Know-Center, a leading European research centre for data-driven business and artificial intelligence; the SGS Group, a globally leading provider of inspection, testing, verification and certification; and IAIK at Graz University of Technology, one of the top research teams for cyber security. Ethical and legal aspects are contributed by the Business Analytics and Data Science Center of the University of Graz, and the activities are accompanied by Austria’s Secure Information Technology Center (A-SIT), which acts as a neutral observer.

AI as a key technology

Artificial intelligence has already significantly altered products and services and is one of the fastest-growing fields of technology. AI offers enormous potential for new business ideas and economic growth. It is a key technology that will secure the future viability of the economy and society in the face of global challenges, from the pandemic to climate change.

Most AI systems are data-driven: they learn the desired behaviour from large quantities of data. This technology enables extraordinary innovations, but it cuts both ways. If AI is not used appropriately, it can have extremely negative effects – biases can arise, for example in human resources, or AI can make unsafe recommendations in the healthcare sector. AI has become a question of trust.

“The potential of AI in Europe will only be fully realised when data is handled in a trustworthy manner and the fairness, reliability and security of algorithms are guaranteed,” says Stefanie Lindstaedt, CEO of the Know-Center, outlining the plans: “We want to take an all-encompassing perspective and ensure that the use of AI is technically compliant, reliable and impartial. Our focus is on all areas that are essential for the trustworthiness of AI: data, algorithms, cyber security, processes, ethics and law.”

One of the biggest challenges in the field of AI is not the cutting-edge technology itself but rather its reliability. (© Know-Center GmbH/Jorj Konstantinov)

Artificial Intelligence Act

Following the model of the General Data Protection Regulation, the European Commission is planning to regulate AI systems with an EU-wide regulation.

The EU Commission has presented a draft of the Artificial Intelligence Act, a regulation for systems that use artificial intelligence. The basic idea – steering the development of AI in the right direction and thus building trust in the long term – is thoroughly positive. At the same time, however, there is a danger that Europe’s competitive edge will be lost or that the development of AI systems will become laborious and more costly, the Know-Center says. It has therefore “translated” the EU officialese into more comprehensible language (see box).

The Artificial Intelligence Act provides for a comprehensive conformity assessment carried out by providers, which would make AI certification indispensable. The aim of the initiative is therefore to support businesses in developing competitive and reliable AI-based products and systems, and to dismantle the hurdles to using AI. Its multidisciplinary team of experts covers everything from research and advice to certification.

Obligations for providers and users

The regulation’s main focus is on AI systems in high-risk sectors such as education, human resource management, critical infrastructure and medicine. In future, compliant AI systems must carry the CE marking, and providers will be largely responsible for the conformity assessment of AI systems in high-risk sectors. In some particularly high-risk areas the assessment must be carried out by an external party. Providers have to meet extensive compliance requirements, such as establishing risk and quality management, keeping technical documentation and logs, and adequately ensuring accuracy, robustness and cybersecurity. While assessment is mandatory only for high-risk sectors, the regulation encourages the assessment of all other applications as well. Users (except for purely private use) are obliged to monitor AI systems, notify the provider of any risks or malfunctions, and suspend operation of the system if necessary.

“A cornerstone of trust in AI is adherence to standards and requirements, which will be proven via conformity assessments carried out by accredited third parties like SGS. Within our partnership we will develop new multidisciplinary tools and techniques to make these assessments possible, covering fields such as cybersecurity and ethics. This will offer added value to customers all over the world,” explains Siddi Wouters, Senior Vice President of Digital & Innovation at SGS.

The need for new safety concepts

Despite its enormous technological potential, the use of AI also involves safety issues and risks. There are many ways to attack AI systems, and cybercrime poses a major challenge for their assessment. A self-driving vehicle, for example, could make fatal decisions if the data that the AI in the vehicle processes is manipulated.

“Traditional static assessments are not sufficient here,” emphasises Harald Kainz, Rector of Graz University of Technology, adding: “We need research into fundamentally new security concepts in order to provide continuous proof of the robustness of AI systems against cyber-attacks and to protect privacy. This is the expertise that Graz University of Technology brings to the strategic partnership. The initiative is also a logical deepening of existing collaborations with SGS, the Know-Center and the University of Graz in the fields of IT, software engineering and cybersecurity. University research and teaching benefit as well, as they can draw on this new and current knowledge.”

Even though the use of AI has increased across all sectors in recent years, companies often still feel unsure when it comes to data protection and legal requirements. The planned EU regulation could create additional excessive demands – the GDPR debacle comes to mind – and diminish or even completely thwart the added value of AI.

A lack of legal certainty due to the absence of an audit certificate is one of the biggest barriers. According to the Know-Center, concrete support and funding offers are decisive to avoid putting the brakes on businesses’ innovative capacity and the commercialisation of research results. “Auditing approaches for AI are essential for its broad use in the economy. This is not only a legal requirement; it also builds trust and can positively influence acceptance in society. Our studies in the field of recruitment show, for example, that people who feel discriminated against would rather have their qualifications assessed by AI than by another person. This applies in particular to the certified use of AI with explanation components,” says Stefan Thalmann, director of the Business Analytics and Data Science Center at the University of Graz.

Austria is on the right track

Herbert Leitold, General Secretary of A-SIT, also emphasises that “thanks to the pooling of different expertise, we were able to overcome the complex challenges of AI certification – making sure that Austria is on the right track to give providers and users of AI applications better guidance and certainty about the quality of those applications.”

Barbara Eibinger-Miedl, Regional Minister for Research and Economic Affairs, welcomed the initiative: “Artificial intelligence is a key issue in digitisation. Along with great opportunities, there are also challenges. We have to guarantee that there will be reliable systems and a high level of data protection in order to remove barriers when employing artificial intelligence. The fact that the global player SGS is relying on Styrian know-how emphasises the excellent work done by our own domestic partners. Thanks to numerous research projects and digitisation initiatives, we in Styria were able to build up comprehensive capabilities in this field and assume a global pioneering role.” ◆

The EU Artificial Intelligence Act seeks to regulate artificial intelligence and thus make AI systems and their data more resistant to bad actors. (© TU Graz/Lunghammer)

The EU’s proposed regulation: the Artificial Intelligence Act

The proposal for a regulation lays down harmonised rules on artificial intelligence (Artificial Intelligence Act). Artificial intelligence (AI) is a fast-evolving family of technologies that can bring a wide array of economic and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising service delivery, the use of artificial intelligence can support socially and environmentally beneficial outcomes and provide key competitive advantages to companies and the European economy. Such action is especially needed in high-impact sectors, including climate change, environment and health, the public sector, finance, mobility, home affairs and agriculture. However, the same elements and techniques that power the socio-economic benefits of AI can also bring about new risks or negative consequences for individuals or society. In light of the speed of technological change and possible challenges, the EU is committed to striving for a balanced approach. It is in the Union’s interest to preserve the EU’s technological leadership and to ensure that Europeans can benefit from new technologies developed and functioning according to Union values, fundamental rights and principles.

Against this political context, the Commission puts forward the proposed regulatory framework on artificial intelligence with the following specific objectives:

• ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
• ensure legal certainty to facilitate investment and innovation in AI;
• enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
• facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.
