
Trust in AI is about more than technical performance—ethical principles and human values are equally important

March 18, 2025, by Florian Meyer
Credit: Tara Winstead/Pexels

Trust is, at its core, a deeply human phenomenon. When we step onto a bus, it's the driver we trust to bring us safely to our destination—but what about the bus? Can we place the same trust in it as we do in people? Or do we simply ask technology to function reliably? And what about when artificial intelligence takes the wheel?

"Absolutely. Trust can be placed in AI just as it is in humans," says Petar Tsankov, CEO and co-founder of LatticeFlow AI, an ETH spin-off that helps companies develop trustworthy, reliable and performant AI for real-world applications.

According to Tsankov, AI becomes trustworthy as soon as its models deliver consistent, error-free responses across different environments and make reliable decisions: "When users see an AI system behaving predictably and dependably, they begin to trust it—just as they would a reliable person."

The first and most critical step towards building trust, he explains, is ensuring that an AI performs reliably even when faced with unfamiliar data. What's essential, he emphasizes, is that AI not only functions in controlled lab settings but also delivers consistent results when applied to real-world data.

"All too often, we see AI models failing to meet expectations when exposed to real-life conditions, and that undermines trust," says Tsankov.

Margarita Boenig-Liptsin seconds Tsankov's view that people can place trust in a technology. As ETH Professor for Ethics, Technology and Society, she studies how social values coevolve with transformations in digital technologies, including AI. For her, the key insight is that trust is relational—and that the trust built within these relationships is directed not only towards other people but also towards institutions or technical equipment.

At its core, she explains, trust boils down to a simple question: Can I rely on you? In advanced technological societies, "you" usually refers to both human and technological agents working together.

Her understanding of trust encompasses entire networks of relationships. "Trustworthiness isn't a property just of the technology itself, but of the broader social and technical environment in which it's embedded," says Boenig-Liptsin.

This environment includes designers, users and institutions: "To assess the trustworthiness of an AI system, we need to examine it from the perspectives of different stakeholders."

Her approach focuses not only on development and application, but also on how AI impacts knowledge and responsibility. "This socio-technical 'systems view' offers valuable guidance for AI researchers looking to design trustworthy models," she says. "When researchers promote transparency or engage stakeholders in discussions about a model's features, limitations and potential—all of that gives them valuable input on how relationships of trust in the system are affected and where they can make changes."

For Alexander Ilic, Executive Director of the ETH AI Center, trustworthiness enters the equation whenever technology and society meet. He believes that the profound transformations driven by AI are far from complete: "The next phase is about unlocking the potential of private data in different industries and developing highly personalized AI companions that enhance the way we tackle complex tasks.

"At the same time, we need to reflect on the implications of these new capabilities for users and consider how to foster trust in AI, ensuring that concerns about its risks don't overshadow its benefits." Identifying risks is a key part of the AI Center's work. It therefore encourages frank discussion among researchers and also with various stakeholders.

Openness is key

For Andreas Krause, Professor of Computer Science and Chair of the ETH AI Center, openness is crucial for inspiring trust. "As researchers, we can't make people trust in AI," he explains. "But we can create transparency by disclosing the data we use and explaining how our AI models are developed."

Krause is researching new approaches that account for uncertainties in AI models, ensuring that AI systems become better at recognizing what they don't know. These uncertainty estimates are essential for assessing confidence in statements, detecting "hallucinations" and guiding data collection.
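One common way to obtain such uncertainty estimates is to train a small ensemble of models and treat their disagreement as a confidence signal. The sketch below illustrates that general idea in Python on synthetic data; it is a simplified, assumed illustration, not the specific methods developed in Krause's group.

# Minimal sketch: ensemble disagreement as an uncertainty estimate.
# Purely illustrative; not the specific approach used in Krause's research.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train several models with different random seeds (a simple ensemble).
members = [
    RandomForestClassifier(n_estimators=50, random_state=seed).fit(X_train, y_train)
    for seed in range(5)
]

# Average the predicted probabilities and measure disagreement.
probs = np.stack([m.predict_proba(X_test)[:, 1] for m in members])
mean_prob = probs.mean(axis=0)
uncertainty = probs.std(axis=0)  # high standard deviation = the models disagree

# Flag inputs where the ensemble is unsure: candidates for human review
# or for targeted data collection.
flagged = np.where(uncertainty > 0.1)[0]
print(f"{len(flagged)} of {len(X_test)} test inputs flagged as uncertain")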

Ilic explains that the ETH AI Center operates on the basis of open and transparent principles that independent parties can verify and evaluate. At the same time, the Swiss AI Initiative and the Swiss National AI Institute serve as real-world laboratories for open AI development. Here, more than 650 researchers from ETH Zurich, EPF Lausanne and 10 other Swiss academic institutions are developing a Swiss large language model and next-generation foundation models, thus creating the basis for generative AI based on Swiss values.

In this context, openness means making source code, tools, training data and model weights—which influence an AI system's decisions—freely accessible. This also allows SMEs and start-ups to build their own innovations on this basis. In addition, common open-source foundation models save significant costs and reduce the carbon footprint.

Trustworthy AI rests on a number of key principles, including reliability, safety, security, robustness, consistency and transparency. For Ilic, these are fundamental requirements: "Only when we understand what's going on inside AI can organizations start adopting AI to transform their core processes and work with sensitive data." Black-box systems trained on data with hidden biases or foreign political values, for instance, can be deeply unsettling. "Trust in AI grows when we can be sure it's based on the same ethical principles and values as our own," he explains.

Human values

Tsankov highlights another critical aspect: "People expect AI to respect ethical norms, avoid discrimination and produce content that is consistent with human values. Trust in AI is about more than just technical performance—it also needs to be in agreement with the principles our society is built on."

One way of defining these principles is to establish rules of governance, standards and laws. But Tsankov cautions that these are only part of the picture. "Trust isn't built on abstract principles alone," he explains. "It also requires rigorous technical validation to ensure that an AI system performs in a robust, reliable, fair, safe, secure, explainable and legally compliant manner. Turning principles like these into something technically measurable poses a huge challenge for the development of trustworthy AI."
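To give a concrete sense of what "technically measurable" can look like, the following minimal sketch turns one such principle, fairness, into a single number that a validation suite could check. The metric (a demographic parity gap), the data and the threshold are illustrative assumptions and do not describe LatticeFlow AI's actual methodology.

# Hypothetical sketch: turning a fairness principle into a measurable check.
# Demographic parity gap: difference in positive-prediction rates between groups.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])           # stand-in model outputs
group       = np.array(["a", "a", "a", "b", "b", "b", "a", "b", "a", "b"])

rate_a = predictions[group == "a"].mean()
rate_b = predictions[group == "b"].mean()
parity_gap = abs(rate_a - rate_b)

# A validation suite could fail the model if the gap exceeds a chosen tolerance.
print(f"demographic parity gap: {parity_gap:.2f}")
assert parity_gap < 0.25, "model does not meet the fairness threshold"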

In sensitive fields, two additional principles come into play: interpretability and explainability. This means that users of AI systems can understand how the system arrives at its decisions and clearly explain those decisions to others.

A question of perspective

These last two principles are particularly critical in medicine, not least when AI is being used to help diagnose and treat pediatric conditions. In this context, says Boenig-Liptsin, people's judgment of an AI's trustworthiness depends on their personal perspective. The AI researcher, doctor, child and parents all have very different experiences, knowledge and responsibilities, and they do not experience outcomes in the same way.

"In medicine, AI models must be transparent, interpretable and explainable to earn trust," says computer science professor Julia Vogt. She heads the Medical Data Science Group at ETH and develops AI models that aid doctors in diagnosing and treating diseases. Vogt's research shows that trust hinges not only on an AI system's performance, but also on whether its decisions and recommendations are comprehensible to both doctors and patients. To ensure this, the models undergo rigorous validation, with strict adherence to data protection standards.

One example of her group's work is an interpretable machine learning model designed to assist doctors in diagnosing and treating appendicitis in children. Using ultrasound images, the AI assistant assesses the severity of the condition and suggests treatment options. These models are highly interpretable because the AI system is able to scan ultrasound images for signs of the same symptoms that doctors already use in clinical practice.

In the context of appendicitis, for example, these might be surrounding tissue reactions or severe constipation. This allows doctors to verify whether AI has based its recommendations on clinically relevant symptoms. It also enables them to explain the AI diagnosis to both parents and children in a clearly understandable way.
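As a simplified illustration of why such a design stays interpretable, the sketch below builds a model directly on clinician-scored findings, so its learned weights can be read off and checked against medical knowledge. The feature names and data are hypothetical and are not taken from the group's actual appendicitis model.

# Illustrative sketch of an interpretable model built on clinical findings.
# Feature names and data are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical findings a doctor might score on an ultrasound exam.
feature_names = ["appendix_diameter", "tissue_reaction", "free_fluid", "constipation"]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(feature_names)))                          # stand-in exam scores
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)   # stand-in labels

model = LogisticRegression().fit(X, y)

# Because each input is a clinical finding, the coefficients can be read directly,
# so doctors can check whether the model leans on clinically relevant signs.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")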

In another project, Vogt's group developed an interpretable and explainable AI model to estimate the severity of pulmonary hypertension in newborns—a serious heart condition in which effective treatment relies on early and accurate diagnosis. The team employed a deep-learning approach using neural networks to analyze ultrasound images of the heart.

Based on the ultrasound images, this AI generates specialized symptom maps that highlight the areas of the heart it has used to make its diagnosis. These maps allow doctors to quickly assess whether the AI really is focusing on clinically significant cardiological structures. As a result, they can interpret its recommendations with confidence and clearly explain its diagnosis. This is particularly noteworthy because deep-learning models often function as a "black box," thereby rendering their decision-making processes opaque.
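The general technique behind such maps can be sketched as follows, in the style of class activation mapping: gradients of the predicted score with respect to a convolutional feature map are used to weight that map and combine it into a heatmap over the image. This is a generic, assumed illustration, not the model used in the pulmonary hypertension project.

# Minimal class-activation-style sketch: highlight image regions that drive a
# CNN's prediction. Illustrative only; not the exact method used in Vogt's group.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, 2)       # e.g. low vs. high severity

    def forward(self, x):
        fmap = self.features(x)            # (B, 16, H, W) feature maps
        pooled = fmap.mean(dim=(2, 3))     # global average pooling
        return self.head(pooled), fmap

model = TinyCNN().eval()
image = torch.randn(1, 1, 64, 64)          # stand-in for an ultrasound frame

logits, fmap = model(image)
score = logits[0, 1]                                   # score for the "high severity" class
grads = torch.autograd.grad(score, fmap)[0]            # how much each feature map matters

# Weight each feature map by its average gradient, combine, and normalise.
weights = grads.mean(dim=(2, 3), keepdim=True)
heatmap = F.relu((weights * fmap).sum(dim=1))
heatmap = heatmap / (heatmap.max() + 1e-8)             # values in [0, 1], shape (1, 64, 64)

print(heatmap.shape)   # this map would be overlaid on the image for doctors to review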

Ensuring that AI decisions are verifiable and understandable is a good way to foster trust. Of course, some aspects of an AI's decision-making process will remain inexplicable to humans. But Krause insists that this uncertainty does not necessarily undermine trust: "It's just as challenging to fully explain an AI model as it is to completely understand human decisions from a neurobiological perspective. Yet people are eminently capable of trusting one another."

Provided by ETH Zurich
