
We must stop technology-driven AI and focus on human impact first, global experts warn

March 4th, 2024
Credit: Photos Hobby, Unsplash

We need to stop designing new AI technology simply because we can, forcing people to adapt their practices, habits and laws to fit it; instead, we should design AI that fits exactly what we need, according to advocates of human-centered AI.

Fifty experts from around the world have contributed research papers to a new book on how to make AI more "human-centered," exploring the risks—and missed opportunities—of not using this approach and practical ways to implement it.

The experts come from more than 12 countries, including Canada, France, Italy, Japan, New Zealand and the UK, and more than 12 disciplines, including computer science, education, law, management, political science and sociology.

Human-Centered AI examines AI technologies in various contexts, including agriculture, workplace environments, health care, criminal justice and higher education, and offers practical measures for making them more "human-centered," including approaches for regulatory sandboxes and frameworks for interdisciplinary working.

What is human-centered AI?

Artificial intelligence (AI) permeates ever more of our lives, and some experts argue that relying solely on technology companies to develop and deploy this technology will be detrimental to people in the long term, because it will not truly enhance human experience. This is where human-centered AI comes in.

One of the world's foremost experts on human-centered AI, Shannon Vallor from the University of Edinburgh in Scotland, explains that human-centered AI means technology that helps humans to flourish.

She says, "Human-centered technology is about aligning the entire technology ecosystem with the health and well-being of the human person. The contrast is with technology that's designed to replace humans, compete with humans, or devalue humans as opposed to technology that's designed to support, empower, enrich, and strengthen humans."

She points to generative AI, which has surged in popularity in recent years, as an example of technology that is not human-centered. She argues the technology was created by organizations simply wanting to see how powerful a system they could build, rather than to meet a human need.

"What we get is something that we then have to cope with as opposed to something designed by us, for us, and to benefit us. It's not the technology we needed," she explains. "Instead of adapting technologies to our needs, we adapt ourselves to technology's needs."

What is the problem with AI?

Contributors to Human-Centered AI lay out their hopes for AI, but also many concerns about it as it exists now and about its current trajectory in the absence of a human-centered focus.

Malwina Anna Wójcik, from the University of Bologna, Italy, and the University of Luxembourg, points out the systemic biases in current AI development. She notes that historically marginalized communities do not play a meaningful role in the design and development of AI technologies, leading to the "entrenchment of prevailing power narratives."

She argues that data on minorities is either lacking or inaccurate, leading to discrimination. Furthermore, the unequal availability of AI systems widens power gaps: marginalized groups are unable to feed into the AI data loop and are simultaneously unable to benefit from the technologies.

Her solution is diversity in research, as well as interdisciplinary and collaborative projects at the intersection of computer science, ethics, law and the social sciences. At the policy level, she suggests that international initiatives need to involve intercultural dialogue with non-Western traditions.

Meanwhile, Matt Malone, from Thompson Rivers University in Canada, explains how AI poses a challenge to privacy, because few people really understand how their data is being collected or how it is being used.

"These consent and knowledge gaps result in perpetual intrusions into domains privacy might otherwise seek to control," he explains. "Privacy determines how far we let technology reach into spheres of human life and consciousness. But as those shocks fade, privacy is quickly redefined and reconceived, and as AI captures more time, attention and trust, privacy will continue to play a determinative role in drawing the boundaries between human and technology."

Malone suggests that "privacy will be in flux with the acceptance or rejection of AI-driven technologies," and that even if technology affords greater equality, individuality is likely to be at stake.

AI and human behavior

As well as exploring societal impacts, contributors investigate behavioral impacts of AI use in its current form.

Oshri Bar-Gil from the Behavioral Science Research Institute, Israel, carried out a research project looking at how using Google services changed people's sense of self and self-concept. He explains that a data "self" is created when we use a platform; the platform then gathers more data from how we use it, and uses the data and preferences we provide to improve its own performance.

"These efficient and beneficial recommendation engines have a hidden cost—their influence on us as humans," he says. "They change our thinking processes, altering some of our core human aspects of intentionality, rationality, and memory in the digital sphere and the real world, diminishing our agency and autonomy."

Also looking into behavioral impacts, Alistair Knott from Victoria University of Wellington, New Zealand, Tapabrata Chakraborti from the Alan Turing Institute and University College London, UK, and Dino Pedreschi from the University of Pisa, Italy, looked at the pervasive use of AI in social media.

"While the AI systems used by social media platforms are human-centered in some senses, there are several aspects of their operation that deserve careful scrutiny," they explain.

The problem stems from the fact that these AI systems continually learn from user behavior, refining their models of users as those users continue to engage with the platform. But users tend to click on the items the recommender system suggests to them, which means the system is likely to narrow a user's range of interests as time passes.

If users interact with biased content, they are more likely to be recommended that content, and if they continue to interact with it, they will find themselves seeing more of it. "In short, there is a plausible cause for concern that recommender systems may play a role in moving users toward extremist positions," they warn.
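This feedback loop can be illustrated with a toy simulation. The following Python sketch is purely illustrative: the topics, click model and reinforcement numbers are assumptions made up for this example, not anything taken from the chapter or from a real platform's algorithm.

import random
from collections import Counter

# Toy model of the feedback loop described above: a recommender keeps
# suggesting whatever the user has clicked on most, and engagement
# mildly reinforces the user's own interest in that topic.

TOPICS = ["news", "sport", "music", "politics", "science"]

def simulate(rounds=1000, seed=0):
    rng = random.Random(seed)
    true_interest = {t: 1.0 for t in TOPICS}  # user starts with broad interests
    clicks = Counter()  # the platform's "model" of the user: raw click counts

    for _ in range(rounds):
        # Recommend the most-clicked topic, with 10% random exploration.
        if clicks and rng.random() > 0.1:
            topic = clicks.most_common(1)[0][0]
        else:
            topic = rng.choice(TOPICS)

        # The user clicks roughly in proportion to their interest share.
        share = true_interest[topic] / sum(true_interest.values())
        if rng.random() < share * len(TOPICS) / 2:
            clicks[topic] += 1
            true_interest[topic] *= 1.02  # engagement reinforces interest

    return clicks

print(simulate())

Run with almost any seed, the click history, and with it the model's top recommendation, collapses onto a single topic: the narrowing effect the authors describe.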

They suggest some solutions to these issues, including greater transparency from the companies that hold recommender-system data, to allow more study of, and reporting on, these systems' effects on users' attitudes toward harmful content.

How can human-centered AI work in reality?

Pierre Larouche from the Université de Montréal, Canada, argues that treating AI as "a standalone object of law and regulation" and assuming that there is "no law currently applicable to AI" has left some policymakers feeling that regulating it is an insurmountable task.

He explains, "Since AI is seen as a new technological development, it is presumed that no law exists for it yet. Along the same lines, despite the scarcity—if not outright absence—of specific rules concerning AI as such, there is no shortage of laws that can be applied to AI, because of its embeddedness in social and economic relationships."

Larouche suggests that the challenge is not to create new legislation but to ascertain how existing law can be extended and applied to AI, and explains, "Allowing the debate to be framed as an open-ended ethical discussion over a blank legal page can be counter-productive for policy-making, to the extent that it opens the door to various delaying tactics designed to extend discussion indefinitely, while the technology continues to progress at a fast pace."

Benjamin Prud'homme, the Vice-President, Policy, Society and Global Affairs at Mila—Quebec Artificial Intelligence Institute, one of the largest academic communities dedicated to AI, echoes this call for confidence in policymakers.

He explains, "My first recommendation, or perhaps my first hope, would be that we start moving away from the dichotomy between innovation and regulation—that we acknowledge it might be okay to stifle innovation if that innovation is irresponsible.

"I'd tell policymakers to be more confident in their ability to regulate AI; that yes, the technology is new, but that it is inaccurate to say they have not (successfully) dealt with innovation related challenges in the past lot of people in the AI governance community are afraid of not getting things right from the get-go. And you know, one thing I've learned in my experiences in policymaking circles is that we're likely not going to get it entirely right from the get-go. That's ok.

"Nobody has a magic wand. So, I'd say the following to policymakers: Take the issue seriously. Do the best you can. Invite a wide range of perspectives—including marginalized communities and end users—to the table as you try to come up with the right governance mechanisms. But don't let yourself be paralyzed by a handful of voices pretending that governments can't regulate AI without stifling innovation.

"The European Union could set an example in this respect, as the very ambitious AI Act, the first systemic law on AI, should be definitively approved in the next few months."

More information:
Catherine Régis et al., Human-Centered AI (2024). DOI: 10.1201/9781003320791

Provided by Taylor & Francis
