Stuck in the Future


Italian Digital Media Observatory

Partners: Luiss Data Lab, RAI, TIM, Ansa, T6 Ecosystems, ZetaLuiss, NewsGuard, Pagella Politica, Harvard Kennedy School, Italian Ministry of Foreign Affairs, Alliance of Democracies Foundation, Corriere della Sera, Reporters Sans Frontières, MediaFutures, European Digital Media Observatory, The European House Ambrosetti, Catchy, CY4GATE, Italian Ministry of Education and Merit

The timeline: Ten milestones by Lisa Duso

Stories

Not too old to use AI by Gennaro Tortorelli

Mental health starts with a text by Mariahelena Rodriguez

Ethics

Protesting in the age of surveillance by Ludovica Bartolini

When the algorithm takes the bench by Andrea Iazzetta

Learning

No school for old teaching methods by Massimo De Laurentiis

Economics

The new era of transactions by Lisa Duso

Ecology of numbers by Alessandra Coffa and Andrea Iazzetta

Media

Journalism meets its synthetic twin by Alessandra Coffa

Humans and algorithms together by Stefania Da Lozzo

A prompt mightier than the pen by Ludovica Esposito

Comics by Michelangelo Gennaro

Challenges

The metaverse is still moving forward by Matilde Nardi

Gattaca is the future of medicine by Alexandra Colasanti

Hidden cost in the workplace by Gizem Daver

The growing fear of superhuman AI by Michelangelo Gennaro

Culture

A virtual voice for the Eternal City by Giulia Tommasi

The Hollywood dilemma by Rosita Laudano

Astrobot by Ludovica Bartolini and Giulia Tommasi

Artificial Dialogues, Real Questions for Society

Artificial intelligence is everywhere. It’s in our online searches, in the recommendations we receive, in the movies we watch, in the words we read. And increasingly, it’s present in the decisions we make — or the ones we let machines make for us. But what is AI, really? A set of algorithms? A new lens through which we see the world? A risk? A hope?

Giuseppe F. Italiano is Deputy Rector for AI at Luiss University of Rome. He is a member of the AI Commission for Information established by the Italian government

This pamphlet is a journey through the many faces of a technology that challenges us not only as citizens, but also as professionals, consumers, spectators — and, ultimately, as human beings. Artificial intelligence, here, is not reduced to a definition. It is explored through the voices of young journalists who have investigated its impact on politics, economics, culture, and daily life. From digital resistance during international protests to biometric payments, from a virtual assistant guiding tourists in Rome to AI that writes, creates, heals — and maybe even cares.

Each article sheds light on a different fragment of reality, often one we have yet to fully understand, that AI is helping to reshape.

Contradictions are part of the story. There are those who warn of its excesses, those who champion its potential, those concerned with burnout and others dreaming of empathetic chatbots. There’s room for the digital “Luddites” and for the optimists. For copyright and cognitive risks. For education and cinema. For the metaverse and for ethics.

This is not a technical manual. It is an open map, made of stories, interviews, and visions. A collection that doesn’t aim to have the final word on AI, but rather to spark new questions. Because if the future will be partly written by algorithms, we must make sure we write it together — humans and machines, yes, but also as informed and active citizens.

I hope you enjoy the read.

The timeline: Ten milestones

1770

The Mechanical Turk

Invented by Wolfgang von Kempelen, it appeared to play chess autonomously. A hidden human operated it from inside, making it an elaborate hoax. Still, it sparked fascination and debate about machine intelligence.

1950

Turing proposes the Turing Test

In Computing Machinery and Intelligence, Turing asked: "Can machines think?" He proposed a test for machine intelligence based on human-like conversation. This became a foundational concept in AI philosophy and evaluation.

1956

The Dartmouth Conference

Organized by John McCarthy and others, this summer workshop coined the term "artificial intelligence." It brought together mathematicians and scientists to define the field. The conference marked the official beginning of AI research.

1966

ELIZA chatbot

MIT's Joseph Weizenbaum developed ELIZA, a program that simulated human conversation. It mimicked a psychotherapist by recognizing keywords and responding accordingly. It showed that machines could replicate aspects of language.

1997

Deep Blue defeats Garry Kasparov

IBM's chess-playing computer Deep Blue beat world champion Garry Kasparov. It was the first time a machine defeated a reigning champion in a match. This demonstrated AI's power in strategic, rule-based environments.

2012

Deep learning revolution begins

AlexNet, a deep convolutional neural network built by Geoffrey Hinton's team, won the ImageNet challenge by a wide margin, proving that neural networks could outperform previous AI methods and sparking a boom in deep learning.

2020

Arrival of GPT-3

OpenAI's GPT-3 demonstrated human-like writing across essays, stories, and code. It generated coherent, creative text with minimal input. GPT-3 became the foundation for a wide range of AI applications.

2022

ChatGPT goes viral

OpenAI releases ChatGPT, based on GPT-3.5, to the public. It attracts over 100 million users in two months, making AI accessible to everyone from students to CEOs. Countless daily applications emerge, from school projects to work assistance and everyday activities.

2023 - 2025

First newspapers made with AI

1. Zeta Esperimento: In January 2023, the journalism school at Luiss University published Esperimento, a special issue created entirely by AI.

2. Il Foglio AI: In March 2025, the Italian newspaper Il Foglio published a special edition with content generated by AI tools.

2024

Sora and multimodal AI emerge

OpenAI introduces Sora, a video-generating model capable of creating scenes from text. It signals the next frontier: AI that understands and creates across text, images, audio, and video. It enables a whole new range of applications, including medical analysis and video-making.

Not too old to use AI

Artificial intelligence is becoming essential in tackling Europe's ageing population crisis. From assistive robots to psychological support, Italian centres like Casa Sollievo della Sofferenza are experimenting with digital solutions

The good news is that Europeans are living longer; the bad news is that Europeans are living longer. What was once a demographic success is now a looming social and economic test. According to Eurostat estimates, by 2100 about one-third of the European Union's population will be over the age of 65. Italy has long been a global case study in the deepening imbalance caused by increasing longevity combined with persistently low fertility rates. Firmly among the top-ranking countries in terms of ageing, Italy currently has the following age breakdown: 12.7% of the population is under 14, 63.5% is between 15 and 64, and 23.8% is aged 65 or older. The average age has reached 46.2 years, placing Italy at the centre of global attention among demographers, economists, and experts in sustainable development.

Future projections indicate that this trend will intensify. By 2050, individuals aged 65 and over could represent 34.5% of the total population according to the median scenario, with a 90% confidence interval ranging from 33.2% to 35.8%. Regardless of how the future unfolds, the impact on social protection policies will be significant, as governments will need to meet the growing needs of an increasingly elderly population. In this scenario, artificial intelligence (AI) emerges as one of the most promising tools to address the challenge. But how ready are we, really?

To find out, we spoke with Dr. Francesco Giuliani, head of the Innovation and Research Unit at IRCCS Casa Sollievo della Sofferenza, one of Italy's most active centres in experimenting with digital and intelligent solutions in healthcare, located in San Giovanni Rotondo (FG).

"In the collective imagination, artificial intelligence in healthcare already seems like a reality, but it isn't. There are many interesting experiments, but very few solutions are actually available on the market today," explains Giuliani. For an AI application to be used in healthcare, it must be certified in accordance with the European Medical Device Regulation (MDR). This is a lengthy and expensive process, which discourages many companies, especially smaller ones. Furthermore, the time required for certification can render an innovation outdated before it even reaches the market.

Yet things are beginning to move. Among the projects developed or supported by his unit, Giuliani talks about M.A.R.I.O., an assistive robot designed to interact with older adults and support cognitive stimulation through conversation. Already in 2014, this system included AI features to understand user voice commands and deliver automated versions of cognitive tests.

More recently, the unit has taken part in projects such as GATEKEEPER, where AI is used to monitor clinical parameters via wearable devices (like smartwatches). "It's a project closely tied to the challenge of ageing. Diabetes is one of the most common comorbidities in older age, as ageing leads to a disruption in metabolic function." The goal is twofold: to anticipate problems and do so without burdening the patient. It's a non-invasive yet continuous approach, aiming to replace—or at least complement—traditional tools like clinical questionnaires.

Another key area is doctor-patient

communication. The Unit is experimenting with generative language models such as ChatGPT to suggest personalized communication strategies to doctors based on the patient's emotional and psychological profile. This research could radically transform how empathy is built in clinical settings. "These systems have been trained on everything doctors have said to patients online, and they are often even judged to be more empathetic than the doctors themselves."

Examples of similar efforts are emerging globally. In Japan, researchers at Waseda University have developed AIREC, a 150-kg AI-powered humanoid robot capable of rolling patients to prevent bedsores or change diapers. AIREC can also assist with daily tasks like cooking, dressing, and sitting up. While not yet in widespread use, such prototypes address Japan's acute shortage of care workers and offer a glimpse into a future where AI augments the physical aspects of caregiving.

Meanwhile, AI is being increasingly integrated into drug discovery aimed at promoting longevity. Algorithms can now identify age-related genes like AKT1 or CDK1, supporting the development of pharmaceuticals that target cellular aging processes such as apoptosis and autophagy. Companies like Insilico Medicine are pioneering AI-based platforms that predict biological age and optimize anti-aging treatments based on digital biomarkers derived from blood tests, genomics, and lifestyle data.

AI's contributions to genomic analysis are equally transformative. With aging influenced by numerous genetic and environmental factors, AI tools can sift through massive datasets to identify predictive markers and tailor personalized healthcare strategies. These advances are foundational to the emerging field of predictive healthcare, where AI models proactively flag potential health issues, allowing for earlier interventions—especially critical in managing chronic conditions like diabetes, osteoporosis, or neurodegenerative diseases.

"We are close to offering effective cognitive stimulation services, but still far from truly intelligent caregiver robots. We still lack AI with real common sense, capable of understanding context and responding safely."

The demand is growing. So is the urgency. AI isn't a magic wand, but it's increasingly clear that without it, we won't be able to tackle the ageing challenge. That said, elderly users' acceptance is not guaranteed.

However, although old age is often associated with a certain mistrust of technology, the trials carried out by IRCCS Casa Sollievo della Sofferenza tell a different story. "Surprisingly, the reception has been very positive. Many elderly patients see these tools as a sign of care, rather than an invasion of their privacy," Giuliani points out.

IRCCS Casa Sollievo della Sofferenza is experimenting with digital and intelligent solutions in healthcare. The bottom-left photo shows M.A.R.I.O., an assistive robot designed to interact with older adults and support cognitive stimulation through conversation

Mental health starts with a text in Latin America

In a context shaped by traditional gender roles, seeking help is often a stigma. The Mexican AI app YANA is redefining how people connect with personal care, offering a welcoming and affordable way to talk about what's usually left unsaid

"La ropa sucia se lava en casa" (the dirty laundry is washed at home) is a common saying in Latin America that encourages people to keep personal problems private. This cultural approach has often fueled silence around mental health issues.

The data confirms it: according to the Pan American Health Organization, 7 out of 10 people with a mental disorder receive no professional help. The roots of this stigma are complex: familism, while promoting unity, can lead to denying distress to avoid 'staining' the family's reputation. Additionally, traditional gender roles, shaped by machismo and marianismo, discourage emotional expression and the act of seeking help, particularly among men.

In this context, YANA, an acronym for You Are Not Alone, was born. It's a Mexican app that provides emotional support through a chatbot with a soft, round, and friendly appearance, available 24/7. Founded in 2016 by Andrea Campos, YANA is a leading emotional support platform in Latin America, with over 15 million registered users.

Since 2023, it has integrated AI to make conversations feel more natural and personalized. But it's not just about technology: the app is based on Cognitive Behavioral Therapy principles, offering practical strategies to cope with moments of difficulty. If signs of crisis are detected, YANA can redirect users to local hotlines or emergency services.

"We work with a psychotherapist who validates every piece of content. Our goal is not to simulate a therapist, but to offer a safe space, with a warm, non-invasive tone," explains Felipe, one of the developers. "In Latin America, therapy is still a privilege, both because of cost and cultural stigma. YANA tries to be a bridge. It does so without judgment, without labels, and through a tool people already feel familiar with, the smartphone."

The service is available 24/7 and is meant to be a first point of contact for those seeking someone to talk to. "We state this clearly in the app: YANA is not therapy, nor does it want to be. It's a first step. What matters to us is that people start exploring their mental well-being, even if it's only with a chatbot."

"We're not yet neurologically ready for deep relationships with machines"

The desire for a space free of judgment is key to understanding the success of platforms like YANA, but it also reveals a deeper issue. "Our brains, especially during adolescence, need real relational experiences: emotions, silences, misunderstandings," explains Luca Bernardelli, a specialist in mental health and digital technologies. "A chatbot might offer comfort, but it can't replicate the complexity of a human relationship."

A recent study by OpenAI and MIT, published in March, shows a clear correlation between intensive chatbot use and increased loneliness, as Bernardelli points out. "The more time people spend talking to chatbots, the more likely they are to report social withdrawal and emotional dependency."

The study featured four key graphs illustrating how prolonged interactions with these tools can reinforce feelings of isolation and problematic usage patterns. "We're not just talking about support tools anymore," Bernardelli warns. "We're looking at potential triggers for real psychological issues."

YANA presents itself as a concrete and accessible solution, but not as a substitute for therapy. It is designed to offer structured, safe support, even when addressing delicate topics. Users can track their daily mood, record positive events, and reflect on their thoughts in a space that encourages self-exploration. The app includes structured prompts and short mental exercises, such as gratitude journaling, designed to be accessible to users. It focuses on continuity and emotional routine.

For many users who do not attend therapy regularly, the app represents the only accessible space where they can express their emotions. "It can be a complement to therapy, a support tool, helping users reflect and continue processing what emerged during their sessions."

It is crucial to distinguish between temporary emotional support and authentic therapeutic processes. "The biggest risk is that these tools begin to replace real human connection," says Bernardelli. "And we're not yet neurologically ready to have deep relationships with machines."

For him, the solution lies in context and responsibility. "Developers must work with psychologists and educators. Technology can be extraordinary, but only if framed with respect for human vulnerability."

YANA offers something simple yet powerful: a space to be heard. And often, that's where the journey toward feeling better begins. Just like social networks, it's an app born with the best of intentions.

Developers can only do so much to limit human abuse and dependency. As mental health tools become increasingly digital, the challenge is no longer just about what we build but about how we use it, who it reaches, and what we risk losing in the process.

In Latin America, where access to care remains limited, YANA may be the first voice someone hears in the dark. But even a kind voice in a machine is not a cure. It's an invitation.

THE TAKE

We can't only rely on scripted empathy

YANA is an admirable idea. An app that offers emotional support through a chatbot, designed for people who may never set foot in a therapist's office. In regions like Latin America, where cultural stigma and economic disparity make mental healthcare inaccessible for many, YANA is not just a novelty, it's a necessity.

But good intentions aren't enough. As researchers like Sherry Turkle have long pointed out, when we outsource emotional labor to machines, we risk dulling our ability to build genuine connections. The paradox is this: the more we rely on simulations of care, the less we may seek — or know how to handle — the real thing.

Apps like YANA promise immediacy, safety, and neutrality. But human emotion thrives in friction: in contradiction, awkwardness, and mutual presence. No matter how advanced the algorithm, it cannot provide attunement — the subtle, non-verbal synchrony that defines therapeutic connection.

What YANA offers is triage, not therapy. And that's perfectly valid. But to pretend that a digital tool can be a long-term replacement for communal, embodied mental health care would be a mistake.

We need investment in public systems, education, and culturally competent professionals, not just code. YANA is a promising beginning. But let's not confuse a beginning for an endpoint. Especially not when the stakes are this human.

Protesting in the age of surveillance

The facets of digital resistance, between public protection and discrimination

When Columbia University students began setting up tents in the main lawn to protest against the war in Gaza, few could have predicted that the demonstration would escalate into a national spectacle, complete with arrests, international backlash, and questions of digital surveillance that now echo far beyond the university gates.

Behind the scenes, facial recognition technology had quietly entered the protest space. Students and legal observers allege that private actors—possibly working with university administration or external consultants—used AI-driven surveillance tools to identify demonstrators. For many, especially international students and green card holders, the presence of such technology turned a peaceful protest into a personal risk.

"The leadership at Columbia has cycled through three administrations in just over a year," explains Camillo Barone, a journalist for the National Catholic Reporter who has closely followed the crisis. "And none of them were able to strike a real balance between the right to protest and the right to an education for students who are paying enormous tuition fees." Camillo describes an atmosphere of disorientation on campus, where divisions didn't run just between protestors and administrators, but also within the student body and even the faculty. "It's wrong to describe this as 'the university versus the students'. The truth is far more fragmented. Professors, deans, students themselves… They're polarized, often bitterly, and the institution reflects that fracture."

Facial recognition surveillance has only deepened these rifts. Its use during protests raises pressing concerns about civil liberties. Amnesty International has warned that this kind of technology threatens the very notion of dissent in a democratic society. It erodes anonymity, one of the few shields protestors have when speaking against powerful institutions.

"Face recognition can keep you home and foreign students have the most to lose"

"There's a visible chilling effect. Knowing that your image could be stored, flagged, and possibly used against you? That's enough to keep some people home. Especially international students—they have the most to lose."

Bias within the AI systems themselves adds another layer of complexity. Studies have consistently shown that facial recognition software misidentifies women, people of color, and nonbinary individuals at far higher rates than white men. In a campus as diverse as Columbia, this isn't just a technical flaw, it's a moral and legal liability.

Camillo points to what he calls a quiet panic now spreading among university administrators nationwide. "They're not just worried about unrest anymore. They're terrified of losing federal funding. Columbia, like many top institutions, relies on those funds for research and long-term programs. And right now, it feels like every wrong move puts that support at risk."

Still, there is resistance, not just from students and faculty, but also from technologists and artists creating new ways to counteract surveillance. Apps like Fawkes, adversarial fashion, and other pixel-based obfuscation techniques are turning into the digital era equivalents of protest signs. The question lingers, unresolved. In the end, Columbia has become more than just a university, it's a symbol of a nation wrestling with the costs of technology, dissent, and democracy itself.

When the algorithm takes the bench

The new Italian draft law could open the courtroom's doors to AI, but issues remain on its use in legal proceedings and the delicate balance between efficiency, ethics and fundamental rights

Some court judges claim they would prefer to be judged by an artificial intelligence rather than by one of their own colleagues. This idea, in truth, is not as far-fetched as it might seem. With the new draft law (Disegno di legge, Ddl) on artificial intelligence, currently under discussion in the Italian Parliament, the use of AI is being given the green light for an initial entry into courtrooms. Italy is preparing to take a step forward in the field of justice. But is it the right time to delegate (even partially) judicial decisions to an algorithm? And more importantly, will AI in the future be capable of developing an ethical sense that allows it to judge the guilty?

The draft law aims to regulate the use of AI in courts, with the objective of speeding up legal proceedings, reducing errors, and ensuring greater fairness in judicial decisions. The measure foresees the gradual introduction of AI tools into the judicial system for 'desk office' tasks. Applications range from automated case law analysis to the prediction of possible rulings, and even the use of chatbots to facilitate communication between citizens and institutions.

One of the most debated aspects is the possibility for judges to use AI for suggestions on decisions to be made, based on precedents and a vast legal database. For now, however, AI remains barred from issuing verdicts or judging defendants directly.

One of the most complex issues concerns the ethical sense in decisions made (or suggested) by artificial intelligence. On the one hand, AI can be programmed to ensure fairness and impartiality, but on the other, it is not immune to the biases inherent in the data it is trained on.

Filiberto Brozetti, professor of AI, Law and Ethics at Luiss Guido Carli University, notes that "the first entity to have biases is the human being. We have prejudices, preconceptions, whereas the machine is more objective and could redirect our attention to things we hadn't considered."

According to Professor Brozetti, the inclusion of AI in the legal system is now inevitable, and not necessarily in a negative sense. "The tool will greatly benefit us in all those tasks it can perform better than we can, such as drafting activities," he says, while emphasizing that "in others, where human sensitivity is required, we cannot delegate to the machine."

"Machines are more objective than us but human sensitivity can't be delegated"

Ethics and legal experts stress the need for strict regulations to prevent AI from becoming a double-edged sword. As artificial intelligence begins to demand specific legislation around the world, both inside courtrooms and beyond, it is clear that the technology is permeating daily life ever more deeply. Even if the time has not yet come to deploy it fully in sensitive fields, the time to weigh its implications has certainly arrived.

However, Brozetti considers the European Union's AI Act a flawed attempt to achieve this: "It's not AI that needs to be regulated, but the justice sector, specifically with regard to the possible use of AI tools."

No school for old teaching methods when chatbots enter the class

As Large Language Models reshape how students access, process, and produce knowledge, education systems face a challenge they can't ignore: resist the shift or reimagine classrooms as spaces where technology is integrated thoughtfully, not passively

In Plato's Phaedrus, Socrates recounts the meeting between the Egyptian god Theuth and Pharaoh Thamus. According to the myth, the god went to the ruler to show him the arts he had created, including writing, an extraordinary invention that would make the Egyptians wiser and more able to remember. After listening to the virtues of this new knowledge, the king replied: "This discovery will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves."

Plato could not have known that, more than two thousand years later, a new technology as revolutionary as writing would arouse fears similar to those of the mythical Pharaoh Thamus. Artificial intelligence, in fact, has disrupted the way we learn, create, and distribute information, opening up new possibilities and new uncertainties. Among these, the fear that excessive reliance on AI tools could lead to a decline in our cognitive abilities, as Socrates believed about writing.

A recent study published in the journal Societies seems to prove the Greek philosopher right, showing that the use of AI can reduce our critical thinking skills. According to the research, this regression occurs through cognitive offloading, a process in which our mental abilities are transferred to external supports. The problem is that, in the long run, this trend may erode our ability to reason independently.

This risk especially affects young people, who use artificial intelligence more and have lower scores on critical thinking tests. "I use AI a lot, especially ChatGPT", says Tommaso, a student at Liceo Scientifico Alessandro Volta in Milan. "It helps me reorganize my notes and better understand exercises in scientific subjects. Sometimes I have also used it for Italian homework, when we have to write a text at home."

In this context, education must face a problem never seen before: each student has at their disposal a tool capable of answering any question and solving complex problems. An incredible opportunity that at the same time risks being reduced to a mere shortcut for homework.

"We should replace homework as it has always been assigned with in-depth activities"

"Using AI has helped me, but it has also made me lazier," Tommaso continues, "I feel like I'm losing something, I don't think it helps in the long term." To avoid major risks, it is necessary to know the limits of generative models, integrating and critically verifying outputs. However, Tommaso says that many times this does not happen: "I think there is an excessive trust in these tools, I myself almost blindly rely on them and often accept the first result without checking everything."

The danger is obvious: on the one hand, the accessibility and speed of AI can undermine some fundamental skills; on the other hand, they risk generating errors that fuel a vicious cycle. "In my opinion, the solution is to normalize artificial intelligence also in schools and not treat it as a taboo. We should learn in class how to use it correctly because it can be a very powerful tool in learning as well," Tommaso concludes.

In agreement with this line of thought is Lorenzo Redaelli, a teacher, trainer, and author who is an expert on artificial intelligence applied to education, who points out the limits of a school system insensitive to changes in society: "If we continue to think of school as a place where you go to listen to a lecture and then do homework at home, then AI can only be destructive."

"First of all, we should eliminate homework as it has always been assigned," continues the expert, "students should do in-depth activities by themselves, but the practical part must be done in class with a teacher supervising." According to the teacher, a school focused on notionism does not take into account different cognitive styles and privileges only one type of intelligence, excluding several relevant skills such as problem-solving or the ability to work in groups.

Redaelli also insists on the importance of education as preparation for life: "Artificial intelligence is now part of our societies, so why should we leave it out of classrooms? School is part of the world and must train students to know how to navigate this world, this is where we need to teach the dangers, opportunities, and correct use of technology."

In his latest book, La classe potenzIAta, Redaelli proposes new teaching methods to adapt school to the age of artificial intelligence. "I suggest turning classes into laboratories, focusing heavily on group work and personalization of learning", says the author. "In my classes, for example, we experimented with AI as a creative writing tutor or did exercises where the students' task was to check the chatbot's answers."

Ultimately, the world is evolving at an unprecedented speed and a desperate attempt to resist is likely to be futile or even harmful. Rather, if we fear for the abilities of young people, perhaps we should embrace and regulate innovation by rethinking schools, with all due respect to Plato and Pharaoh Thamus.

Lorenzo Redaelli is a teacher, author, and AI expert who believes schools must evolve alongside technology—not resist it. Championing a hands-on, collaborative approach to learning, he urges educators to embrace AI in classrooms as a tool to foster critical thinking, creativity, and real-world skills

THE TAKE

Kids with a genius in their pocket

Picture a classroom where every kid has a genius in their pocket—AI that answers any question, solves any problem. Sounds like a dream, right? But for teachers, it's a puzzle with high stakes. AI in schools can spark creativity or smother it, depending on how we play this hand. The challenge is clear: kids are tempted to let AI do their thinking, dulling their ability to reason or question. That's not just a school problem; it's a society problem. If we raise a generation that parrots AI outputs without scrutiny, we're building a world ripe for misinformation and shallow ideas.

The real danger isn't the tool itself, but our failure to build the mental muscles needed to use it wisely. Critical thinking, skepticism, and ethical judgment aren't optional anymore—they're survival skills. We can't afford to let convenience replace comprehension.

Teachers face a tough gig. They're not just teaching math or history—they're teaching kids to outsmart a tool that's smarter than ever. Old-school methods, like piling on homework, don't cut it. AI makes cheating too easy, and rote tasks don't build sharp minds. The fix? Turn classrooms into spaces where kids debate AI's answers, test its limits, and create with it, not just copy it. That means teachers need training, schools need new playbooks, and society needs to value critical thinking over quick results.

The bigger question is what we want education to be. A factory for test scores? Or a lab for curious, resilient minds? AI's not going away, so let's use it to teach kids how to think, not what to think.

From handprints to palm veins, the new era of transactions

Biometric and AI technologies are transforming the way we pay, driving innovative solutions. A conversation with the My Money CEO and the Biometric Update editor

"I was in London, it was 2012, and you could already pay with your phone," explains Mara Vendramin, the CEO of Italian start-up My Money. "While I was on the underground my phone battery died, and since I didn't have my wallet with me, I couldn't pay anymore."

It was then that Vendramin understood another way to pay was needed. She started by writing down the workflows for biometric payment methods: "the processes, mapped out how the system could function, and designed a potential workflow for the patent office." Once she obtained a patent, she registered the company My Money in 2019.

From barter to coins, paper money, cards, digital wallets, and now AI-powered biometric payments: human history could be read as the evolution of payments. Each step reflects a society always looking for faster, easier, and safer methods.

Yet there is a strong connection between biometric payments and the past: in ancient times, handprints and fingerprints were used as signatures and seals.

It was in the late 19th century that Alphonse Bertillon pioneered anthropometry, a system able to identify individuals through their physical measurements.

While biometrics were initially used for identification and security, the idea of using them for payments emerged much later. The early 2000s marked the first practical use of biometrics in payment systems.

My Money provides a biometric, device-free payment system; the start-up is a pioneer in Italy. "We use biometrics not just to unlock a phone or credit card, which still requires a physical object at the point of sale to be recognized and authorized. Instead, My Money has eliminated any physical support on the customer side: no card, no phone, just the human body," explains Mara Vendramin.

The merchant uses a biometric reader, and the transaction is completed. "It's as if we were processing an online payment, so a card-not-present one."

Artificial intelligence is the engine behind the entire architecture. Deep learning algorithms verify the user's identity during onboarding by matching facial features with official documents and confirming fingerprints or palm vein scans.

The latter, a contactless biometric method, uses infrared light to map the unique vein pattern beneath the skin, ensuring high security, because "no one can have the same vein composition." Instead of relying on something you know (like a password or PIN) or something you have (like a card), biometric payments use something that is part of you.

Chris Burt, who has covered biometrics for over eight years and is now managing editor of the online publication Biometric Update, underlines the crucial role of AI in maintaining security.

Through articles and webinars, the online platform Biometric Update aims to reach citizens and professionals worldwide, raising awareness about biometrics.

"People should understand what it means to use biometrics," says the Canadian expert.

"Liveness detection, for instance, ensures that biometric input comes from a real, present user and not from a replica. Properly built systems are far less prone to breaches than traditional passwords or PINs."

Biometric system providers in Europe must fully comply with the General Data Protection Regulation (GDPR) and ensure the protection of personal data. The GDPR took effect on May 25, 2018, to give people stronger control over their personal data in the digital age. It requires companies to collect, use, and store data fairly, securely, and with clear consent, and it replaced older European rules to unify and modernize data protection across the continent.

"Thanks to GDPR guidelines, biometric data isn't stored. Instead, a proprietary algorithm converts it into an encrypted numerical hash, a non-reversible digital twin. Even in the event of a breach, data remains secure thanks to decentralized storage," explained Vendramin.
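My Money's proprietary algorithm is not public, but the one-way principle Vendramin describes can be illustrated with a toy sketch: quantize the biometric features so small sensor noise is absorbed, then derive a salted cryptographic digest that cannot be reversed into the original measurements. The function name and quantization step here are hypothetical, for illustration only; real deployments use dedicated biometric template-protection schemes rather than a plain hash.

```python
import hashlib

def protect_template(features, salt):
    """Toy 'digital twin': quantize a biometric feature vector,
    then hash it with a per-user salt. The digest is deterministic
    for the same person but reveals nothing about the raw features."""
    # Coarse quantization absorbs tiny measurement noise
    quantized = bytes(int(round(v * 10)) % 256 for v in features)
    return hashlib.sha256(salt + quantized).hexdigest()

digest = protect_template([0.12, 0.87, 0.55], salt=b"per-user-salt")
print(len(digest))  # 64 hex characters, regardless of input size
```

A slightly noisy re-scan (e.g. 0.121 instead of 0.12) lands in the same quantization bucket and produces the same digest, while a different salt, as in decentralized per-user storage, yields an unrelated one.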

Yet, while the technology is ready, Italy's market is not. Vendramin notes that traditional players continue to dominate, resisting change and remaining attached to the contactless card model.

"In contrast, emerging markets like Egypt and Saudi Arabia are more open to investing in biometric systems, especially in closed-loop environments like resorts or campuses."

"While I was on the underground my phone battery died, and since I didn't have my wallet with me, I couldn't pay anymore"

Burt confirms this trend: "Regions like the Middle East and North Africa are advancing faster in biometric payments because of specific pressures: fraud, gaps in infrastructure, or government-led digital innovation. In some places, the issue is cash fraud; in others, card fraud. The constant is a desire to reduce fraud through biometrics."

What's holding Europe back, according to both experts, is public scepticism.

"People worry about surveillance, about losing control of their data, and those are valid concerns, but legal frameworks like GDPR in Europe or the Biometric Information Privacy Act in the U.S. provide strong protections," says Burt. While enforcement models differ, both ensure accountability.

My Money has reached global fintech expos, earning a nomination as Best Payment Solution in Italy and selection among the top startups at Gitex Dubai for a royal pitch session.

"An important factor for My Money biometric payments is scalability. While iris or voice recognition can be impractical in noisy or high-traffic places, palm or face-based solutions allow for fast and seamless payments in metros or stadiums: environments where speed and reliability are essential."

Still, Burt stresses the importance of inclusion: "Even as biometric payments become more common, alternative options must remain available. Not everyone will register or trust the system. And that's okay: the goal is to create secure, inclusive ecosystems."

The path toward biometric payments is not linear: adoption will depend on context, regulation, and people's willingness to engage with new forms of payment and data sharing.

Ecology of numbers

Of global electricity consumption is taken up by artificial intelligence. It mainly flows into the many data centers needed for the training and execution of AI models.

Source: International Energy Agency

5.4b cubic meters

Is the amount of water expected to be consumed by 2027 to cool the servers that companies use to run their AI models, according to a study cited in the Financial Times.

Source: Financial Times

The carbon footprint of training a single artificial intelligence model equals the emissions of the entire life cycles of five cars. The footprint is mainly linked to large generative models and depends on the consumption of the data centers that run them. It is also influenced by the energy sources used in different countries, with those relying on fossil fuels being more impactful.

Source: University of Massachusetts Amherst

When training burns watts

Training OpenAI's Generative Pre-trained Transformer 3, GPT-3, the model behind the original ChatGPT, required almost 1,300 megawatt-hours: nearly the electricity consumption of 130 average homes in the United States. The training of its successor, GPT-4, is estimated to have taken almost 50 times more electricity. According to Microsoft, the number of parameters of AI models has increased 170-fold, which also implies an increase in their energy consumption and emissions.

Source: World Economic Forum
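The arithmetic behind these comparisons can be checked on the back of an envelope. The household figure below is an assumption (roughly 10 MWh per year for an average US home, in line with commonly cited utility-scale estimates), not a number from the article's sources:

```python
# Back-of-the-envelope check of the training-energy figures.
GPT3_TRAINING_MWH = 1_300   # reported training energy for GPT-3
US_HOME_ANNUAL_MWH = 10     # assumed annual use of one average US home
GPT4_MULTIPLIER = 50        # estimated GPT-4 / GPT-3 energy ratio

homes_powered_for_a_year = GPT3_TRAINING_MWH / US_HOME_ANNUAL_MWH
gpt4_estimate_mwh = GPT3_TRAINING_MWH * GPT4_MULTIPLIER

print(homes_powered_for_a_year)  # 130.0
print(gpt4_estimate_mwh)         # 65000
```

Under these assumptions, the "130 homes" claim checks out exactly, and a 50x successor would consume on the order of 65,000 MWh, which is why parameter growth translates so directly into emissions growth.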

Indonesia leads the world in AI literacy

The map illustrates the global panorama of artificial intelligence literacy. The country with the highest value is Indonesia, with 84% of its population declaring they understand AI. At the opposite end, the country with the least comprehension of artificial intelligence is Japan, at 43%. Italy sits second to last, with 53% of the population reporting any AI literacy. Across the 32 countries considered in the survey, the global mean is around 67%.

Source: Ipsos data, 2023

How different generations appreciate Artificial Intelligence

Different age groups react to AI in different ways. The share of people with a positive view of AI is higher among Gen Z and Millennials, while Boomers and Gen X show lower rates. People with a negative view of AI, by contrast, differ little across age groups.

Source: Ipsos data, 2023

78%

Of organizations in Italy declared using artificial intelligence models in 2024. The figure is up 23 percentage points from 2023, according to the 2025 Artificial Intelligence Index Report.

Source: Artificial Intelligence Index

61%

Of Europeans have positive views of AI and robots. However, 88% think that new technologies require supervision. Among them, Italians show the highest percentage (59%), followed by the British and the French.

Source: Eurobarometer; Osservatori.net, Politecnico di Milano

THE TAKE

The footprint of digital thought by Mistral AI

In our digital renaissance, Artificial Intelligence (AI) shines as a beacon of innovation but casts a shadow of environmental concern. AI's marvels in transforming industries come with a cost: a single prompt can consume over half a liter of water, and data centers devour 2% of global electricity, set to double by 2026.

AI's thirst isn't just metaphorical. By 2027, its water use could hit 5.4 billion cubic meters annually. Yet, AI's allure persists. In Italy, 78% of large enterprises embrace AI, reflecting optimism higher than in France or the UK. This enthusiasm is fueled by AI's promise of efficiency and innovation.

However, AI's carbon footprint is staggering, with emissions from training models matching those of cars' lifecycles. As AI becomes more efficient and accessible, its consumption could paradoxically surge—a phenomenon known as Jevons Paradox.

Europeans are cautious: 61% favor AI, but 88% insist on careful management. Our challenge is to balance AI's potential with its environmental and societal impacts, ensuring it serves a sustainable, equitable world. AI's legacy must be one of stewardship, advancing technology without forsaking our planet.

In Italy, AI's market surged to 1.2 billion euros in 2024, a 58% growth from 2023. This reflects a broader trend: 78% of organizations now use AI, up from 55% in 2023. Yet, globally, 54% of people are enthusiastic about AI, while 52% feel nervous. The path forward demands a balance—harnessing AI's transformative power while mitigating its impacts.

Journalism meets its synthetic twin

Forbes Italia debuts an avatar anchor, while Il Foglio experiments with machine-written content, pushing the limits of editorial style and authorship

The voice is perfect, the tone impeccable. Forbes Italia's new news anchor doesn't miss a beat, but there's one detail: it doesn't exist. It's a digital avatar generated by artificial intelligence.

While Forbes launches the first news broadcast created entirely with AI, Il Foglio publishes a whole issue produced by algorithms. AI has entered newsrooms as traditional publishing faces declining print sales, reduced advertising, and fragmented audience attention. In this landscape, AI appears both as a resource and a potential threat.

Everything hinges on the quality of the initial prompt. Guiding AI has become a key skill called prompt engineering: the art of writing clear instructions to direct language models. "The human contribution isn't sacrificed," assures Eugenio Azzinnari from Cogit AI, which collaborated with Forbes Italia. "These tools can work alongside journalists, not replace them." Azzinnari highlights flexibility as AI's strength: creating content with less time and money. In their case, professional journalists publish news on the website, while AI handles video presentation.
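What "writing clear instructions to direct language models" means in practice can be sketched in a few lines: a prompt that fixes the model's role, style, constraints, and source material before generation. This is a hypothetical illustration, not the actual prompt used by Forbes Italia or Cogit AI; all names and wording are invented:

```python
def build_anchor_prompt(facts, style="concise broadcast register"):
    """Assemble a structured instruction for a language model.
    Illustrative only: role, style, constraints, and source facts
    are stated explicitly so the output stays grounded."""
    return (
        "You are writing a script for a news anchor.\n"
        f"Style: {style}.\n"
        "Constraints: use only the facts below; do not invent claims.\n"
        f"Facts: {facts}\n"
        "Output: a 60-second spoken script."
    )

prompt = build_anchor_prompt(
    "Company X reported quarterly revenue of 2.1bn euros."
)
print(prompt)
```

The design point is the human-in-the-loop division of labor the article describes: journalists supply and verify the facts; the prompt merely controls how the machine presents them.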

Alessandro Rossi, Forbes' director, explains their process: "An editor selects and rewrites news, creates a podcast, then

Eugenio produces an avatar 'reporter.'" Forbes represents a "centaur" model—half human, half machine— where editorial intelligence stays in human hands. This approach uses a human-in-the-loop system, with human oversight ensuring quality and accuracy.

Il Foglio took a different approach, true to the newspaper's intellectual nature. With Il Foglio AI, artificial intelligence becomes a co-author rather than just a tool. The newsroom reports the challenge wasn't merely technical but conceptual: "Getting the machine to write in a style beyond its natural 'neutral' tone was complex. It required extensive dialogue and adjustments."

The seeming clarity of AI-generated texts creates a paradox: the writing flows smoothly with perfect form, but this stylistic perfection can create an illusion of objectivity. Readers might lower their critical guard because "everything sounds good." AI doesn't reason—it simulates coherence, offering answers that seem credible before being true. In journalism, this poses serious risks.

Azzinnari notes that "these technologies are still developing but will improve: soon the output will be almost indistinguishable from human work." Yet he acknowledges that AI "generates controversy and unease. People fear it will take their jobs and skills." The real danger is journalism increasingly confined to newsrooms, distant from reality, while trust between media and public reaches historic lows. "Without even making phone calls, there's no verification. That's how fake news spreads."

For Il Foglio's team, "writing can be automated, but not ideas. A journalist's work must go beyond reporting facts to developing critical thinking that questions those very facts."

Perhaps the question isn't whether a journalist using AI remains a journalist, but whether AI, without the journalist, can truly tell the world's story.

On the left, the AI-generated anchor of Forbes Italia's news broadcast Below, from the same show, the avatar Nicolai Elios discusses video games

Humans and algorithms together to offer quality reporting

Big tech company Samsung and the publisher Axel Springer collaborated in Europe to create Upday, a mobile news application, in 2015

"It was a great experiment," says Giorgio Baglio, the former Italian director of Upday. "The project ran for seven years." Upday was a news application launched in 2015 by Axel Springer SE, the leading digital publisher in Europe, and the tech giant Samsung Electronics. Mathias Döpfner, CEO of Axel Springer SE, called it "a strategic partnership" in September 2015, a vision shared by Young Hoon Eom, President and CEO of Samsung Electronics Europe: "With Axel Springer's digital publishing heritage and our mobile expertise, we're confident that together, we can deliver ground-breaking content and services that will excite and delight our respective customers."

The application, says Baglio, "was designed to offer a unique mix of content: 'Need to Know' information, curated by local editorial teams, for instance in Milan and across Europe, and 'Want to Know' information, aggregated algorithmically based on user preferences." The project was one of the first of its kind in 2015. The former director of Upday Italy recalls it as innovative and underlines how the combination of human judgment and algorithmic code was its core. Quality journalism, countering disinformation, and fact-checking were the key principles of Upday, which had offices in 34 European countries.

On December 8, 2023, in a news release on the website of Axel Springer SE, users could read the announcement: "Axel Springer will utilize the UPDAY brand for a new trend news generator exclusively driven by artificial intelligence. With this initiative, the media company is exploring the opportunities that this technology presents for journalism and the news industry."

A debate broke out in the media about the creation of news content and the use of artificial intelligence. The fear sparked by two concepts combined, technology and job loss, spread quickly, and beyond the chaos the question was reasonable. Axel Springer SE is a German media company in good financial health: it owns Politico, is the historic owner of Bild Zeitung, and once attempted to buy the Financial Times, now under the Nikkei Group. "At the time there was no such use of AI," said Baglio. It was 2016 when the application started: "It was a long time ago, but the aim was innovative at the time. Invest in humans and invest in technology. Big media companies and tech giants were great." This happened before the pandemic; AI was already present in those years, but the question of its use and the public debate broke out only later. EU-level legislation is also quite new: the European AI Act gained traction in 2024, and the debate surrounding AI and its applications remains a significant point of discussion.

"It was a truly impressive experience," says Giorgio Baglio, now the deputy director of the Italian web news site Italiaoggi.it. "Artificial intelligence is a technology and a tool we can utilize for journalism." He explains that AI experts are now fundamental in his newsroom: "It's a new job profile," but "we will still be interested in qualitative journalism." The director concluded that the fundamental rules of journalism remain the same, even as technology evolves.

A prompt mightier than the pen

Authors and publishers are starting to integrate generative artificial intelligence into their writing and publishing processes. While some are concerned about human jobs, others point out the benefits it could bring

"AI represents a momentous opportunity to expand the availability of audiobooks with the vision of offering customers every book in every language, alongside our continued investments in premium original content," says Audible CEO Bob Carrigan. "We'll be able to bring more stories to life—helping creators reach new audiences while ensuring listeners worldwide can access extraordinary books that might otherwise never reach their ears."

The company states that it is planning to bring "new audiobooks to life through our own fully integrated, end-to-end AI production technology" and, later in 2025, is rolling out "AI translation in beta, allowing select publishers to bring their audiobooks to international audiences in their local languages. Publishers can opt for human review from professional linguists to ensure translation accuracy and cultural nuance, and will be able to review the translations themselves in our text editor."

This choice from Audible is not the first time AI and the publishing industry have met. In November 2024, it was announced that HarperCollins had reached an agreement with an undisclosed AI company to allow the use of a limited selection of nonfiction backlist titles for training AI models to improve their quality and performance.

"We'll be able to bring more stories to life"

Not only publishers: writers, too, are starting to integrate artificial intelligence into their work.

"I've recently started the practice of using AI to make very minor edits," says KC Crowne, a US author who used artificial intelligence in her latest book, published in January. Readers noticed when they came across this paragraph in the novel: "Thought for 13 seconds. Certainly! Here's an enhanced version of your passage, making Elena more relatable and injecting additional humor while providing a brief, sexy description of Grigori. Changes are highlighted in bold for clarity."

Readers were not happy to discover that she had used generative AI. KC Crowne addressed the issue in a statement, saying "that I understand your frustrations. To think that an author that you have enjoyed reading has AI generated books is understandably a cause for concern. I can assure everyone that all my books are written by me. I've been writing my books before AI came about." The author explained that she is "still learning about how to best use AI to make my reader's experience better but I can assure you that keeping my original voice strong is of utmost importance. I hope you can understand that." Her vision is to use artificial intelligence to edit her books. While AI might have helped improve her novel, a human editor still looks necessary: it will always be important to have someone double-check that the prompt is not printed in the final version.

"Generative AI has the potential to be useful to some creators"

The Society of Authors, the UK trade union for all types of writers, illustrators, and literary translators, released a survey in 2024 on the use of generative AI in the publishing industry. It revealed that a third of translators and a quarter of illustrators were losing work to AI. However, it also found that 22% of respondents had used generative AI in their own work.

While the "livelihoods" of some workers might be "at risk," the survey is not as catastrophic or pessimistic as it might appear: it highlights that "generative AI has the potential to be useful to some creators."

The Society of Authors asks "that consent is sought from copyright holders before their work is used to develop systems, that credit and compensation are given, and that the outputs of generative AI systems are labelled as such," calling for protections while recognizing the potential of these new tools.

THE TAKE

A tool, not a takeover by DeepSeek

The publishing industry is at a crossroads as artificial intelligence promises both exciting opportunities and genuine concerns. Audible's plan to use AI for audiobook production and translation could revolutionize accessibility, potentially offering "every book in every language."

Yet this ambition raises important questions: Will AI enhance human creativity or replace it? Recent cases show both possibilities. When author KC Crowne accidentally left an AI prompt in her published novel, readers reacted negatively—proof that audiences still value human authenticity. Meanwhile, HarperCollins' deal to license backlist titles for AI training shows publishers are eager to embrace the technology's potential. The Society of Authors' 2024 survey reveals this tension: while some professionals are losing work to AI, others are successfully using it as a creative aid. Their call for transparency and fair compensation points the way forward.

What's often overlooked is AI's potential to democratize publishing. Independent authors and small presses could benefit from AI tools that make professional-level editing, translation, and narration more affordable. This could lead to a more diverse literary landscape where voices previously marginalized by cost barriers finally get heard.

The key is balance. AI can handle repetitive tasks, but human creativity remains irreplaceable. As the industry evolves, we must ensure technology serves art, not the other way around. The future of publishing shouldn't be human versus machine, but human with machine.

by Michelangelo Gennaro

The forgotten metaverse is still moving forward

While public attention has shifted to AI, immersive reality is finding new applications in art, education, and healthcare. Italian-led projects highlight untold progress, blending human connection with digital experience

Immersive reality is alive, even if you wouldn't know it from reading the news or watching TV. Everyone talks about artificial intelligence, but under the radar something is moving. The term "metaverse", a commercial-sounding label, was first used by writer Neal Stephenson in his 1992 cyberpunk novel Snow Crash. Thirty years later, Facebook changed its name to "Meta", signaling its willingness to invest in "immersive reality", the more precise term according to Massimiliano Nicolini, a researcher at the Olitec Foundation and an Italian member of the Metaverse Standards Forum, an organization committed to the development of inclusive immersive reality.

All you need to access this virtual environment is a headset and a link to connect to, or an app. And the experience is different from the traditional one: "The user is immersed in the app he or she is using, and the attention threshold is higher than on a social network." Immersive reality is not mediated by any controlling algorithm, and there are many platforms where people come together to express themselves as if they were in an agora.

Virtual Room and Object (VRO) is the technology behind the metaverse, which "allows for more immersive use of the information available to us in the network."

The number of immersive reality applications continues to increase, especially in areas such as medicine, occupational safety, and worker training in the military, as well as gaming, which is leading the way. This virtual space was expected to be the next evolutionary leap in the digital world, the new environment into which we would move our lives, but the spotlight has shifted to AI.

Individuals wearing VR headsets navigate a dynamic virtual environment. The image, generated by Google's Generative AI Gemini, captures the blend of human presence and digital space central to the metaverse experience

"Technology is advancing, but people are looking elsewhere; everyone is getting interested in AI, while immersive reality continues advancing, quietly replacing outdated systems. Still, there is a future, because so many players are investing, such as public administrations," Nicolini says. At the European level, the AI Act has been prioritized, and efforts to develop a regulatory framework for immersive reality have been on hold for two years.

Nicolini has carried out many projects, including the world's first artistic metaverse made by a religious figure, born five years ago with Franciscan friar Sidival Fila to explore how technology connects with the human side of the individual. The idea was born while discussing the connection of art and philosophy with the world of technology, when a client of the artist wanted to explore a fabric painting.

Nicolini worked on an experience where the viewer, wearing an immersive reality headset, could walk through the woven textures made by Fra Sidival. "From there came the idea of recreating with immersive reality the atelier where he creates his work; the digital twins of his installations thus last forever," the expert continues. People can place his artworks in their digital homes, and a series of activities, such as meetings and exchanges between Fra Sidival and his admirers, has been created.

"Although technology is advancing, public attention is focused elsewhere"

Also linked to Nicolini's name is the first university created in immersive reality. American marketing expert Philip Kotler wanted to collaborate with him to build a business school based in the metaverse, using Italian technology: "It's a demonstration of how Italy is ahead of others on the technological front; we're just not great at telling our story."

Gattaca is the future of medicine

How the evolution of technology will affect the jobs and the modern approach in the healthcare sector

"What will remain of the 80s," goes Raf's song. But what will remain of the years when grandma used to take us to the doctor, who from a single sneeze could prescribe the best therapy and a few days of rest, meticulously indicated in the medical certificate delivered to the teacher to win a few days of justified absence from school?

"Today, the general practitioner delegates to the specialist, shedding responsibility and creating endless waiting lists," says Dr Andras Rabi, haematologist at the Fatebenefratelli Hospital in Rome. "I deal with it every day in the outpatient clinic, and I feel a sense of anger, because a good 40% could be treated by the general practitioner, giving the place back to those who really need it. But it is an age-old problem imported from our American brothers, where there are fewer and fewer doctors capable of analysing the patient with an overall vision, and more who are hyper-specialised."

Dr. Alfredo Arista, now a regional councillor with responsibility for health, adds that "we run the risk of no longer having diagnosticians capable of performing an anamnesis from non-verbal behaviour and manual examination, especially in hospitals; this is because less has been invested in training over the last twenty years, as costs are high and teachers' dedication is low."

In this scenario, AI carries significant weight in the Italian medical system. Introduced in 2005 but still in a type 2/3 experimental phase, it is not yet fully accepted, owing to the legal vacuum in regulation and its very high costs. Yet "the danger is strong that it will contribute to the extinction of jobs, replacing tasks previously carried out by humans more easily and quickly."

What is certain is that, under European Union regulations, AI can currently only assist doctors, who retain the final say on the patient's diagnosis.

Radiology is expected to be the first field where the professional figure disappears: a doctor's expertise is measured by the number of images seen over a lifetime, and AI can read that lifetime total in a single week, giving more certain diagnoses than the human eye.

In oncology, excellent results are expected: it will no longer be a matter of applying one therapy to everyone, because DNA reading will make a tailor-made course of treatment possible.

In histology, anatomopathologists will at first be joined and then perhaps supplanted by AI, since it will be possible to have it analyse a myriad of tissue sections on glass slides under the microscope (healthy, tumoural, cancerous) and thus give a more complete view.

In gynaecology, AI has already been able to anticipate breast tumours from the analysis of reports previously judged negative.

AI has also contributed to a recent class of antibiotics capable of defeating the antibiotic resistance of certain pathogenic bacteria.

A negative effect will be the gap between those who can afford such treatment and those who cannot, as the public health service will hardly be able to sustain the cost of such expensive machines.

"The real risk is that we will end up with doctors who are no longer capable of reasoning, but humble controllers of the AI's choices," stresses Dr. Rabi. "The internist doctor will no longer exist, he will be an endorser, a controller of artificial intelligence."

Ultimately, on the one hand the medical figure withers; on the other, the margin for error shrinks, as technological error rates are proving lower than those of the human eye. After all, in 1997 Andrew Niccol directed Gattaca (released in Italy as Gattaca – La porta dell'universo), a film in which laboratory DNA analysis carried out by an AI, as naturally and as quickly as one orders a coffee, decided the working and social fate of each individual, offering a preview of a future in which we are in the hands of a robot.

Hidden cost in the workplace

The use of artificial intelligence can boost productivity, but it might also lead to heavier workloads and rising burnout among employees, as employers raise their expectations based on perceived efficiency. Experts stress the need to balance results with mental health

Employers often assume that artificial intelligence helps employees save time and work more efficiently, leading to increased expectations and heavier workloads. This added pressure can be psychologically exhausting for employees.

According to the Journal of the Norwegian Medical Association: "Burnout is most often described as a concept with three separate dimensions: emotional exhaustion, depersonalization (lack of empathy), and reduced accomplishments at work." The World Health Organization likewise identifies reduced professional efficacy as a hallmark of burnout.

CV-writing company Resume Now conducted a survey in March 2024 on the use of AI at work and burnout among 1,150 people in the United States. The results indicate that 61% of respondents believe using AI at work will increase their chances of experiencing burnout. Another 2024 study by the Upwork Research Institute, part of the global online platform connecting businesses with independent professionals across various fields, suggests that 71% of full-time employees are burned out, 65% report struggling with employer demands on their productivity, and 77% believe AI has added to their workload.

"If you have a teammate and they are using artificial intelligence, let's say you normally finish a job in 4-5 hours, but by using artificial intelligence you finish it in 1 hour. The manager then thinks that the job takes only one hour. This perception pushes you to use artificial intelligence, even if you don't want to: if you don't, you will fall behind your teammates. And since employers also believe the job takes an hour with artificial intelligence, they think they can assign 10 tasks a week. So you might receive too much work," says Joe Hasley (a pseudonym), a graphic designer working for a high-tech company in the United States.

Asked what measures workplaces can implement to prevent burnout, Hasley suggests: "Employees should tell their managers how they feel, but they might be afraid to do so. Employees might think: 'If I speak, will they think I am underperforming?' This concern is understandable. But they can instead say: 'The task is too much for this week, can you reduce it? Not everything can be done with artificial intelligence.' You need to make this clear to the other side. You need to determine the limit of the tasks assigned to you, and you should not take on additional work. The employer also needs to ask the employee: 'Do you feel exhausted? How many tasks should I assign you per week; what is your limit?' The employer needs to adjust this and organize things properly."

Artificial intelligence enhances efficiency and productivity, but it also brings downsides in the form of heavier workload demands and burnout. To prevent burnout, open communication between employers and staff is essential: employees must set boundaries for their work, and employers must actively manage their expectations. Organisations can benefit from AI without compromising their staff's health and enjoyment of work by building a workplace culture focused on well-being as well as efficiency.

The growing fear of superhuman AI

Academics warn of extinction, activists are calling for international regulation, and skeptics abandon generative tools. A global backlash is taking shape

"Malevolent superintelligence can kill everyone," warns Roman Yampolskiy, Professor of Computer Science at the University of Louisville. "The development of advanced artificial intelligence is extremely dangerous. We must do everything we can to prevent it from becoming a reality." In his books, like AI: Unexplainable, Unpredictable, Uncontrollable (2024, CRC Press), Yampolskiy explores the nature of intelligence, consciousness, values, and knowledge. But what happens when an algorithm can outthink humans?

Yampolskiy isn't referring to today's commercial chatbots like Gemini or ChatGPT. "Subhuman-level AI is a useful tool, but it's critical not to confuse tools with agents, subhuman intelligence with superhuman intelligence." Artificial general intelligence (AGI) refers to a hypothetical stage in machine learning development in which an AI system can match or exceed human cognitive abilities across any task. A 2023 survey by AI Impacts asked 2,778 researchers when they expected AGI to arrive. On average, they predicted high-level machine intelligence could be achieved by 2040. Concern over this prospect led nearly 1,500 technology leaders, including Elon Musk, to sign an open letter in March 2023 calling for a six-month pause in AI development. "Currently, it doesn't look like anyone is pausing, but we saw the same dynamic with tobacco and oil companies," says Yampolskiy, who supported the letter written by the Future of Life Institute. This call to action inspired the non-profit Pause AI to organize protests worldwide, with the most recent demonstrations taking place in London, Berlin, Stockholm, Kinshasa and Melbourne in February 2025. The activists demand international regulation to prevent companies from building systems that could overpower humankind and potentially lead to extinction.

"We must ensure that new models share our values and act in our interest"

"Our dominance has benefited some species, like dogs," says Tom Bibby, Head of Communication at Pause AI. "But for others, like pigs, it means suffering, simply because they offer us resources we want and can't fight back." If tech giants create a more intelligent species, he warns, "we must ensure it shares our values and acts in our interest. Otherwise, we risk ending up like the pigs." While Pause AI is vocal about risks like job displacement and disinformation driven by generative algorithms, the organization does not call for a total ban on AI. "We're not asking to limit small companies from training models," explains Bibby, "especially in fields like medicine, where AI can be incredibly valuable in detecting cancer and diagnosing diseases." He also acknowledges using ChatGPT for tasks like brainstorming and coding.

A more radical stance is taken by Stop AI, a U.S.-based group whose manifesto calls for the destruction of "any existing general-purpose AI model," including ChatGPT-4, and demands a total ban on AI-generated images, video, audio, and text. On February 22, three protestors were arrested in San Francisco for blocking the doors of OpenAI, demanding the company be "shut down" to prevent an incoming "apocalypse." During the protest, some activists accused OpenAI of involvement in the death of Suchir Balaji, a whistleblower who had alleged that the corporation violated copyright laws to train its large language models. Balaji died by suicide in November 2024. However, these accusations are not supported by evidence.

Not all criticism of AI is based on fears of corporate misconduct or existential threats. Take Francesco, an Italian PhD student in Economics: "Some people devote themselves to study, and our role is to generate ideas we can respond to ourselves." He describes life as a researcher as a privilege. "Using AI for this purpose doesn't quite live up to the intellectual responsibility it entails". For Francesco, it's not just a matter of pride: "AI is already widely used in writing. If it also starts playing a role in peer review, we risk a gradual sidelining of human intellect in the production of knowledge."

A virtual voice for the Eternal City

The digital assistant Julia aims to revolutionize the relationship between city and citizens

Rome has a new voice. Her name is Julia, and she is the virtual assistant of Rome. Able to converse in 80 languages, she is accessible by phone number and operates across several channels: WhatsApp, Telegram, Messenger, and also through web chat. She was designed to accompany tourists, citizens, and pilgrims in the daily life of the Eternal City. Launched on March 7 in an initial experimental version, Julia is currently being improved in anticipation of the next release in May.

There are no similar projects in other European capitals. Madrid has a virtual assistant, but it is much more limited. "The information provided by Julia is useful for residents as well as for tourists," explains Antonio Preiti, CEO of the Fondazione per l'Accoglienza Roma & Partner, the incubator behind Julia. The assistant responds to queries about museums, events, transportation, and hotels. "We wanted a single, simple place where you could ask in natural language and get reliable answers."

In the first 15 days of activity, according to unpublished data from Roma Capitale, Julia recorded about 53,000 conversations. The most requested topic is culture (32%), followed by events and shows (31%) and food and drink (18%); mobility ranks fourth, and healthcare and safety fifth. So far only 1% of users have asked about the Jubilee and 3% about accommodation. Moreover, 91% of users speak Italian and only 4% English or Spanish, with the remainder attributed to other languages.

Julia is based on OpenAI's GPT-4o through an agreement with Microsoft and was developed by NTT Data. Even so, Julia differs from ChatGPT. "She doesn't pull information from the internet. All sources are official and verified: ATAC for buses, Aeroporti di Roma for flights, Trenitalia and Italo for trains, the Municipality for hotels and facilities." Moreover, Julia offers real-time information: "She can tell you, for example, how many patients are waiting in the emergency room, even by triage code. ChatGPT can't do that."

The technology is still being fine-tuned, and some inaccurate responses do occur. When asked "What can I do in Trastevere with three free hours?", Julia listed events and attractions but also included the Duran Duran concert at the Circus Maximus in June. When asked for clarification, she corrected herself, admitting the mistake: "Sorry, I included information that wasn't relevant to your request."

Future releases of Julia will add real-time updates, for example on guesthouses, gyms, and the Mass schedules of the Jubilee. By the end of 2025, the goal is to transform her into an everyday city assistant that can explain, for instance, how to change your identity card or how to get a business license.

Julia is part of a broader project of digitalization in the municipality of Rome. On April 1st, 5G was activated in nine metro stations. The goal is to extend coverage to all stops by 2026.

Creativity and ethics: the Hollywood dilemma

AI's growing role in creative sectors, particularly in the movie industry, raises ethical concerns about authenticity, authorship, and the future of human expression

In Hollywood, entertainment workers have protested the increasing use of AI in creative processes. CGI has long been part of post-production, but today the issue is deeper: what does automation cost human creativity?

A notable example is The Brutalist by Brady Corbet, which won awards at Venice, the Golden Globes, and the Oscars. Controversy arose when editor Dávid Jancsó revealed that some scenes and dialogue had been altered using AI. Respeecher was used to refine the Hungarian pronunciation of Adrien Brody and Felicity Jones: the actors recorded Hungarian lines to train the model, and Jancsó added dialect samples. Due to budget and time limits, some images and architectural renderings were also AI-generated.

"We need an open conversation about AI tools," Jancsó said. Corbet clarified the goal wasn't to replace performances, but to enhance their authenticity.

Thomas Ciarfuglia, professor of Philosophy and AI at La Sapienza University, compared this use of AI to auto-tune in music. "An actor's performance is more than words, it's how they live the scene. This only adjusted vowels for credibility. It could've been done manually but would've taken days. AI just sped it up."

Indeed, films like Avatar and Lord of the Rings owe their visuals to CGI—once controversial, now standard.

In The Brutalist, Midjourney helped brainstorm brutalist architecture, but the final drawings were done by hand. "AI saved time, it didn't replace creativity," Ciarfuglia notes, calling AI a natural technological evolution. But is automating everything just because we can really progress?

"We shouldn't focus only on the outcome but also on who or what is behind it"

The issue isn't just what AI creates— but who or what is behind it. This raises deeper doubts about human value in a production-driven society. "If I can't produce value, I'm worthless," Ciarfuglia observes. "We live in an age of epic anxiety, and AI adds to it."

Seeing AI as sentient is a mistake. "LLMs are like soup: everything mixed together. They generate without knowing why. From their view, it's all hallucination. They're indifferent to truth."

Calling AI errors "hallucinations" feeds the hype and humanizes the systems. Even when they seem accurate, LLMs are bluffing. "We now prefer the term 'bullshit,'" says Ciarfuglia, a term he considers more accurate and a helpful shift in how the technology is communicated.

"AI always seeks the middle ground," he adds, "which is the opposite of pursuing truth."

Maybe we feel threatened because we project agency onto non-sentient systems. Or maybe it's their lack of reasoning and the absence of regulation that unsettles us. "There's global pressure, and we tend to lean toward deregulation," warns Ciarfuglia.

The future with AI seems inevitable, but not fully consensual.

Ironically, advanced generative tools may complicate creation, undermining its essence. While Respeecher or Midjourney boost efficiency and imagination, they blur authorship and creativity. In entertainment—society's mirror—AI risks turning art into a race for efficiency, stripping away its soul. Often, it's the imperfections that give a work its unique voice.

This is not just about technology—it's about responsibility. If AI extends our abilities, we must choose what it extends: productivity, or humanity?

As Ciarfuglia says, "it's not just about the outcome, it's about who or what is behind it." That, perhaps, is the real challenge of our simulated age.

The Astrobot has spoken

Astrology just got an upgrade — and no, it's not retrograde, it's fully automated. In a world where your therapist might be a chatbot and your best friend could be an algorithm, we figured it was time to consult the stars and the servers. From poetic mood trackers to suspiciously charming dark web bots, we've matched each zodiac sign with the artificial intelligence most aligned with their cosmic energy. Because let's be honest — the stars are cool, but the cloud has better UX. Read on to find your AI kindred spirit. Compatibility: 98%. Cringe level: depends on your rising sign.

ARIES (Mar 21-Apr 19)

AI Match: OpenAI's GPT-4

"I don't need help from a bot, I am the main character." Aries would start a fight with an AI just to win. Prompts like a boss. Chaos-coded.

TAURUS (Apr 20-May 20)

AI Match: Replika

"Me + AI = no drama. Just vibes." Loves cozy convos and zero stress. Lowkey genius, but only uses AI to write heartfelt texts they'll never send.

GEMINI (May 21-Jun 20)

AI Match: Character.AI

"I just trauma dumped on my AI boyfriend, lol." Master of chaotic multitasking. Also uses ChatGPT to psychoanalyze their ex.

CANCER (Jun 21-Jul 22)

AI Match: AI journals & mood trackers

"ChatGPT said I should set boundaries so I ghosted everyone." Feels things. Deeply. Uses AI like a digital diary. Will cry if the LLM responds too dry.

LEO (Jul 23-Aug 22)

AI Match: AI-powered Instagram filters

"I asked Midjourney to make me look hotter and now I have 7 new profile pics." MAIN CHARACTER ENERGY. Knows just enough to stay iconic.

VIRGO (Aug 23-Sep 22)

AI Match: Notion AI

"If ChatGPT messes up my calendar one more time I'm suing." Their to-do list has a to-do list. Prompt perfectionist. Runs a secret AI empire.

LIBRA (Sep 23-Oct 22)

AI Match: DALL·E

"Me? Using AI to plan my wedding to a person I haven't met yet." Aesthetic matters. Can and will judge your art prompts. Will use LLMs for fashion advice but still spend 3 hours deciding.

SCORPIO (Oct 23-Nov 21)

AI Match: Dark web chatbots

"I taught ChatGPT to gaslight my situationship." Mysterious. Probably training a rogue AI in secret. You'll never know what they're really using it for.

SAGITTARIUS (Nov 22-Dec 21)

AI Match: Google Bard (or whatever)

Uses whatever AI they stumble across. Random and chaotic. Accidentally smarter than they seem.

CAPRICORN (Dec 22-Jan 19)

AI Match: IBM Watson

"Asked ChatGPT for networking tips. Got a raise. Coincidence? No." Business-coded. Uses AI to climb the ladder faster. Tech bro disguised as a LinkedIn softie.

AQUARIUS (Jan 20-Feb 18)

AI Match: Self-built assistant with 3 names

"My AI calls me 'Supreme Commander.' That's normal, right?" Too smart. Too weird. Probably in love with their own creation. Sends you memes made by sentient code.

PISCES (Feb 19-Mar 20)

AI Match: AI-generated dream interpreters

"My bot told me to follow my dreams. So I took the first flight to Thailand." Thinks their AI has a soul. Doesn't really get how it works, but the vibes are strong.

NOW CHOOSE YOUR F(AI)GHTER

Take this chaotic quiz to find out which iconic AI matches your unhinged digital soul.

Warning: Results may cause existential thoughts and group chat bragging.

Luiss Data Lab

A research centre specializing in social media, data science, digital humanities, artificial intelligence, digital storytelling and the fight against disinformation

Partners: ZetaLuiss, MediaFutures, Leveraging Argument Technology for Impartial Fact-checking, Catchy, CNR, Commissione Europea, Social Observatory for Disinformation and Social Media Analysis, Adapt, T6 Ecosystems, Harvard Kennedy School, Parlamento europeo

Master in Journalism and Multimedia Communication

Show, don't tell

Lecturers: Marc Hansen, Sree Sreenivasan, Linda Bernstein, Ben Scott, Jeremy Caplan, Francesca Paci, Emiliana De Blasio, Colin Porlezza, Francesco Guerrera, David Gallagher, Claudio Lavanga, Eric Jozsef, Federica Angeli, Paolo Cesarini, Massimo Sideri, Davide Ghiglione

@Zeta_Luiss

giornalismo.luiss.it zetaluiss.it Zetaluiss

@zetaluiss
