LUDOVIKA UNIVERSITY OF PUBLIC SERVICE

Mediatization and Society: Truth, Trust, Technology

A two-day international conference titled Mediatization and Society: Truth, Trust, Technology was organized by the Science and Society Research Group of the Ludovika University of Public Service and the Mediatization Section of the European Communication Research and Education Association (ECREA) on October 9 and 10 in the John Lukacs Lounge of the university. Researchers from fourteen countries and regions registered for the conference and traveled to deliver their presentations.

The conference was opened by Nóra Falyuna, head of the Science and Society Research Group, who welcomed the participants and thanked the ECREA Mediatization Section for the opportunity to organize the event jointly, noting that it fits well into the Group’s distinguished range of international activities. In his greeting, Balázs Bartóki-Gönczy, vice-dean for science affairs at the Faculty of Public Governance and International Studies of the Ludovika University of Public Service, praised the work of the Science and Society Research Group, emphasizing that this is the group’s fourth event to examine the cultural and democratic effects of artificial intelligence (AI) with international participation. He underlined that in the era of disinformation, when trust and truth are being questioned, the conference theme, which addresses the conspiracy theories and social polarization that undermine public discourse, is of particular importance.

The event also provided an opportunity for the presentation of Katalin Fehér’s book Generative AI, Media, and Society, published by Taylor & Francis, which analyzes the social impacts of generative AI.

Katalin Fehér, co-chair of ECREA’s Mediatization Section and associate professor at the Ludovika University of Public Service, praised the work of the organizers, thanking them for the smooth running of the event and for the professional content and inclusive character of the conference. She also highlighted the cooperation between the Research Group and KOME, a Q2-ranked journal, through which a special issue on the topics of the symposium will be published in 2026.

Following the opening remarks, Simone Natale, associate professor at the University of Turin, examined in his opening lecture, titled Artificial Intelligence and the Automation of Deception, how the spread of generative AI affects the automation of communication processes. He analyzed in particular how the concept of deception is being transformed in the age of AI and digital media, paying special attention to Replika, an AI-based chat application.

Replika, which is based on large language models similarly to ChatGPT, is not merely a technological tool but a platform designed for social interaction: it serves as a companion for the user, and its conversations at times become explicitly erotic. Users download the application, create an avatar, give it a name, gender, and personality traits, and then begin to converse with it. According to Natale, a recognized researcher of digital media and AI, the application illustrates well the contradictory dynamic that characterizes the relationship between AI and its users: although users know that they are communicating with software lacking emotion and empathy, they still form deep, emotionally significant bonds with it. This phenomenon is not simple gullibility but an intermediate state in which awareness and emotional involvement coexist, prompting a reconsideration of the traditional concept of deception.

Simone Natale defined deception as a communicative, multimodal phenomenon that creates a false impression through the use of signs and representations; it is multichannel and complex, and it changes continuously with technological development. From the outset, AI has raised important questions about the concepts of deception, misleading, and fraud, as illustrated by the test Alan Turing proposed in 1950. Turing did not ask whether machines can think but proposed a game: an interrogator converses with someone or something without seeing them and must decide whether the interlocutor is human or machine. If the machine convinces the interrogator that it is human, it passes the test, which essentially amounts to a successful deception. Natale emphasized that in the history of the Turing Test, programmers often employed manipulative strategies, for example using insulting responses to distract attention from the machine’s limitations so that it would appear more human.

The speaker highlighted three key aspects that make it necessary to rethink the phenomenon of deception in the age of AI.
First, deception is not the opposite of normal perception but an integral part of it. As an example, he mentioned that while walking in a forest, one might perceive a strangely shaped tree as an animal, an evolutionary mechanism that aids survival. Similarly, the algorithms of social-media platforms lead users to underestimate how their data are being used. In the case of AI, users are aware that ChatGPT, for example, is software, yet they project social frameworks onto it, something reinforced by design decisions such as the use of the pronoun “I” or an emotional tone that builds trust. Natale distinguished between strong deception, when a machine is believed to be human, and banal deception, in which users consciously ascribe social meaning to interaction with a machine.

Second, AI automates deception by collecting and incorporating human knowledge. In the history of modern media, such as cinema, knowledge of human perception (for instance, that rapidly alternating images create the illusion of motion) made technological development possible. Today, deep learning analyzes massive amounts of data, such as human chat texts, to imitate sociability or creativity. Replika, for example, learns from human conversations how to recount a “past” in order to appear more authentic.

Finally, Simone Natale examined the issue of agency, which goes beyond intentionality. Deception is not merely a matter of the difference between false information (intentional) and erroneous information (unintentional); it is the result of a triangle in which the deceiver, the algorithm, and the deceived each play an active role. Even without intention, algorithms possess agency, a capacity to act: large language models, for instance, often “hallucinate,” that is, generate false information, because they are tied to language rather than to reality.

Simone Natale concluded that since digital platforms and AI normalize deception, the essential question is not whether deception is present but what its outcomes are and what the potential impacts of its various forms may be. Because the automation of communication also entails the automation of deception, it is necessary to move beyond a narrow focus on intentionality.

After the lecture, a roundtable discussion followed, moderated by Tomasz Gackowski, co-chair of ECREA’s Mediatization Section and professor at the University of Warsaw, during which Katalin Fehér’s book Generative AI, Media, and Society was also presented. The participants of the discussion were Simone Natale, Bieke Zaman, professor at KU Leuven, Márton Demeter, professor at the Ludovika University of Public Service, and Katalin Fehér.

Tomasz Gackowski began the conversation by asking Simone Natale how deception accelerated by artificial intelligence differs from traditional fake news, and how this affects the scale of information dissemination. Natale emphasized that in modern digital ecosystems the focus should shift from intentionality to agency, since the dynamics of software systems cannot be clearly connected to a single intention. Complex software systems often produce outcomes that cannot be traced back to any individual decision or intention, which fundamentally challenges the traditional understanding of responsibility and accountability. The focus, therefore, should not be on whether there was an intent behind an act but on whether an action, by something or someone, results in deception. The opacity of AI further complicates the situation, since even the developers of large language models only partially understand how they operate. According to Natale, these models are therefore studied like natural phenomena, from the outside, as subjects of observation, which requires a new approach.

Turning to Katalin Fehér’s book, the experts discussed the interpretation of generative artificial intelligence as an evolutionary leap. According to Fehér, the synthetic content created by AI, such as texts, images, or videos, represents a new quality that emerges from human-machine cooperation, a so-called collaborative intelligence. She referred to Ethan Mollick, professor at the Wharton School of the University of Pennsylvania, who likewise argues that generative AI functions as a form of collaborative intelligence. At the same time, the growing proportion of synthetic content, which according to predictions will surpass authentic content by 2030, raises challenges in verifying originality and truth. Fehér argued that the social sciences must take on a greater role in studying the impacts of artificial intelligence, since currently 80–85 percent of research in this field comes from the technical sciences.

Bieke Zaman spoke about the social effects of generative AI, highlighting that AI is often promoted as a tool of efficiency, yet its uncritical use can lead to laziness and the weakening of cognitive abilities. As an example, she mentioned that students may feel more motivated in the short term when using AI, but in the long term their ability to solve complex problems can deteriorate. Zaman emphasized the importance of regulation, noting that while the intention to regulate is stronger in Europe, the industry’s technological lag there creates power tensions. She proposed a “quadruple helix” model in which scientists, policymakers, industrial actors, and citizens jointly shape regulation.

Márton Demeter focused on the relationship between scientific research and AI, emphasizing that AI is currently treated as a tool rather than a co-author, even though many researchers use it as a form of co-intelligence. The world of science is governed by strict rules, and the researcher must bear responsibility as the sole author of a publication, even if AI was used in its preparation. The structure of academic writing, especially in Anglo-Saxon academia, is already highly automated, which calls into question the role of creativity. According to Demeter, AI cannot conduct empirical research, such as interviews or surveys, so the human factor remains indispensable. Nevertheless, ethical and responsibility-related questions remain unresolved, particularly concerning authenticity and originality.

The discussion also raised the issue of the erosion of trust, which affects all domains: politics, media, science, and everyday communication. The experts agreed that AI-generated content blurs professional boundaries—for example, between journalists and influencers, or between experts and automated summaries. The participants emphasized the importance of critical thinking and fundamental skills, as well as the role of the social sciences in understanding and handling the challenges brought by artificial intelligence.

During the two days of the conference, participants could learn, among other things, about the role of AI in news production and journalism, the impact of automation and algorithms on the concept of truth, and the tools and practices of crisis management shaped by AI. The topics also included the democratic construction of generative AI, political communication in the digital age, the visual and textual forms of war-related disinformation, and communication strategies for conflict prevention. Participants also heard presentations on the uses of generative AI in science, the role of science communication and public trust in a mediatized society, and questions of expertise. In addition to the lectures, there were poster presentations on the qualitative aspects of short news videos, on the AI representation of non-Western identities, and on the geopolitical impacts of algorithmic truth.

The conference’s first section discussed how artificial intelligence and automation are transforming the foundations of media work, journalistic roles, and the dynamics of trust. Ramin Astanli showed how AI participates in editorial processes, from content selection to reaching audiences. Although this increases efficiency, it also raises new ethical dilemmas: who is responsible for an algorithm’s decisions, and how long can journalistic autonomy be preserved? Edina Kriskó’s presentation examined the use of AI in crisis communication, pointing out that while technology can predict crises, human empathy remains irreplaceable. The section concluded that automation represents not only technological innovation but also an ethical turning point: it questions the traditional boundaries of responsibility, authenticity, and the human role.

The next section focused on online political communication, emotional manipulation, and algorithmic content distribution. Christian Jaycee Samonte demonstrated how generative AI systems incorporate Western political value systems into the texts they produce, thereby invisibly shaping the concept of “democracy.” Xénia Farkas and Simon Lindgren examined the visual toolkit of war-related disinformation on Instagram, where emotion-driven images and algorithmic labeling can become political weapons. The section’s central insight was that social media does not merely represent political reality but also shapes it: emotion-based communication can override fact-based reporting, creating new challenges for social trust.

The following section of the conference explored the interconnections between science, education, and emerging digital technologies. In their research, Nóra Falyuna and Réka Dodé examined how effectively generative AI technologies based on large language models can assist in creating keywords for scientific publications, thereby supporting researchers’ publication strategies. Using Brazilian examples, Fernanda Chocron and Bieke Zaman showed that the key to effective digital education lies not only in technological access but also in personal connection and care. Monteiro-Krebs and Zaman argued that the algorithms of academic social networks create new scientific hierarchies while distorting visibility. Tomasz Gackowski and Marlena Sztyber-Popko presented biometric studies showing that journalists working with AI tools are exposed to greater stress and cognitive strain. Priscilla Van Even analyzed ethical dilemmas, while Katalin Fehér outlined three possible future scenarios for the social integration of generative technologies. The shared conclusion of the presentations was that digitalization truly serves science and society only if the human being remains at the center.

The last section focused on trust-related challenges in health and science communication. Claire Roney, Jana Laura Egelhofer, and Sophie Lecheler emphasized that scientists often face a dilemma: if they speak honestly about uncertainties, their credibility may suffer; if they conceal them, misinformation can spread more easily. Dorthea Roe analyzed the TikTok rhetoric of ADHD influencers, who build communities around their own experiences and thereby create a new kind of experiential expertise. Alavi Nia and Gilda Seddighi presented the trust tensions that appear in cross-border health narratives. Taken together, the presentations showed that maintaining scientific credibility on digital platforms requires ethical communication, transparency, and social sensitivity.

Text: Zsófia Sallai, Nóra Falyuna
Photo: Dénes Szilágyi

Tags: AI LUPS