LUDOVIKA UNIVERSITY OF PUBLIC SERVICE

How Is Technology Shaping Our Future?

Ludovika University of Public Service (LUPS) hosted a two-day international and interdisciplinary conference titled “The Future of Technology” on 26–27 February in the Zrínyi Miklós Hall of the Ludovika Main Building. The event was organised by the University’s International Office within the framework of the Ludovika Fellowship Program, with the involvement of Pier Paolo Pigozzi, Vice-Rector for International Affairs, and Federico Ponzoni, former Ludovika Fellow and professor at the Pontifical Catholic University of Chile. The conference explored how emerging technologies—particularly artificial intelligence—are transforming human decision-making, legal reasoning, social relations, and ultimately our understanding of ourselves.

Participants were welcomed by Gergely Deli, Rector of Ludovika University of Public Service. In his opening remarks, he emphasised that understanding new technologies can no longer be confined to a single academic discipline. Artificial intelligence is already present in education, public administration, communication, national security, and many other fields. At the same time, fundamental questions surrounding responsibility, regulation, and the societal impact of these technologies remain unresolved. As he noted, addressing such challenges is especially important for a public service university that prepares future decision-makers.

In the conference’s opening lecture, Federico Ponzoni highlighted that artificial intelligence has reached a stage where its implications can no longer be fully understood within the boundaries of a single discipline. Because the technology also offers immense societal opportunities, it is essential that discussions about technological development engage philosophical, ethical, and political perspectives as well.

The first panel discussion, titled “Inhabiting Technologies – Rethinking Ethics,” featured a presentation by Luca Valera, Associate Professor at the University of Valladolid, followed by reflections from Bernát Török, Director-General of the Eötvös József Research Centre and Head of the Institute for Information Society at LUPS.

In his presentation, Luca Valera argued that technology should no longer be viewed merely as a tool but rather as a defining environment of human life. Technological systems increasingly permeate everyday activities: they do not simply support human action but also shape and structure it. According to Valera, technological development triggers profound economic and social transformations while simultaneously creating new forms of power, vulnerability, and inequality. He also addressed emerging neurotechnologies that connect not only to the human body but directly to cognitive processes. Brain–computer interfaces and systems capable of translating human thoughts into data raise significant ethical and legal questions. As a result, protecting the human mind, mental privacy, and free will is becoming increasingly important.

Responding to Valera’s presentation, Bernát Török emphasised that people today no longer simply “go online” to perform certain tasks; rather, they exist continuously within a digital environment. He described this condition using the concept of “onlife,” highlighting the growing inseparability of digital and physical realities. Török also noted that current regulatory approaches primarily focus on the behaviour of individual users, while many technological systems operate in ways that are largely invisible and difficult to control at the individual level. For this reason, he suggested that regulation should increasingly focus on the structure and logic of technological systems and service providers. He further pointed out that the code underlying digital technologies does not merely describe technical processes but also carries normative implications. It determines which behaviours become possible and which are excluded. In this context, safeguarding human dignity in the age of digital technologies may require the emergence of new rights, such as cognitive sovereignty, the right to genuine human relationships, or the right to a future not shaped by predictive algorithms.

The conference’s next panel examined the philosophical and theoretical foundations of artificial intelligence. Gene Callahan argued that public discourse often exaggerates the novelty of AI. The principles underlying modern computing remain rooted in the same theoretical framework that defines computation more generally. Through illustrative examples, he demonstrated that the signals processed by computers carry no inherent meaning; interpretation always depends on human conventions. Although large language models can produce impressive results, their operation is still based on the statistical processing of patterns. Phenomena such as so-called “hallucinations” should therefore not be seen as extraordinary errors but as natural consequences of how these systems function.

The panel’s second speaker, Miguel Nussbaum, professor at the Pontificia Universidad Católica de Chile, delivered his presentation via recorded video. He argued that technology is often understood merely as a tool that extends human capabilities, whereas digital systems in fact exert far deeper influence on how societies function. Technological systems not only enable certain actions but also restrict others. Design decisions embed values and norms within technological infrastructures, meaning that technology inevitably shapes the frameworks within which human action takes place. Over time, digital infrastructures may create new dependencies, transform the role of expertise, and blur the boundaries of responsibility. The first day of the conference thus centred on the philosophical and theoretical dimensions of artificial intelligence and closed with reflections from Luca Valera.

The second day of the conference turned to the legal and philosophical implications of technological development. In his lecture, Gergely Deli examined how the emergence of artificial intelligence invites a reconsideration of fundamental aspects of legal reasoning. He raised the question of whether algorithms could one day make legal decisions—and if so, how such a development might affect our sense of justice and the persuasive force of legal argumentation. According to Deli, legal decisions are shaped by multiple forms of reasoning: alongside doctrinal, rational, and narrative arguments, deeper and less easily articulated considerations also contribute to their persuasive force. Law therefore cannot be reduced to the mechanical application of rules; it is also a complex interpretative practice. The rise of artificial intelligence, he suggested, offers an opportunity to better understand the factors that make legal decisions persuasive.

The theme of regulation was further explored by Mathis Bitton, a PhD candidate at Harvard University, who reflected on major trends in the regulation of digital technologies. He noted that the past decade has witnessed an unprecedented volume of technology regulation, particularly within the European Union. Legislative initiatives such as the Digital Services Act and the AI Act illustrate how policymakers worldwide are addressing issues such as children’s online safety, digital addiction, content moderation, and the governance of algorithmic systems. According to Bitton, these regulatory efforts share a common assumption: that the right experts within the right institutions can make the harms associated with digital technologies manageable. Yet many societal challenges—including loneliness, declining mental health among young people, and the spread of online disinformation—continue to intensify.

Bitton argued that this tension does not necessarily stem from bad faith or flawed legislation but rather from a deeper philosophical orientation. The prevailing “managerial” approach assumes that social problems can be addressed through expert management, while paying insufficient attention to the role of social structures and communities themselves. He identified three forms of power that are essential for the functioning of communities: the power to define boundaries and membership, the power to shape internal norms, and the power to determine collective purposes. Social goods such as trust, solidarity, and civility, he argued, depend on these mechanisms and cannot be produced by top-down expert regulation alone.

In his lecture, Vincent Blok, professor at Erasmus University Rotterdam, argued that debates surrounding the ethics and regulation of artificial intelligence often approach the issue from an overly narrow perspective. Many discussions focus primarily on the responsibilities of technology designers, while the broader societal implications receive less attention. Blok emphasised that technology is not merely a tool but a phenomenon that reshapes the world in which humans live and act. AI systems require environments that can be represented and processed as data. As a result, not only does technology adapt to human needs, but humans gradually adapt to the logic of technological systems.

He illustrated this transformation through examples such as digital twin technologies and AI-driven brain implants, which open new possibilities in diagnostics and medical treatment. At the same time, these developments raise fundamental questions about the nature of human intelligence, consciousness, and self-understanding.

The conference’s final panel featured Nolen Gertz, philosopher at the University of Twente, who examined the cultural representations of artificial intelligence. He argued that public perceptions of technology are significantly shaped by works of popular culture, particularly films and other visual narratives. Stories about artificial intelligence do not merely reflect social fears and hopes; they also shape them. Early films often emphasised the dangers of technology, whereas later narratives increasingly portray technological progress as both inevitable and transformative. According to Gertz, recurring conceptual oppositions—such as human versus machine, freedom versus control, or intelligence versus instinct—serve not only as narrative devices but also structure the way society thinks about technology.

The two-day conference demonstrated that discussions about the future of technology cannot be reduced to purely technical questions. Artificial intelligence and other emerging technologies raise complex philosophical, ethical, legal, social, and cultural challenges. Participants agreed that addressing these issues requires sustained interdisciplinary dialogue. Technological development can only be guided responsibly if technical considerations are examined alongside questions of human dignity, social responsibility, and the common good.

The conference concluded with closing remarks by Pier Paolo Pigozzi, Vice-Rector for International Affairs and Professor of International Law at LUPS. Reflecting on the presentations and discussions from a legal perspective, he emphasised that rapid technological progress raises increasingly complex questions concerning law, regulation, and institutional responsibility. He noted that interdisciplinary dialogues of this kind play a crucial role in deepening our understanding of the societal and legal implications of technological change.