New Publication out!

It’s out: the Proceedings of the 22nd International Conference on the Ethical and Social Impacts of ICT (ETHICOMP2025), which will be held this week (September 17–19) at the Universidade Autónoma de Lisboa.

I’m glad to be part of it with an open-access paper that questions data governance from a philosophical standpoint: “On the Current (Im)possibility of Achieving Public Value Through the EU’s Digital Strategy: An Ethics Method to Seek a ‘Collectual’ Equilibrium”.

This paper has two aims, one theoretical and one practical: 1) to highlight the criticalities of, and the ultimate impossibility of, achieving public value (singular) by/through digital technologies under the current regulatory framework of the European Union; 2) to redress such criticalities by advancing a complementary transdisciplinary approach, realized through a problem-opening method that has been tested in an educational setting at Delft University of Technology.

We repeatedly hear that the digital transformation, especially within/through the public sector, is meant to benefit the collectivity at large. In this respect, various documents within the EU’s digital strategy declare the intent to pursue a “people-centric” approach to the digital transformation. But what does this mean? Although quite fuzzy in its conceptualization, “people-centric” entails an exercise of balance between, on the one hand, fundamental human rights – enshrined in the European Charter of Fundamental Rights – and, on the other hand, collective values, such as social inclusion, democratic participation, and environmental sustainability.

The tension between individual and collective standpoints is reflected in the Declaration on Digital Rights and Principles [3], which defends “a European way for the digital transition, putting people at the center.” Specifically, the Declaration pins down six principles: 1) preserve people’s rights; 2) support solidarity and inclusion; 3) ensure freedom of choice; 4) foster democratic participation; 5) increase safety, security, and empowerment of individuals; and 6) promote sustainability. Notably, these principles split evenly: one half (1, 3, 5) focuses on the individual, while the other half (2, 4, 6) pertains to society as a whole. Hence, the Declaration does identify the need to strike a balance between individual and collective standpoints.

However, the literature has pointed out that what is being enacted in the EU is, de facto, an approach based solely on human rights: one that, while protecting individual freedoms, autonomy, and privacy, tends to overlook the collective-level dimension and effects of the digital transformation, especially its potential societal harms. Hence, the need to strike a “collectual” (collective + individual) balance in fostering the digital transformation currently remains unmet from a regulatory normative point of view. To tackle this, it is necessary to complement the EU’s current approach with one that a) recognizes the systemic sociotechnical complexity of the scenario we are enmeshed in, based on the idea that public value is non-normative stricto sensu; and b) leads to the development of critical competences to be embedded in iterative processes of assessment of such a scenario, if we are to govern it fairly.

On the one hand, theoretical work on public value has shown that the concept extends well beyond the public sector. This means that public value cannot be reduced to an institutional and legal understanding only, but rather demands recognition of the full roundedness and contradictory nature of human experience as a relational intersection between subjective evaluation and the societal collective. Relatedly, work in behavioral ethics has highlighted the multifaceted nature of people’s decision-making processes whenever they are confronted with moral dilemmas. This means not only that most individual actions fall in a grey area that resists clear-cut discernment in terms of “good vs. bad”, but also that individuals are inconsistent agents when it comes to understanding what they value, insofar as the same subject might well endorse mutually exclusive values at once. Hence, a regulatory normative focus on the individual in relation to the digital transformation is insufficient and must be accompanied by a real-world exploration of how people use digital technologies, why, and with which consequences at the collective level.

On the other hand, as works in philosophy of technology have made clear, technology is a pharmakon – both poison and antidote – or, as Kranzberg’s first law puts it, “Technology is neither good nor bad; nor is it neutral”. At all times, technology can have both positive and negative (un)intended consequences whenever it is applied in context, insofar as its uses generate value-laden entanglements that cannot be taken apart. This implies, in other words, a fractal understanding of public value as irreducible to essential categorizations and demanding, instead, ongoing negotiation and hierarchization of its various (collectual) facets and downsides. There cannot be, for instance, openness (e.g., of data) and transparency without acknowledging and tackling inevitable closures and opacities. Similarly, a digital service designed to promote inclusive participation might achieve that for certain people and not for others; or it might be inclusive for certain people under certain conditions, but then prove exclusionary for those same people on later occasions.

To operationalize a fractal understanding of public value, what is needed is a transdisciplinary approach that cuts across epistemological boundaries, resists any privileged point of reference, and configures an ongoing multidimensional analysis – or, as I call it, an uncertainty-based “problem-opening” method. Such a method is meant to foster an open-ended explorative exercise about the non-neutral impact of digital technologies in any given context. As such, this exercise can be regarded as complementary to the regulatory normative framework of the EU. The paper provides an example of this exercise by discussing its application and results in a master’s course on “Data Ethics for the City” taught over the last three years at TU Delft. Notably, the course was based on three pillars – a sociotechnical understanding of data, the city as a complex system, and ethics as a non-normative, non-axiomatic practice – and led students to 1) identify a case study about the implementation and use of a given digital technology in an urban setting; 2) unpack the value-laden entanglements and un/intended consequences of such implementation and use, through literature review and data collection; 3) expose and/or redress such entanglements and consequences through the design of a digital or physical artefact (to be exhibited).
