Big Data Cogn. Comput. 2025, 9, 62 https://doi.org/10.3390/bdcc9030062
Article
Trustworthy AI for Whom? GenAI Detection Techniques of
Trust Through Decentralized Web3 Ecosystems
Igor Calzada 1,2,3,4,5,6,7,*, Géza Németh 7 and Mohammed Salah Al-Radhi 7
1 Public Policy & Economic History Department, Faculty of Economy and Business, University of the Basque Country, UPV-EHU, Oñati Square 1, 20018 Donostia-San Sebastián, Spain
2 Basque Foundation for Science, Ikerbasque, Plaza Euskadi 5, 48009 Bilbao, Spain
3 Wales Institute of Social and Economic Research and Data (WISERD), School of Social Sciences, Social Science Research Park (Sbarc/Spark), Cardiff University, Maindy Road, Cathays, Cardiff CF24 4HQ, UK
4 Decentralization Research Centre, 545 King St. W, Toronto, ON W5V 1M1, Canada
5 Fulbright Scholar-In-Residence (S-I-R), US-UK Fulbright Commission, Unit 302, 3rd Floor Camelford House, 89 Albert Embankment, London SE1 7TP, UK
6 Astera Institute, 2625 Alcatraz Ave #201, Berkeley, CA 94705, USA
7 Department of Telecommunications and Artificial Intelligence, Budapest University of Technology and Economics, ENFIELD Horizon, BEM, 1117 Budapest, Hungary
* Correspondence: igor.calzada@ehu.eus; Tel.: +34-630-752876
Abstract: As generative AI (GenAI) technologies proliferate, ensuring trust and transparency in digital ecosystems becomes increasingly critical, particularly within democratic frameworks. This article examines decentralized Web3 mechanisms—blockchain, decentralized autonomous organizations (DAOs), and data cooperatives—as foundational tools for enhancing trust in GenAI. These mechanisms are analyzed within the framework of the EU's AI Act and the Draghi Report, focusing on their potential to support content authenticity, community-driven verification, and data sovereignty. Based on a systematic policy analysis, this article proposes a multi-layered framework to mitigate the risks of AI-generated misinformation. Specifically, it identifies and evaluates seven detection techniques of trust stemming from action research conducted in the Horizon Europe Lighthouse project ENFIELD: (i) federated learning for decentralized AI detection, (ii) blockchain-based provenance tracking, (iii) zero-knowledge proofs for content authentication, (iv) DAOs for crowdsourced verification, (v) AI-powered digital watermarking, (vi) explainable AI (XAI) for content detection, and (vii) privacy-preserving machine learning (PPML). By leveraging these approaches, the framework strengthens AI governance through peer-to-peer (P2P) structures while addressing the socio-political challenges of AI-driven misinformation. Ultimately, this research contributes to the development of resilient democratic systems in an era of increasing technopolitical polarization.
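To make technique (ii) concrete, blockchain-based provenance tracking can be illustrated as an append-only hash chain that fingerprints content at creation time and lets third parties later check whether a given artifact was registered. The sketch below is a minimal, self-contained illustration of the general idea only; the `ProvenanceLedger` class, its field names, and the in-memory chain are hypothetical simplifications, not the ENFIELD implementation or any production blockchain:

```python
import hashlib
import json
import time


def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of raw bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()


class ProvenanceLedger:
    """Toy append-only hash chain for content provenance (illustrative only)."""

    def __init__(self):
        self.chain = []  # list of provenance records, oldest first

    def register(self, content: bytes, creator: str) -> dict:
        """Record a content fingerprint, chained to the previous record."""
        prev_hash = self.chain[-1]["block_hash"] if self.chain else "0" * 64
        record = {
            "content_hash": sha256_hex(content),
            "creator": creator,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # The block hash covers the whole record, so any later tampering
        # with an earlier entry breaks every subsequent link.
        record["block_hash"] = sha256_hex(
            json.dumps(record, sort_keys=True).encode()
        )
        self.chain.append(record)
        return record

    def verify(self, content: bytes) -> bool:
        """Check whether this exact content was ever registered."""
        h = sha256_hex(content)
        return any(r["content_hash"] == h for r in self.chain)
```

A verifier holding only the ledger can confirm that a media file matches a registered fingerprint, while any single-bit alteration of the content yields a different hash and fails verification; in a real deployment the chain would be replicated across untrusted peers rather than held in one process.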
Keywords: generative AI; decentralization; Web3; trustworthy AI; blockchain; DAOs;
data cooperatives; big data; detection techniques; democracy
1. Introduction: Trustworthy AI for Whom?
The rise of generative artificial intelligence (GenAI) has introduced transformative
tools capable of generating complex, human-like content in text, imagery, and sound [1,2].
While these technologies hold vast potential for innovation across industries, they also
pose significant risks related to trust, authenticity, and accountability. As the European
Academic Editor: Domenico Ursino
Received: 10 January 2025
Revised: 17 February 2025
Accepted: 1 March 2025
Published: 6 March 2025
Citation: Calzada, I.; Németh, G.; Al-Radhi, M.S. Trustworthy AI for Whom? GenAI Detection Techniques of Trust Through Decentralized Web3 Ecosystems. Big Data Cogn. Comput. 2025, 9, 62. https://doi.org/10.3390/bdcc9030062
Copyright: © 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).