Research Article Volume 7 Issue 3
1Italian Institute of Technology, Italy
2Department of Civil, Chemical and Environmental Engineering, University of Genoa, Italy
Correspondence: Cinzia Leone, Italian Institute of Technology, Via Morego, 30, Genova, Italy
Received: October 10, 2025 | Published: October 23, 2025
Citation: Leone C, Taramasso AC. From inclusion to innovation: reimagining AI development through diverse knowledge systems. Art Human Open Acc J. 2025;7(3):125‒128. DOI: 10.15406/ahoaj.2025.07.00263
This paper focuses on inclusion, questioning the status quo of knowledge production and emphasising the potential for diverse contributions to AI development. When AI collects data about humans, it acts as a mirror, reflecting the stereotypes and inequalities present in society. Scientists and scholars argue that these biases often arise from a lack of diversity among those developing AI systems and tools, as well as from biases embedded within science and culture. AI teams and developers often have gaps related to their technical training, areas of expertise, and cultural and social backgrounds, which are inevitably reflected in the tools they create. This paper argues that the voices and perspectives of people who speak different social languages and who think and act in ways that diverge from the status quo must be included. It presents a case study based on experiences from an EU-funded project. This approach moves AI development beyond the current cycle of knowledge production, enabling new developers and professionals to adapt tools to the needs and experiences of their diverse communities through collaboration across disciplines, cultures, scientific fields, and related approaches. The thesis is that fostering a deeper and more substantive dialogue between the social sciences – particularly sociology, education, and psychology – and STEM is essential for developing a science that is inclusive from the outset and at the design stage.
Keywords: AI, equality diversity inclusion (EDI), bias, interdisciplinarity, knowledge
AI, Artificial Intelligence; EC, European Commission; EDI, Equality Diversity and Inclusion; EU, European Union; SSH, Social Sciences and Humanities; STEM, Science, Technology, Engineering, Mathematics
Bias in the development of artificial intelligence models, particularly in generative AI, is a relatively new topic. However, it has recently attracted numerous and diverse contributions from the scientific community. It is increasingly evident that the various branches within the social sciences and humanities (SSH), such as sociology, anthropology, law, pedagogy, and others, which have traditionally studied issues related to equality, diversity, and inclusion (EDI), remain too disconnected from the domains of those who design and develop AI. At the same time, AI teams and developers often have gaps related to their technical training, area of expertise, and cultural and social background, which are inevitably reflected in the tools they create. As a result, it has not yet been possible to broaden or deepen understanding, or to foster interaction, by applying these still-separate disciplines to new contexts.1 This is one of the motivations for this paper, which seeks to address a research need identified through participation in a European-funded research and innovation project dedicated to promoting interdisciplinary cooperation between SSH and STEM subjects.2,3
In this paper, we argue that a perspective incorporating EDI and potential intersectional discrimination has not been evaluated or monitored for an extended period in the development of AI or digital innovation, while holistic approaches have often failed to provide a foundation for productive interdisciplinary dialogue. Researchers frequently remain confined to their respective fields, and within SSH, primarily to sociology or gender and feminist studies, thereby limiting the scope of research, which has long focused mainly on gender issues. Even in Europe, the predominant approach to equality or inequality has been largely associated with gender equality,4 although this has also become a topic of widespread interest in recent decades, with significant but intermittent and discontinuous progress at the European level.5
AI is one of the most transformative drivers of contemporary technological development. However, as with previous waves of innovation, its long-term success depends not only on technical advancement but also on the establishment of appropriate frameworks for governance, ethical rationalisation, and equitable diffusion. When AI systems process and learn from data linked to human activity, they often reproduce – and in some cases amplify – the structural inequalities and cultural stereotypes embedded in those datasets.6,7
A growing body of research suggests that these distortions are not merely technical flaws but result from deeper socio-structural imbalances in the development process itself.8,9 The limited diversity within AI design teams – composed of actors from similar socio-cultural, geographic, and disciplinary contexts – tends to encode particular worldviews and epistemic assumptions into the systems they create. These issues are compounded by entrenched hierarchies within scientific and technological cultures, where certain forms of knowledge, methodologies, and perspectives continue to be privileged over others.10,11 The development of AI systems is often influenced by the perspectives and assumptions of their designers – mainly developers and institutions based in Western countries, the United States, or China – resulting in datasets and models that privilege these worldviews and reproduce their implicit biases. Consequently, many AI systems rely excessively on datasets that favour Western perspectives and stereotypical assessments of what constitutes valid or reliable information. AI thus risks reproducing this bias by favouring the views and agendas of the powerful, marginalising holistic and interdisciplinary knowledge and different cultural perspectives.12,13
Teams working in the field of AI often have cultural gaps that are inevitably reflected in the AI tools they produce. In this contribution, we argue that the solution is to reverse this dynamic and shape AI from within: to include the voices and characteristics of people who speak different social languages and who think and act in ways that diverge from the status quo.
This means taking the development of AI out of the current knowledge production cycle and enabling new developers and engineers to adapt tools to the needs and experiences of their diverse communities through the collaboration of different disciplines, cultures, scientific fields, and approaches.
After extensive consultations, the European Commission (EC) has made significant progress in establishing ethical guidelines for AI researchers, with AI ethics becoming a central focus of various projects and initiatives. The introduction of the Ethics Guidelines for Trustworthy AI by the High-Level Expert Group in 2019 marked a pivotal step in this direction. The guidelines defined trustworthy AI as lawful, as it complies with the law; ethical, as it respects ethical values and principles; and robust, both technically and in consideration of the social environment in which it operates. Recently, the European Union (EU) Parliament adopted the AI Act14 to introduce a comprehensive framework of safeguards, limitations, and prohibitions, along with penalties for non-compliance.
Below, we briefly present some ideas that emerged from the research and innovation activities of a specific European-funded project to refute the thesis that precise regulation alone can eliminate the risk of bias in AI. We argue that substantial work must be undertaken upstream with those who produce knowledge to bring about the cultural change necessary to achieve outcomes that are less biased and less influenced by discriminatory aspects that persist today and are addressed by the AI Act.
As far as we can ascertain to date, there is little concrete practical advice on how to ensure that diversity and inclusion considerations are embedded in both specific AI systems and the wider global AI ecosystem.15 In particular, we refer to the phase preceding the creation and placing on the market – regardless of purpose or use – of the AI tool, which by the time it reaches the market has already been developed, tested, and implemented.
At the time of writing, a quick keyword search of CORDIS (Community Research and Development Information Service, the European Commission’s primary public repository and portal for EU-funded research and innovation projects, providing open access to project descriptions, results, reports, and deliverables across all framework programmes) shows that recent EU research initiatives and projects address AI, ethics, and EDI topics, but almost always separately. For example, some have conducted comprehensive analyses and produced guidelines on AI and robotics, covering ethical, legal, and human rights aspects, as well as public attitudes and practical frameworks for their development and use. Others have focused on ethics by design, producing guidelines for the ethical development and use of AI and big data systems. Several significant EU-funded projects have achieved notable results but have remained limited to recommendations or guidelines, largely confining themselves to observing what already exists – that is, what has already been produced, placed on the market, and used. Many AI guidelines are generic and require practical application in specific AI research areas. Many AI scientists and researchers also note that excessive regulatory restrictions could halt development and innovation in the European AI sector, while enabling competitors in regions with fewer or no such constraints to advance more rapidly.16,17 Legislative restrictions, though understandable in some contexts, may also hinder AI's potential contribution to research, for example in biometric identification systems or data collection for socially sensitive research.
Moreover, AI's reliance on historical data poses challenges due to inherent biases,18 particularly regarding gender equality and other EDI approaches.15,19 Addressing these biases is crucial for the ethical and fair application of AI.
The European STEP project (2022–2025; references in the Acknowledgement section below)20,21 – whose initial findings we present here – has experimented with an approach that diverges from those described above and attempts to address the problem before it arises. The aim of the project was to promote intensive dialogue – towards better scientific production – between disciplines that are currently distant from one another, namely STEM and SSH subjects.
We have observed that educational paths remain distant and divergent where sociology should interface with AI developers, and pedagogy with those who design the language of tools, and so on.
We collected feedback from participants in summer schools, seminars, and project workshops. The vast majority of responses highlighted the scientific and educational value of incorporating EDI issues into STEM topics. At least 250 satisfaction questionnaires were distributed during and after the events organised by the project, all of which focused on how to integrate EDI approaches and aspects into STEM-related subjects.
During the project, EDI became one of the criteria for evaluating the departments of various institutions involved. For national accreditation purposes, the project was rated as excellent and contributed significantly to the success of the application in one case.
All those who participated in the project activities in various capacities – more than 2,500 people – agreed that, while the linear approaches of 20th-century social sciences provide insights, it is now necessary to recognise the complex interplay of numerous interacting variables.22 It is essential to consider the many variables and all social components from the initial product design stage, rather than only after AI has been developed, marketed, and used.
This brief contribution, to be followed by further scientific publications analysing the results of the activities described above using specific methodologies, aims to support the necessary commitment to addressing the multiple dimensions of AI research in collaboration with researchers from the social sciences and humanities. In this context, diversity and inclusion in AI systems and their development promote a humanistic approach to product development. This interplay of collective diversity and interdisciplinary capability can have a significant impact and usher in a new era of AI research and of linked knowledge production, promoting enhanced dialogue among disciplines and areas of expertise, and extending the discussion to researchers, scientists, and professionals from different geographical areas and regulatory environments, who can contribute perspectives shaped by the constraints of their respective countries.
The objective of the activity described in this paper was to address significant gaps in the current scientific and technological landscape regarding the integration of EDI into fields of science and technology that are traditionally considered removed from such concerns.23,24 Although there is increasing attention to ethics and inclusiveness in research, many scientific disciplines continue to adopt narrowly technical or discipline-specific perspectives that overlook the influence of social dimensions on technological outcomes.25 The initiative discussed here aimed to bridge this gap by introducing an EDI-sensitive perspective into areas where innovation is often driven by technological efficiency rather than human-centred values.
Below, we summarise the main aspects that demonstrate the innovative and forward-looking nature of this project:
The transformative potential of this work exemplifies a holistic approach to STEM – particularly AI – by linking technological innovation with social, economic, and educational advancement. This strategy transcends traditional disciplinary silos, creating synergies that amplify the positive societal impacts of scientific progress. In doing so, it helps redefine excellence in research and innovation as a multidimensional construct encompassing both scientific rigour and social relevance.
Spanning multiple countries, disciplines, and types of institutions, the consortium represented genuine interdisciplinarity and demonstrated how diverse cultures can engage in constructive dialogue. This collaborative model facilitated the cross-fertilisation of ideas and set a precedent for further interdisciplinary and cross-sectorial networks to benefit from the project’s implementation and results. Moreover, it demonstrates how diversity within research teams can be leveraged as a source of creativity and scientific advancement, shaping knowledge production processes that are respectful of EDI from the outset.
Greater education on addressing issues related to ethics and inclusion in universities and individual university courses would ensure that the innovative products of future researchers are not affected by the sanctions contained in the AI Act, as they would already be created with the appropriate components, ethical considerations, and inclusive approaches. This reflects a simple principle: if the initial design process is free from bias, inclusive, and comprehensive, then the result will be as well.
The authors of this contribution have experimentally introduced EDI aspects into civil engineering courses and have achieved considerable success.26 The same has occurred in several courses taught to PhD students in robotics engineering at their university.
Our experience – gained from both the European project in question and other European-funded projects, as well as from introducing EDI topics into STEM course syllabi – leads us to conclude that these are not radical changes requiring significant bureaucratic, administrative, or policy effort, but rather the simple inclusion of holistic perspectives that also contribute to the training of future researchers. This prepares them to better face the technological and non-technological challenges of contemporary societies in which they will apply their knowledge and innovative drive.
In our opinion, and for all the above reasons, the conclusion is that we must focus on the younger generations and increasingly seize the opportunities offered by European projects to achieve a critical mass that produces inclusive knowledge at a global level in a culturally informed and scientifically supported manner.
This work was partly funded by the Horizon Europe Project STEP - STEM and Equality, Diversity and Inclusion: an Open Dialogue for Research Enhancement in Portugal under Grant Agreement No. 101078933.
The authors declare that there is no conflict of interest.
©2025 Leone, et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon your work non-commercially.