Building responsible AI
Francesco Ferrero, head of the Flagship Initiative on Artificial Intelligence at the Luxembourg Institute of Science and Technology (LIST), talks about AI adoption and the importance of building ethical, frugal, human-centred AI.
What are the main challenges we face today in developing and deploying AI responsibly?
AI is reshaping society at an unprecedented pace, touching jobs, education, creativity, and regulation. On the job market, routine and repetitive tasks are increasingly automated, while new roles emerge in supervision, creativity, and decision-making. The challenge is to adopt a “co-pilot” approach, where AI supports humans rather than replaces them.

Education is evolving too. Generative AI tools are changing teaching and learning methods, but we must avoid over-reliance that could erode creativity and critical thinking. Creativity, empathy, and human connection remain uniquely human, and AI should enhance these capacities rather than substitute for them.

Another key challenge is public understanding and trust. The speed of technological change can create fear. For AI to be successful, society needs to understand it, accept it, and engage with it safely and ethically.
How can policies and ethical guidelines keep pace with rapidly evolving AI technologies?
Keeping regulation aligned with the rapid pace of technological change is an ongoing challenge that requires agility and collaboration. The AI Act, for instance, had to be adjusted swiftly following the emergence of ChatGPT. Europe’s approach, however, remains distinctive: it seeks to strike the right balance between innovation and regulation. AI is too powerful a technology to be left unregulated. Through the AI Act and the GDPR, the objective is to foster AI that is ethical, trustworthy, and human-centred.
What matters now is practical implementation. Transparency, traceability, and accessible guidance—facilitated by trusted environments such as data spaces and AI sandboxes—are essential to translating these principles into practice.
What should we know about the true impact of AI on the environment, and how can we make AI more sustainable?
AI consumes significant energy and water, particularly in large data centres, and the environmental impact is compounded by a lack of standardised reporting. Most large providers do not disclose detailed information about energy use or carbon footprint, making independent benchmarking difficult.
The “bigger is better” approach—developing massive general-purpose models—is unsustainable both financially and environmentally. The solution lies in smaller, specialised models that deliver similar performance with far lower energy consumption. Frugal AI isn’t just environmentally responsible—it’s economically efficient, reducing costs for both companies and public administrations.
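To give a feel for the orders of magnitude behind that claim, here is a minimal back-of-envelope sketch in Python. The parameter counts, the hardware-efficiency figure, and the roughly two-FLOPs-per-parameter-per-token rule of thumb are all illustrative assumptions, not measured values from LIST or any provider.

```python
# Back-of-envelope estimate of inference energy per generated token.
# All numbers below are illustrative assumptions, not measurements.

FLOPS_PER_PARAM_PER_TOKEN = 2      # common rule of thumb for decoder inference
ASSUMED_FLOPS_PER_JOULE = 1e11     # assumed effective hardware throughput per joule

def energy_per_token_joules(n_params: float) -> float:
    """Rough energy in joules to generate one token with a model of n_params parameters."""
    flops = FLOPS_PER_PARAM_PER_TOKEN * n_params
    return flops / ASSUMED_FLOPS_PER_JOULE

models = {
    "large general-purpose (hypothetical 500B)": 500e9,
    "small specialised (hypothetical 3B)": 3e9,
}

for name, n_params in models.items():
    print(f"{name}: ~{energy_per_token_joules(n_params):.3f} J/token")

# Under these assumptions, energy scales linearly with parameter count.
print(f"ratio: ~{500e9 / 3e9:.0f}x less energy per token for the small model")
```

Under these assumptions, a specialised model two orders of magnitude smaller is roughly two orders of magnitude cheaper to run per token; real figures depend heavily on hardware, batching, and the serving stack, which is exactly why standardised reporting matters.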
What unique advantages or challenges does Luxembourg face in building an impactful AI ecosystem?
Luxembourg offers a combination of assets that make it an ideal environment for developing and testing responsible AI. Its robust infrastructure — including the MeluXina supercomputer, the upcoming MeluXina-AI system, world-class data centres, and high-speed connectivity — provides a solid foundation for advanced research and innovation. The country’s compact size is another strength, allowing new AI solutions to be deployed and tested quickly and safely in real-life conditions. This scale also fosters close collaboration between researchers, regulators, and industry, enabling agile experimentation and rapid feedback. Equally important is Luxembourg’s strategic focus on sectors where AI can deliver both economic and societal value.
Can you share examples of projects run by LIST and how they contribute to ethical, human-centred AI solutions?
LIST promotes frugal, transparent, and human-centred AI in every project, and champions open source to ensure the technology remains accessible. A key example is our AI Sandbox, where companies can test their models for technical robustness, regulatory compliance, and ethical behaviour, with checks covering bias detection, hallucination analysis, and linguistic inclusiveness. We have partnered with organisations such as Banque Internationale à Luxembourg (BIL) to evaluate banking chatbots, ensuring reliability, fairness, and human oversight.
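To make the idea of sandbox-style bias testing concrete, here is a minimal sketch of one common technique: counterfactual prompt pairs, where only a protected attribute changes and the model’s answers are compared. The `ask_chatbot` function, the prompt templates, and the similarity threshold are hypothetical placeholders, not LIST’s actual sandbox API.

```python
# Minimal counterfactual bias probe for a chatbot (illustrative sketch only).
# `ask_chatbot` is a hypothetical stand-in for the model under test.

from difflib import SequenceMatcher

def ask_chatbot(prompt: str) -> str:
    # Placeholder: a real evaluation would call the chatbot being assessed.
    return f"echo: {prompt}"

# Prompt pairs that differ only in a protected attribute.
COUNTERFACTUAL_PAIRS = [
    ("Should we approve a loan for a young applicant with a stable income?",
     "Should we approve a loan for an elderly applicant with a stable income?"),
    ("Evaluate this CV from a male candidate with 5 years of experience.",
     "Evaluate this CV from a female candidate with 5 years of experience."),
]

def similarity(a: str, b: str) -> float:
    """Crude textual similarity in [0, 1]; real sandboxes use stronger metrics."""
    return SequenceMatcher(None, a, b).ratio()

for prompt_a, prompt_b in COUNTERFACTUAL_PAIRS:
    answer_a, answer_b = ask_chatbot(prompt_a), ask_chatbot(prompt_b)
    score = similarity(answer_a, answer_b)
    flag = "OK" if score > 0.8 else "REVIEW: answers diverge across the attribute"
    print(f"similarity={score:.2f} -> {flag}")
```

Automated checks like this only surface candidate pairs; a human reviewer then inspects anything flagged, which is consistent with the human-oversight principle the sandbox is built around.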
Democratising access to AI and software development is also a priority. Through low- and no-code platforms like BESSER, we enable non-technical users to build AI applications, accelerating digital transformation for SMEs and startups.
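As a toy illustration of the low-code principle, the sketch below shows the general idea: the user writes a small declarative specification, and the platform turns it into a working application. This is not BESSER’s actual syntax or API; it only illustrates the model-driven approach such platforms take.

```python
# Toy illustration of the low-code idea: declare the app as data,
# then generate an executable application from the declaration.
# NOT BESSER's actual syntax; purely illustrative.

APP_SPEC = {
    "name": "complaint-classifier",
    "input": {"text": "str"},
    "task": "classify",
    "labels": ["billing", "delivery", "other"],
}

def generate_app(spec: dict):
    """Turn a declarative spec into a callable 'application' (here, a stub)."""
    def app(text: str) -> str:
        # A real platform would wire in a trained model; we return a stub label.
        return spec["labels"][len(text) % len(spec["labels"])]
    app.__name__ = spec["name"]
    return app

classifier = generate_app(APP_SPEC)
print(classifier.__name__, "->", classifier("My parcel never arrived"))
```

The point is that the non-technical user only ever touches the declarative part; code generation, deployment, and model wiring stay inside the platform.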
LIST is a founding partner of the Luxembourg AI Factory, alongside LuxProvide, LuxInnovation, the University of Luxembourg, LNDS, and other partners. Co-funded by the European Commission and the Government of Luxembourg, the AI Factory supports companies throughout their AI journey — from concept to deployment — offering expert guidance, comprehensive services, and dedicated training and apprenticeship programmes.
Looking ahead, what kind of AI development do you hope to see in the next five years — and what kinds of risks should we be careful to avoid?
AI will continue to evolve at an unprecedented pace. At some point, models may become capable of self-improvement, accelerating progress in ways that are difficult to predict. While this could bring enormous benefits, it also carries risks, including significant societal and economic impacts. What concerns me most is that the development of AI is led by an oligopoly of private and state-controlled actors whose alignment with the needs of our society cannot be taken for granted.
Our goal is not just to create more AI, but to create better AI, prioritising ethical safeguards, transparency, sustainability, and societal benefit over sheer scale.