
October 2023

SEFI Ethics Special Interest Group Newsletter

Generative AI. Generating morality.

Generating Engineering Ethics Education.

Dear reader,



Fall 2023. What an exciting time to be in engineering education!


We have all been preaching our entire careers, be they months or decades, that engineering students should be scaffolded to develop into more responsible engineers. We, in ethics, took our time on the comfortable sidelines, offering intelligent ethical comments on NASA’s Challenger engineers, Volkswagen’s software developers, or our local ecosystem partners. And suddenly we find ourselves in the middle of a global debate on generative AI and its widespread effects, which also impact our daily lives in engineering ethics education.


It seems that nobody in the entire world really knows what to do. This newsletter, including this introduction, captures that state of the art: exploratory and cautious philosophical analyses of how ambiguity, global justice, virtue theories, integrity, narratives, plausible nonsense, and engineering responsibility can add to the debate. Some contributions go a step further and already reflect on the educational impact, for example on assessment.



But the debate is still very open! How can we, as engineering ethics educators and researchers, and this time from within, co-generate morality and influence innovations for the better? How can we influence our universities and their ecosystems to be socially responsible? How will we generate and redesign engineering ethics education so that the super-fast-developing technology of generative AI becomes a constructive tool for students rather than a threat to their learning important things, such as being critical and designing constructive values into these very AI innovations? And increasingly, student involvement will not be the exotic choice of a few enthusiasts, but a necessary part of the process: involving students in how they see learning and how they use their technical skills to respond to an ever-changing situation.



Fall 2023. Exciting times indeed! We hope that the following contributions might stimulate you to contribute to the global challenge of generative AI in engineering ethics education.


  • Andrew Katz (Virginia Tech, USA), in Generative AI and the role of uncertainty in classroom assessment, argues that faculty members must consider what they aim to achieve through assessment. Engineering ethics education teachers and researchers should anticipate a non-trivial impact, especially as the models transition toward multimodality.

  • Vlasta Sikimić (TU Eindhoven, The Netherlands) explains in AI and education for global justice that AI can help disadvantaged populations with access to information and tools, but that company monopolies and moral and epistemic values have to be closely monitored. This can be done by revising existing standards, using human-in-the-loop approaches, and constantly keeping in mind the needs of the users: students and teachers.

  • Constantin Vică (University of Bucharest, Romania), in The choreography of virtues for living with AI, argues that even if codifiable principles of responsibility are possible, AI systems cannot be made accountable, and will not be anytime soon, no matter how efficient deep learning methods become. He sees hope in the prospect that “the future lords of the AI rings should undergo moral education during their university studies.”

  • In Benchmarking AI tools and assessing integrity, Sasha Nikolic (University of Wollongong, Australia) and Scott Daniel (University of Technology Sydney, Australia) provide an extended list of guiding questions for further research, aimed at avoiding unintended consequences.




  • Mihály Héder (Budapest University of Technology and Economics, Hungary) gives a concrete example of how the above can be realised in The Human-Centered AI Masters programme. He describes how he and his colleagues designed a curriculum that incorporates the humanities, especially ethics and law, and the social sciences into the technical education of Artificial Intelligence.







To conclude, we want to draw attention to this call for contributions exploring the ethical implications of AI Hype, including the overinflation and misrepresentation of AI capabilities and performance.



Thank you once again for thinking on these topics alongside us!



Gunter Bombaerts & Diana Martin


SEFI aisbl

39, rue des Deux Eglises
1000 Brussels

[email protected]

+32 2 502 36 09
www.sefi.be

Data protection policy for our Members | Stakeholder Contacts | Suppliers | Employees

If you want to unsubscribe, click here.