Instructor
Laura Grossenbacher, Ph.D., Director, Technical Communication Program, College of Engineering
Course
Engineering Professional Development 712 – Professional Ethics, taken by master’s degree students in STEM.
Assignment
A team of students researches a discipline-specific ethical problem in their field: mechanical engineering students, for example, study the Boeing MAX 8 debacle. They must provide more than just background on the case; they must research, discuss, and generate a variety of options the engineers involved might have considered as practical responses to the ethical crisis as it unfolded.
For their analysis, the teams are asked to consult a Professional Engineer’s Code of Ethics and an Ethical Decision-Making System that I provide to them, with multiple “ethics tests” that can be applied to each option to help gauge its acceptability.
Before generative AI came along, this sort of analysis gave engineering teams a chance to deliberate over their different options and attempt to reach consensus.
Now, students begin by going through ethical deliberation and documenting their decision-making, including the specific strategies they might take to get “the most ethical option” done in the real world. Then I ask them to run the case, Ethical Decision-Making System, Professional Code of Ethics, and their strategies through Google Gemini and Microsoft Copilot, with a prompt asking which of the options appears to be most ethical. Their presentation must grapple with the AI output: Do the AI tools agree? Are the AI conclusions defensible? Did AI miss anything?
Results
Incorporating generative AI has added a deeper layer to the ethical analysis: most teams see something lacking in the AI responses, yet they sometimes struggle to explain why those responses are inadequate.
My follow-up questions usually include, “Would you want other engineers to rely on AI for ethical thinking? Why or why not?” Many of my grad students end up saying no, at least not entirely, because the AI responses do not anticipate unpredictable behaviors very well and do not handle uncertainty as well as human beings can.
Students often point out, however, that using an AI tool to check your analysis (after having an open discussion with your team) can give you the courage to speak up about a problem. And as AI tools continue to improve, they may become a natural addition to problem-solving for complex ethical challenges.
Contact
Dr. Grossenbacher would appreciate hearing from anyone who tries a similar assignment – she is looking for potential collaborators on related research. Email her: lrgrossenbac@wisc.edu.