How can instructors successfully create and teach courses about generative AI in their discipline? Laura Albert, a professor of industrial & systems engineering, tackled that challenge in spring 2025 with a “topics” course, ISYE603 AI and Systems. Albert identified four learning themes for the course: understanding basic principles, gaining hands-on experience with AI tools, considering what responsible use of AI looks like, and applying AI to real-world problems in areas like smart infrastructure, manufacturing, transportation, and healthcare. She spoke with CTLM recently about this experience.

What led you to take on creating an entirely new course about AI?
I became interested in AI not so much from my research but mostly from professional service and leadership – seeing where the discipline (of industrial engineering) was going and where it needed to go. Part of this was government advocacy work that I was doing for my discipline – I was meeting with Senate and House staffers and members of the White House Office of Science & Technology Policy about AI, and I kept coming back to the same theme: Industrial engineers are the people who know how to deploy AI at scale in complex systems and do so responsibly. And that became the one-sentence description of the course.
Creating a course in such a rapidly evolving area could be pretty intimidating. What was most helpful to you in identifying the learning themes and deciding what topics to cover?
It was completely daunting. A couple of things really helped. One was talking with other leaders in my field about how we could make contributions with AI. For example, engineers study safety-critical systems. This is not new to engineers, but what might be new is thinking about that in the context of AI. The other thing that was really helpful was giving a talk for a campus series on AI. That forced me to pull ideas into an hour-long talk with several distinct themes and a story with a beginning, middle, and end. That became my starting point for the syllabus, the topics we were going to cover, and the examples that I used in the course.
Learn more about Dr. Albert’s course, including links to examples and resources, on her blog, Punk Rock Operations Research.
In the first week, you asked your students, “What would you like to learn in this course?” Why did you do that and how did you act on their feedback?
I ask students for input about their goals on the first day of class in other courses I teach. This invites the students to be collaborative partners in shaping the course. And to get students interested in a topic, you really have to make a course relevant to them. I had prepared a lot of examples for the course, and I also wanted to include other applications along the way. Students wanted to hear about manufacturing and supply chains. As I was polishing the lectures, I would search for examples in those areas to tailor the course to their interests.
The course had an interesting mix of content and activities – you brought in experts on AI and medicine and on autonomous driving to give guest lectures. You had some in-class active learning activities. There was a self-directed project. Out of all these things, what is one that you thought worked especially well?
I especially enjoyed the in-class learning activities. They’re hard to design because you have to connect the dots across the concepts in this course, and that was challenging, but when it worked well, students really got it. For example, we had a hands-on activity about the various ways you might evaluate a machine learning algorithm. We learned about metrics such as false positives, false negatives, and area under the curve. We mapped those onto a business objective and a real application related to marketing, to make a strong connection between machine learning and the real problem. Then, we found the sweet spot that balanced the errors with the marketing costs and potential revenue. Students thought it was really cool to see machine learning transformed into a direct lever for delivering organizational value.
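The connection Albert describes – from abstract error metrics to a business decision – can be made concrete in a few lines of code. The sketch below is an editorial illustration, not material from the course: it assumes hypothetical per-contact costs, per-sale revenue, and classifier scores, and sweeps a decision threshold to find the profit-maximizing sweet spot between false positives (wasted contacts) and false negatives (missed sales).

```python
# A minimal sketch of the kind of threshold-tuning exercise described
# above. All numbers are hypothetical: scores stand in for a classifier's
# predicted probabilities that a customer responds to a marketing offer.

COST_PER_CONTACT = 5.00   # assumed cost of marketing to one customer
REVENUE_PER_SALE = 12.00  # assumed revenue when a contacted customer buys

# (score, actually_responded) pairs a trained model might produce.
predictions = [
    (0.95, True), (0.90, True), (0.85, False), (0.80, True),
    (0.70, False), (0.60, True), (0.55, False), (0.40, False),
    (0.30, True), (0.20, False), (0.10, False), (0.05, False),
]

def profit_at(threshold):
    """Profit if we contact everyone scored at or above the threshold.

    A false positive wastes the contact cost; a false negative forgoes
    revenue, which shows up here as profit we simply never earn.
    """
    contacted = [(s, y) for s, y in predictions if s >= threshold]
    revenue = REVENUE_PER_SALE * sum(1 for _, y in contacted if y)
    cost = COST_PER_CONTACT * len(contacted)
    return revenue - cost

# Sweep candidate thresholds and pick the "sweet spot".
thresholds = sorted({s for s, _ in predictions})
for t in thresholds:
    print(f"threshold {t:.2f}: profit ${profit_at(t):.2f}")
best = max(thresholds, key=profit_at)
print(f"best threshold: {best:.2f} (profit ${profit_at(best):.2f})")
```

The point of an exercise like this is the framing: once each error type carries a dollar value, choosing a classification threshold stops being a purely statistical question and becomes an operational one.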
There’s a lot of concern about AI enabling academic misconduct and hurting students’ learning. You intentionally integrated AI into the course – not just for students, but in your own work – and you were very transparent about your use. You also coached students on appropriate ways to use AI and on how to cite it. How did students respond to your transparency, and how did your efforts to encourage appropriate use and citation play out?
The culture is changing really fast and professors are playing catch-up – it’s been a real challenge. I was fairly transparent about my use of AI with my students. For this course, I challenged myself to use AI for almost everything, even tasks I did not want to use AI for. I tried to mock up some lecture notes with AI, but I found the results to be terrible. The students appreciated it when I shared my experiences with them, because it transformed me from a gatekeeper into a guide.
I had a policy of asking students to cite their AI use, but it wasn’t working. I included several writing exercises in the class, and I could tell that students were using AI but were not citing their use. I then approached AI use from a different perspective. Instead of framing it in terms of maintaining academic integrity, I framed it as contributing to a culture of responsible AI use as professionals. Appropriate AI use is just as important in the workplace as it is in the classroom. For example, professionals might not be allowed to enter their data into just any generative AI tool for intellectual property reasons – that would be a breach of their professional responsibility.
I revived the requirement for citing AI usage from this new perspective, and I provided specific examples of how to cite AI use in their projects. I asked the students to summarize their generative AI use in a single sentence explaining how they used AI and how they took responsibility for that use, such as checking AI-generated code and verifying references.
I had anticipated reading the AI statements at the bottom of all their project reports. But during the in-class presentations, the students included a slide summarizing their AI use. They were open with each other and transparent about it. It was better than I had planned – they were proud to talk about how they had used AI responsibly, and I was impressed with their professionalism.
What advice do you have for instructors who are thinking about developing an AI-related course in their discipline?
It moves fast! A downside is that I may have to remake a substantial part of the course each time I teach it. I have a file open on my computer all the time to save ideas for the next semester. Despite the effort, this keeps me engaged in teaching.