Q&A with José Antonio Bowen, co-author of “Teaching with AI”

Across the country, college instructors are grappling with how to address generative artificial intelligence (AI). AI is rapidly changing many fields and creating new expectations for the skills students will need upon graduation. At the same time, it raises many concerns, including bias, academic misconduct, and inaccuracy.

José Antonio Bowen, a nationally recognized expert and co-author of “Teaching with AI: A Practical Guide to a New Era of Human Learning,” will speak to UW–Madison instructors on Oct. 1 in a webinar sponsored by the Center for Teaching, Learning & Mentoring (CTLM). Bowen chatted recently with CTLM. This conversation has been edited (by a human).

Photo of José Antonio Bowen

What “getting started” advice do you have for those who teach college?

The first thing to do is to try a couple of the really good models, like Claude 3.5 or GPT-4o, both of which are free at the moment. (Editor’s note: Microsoft Copilot, the university-licensed AI tool, is also free and protects user privacy.) Try it for something that you actually need done: help me with the department schedule, or design an activity for this class so I don’t have to talk the whole time.

The other thing is, try having it do something and then talk to students about it. Ask it to do one of your assignments, then show the result to students and say, “What would happen if you had AI do your assignments? What’s the benefit to you? What’s the problem? How are you using AI?” The truth is, students are more eager and more accepting than faculty, but they’re not all-in on AI. They’re nervous about this, too. They’re very worried about their future jobs. And so step one is, get some experience. And step two is, talk to students about it.

It is like email and the Internet, right? There were some benefits to the Internet and to email, but there were also some terrible things that happened. But ultimately none of us individually can say, “Let’s eliminate email, let’s eliminate the Internet.” AI is a faster-growing technology than either of those. But we have adjusted to previous technologies; this is just happening much, much faster.

I do encourage faculty to get involved, because faculty are the people best able to use the technology and the ones who need to be asking the questions. If you’re not involved, somebody else will make these decisions. We need to have some skin in the game.

You argue that it’s important for colleges to teach students AI literacy. What are the essential components of that literacy?

AI literacy broadly falls into two categories. The first is before you use it and the second is after you get a response. Before you use AI, you’ve got to understand, which model do I need? I need to understand a little bit about how prompts work. Have I asked for a task? Have I given it a voice? What are the things I could do to get a better response? Do I know what bias and hallucination are and how they’re likely to interfere with the answer I want?

Once you get the response, students have got to understand, do I want to use this? Is it the right answer? What are the ethics of doing this? Do I have to cite this?

There are a lot of questions that students are going to need to think about both before and during the use of AI. And they need that because the workplace has already changed. A faculty member told me a few weeks ago that she wrote six job recommendations for students at different companies. All six companies asked the question, “Can this student use AI to do their work better and faster?”

What are you most concerned about in terms of harm related to educational uses of AI? And are there things instructors can do to mitigate those risks?

The biggest harm is that students are just not going to learn. That means motivation has become more important: helping students see why they have to do this.

Cover of “Teaching with AI: A Practical Guide to a New Era of Human Learning” by José Antonio Bowen and C. Edward Watson

This is analogous to the calculator – the calculator can do addition and subtraction, but we still teach addition and subtraction because it’s not enough to have the calculator do it for you all the time. The calculator can certainly help you, right? But we teach number sense for situations like, here are 50 numbers that you need to add up. Everybody’s going to use a calculator to do that. But anybody who’s good with numbers is going to look at the answer and say, “That’s about right” or, “That’s off by a couple of zeros – maybe I entered a number into the calculator wrong. I don’t trust that answer because it doesn’t look right.” That’s still really valuable even though very few people are going to need to make change with nickels and dimes anymore. That’s changed the skill that we need, but there’s still a skill.

It makes it more important for me as a teacher to explain to students why they need to learn how to add. Motivation is important, right? It’s why, when we go to the gym, fitness coaches say things like, “No pain, no gain” or “This should be challenging.” They’re trying to encourage us to do the work that only we can do.

I think it really starts with, why am I having students do this? Why is it important? And articulate that to the students.

Register for the Oct. 1 webinar “Teaching and Thinking with AI featuring José Antonio Bowen,” part of CTLM’s Exploring AI in Teaching series. And check out CTLM’s guides to incorporating AI into teaching.