What is generative AI?
Generative AI is a subset of artificial intelligence whose roots date back to the early 1950s. Modern generative tools, including large language models (LLMs), can create text, images, and even music. Generative AI relies on patterns and structures from existing data to create new content. A large language model is trained on billions of pieces of text and can improve over time as new information is added.
Generative AI text generators (some of which are available only by subscription) include ChatGPT, Microsoft Copilot (free and campus-supported), Claude, Gemini (formerly Google Bard), and Poe. Image generators include Adobe Firefly, DALL-E, DreamStudio, and Midjourney.
These technologies are being rapidly adopted by a wide range of industries and disciplines and are expected to have a transformative impact on individuals both personally and professionally.
Learn more
7 Things You Should Know About Generative AI (EDUCAUSE Review)
How is generative AI changing what students need to learn?
Our students need knowledge, skills, and experiences to thrive in a world powered by generative AI. We must be prepared to help students:
- gain literacy in AI tools and learn to use them fluently, creatively, and ethically;
- develop core competencies in conjunction with AI competencies (e.g., critical thinking, creativity, communication, citizenship, cultural sensitivity, and ethics);
- build capacity to live and work in tandem with evolving versions of AI.
How might AI be useful in teaching and learning?
Generative AI can be a powerful tool in teaching and learning, offering a range of applications, including:
- Instructional materials: Instructors can use AI to generate drafts of materials such as quiz questions, rubrics, and problem sets, freeing up time for deeper engagement with students.
- Creative work: AI can assist in creative classes by suggesting ideas and generating starter prompts for stories or art projects.
- Content accessibility: AI can promote access for students with disabilities by generating alternative formats of content, such as audiobooks from textbooks or real-time captioning from lectures.
- Learning support: AI-powered tutoring systems can support students outside of class and office hours, answering questions about course content and providing opportunities to review and practice.
What concerns should be considered?
- Bias: Generative AI systems inherit, and may even amplify, biases present in the data they are trained on. This can lead to output that promotes prejudices and to unfair outcomes in areas like hiring, law enforcement, and loan approval.
- Misinformation: Errors in training data can cause an AI to provide incorrect information, often in a confident tone.
- Disinformation: AI’s ability to generate realistic text, images, and videos can be exploited to create and spread false information.
- Equity: More advanced AI tools are available to those with the ability to pay for them.
- Privacy: AI can scrape data from various sources to create detailed profiles of individuals without their knowledge or consent. Data you enter into an AI may be stored and used by the tool for training and future content generation.
- Intellectual property: Many AI tools are trained on copyrighted material, raising questions of copyright violation. If you enter your own intellectual property into an AI, it may be stored and shared with other users. AI-generated content also raises questions about who owns the material: the creator of the AI, the user, or no one at all.
- Security: AI systems can be vulnerable to attacks that manipulate their behavior, leading to unexpected or harmful outcomes.
- Regulatory challenges: The rapid development of AI technologies can outpace the ability of laws and regulations to keep up, leading to potential gaps in governance that might be exploited.
Learn more
Statement on Use of Generative AI (DoIT)
Guiding Principles for Instructors (Division for Teaching & Learning)
What is prompting?
Prompting refers to the process of providing a carefully crafted text input (“prompt”) to an AI system to generate specific outputs, such as text, images, or code. It is crucial in guiding the AI to produce content that aligns with the user’s intentions.
Prompting is a learned process that users refine with practice. Effective prompts typically include detailed instructions or parameters. For instance, when creating images, prompts might specify subjects, styles, and even emotions to be conveyed. In text generation, prompts might outline topics, tone, style, and the desired length of the output.
The skill in prompting lies in the ability to articulate these requirements clearly and concisely, enabling the AI to understand and execute the task with precision. Advanced prompting might also involve iterative refinement, where initial outputs are reviewed and the prompt is adjusted to improve subsequent results. This interaction between the user and the AI through effective prompting is key to leveraging generative AI technologies to their full potential.
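The parameterized prompting described above can be sketched in a few lines of Python. This is a minimal illustration, not a standard: the parameter names (topic, tone, audience, word limit) and the wording of the assembled prompt are assumptions chosen for the example.

```python
# Minimal sketch of building a text-generation prompt from explicit
# parameters, then refining it iteratively. Parameter names and prompt
# wording are illustrative assumptions, not a standard format.

def build_prompt(topic, tone, audience, word_limit):
    """Assemble a prompt that specifies topic, tone, audience, and length."""
    return (
        f"Write about {topic}. "
        f"Use a {tone} tone appropriate for {audience}. "
        f"Keep the response under {word_limit} words."
    )

# First draft of the prompt
prompt = build_prompt("photosynthesis", "friendly", "high-school students", 200)

# Iterative refinement: after reviewing the AI's first output,
# the user tightens the request with an added instruction.
prompt_v2 = prompt + " Include one real-world example and end with a quiz question."

print(prompt)
print(prompt_v2)
```

The same pattern applies whether the prompt is typed into a chat interface or sent through an API: making the topic, tone, style, and length explicit, then adjusting the prompt based on what the first attempt produced.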
Learn more
Getting started with prompts for text-based Generative AI (Harvard University Information Technology)
What are UW–Madison’s AI tools and policies?
Current tools
The campus landscape for generative AI tools is rapidly evolving.
- Microsoft Copilot became available in spring 2024 to students, faculty, and staff as the university's equitable-access tool for question-and-answer AI. Using Copilot while logged in with your NetID provides important data security and privacy protections. Microsoft will not use your prompts or responses to train its AI models.
- The Google Gemini chat web app is also available to students, faculty, and staff via Google Workspace with NetID login. Your data is not reviewed by anyone to improve AI models, used to train AI models, or shared with other users or institutions.
- Microsoft Azure and Amazon Web Services (AWS) may be useful for UW–Madison researchers who want access to application programming interfaces (APIs) for multiple large language models.
Policies and guidance
Although AI offers new and powerful capabilities for research and education, it also poses a potential risk to institutional data that UW–Madison is legally and ethically obligated to protect.
Currently, the only data that should be entered into any generative AI tool or service is information classified as public (low risk). For more details, please see DoIT’s Statement on Use of Generative AI. Instructors are encouraged to consider these guiding principles when using AI in teaching and learning.
What’s ahead
The university is exploring future generative AI tools such as virtual course assistants, transcription/note-taking services, and chatbots for institutional use.