
AI in Higher Education: Faculty & Staff Guide

Practical Strategies

Practical tips on Prompt Engineering

What is the best way to approach prompt engineering, given that "learning to converse with an AI is similar to adapting our communication style depending on our human audience" ("Adapting Four Popular Frameworks")?

Practice is key to navigating prompt engineering for the classroom.

Get started by exploring AI in Academia's Sample Educator Prompts

How do you craft a prompt that works for your class and college?

Read Harvard University's Getting started with prompts for text-based Generative AI tools

Explore Use a template to build a prompt

Use Adapting Four Popular Prompt Frameworks for Education: ALGAE, RAAG, RISE, and RRA


Remember, if you get stuck, you can always ask the AI tool for help. "Start with a basic idea of what you want and ask the AI to expand on it for you, such as 'What should I ask you to help me write an assignment?' Try asking 'Tell me what else you need to do this' to further refine" (Getting Started with Prompts).


Sample Syllabus Statements

Read Duke University's Artificial Intelligence Policies: Guidelines and Considerations

Explore Harvard University's An Illustrated Rubric for Syllabus Statements about Generative Artificial Intelligence

Consider Eliminating Algorithmic Bias Is Just the Beginning of Equitable AI

Examples of AI Supportive Policies

This course encourages students to explore the use of generative artificial intelligence (GAI) tools such as ChatGPT for all assignments and assessments. Any such use must be appropriately acknowledged and cited. It is each student’s responsibility to assess the validity and applicability of any GAI output that is submitted; you bear the final responsibility. Violations of this policy will be considered academic misconduct. We draw your attention to the fact that different classes at Harvard could implement different AI policies, and it is the student’s responsibility to conform to expectations for each course (Harvard University)

Within this class, you are welcome to use foundation models (ChatGPT, GPT, DALL-E, Stable Diffusion, Midjourney, GitHub Copilot, and anything after) in a totally unrestricted fashion, for any purpose, at no penalty. However, you should note that all large language models still have a tendency to make up incorrect facts and fake citations, code generation models have a tendency to produce inaccurate outputs, and image generation models can occasionally come up with highly offensive products. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit regardless of whether it originally comes from you or a foundation model. If you use a foundation model, its contribution must be acknowledged; you will be penalized for using a foundation model without acknowledgement. Having said all these disclaimers, the use of foundation models is encouraged, as it may make it possible for you to submit assignments with higher quality, in less time.

The university's policy on plagiarism still applies to any uncited or improperly cited use of work by other human beings, or submission of work by other human beings as your own. (EDUC 6191: Core Methods in Educational Data Mining: University of Pennsylvania)

Examples of AI Flexible Policies

Large language models, such as ChatGPT (chat.openai.com), are rapidly changing the tools available to people writing code. Given their use out in the world, the view we will take in this class is that it does not make sense to ban the use of such tools in our problem sets or projects. For now, here is my guidance on how these can and should be used in our class: First and foremost, note that output from ChatGPT can often be confidently wrong! Run your code and check any output to make sure that it actually works. Such AI assistants will give you a good first guess, but these are really empowering for users who invest in being able to tell when the output is correct or not. If you use ChatGPT or similar resources, credit it at the top of your problem set as you would a programming partner. Where you use direct language or code from ChatGPT, please cite this as you would information taken from other sources more generally (Public Policy course: Georgetown University)

Policy on the use of generative artificial intelligence tools:

Using an AI-content generator such as ChatGPT to complete an assignment without proper attribution violates academic integrity. By submitting assignments in this class, you pledge that they are your own work and that you attribute the use of any tools and sources.

Learning to use AI responsibly and ethically is an important skill in today’s society. Be aware of the limits of conversational, generative AI tools such as ChatGPT.

  • Quality of your prompts: The quality of the AI's output directly correlates with the quality of your input. Master “prompt engineering” by refining your prompts in order to get good outcomes.

  • Fact-check all of the AI outputs. Assume the output is wrong unless you can cross-check the claims with reliable sources. Current AI models will confidently reassert factual errors. You will be responsible for any errors or omissions.

  • Full disclosure: Like any other tool, the use of AI should be acknowledged. At the end of your assignment, write a short paragraph to explain which AI tool and how you used it, if applicable. Include the prompts you used to get the results. Failure to do so is in violation of academic integrity policies. If you merely use the instructional AI embedded within Packback, no disclosure is needed. That is a pre-authorized tool.

Here are approved uses of AI in this course. You can take advantage of a generative AI to:

  • Fine-tune your research questions by using this tool: https://labs.packback.co/question/  Enter a draft research question, and the tool can help you find related, open-ended questions

  • Brainstorm and fine-tune your ideas; use AI to draft an outline to clarify your thoughts

  • Check grammar, rigor, and style; help you find an expression

(George Washington University)

The beta releases of Dall-E-Mini in July 2022 and ChatGPT in November 2022 are among many tools using artificial intelligence. There is a good possibility that using tools like these is going to become an important skill for careers in the not-too-distant future (https://www.theguardian.com/commentisfree/2023/jan/07/chatgpt-bot-excel-ai-chatbot-tech). In the meantime, though, it's going to take a while for society to figure out when using these tools is and isn't acceptable. There are three reasons why:

  • Work created by AI tools may not be considered original work and instead, considered automated plagiarism. It is derived from previously created texts from other sources that the models were trained on, yet doesn't cite sources.

  • AI models have built-in biases (i.e., they are trained on limited underlying sources; they reproduce, rather than challenge, errors in those sources)

  • AI tools have limitations (i.e., they lack the critical thinking to evaluate and reflect on criteria; they lack the abductive reasoning to make judgments with incomplete information at hand)

Given these important ethical caveats, some scholars in the computational sciences debate whether the hype over AI-based tools, especially as "automated plagiarism" tools, should be heeded at all (https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/). For the time being, I'm tentatively and pragmatically augmenting my academic integrity policy with a policy on the responsible use of AI-based tools in my class. This policy was developed from a response by ChatGPT-3 (2023) and edited upon critical reflection by me:

Academic integrity is a core principle at UMass Lowell, and it's vital that all students uphold this principle, whether using AI-based tools or otherwise. For my class, responsible use of AI-based tools in completing coursework or assessments must be done in accordance with the following:

  1. You must clearly identify the use of AI-based tools in your work. Any work that utilizes AI-based tools must be clearly marked as such, including the specific tool(s) used. For example, if you use ChatGPT-3, you must cite "ChatGPT-3. (YYYY, Month DD of query). "Text of your query." Generated using OpenAI. https://chat.openai.com/"

  2. You must be transparent in how you used the AI-based tool, including what work is your original contribution. An AI detector such as GPTZero (https://gptzero.me/) may be used to detect AI-driven work.

  3. You must ensure your use of AI-based tools does not violate any copyright or intellectual property laws.

  4. You must not use AI-based tools to cheat on assessments.

  5. You must not use AI-based tools to plagiarize without citation.

Violations of this policy will be dealt with in accordance with UMass Lowell's academic integrity policy. If you are found in violation of this policy, you may face penalties such as a reduction in grade, failure of the assignment or assessment, or even failure of the course. Finally, it's your responsibility to be aware of the academic integrity policy and take the necessary steps to ensure that your use of AI-based tools is in compliance with this policy. If you have questions, please speak with me first, as we navigate together how best to responsibly use these tools.

ChatGPT-3. (2023, January 10). "Write a syllabus policy about the academic integrity of students using ai-based tools." Generated using OpenAI. https://chat.openai.com/ 

(Social Media/Marketing Course: UMass-Lowell)

Examples of AI Restrictive Policies

We expect that all work students submit for this course will be their own. In instances when collaborative work is assigned, we expect for the assignment to list all team members who participated. We specifically forbid the use of ChatGPT or any other generative artificial intelligence (AI) tools at all stages of the work process, including preliminary ones. Violations of this policy will be considered academic misconduct. We draw your attention to the fact that different classes at Harvard could implement different AI policies, and it is the student’s responsibility to conform to expectations for each course (Harvard University)

Students are not allowed to use advanced automated tools (artificial intelligence or machine learning tools such as ChatGPT or Dall-E 2) on assignments in this course. Each student is expected to complete each assignment without substantive assistance from others, including automated tools (University of Delaware)

All assignments should be your own original work, created for this class. We will discuss what constitutes plagiarism, cheating, or academic dishonesty more in class. [...] You must do your own work. You cannot reuse work written for another class. You should not use paraphrasing software (“spinbots”) or AI writing software (like ChatGPT) (University of California - Santa Cruz)

 

Creating Rubrics

Explore Harvard University's AI Pedagogy Project (AIPP), developed by the metaLAB at Harvard, for an introductory guide to AI tools, an LLM Tutorial, additional AI resources, and curated assignments to use in your own classroom.

Read the metaLAB's quick start guide for Getting Started with ChatGPT

Generating Feedback with AI Tools

Read Harvard University's STUDENT USE CASES FOR AI Part 1: AI as Feedback Generator

Explore Harvard University's Better Feedback with AI? A new study explores how large language models can aid instruction in certain learning environments

Use Stanford University's workshop on Exploring Forms of Feedback with AI: an interactive workshop focusing on the intersection of artificial intelligence tools and the feedback process.

Creating Quizzes and Assessments with AI

Read Using ChatGPT to Write Quiz Questions (UCLA)

Explore Creating Assessments with AI: videos and resources (UVA)

Watch Zawan Al Bulushi's CHATGPT PROMPTS that Make Assessment FASTER & EASIER

ALGAE Framework for Prompting