Prompt Crafting Tips: define clear objectives, use concise language, provide examples of preferred outputs, and incorporate context to guide AI responses effectively.
What is the best way to approach prompt engineering knowing that "learning to converse with an AI is similar to adapting our communication style depending on our human audience"? ("Adapting Four Popular Frameworks")
Practice is key to navigating prompt engineering for the classroom.
Get started by exploring AI in Academia's Sample Educator Prompts
Read Harvard University's Getting started with prompts for text-based Generative AI tools
Explore the guide Use a template to build a prompt
Use Adapting Four Popular Prompt Frameworks for Education: ALGA, RAAG, RISE, and RRA (a sample prompt built with one of these frameworks is sketched after this list)
Remember, if you get stuck, you can always ask the AI tool for help. "Start with a basic idea of what you want and ask the AI to expand on it for you, such as 'What should I ask you to help me write an assignment?'. Try asking 'Tell me what else you need to do this' to further refine" (Getting Started with Prompts).
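As a concrete illustration of structuring a prompt with one of these frameworks, here is a minimal sketch in Python that assembles a RISE-style prompt. The expansion used here (Role, Input, Steps, Expectation), the function name, and the example wording are illustrative assumptions rather than text taken from the linked guide; the resulting prompt can be pasted into whatever AI tool you use.

```python
# A minimal sketch: assembling a prompt with the RISE framework, expanded here
# as Role, Input, Steps, Expectation (an assumption; see the linked guide for
# the authors' exact wording). All names and example values are illustrative.

def build_rise_prompt(role: str, input_text: str, steps: str, expectation: str) -> str:
    """Combine the four RISE components into one prompt string."""
    return (
        f"Role: {role}\n"
        f"Input: {input_text}\n"
        f"Steps: {steps}\n"
        f"Expectation: {expectation}"
    )

if __name__ == "__main__":
    prompt = build_rise_prompt(
        role="You are an experienced first-year writing instructor.",
        input_text="Here is my draft assignment sheet for a 1,000-word argumentative essay.",
        steps="Review the learning objectives, then the grading criteria, then the timeline.",
        expectation="Return a bulleted list of revisions that would make the instructions clearer to students.",
    )
    print(prompt)  # Paste the output into the AI tool of your choice.
```

Keeping the four components separate makes it easy to revise one part of the prompt (for example, the expectation) without rewriting the whole thing, which mirrors the iterative refinement the frameworks encourage.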
Read Duke University's Artificial Intelligence Policies: Guidelines and Considerations
Explore Harvard University's An Illustrated Rubric for Syllabus Statements about Generative Artificial Intelligence
Consider Eliminating Algorithmic Bias Is Just the Beginning of Equitable AI
Examples of AI Supportive Policies
This course encourages students to explore the use of generative artificial intelligence (GAI) tools such as ChatGPT for all assignments and assessments. Any such use must be appropriately acknowledged and cited. It is each student’s responsibility to assess the validity and applicability of any GAI output that is submitted; you bear the final responsibility. Violations of this policy will be considered academic misconduct. We draw your attention to the fact that different classes at Harvard could implement different AI policies, and it is the student’s responsibility to conform to expectations for each course (Harvard University)
Within this class, you are welcome to use foundation models (ChatGPT, GPT, DALL-E, Stable Diffusion, Midjourney, GitHub Copilot, and anything after) in a totally unrestricted fashion, for any purpose, at no penalty. However, you should note that all large language models still have a tendency to make up incorrect facts and fake citations, code generation models have a tendency to produce inaccurate outputs, and image generation models can occasionally come up with highly offensive products. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit regardless of whether it originally comes from you or a foundation model. If you use a foundation model, its contribution must be acknowledged; you will be penalized for using a foundation model without acknowledgement. Having said all these disclaimers, the use of foundation models is encouraged, as it may make it possible for you to submit assignments with higher quality, in less time. The university's policy on plagiarism still applies to any uncited or improperly cited use of work by other human beings, or submission of work by other human beings as your own. (EDUC 6191: Core Methods in Educational Data Mining: University of Pennsylvania)
Examples of AI Flexible Policies
Large language models, such as ChatGPT (chat.openai.com), are rapidly changing the tools available to people writing code. Given their use out in the world, the view we will take in this class is that it does not make sense to ban the use of such tools in our problem sets or projects. For now, here is my guidance on how these can and should be used in our class: First and foremost, note that output from ChatGPT can often be confidently wrong! Run your code and check any output to make sure that this actually works. Such AI assistants will give you a good first guess, but these are really empowering for users who invest in being able to tell when the output is correct or not. If you use ChatGPT or similar resources, credit it at the top of your problem set as you would a programming partner. Where you use direct language or code from ChatGPT, please cite this as you would information taken from other sources more generally (Public Policy course: Georgetown University)
Policy on the use of generative artificial intelligence tools: Using an AI-content generator such as ChatGPT to complete an assignment without proper attribution violates academic integrity. By submitting assignments in this class, you pledge to affirm that they are your own work and that you attribute the use of any tools and sources. Learning to use AI responsibly and ethically is an important skill in today’s society. Be aware of the limits of conversational, generative AI tools such as ChatGPT.
Here are approved uses of AI in this course. You can take advantage of a generative AI to:
Dall-E-Mini, beta-released in July 2022, and ChatGPT, released in November 2022, are among many tools using artificial intelligence. There is a good possibility that using tools like these is going to become an important skill for careers in the not-too-distant future (https://www.theguardian.com/commentisfree/2023/jan/07/chatgpt-bot-excel-ai-chatbot-tech). In the meantime, though, it's going to take a while for society to figure out when using these tools is/isn't acceptable. There are three reasons why:
Given these (important) ethical caveats, some scholars in computational sciences debate whether the hype over AI-based tools-- especially as "automated plagiarism" tools-- should be heeded at all (https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/). For the time being, I'm tentatively, pragmatically augmenting my academic integrity policy with a policy regarding the responsible use of AI-based tools in my class. This policy was developed from a response by ChatGPT-3 (2023) and edited on critical reflection by me:
Academic integrity is a core principle at UMass Lowell and it's vital that all students uphold this principle-- whether using AI-based tools or otherwise. For my class, a responsible use of AI-based tools in completing coursework or assessments must be done in accordance with the following:
Violations of this policy will be dealt with in accordance with UMass Lowell's academic integrity policy. If you are found in violation of this policy, you may face penalties such as a reduction in grade, failure of the assignment or assessment, or even failure of the course. Finally, it's your responsibility to be aware of the academic integrity policy and take the necessary steps to ensure that your use of AI-based tools is in compliance with this policy. If you have questions, please speak with me first, as we navigate together how best to responsibly use these tools. ChatGPT-3. (2023, January 10). "Write a syllabus policy about the academic integrity of students using ai-based tools." Generated using OpenAI. https://chat.openai.com/ (Social Media/Marketing Course: UMass-Lowell)
Examples of AI Restrictive Policies
We expect that all work students submit for this course will be their own. In instances when collaborative work is assigned, we expect for the assignment to list all team members who participated. We specifically forbid the use of ChatGPT or any other generative artificial intelligence (AI) tools at all stages of the work process, including preliminary ones. Violations of this policy will be considered academic misconduct. We draw your attention to the fact that different classes at Harvard could implement different AI policies, and it is the student’s responsibility to conform to expectations for each course (Harvard University)
Students are not allowed to use advanced automated tools (artificial intelligence or machine learning tools such as ChatGPT or Dall-E 2) on assignments in this course. Each student is expected to complete each assignment without substantive assistance from others, including automated tools (University of Delaware)
All assignments should be your own original work, created for this class. We will discuss what constitutes plagiarism, cheating, or academic dishonesty more in class. [...] You must do your own work. You cannot reuse work written for another class. You should not use paraphrasing software (“spinbots”) or AI writing software (like ChatGPT) (University of California - Santa Cruz)
Explore Harvard University's AI Pedagogy Project (AIPP), developed by the metaLAB at Harvard, for an introductory guide to AI tools, an LLM Tutorial, additional AI resources, and curated assignments to use in your own classroom.
Read the metaLAB's quick start guide, Getting Started with ChatGPT
Read Harvard University's Student Use Cases for AI, Part 1: AI as Feedback Generator
Explore Harvard University's Better Feedback with AI? A new study explores how large language models can aid instruction in certain learning environments
Use Stanford University's workshop on Exploring Forms of Feedback with AI: an interactive workshop focusing on the intersection of artificial intelligence tools and the feedback process.
Read: Using ChatGPT to Write Quiz Questions (UCLA)
Explore: Creating Assessments with AI: videos and resources (UVA)
Watch Zawan Al Bulushi's ChatGPT Prompts that Make Assessment Faster & Easier
Explore Harvard University's A Tale of Two Critiques: Compare and reflect on a primary source, a ChatGPT-generated critique of that source, and a human-generated critique. The goal is for students to build skill and confidence with critical reading.
Use Harvard's Close Reading the Terms of Service – The AI Pedagogy Project