Offering courses that effectively prepare today’s students for collaborative work with generative AI is a challenge that few instructors have mastered. Several key lessons have emerged from my classrooms at the Neeley School of Business, Texas Christian University, that may help other educators harness generative AI tools in their own contexts.
1. Continuous Adaptation
The ever-evolving nature of generative AI technology compels educators to adapt their teaching materials continually. New AI platforms appear daily (see, for instance, There Is an AI For That), and existing platforms are refined from week to week, becoming more capable but requiring instructors to update their AI-related curricula to match the new capabilities.
In particular, large language model (LLM) frameworks are updated regularly to improve their efficacy, as with the recent introduction of Custom Instructions in ChatGPT. These updates call for corresponding revisions to the teaching materials presented to students. Sending regular generative AI email briefs to faculty interested in adopting the technology can help instructors stay current with platform changes.
2. Effective Prompting
Investing time and thought in crafting prompts is instrumental in eliciting meaningful interactions with generative AI systems. Strong prompts are the product of iterative refinement and significantly affect the quality of the AI-generated output. This so-called ‘prompt engineering’ serves as a critical intermediary, connecting ambiguous inquiries to insightful, nuanced answers.
For example, assignments involving generative AI must be tested thoroughly, going beyond simply copying and pasting the project’s content into an AI platform. Educators should provide comprehensive context in their prompts, which significantly improves the relevance and accuracy of the generated responses.
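As a purely hypothetical illustration, a context-rich prompt might read: “You are assisting a team of undergraduate marketing students. Their course project is to design a launch plan for a regional coffee brand with a $50,000 budget and a ten-week timeline. Given that audience, budget, and timeline, recommend three promotional channels and explain the trade-offs of each.” Spelling out the audience, task, and constraints gives the model the grounding it needs to produce a relevant, usable response.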
A novel tactic involves allowing platforms like ChatGPT to query the user rather than relying solely on preset prompts. This interactive mode enables a more personalized retrieval of information and further assists with task completion.
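One way to invite this behavior, offered here only as an illustrative pattern, is to close a prompt with an instruction such as: “Before you answer, ask me one clarifying question at a time until you have enough information to complete the task.” The model then gathers missing details from the student instead of guessing at them.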
3. Experimentation And Critical Thinking
Experimentation serves as the bedrock for successfully incorporating generative AI into educational frameworks. It offers a way to understand, learn, and adjust to the rapidly evolving capabilities of the technology, enabling its effective use by students and educators alike.
The key lesson from this experimentation is the importance of interpreting AI-generated content with caution. Educators should cultivate a mindset of critical thinking among students, enabling them to evaluate and, where possible, improve upon AI-generated solutions.
Students often accept AI recommendations at face value, which can be detrimental to their learning. Requiring students to generate their own answers before they interact with a generative AI tool may be key to overcoming this obstacle.
4. Effective Generative AI Platforms
When contrasting different versions of platforms like ChatGPT, it is evident that not all models are created equal. GPT-4, for example, performs markedly better than its predecessor, GPT-3.5. Emerging competitors such as Claude 2 from Anthropic are also noteworthy, suggesting a future shift toward subscription models for access to the more advanced versions.
When faculty develop class activities using GPT-4 and students use GPT-3.5, the outcomes generated in class can differ surprisingly, owing to the lower efficacy of the free LLMs available to students. Requiring students to purchase a GPT-4 subscription for the duration of the course or the project, as part of their course materials, is one way to alleviate this obstacle.
5. Privacy Non-Concerns
The majority of students appear largely unconcerned about the privacy implications of interacting with LLMs like ChatGPT. This laissez-faire attitude persists even after explicit discussion of how their data can be used or misused.
This contrasts with a recent Pew Research study, which found that 81% of U.S. adults are concerned about how companies use the data collected about them; college students show little such concern about privacy when using generative AI.
In the educational context, where the primary focus is often on quick information retrieval or task completion, the urgency of academic demands frequently supersedes considerations for data privacy. Furthermore, the perceived anonymity of interacting with a machine creates a false sense of security, making students less vigilant about the information they input into the AI platforms.
6. Beyond Text Generation
The utility of Generative AI in the classroom extends well beyond text production. The technology can incorporate visual and voice data, music, video, code, and more, depending on the model and its training data, to provide a more comprehensive educational tool.
For example, in the current landscape, practically anyone can become a developer. Platforms like ChatGPT or Clappia have significantly simplified the creation of interactive applications, making technology more accessible and encouraging a diverse range of individuals to participate in technological innovation and problem-solving.
7. Defining Boundaries
A key to heading off academic dishonesty disputes related to generative AI is setting boundaries on its appropriate use within each assignment, rather than only within the syllabus. Depending on the assignment’s objectives, those boundaries may range from no generative AI use at all, for example where a personal opinion is required, to a full embrace of the technology that helps students take their learning to a new level, such as building on prior knowledge, just in time, to develop a solution to a complex problem.
These assignment boundaries delineate what is permissible and what is not, ensuring that students stay focused on educational goals while still leveraging the advantages of the new technology. Educators should draft generative AI rules that prioritize course objectives first, while allowing students to take advantage of the technology wherever possible.
8. Rethinking Assessment
Traditional methods of assessment, such as written take-home essays or reports that synthesize information, are becoming increasingly obsolete because AI-generated content is difficult to distinguish from student work. The risk of false positives in academic dishonesty claims is too great, given the potential to ruin a student’s academic career, and instructors should therefore avoid making such claims on that basis.
This calls for a shift toward controlled testing environments, whether paper-based, oral, or digital with live remote proctoring, such as the Respondus LockDown Browser. Ultimately, building assessments that require the use of a generative AI tool, perhaps with a critique of the AI-generated solution and a final student recommendation, can enrich students’ critical thinking and better prepare them for their careers after graduation. It is in the testing of knowledge that students make the biggest improvements in grasping the material.
The marriage between generative AI and academia is more a dynamic dance than a set path, one in which both teachers and students are perpetually learning new steps. Consider the lessons gathered here as new dance moves, each one improving our educational ballet.