I wouldn’t want the 1st-year students in my program to use AI indiscriminately, because they first need to get to grips with the concepts. But in their 3rd year, students apply those concepts. In that context, someone who uses AI will move faster than someone who doesn’t.
For example, in a course where the competency is presenting and formatting information, AI can help students gather information quickly. They can then analyze it and organize it into a slideshow to present. This lets students spend more time actually developing the competency.
Mastering efficiency tools is certainly an asset on the job market. But it’s important to know where to draw the line: students must learn to distinguish between situations in which it is ethical to use AI and those in which it is not.
Supervising students
I support and monitor my students’ use of AI. First of all, I want students to tell me how they use it:
- Name of the conversational agent (ChatGPT, Bard, etc.). At the moment, ChatGPT seems to give the best results, so I encourage my students to use it.
- Date of the conversation
- Question asked
The students keep a log of all their interactions with the AI.
What’s more, instead of having them copy and paste the AI-generated text (with or without quotation marks), I ask them to paraphrase it: to restate the relevant ideas in their own words, citing the AI as the source. This forces them to think about the information rather than simply copy it.
The student logs allow me to informally evaluate the quality of the questions they pose to the AI. With generative AI, someone who can express their needs in a detailed and precise way gets far more useful results.
Better results thanks to AI
In recent years, when writing a business plan, my students might have needed an hour in class to create an organizational chart. Today, with AI, it takes them 5 minutes. But I haven’t reduced the amount of time spent on the activity in class. Students take advantage of the time they save to refine and improve their work.
Their business plans are richer, more interesting and more in tune with the real world. The students are proud of their results. For my part, I enjoy evaluating their work even more.
Distinguishing truth from falsehood through AI hallucinations
One of the persistent challenges with generative AI is distinguishing true information from false. AI is far from providing only correct information.
I think students need to learn how to navigate that reality. They can use some of the time AI saves them to do additional fact-checking. My students fall into traps, but they learn not to get caught again. Several of their business plans mentioned the importance of obtaining an online business license (which doesn’t exist) or applying for other imaginary permits. Of course, such mistakes cost them marks on the assignment.
There’s no point in trying to protect our students by keeping them in a bubble: the day they leave it for the job market, the shock will be all the greater.
In any case, I think companies will eventually stop wanting to hire people who can’t use AI effectively.
Students need to be trained
One might assume that CEGEP students naturally master generative AI, but that’s not what I observed. You’d expect students to spontaneously try AI in every context, for all their schoolwork, yet I get the impression that many are afraid of it.
We need to work with our students and train them to use AI. Personally, in the fall of 2023, I briefly explained to my students how to use AI, but I didn’t spend much time on it. (In the winter of 2024, I’m not teaching because I’m in charge of the Défi OSEntreprendre [in French] Saguenay-Lac-Saint-Jean.) I think it would be better to do this through lunchtime experimentation workshops in the library, for example. This could benefit students in all their courses. They could learn to:
- formulate a prompt (which words to choose, which adjectives, what role or context to give the AI, etc.)
- discover the possibilities offered by generative AI (many students don’t know that AI can be asked to rewrite a text by changing the verb tenses or the person, to adapt a text to the Quebec reality, etc.)
- etc.
In the fall of 2023, I thought my students knew more about AI than they actually did. I thought AI would put everyone on an equal footing, but it actually increased inequality: those who knew how to use it effectively were far ahead of the others.
A playground awaiting institutional guidelines
In my opinion, institutions will have to regulate the use of AI in courses to avoid ethical abuses. But right now, there aren’t really any rules at my college. Fall 2023 was an opportunity to experiment, a playground. Previously, my course plan stated that the student had to be the sole author of their assignment, or risk getting a 0. This fall, my course plan stated instead that the use of ChatGPT was allowed if I explicitly authorized it.
AI can’t (and certainly shouldn’t!) replace teachers. But we can be coaches in how our students use it.
I’m aware that the situation varies from program to program and from discipline to discipline. In my field, the notion of efficiency is key; I know the situation is different in literature or philosophy. Still, AI can be used to brainstorm or to formulate arguments for any position, so there is interesting pedagogical potential there, too.
In any case, I think the current situation requires us to rethink our posture. What about you? How can AI transform the reality of your program? How have you adapted your practices and integrated AI into your courses?