May 1, 2023

From ChatGPT to Academic Integrity — Highlighting the Diversity of Approaches 

Ensuring academic integrity in the era of ChatGPT has been a hot topic in higher education. Even before the workshop on academic integrity in an age of artificial intelligence (AI) at CEGEP Champlain St. Lawrence (co-developed by IT Rep Quinn Johnson and Eductive’s technopedagogical advisor Alex Enkerli), many teachers had raised concerns regarding the potential risks of using ChatGPT in the classroom, sparking reflection on how to approach generative AI. This article attempts to define academic integrity and offers ways to promote it despite the ambiguity of the current context. Professionals and teachers from all disciplines shared their concerns, possible solution paths, and insights based on their own classroom experiences.

Dimensions of ambiguity

During the workshop, the issue of reconciling generative AI with academic integrity was discussed from multiple angles. One participant noted that a student had attempted to consult ChatGPT with a mobile device during an in-person exam. Participants observed that this kind of cheating is unfair not only to other students but also to the cheaters themselves. Others saw this as just another chapter in the cat-and-mouse game of countering tools like SparkNotes and CliffsNotes, while many perceived the need to rethink the types of assignments and evaluations susceptible to being undermined by this tool.

Many of the concerns that have arisen in conversations at the college prompted much reflection on Quinn’s part. The key word for him is ambiguity as countless questions seem to remain unanswered…

  • It seems that what ChatGPT can do and what it cannot do is constantly changing, as is the case with many of the tools we use. As teachers prepare evaluation criteria that correspond to course-related tasks, how can these tasks be contextualized when the evolution of tools available to students (and their digital environments for that matter) is in constant flux?
  • We can easily imagine that a student who cheats with ChatGPT on an asynchronous assignment can complete it in significantly less time than their peers. What’s more, students’ ability to exploit generative AI might vary depending on their level of access to computers and Wi-Fi at home, their technological competence, their awareness of what ChatGPT can and cannot do (at a given moment), their skill at writing good prompts, their methods for masking their use of the tool, etc. As such, to what degree could these dimensions represent threats to digital equity?
  • Considering the difference in context between a student writing an essay in winter 2022 versus winter 2023, we can appreciate that our students’ future contexts (academic or professional) could evolve in ways that are difficult to predict. As such, how can today’s competency-based evaluations be linked to skills students will need in the future?
  • Given that this is the first full semester in which ChatGPT is available, it is difficult to grasp the student experience in the scheme of things. As such, to what extent are students using the tool, and to what degree are their own conceptions of academic integrity changing?
  • ChatGPT draws from a large quantity of internet material under copyright. Furthermore, the Institutional Policy on the Evaluation of Student Achievement (IPESA) generally states that plagiarism consists of using “someone” else’s work. What is the copyright status of AI-generated material, and could the term “someone” used by IPESA be applied when ideas are taken from something?

Open Education specialists know a lot about respecting copyright. Some provide insight about proper uses of AI tools which respect copyright.

There are philosophical and political undertones to the questions above. Our societies must address them in order to define what kind of world we want our learners to inherit, which will in turn define future pedagogical programs and, in turn, the evaluation activities nested within those programs. But since it is our society as a whole that must address these questions, we stop our reflection there and limit ourselves to the scope of academic integrity in the current ambiguous context.

What is academic integrity and why do we care about it?

Simply put, academic integrity refers to honesty in academic work (REPTIC, REBICQ 2020).

In spring 2020, college librarians and IT REPs produced a Guide to Academic Integrity and Avoiding Plagiarism. This guide is available as an extended slide presentation along with information about the project behind it, in a previous Eductive article.

Learners demonstrating academic integrity are those who accomplish their work according to prevailing norms and rules, particularly in terms of the proper use of others’ work. Cheating and plagiarizing constitute breaches of academic integrity. Those breaches affect everyone involved, from the students themselves to their future colleagues, and from the institution’s academic staff to the whole of civil society. Quebec’s Digital Competency Framework shows that such concerns have existed in the ministry since its release in 2018: exercising ethical citizenship in the digital age is a core dimension of the framework, and academic integrity is a key outcome of developing that dimension. If learners exercise ethical citizenship in the digital age, it follows that they demonstrate academic integrity. The “exercising” dimension described in the framework thus gives a very broad level of guidance on promoting academic integrity in an age of artificial intelligence.

Not only do we care about students remaining honest, but the promotion of academic integrity is also part of the research-based learning process that is central to higher education. In this sense, academic integrity parallels critical thinking and metacognition, other distinctive features of higher education. Academic integrity leads students to make important contributions to the knowledge society, even after they leave our colleges.

The shift from plagiarism to academic integrity described by Eaton (2022, 2023) and used in the REPTIC-REBICQ guide mentioned above has been a significant one. Our role as pedagogues is to promote proper behaviour in the long run, not to police behaviours while people are registered as students at our institutions. Further, the long-term strategy to promote academic integrity helps prevent plagiarism and cheating instead of sanctioning actors after they have misbehaved.

Some of the best practices in the promotion of academic integrity map onto important pedagogical principles, such as giving authentic assessments, providing adequate feedback on learners’ work, and offering “pre-flight” tools (as Open Educational Resources) so students can tell whether their assignments are ready for submission. As some experienced teachers might say at the end of a workshop: “Well! That’s just good pedagogy.” Precisely.

How does academic integrity relate to the current situation with tools based on diverse technologies within the domain of AI? Since easy and widespread access to these tools decreases the effort needed to cheat or plagiarize, many people across the network perceive a pressing need to counteract a negative trend among students, and the promotion of academic integrity is such a counterbalance. Perhaps academic integrity is an even more important factor in human behaviour when machines can accomplish an increasing number of tasks. If robots can do most of the work, even the work done by professionals, should we give important human responsibilities to people who have developed a strong sense of honesty and are able to demonstrate it? Or are we comfortable with people who cut corners whenever possible? A computer is neither honest nor dishonest, but dishonest humans using computers may exert undue influence on others.

Those competencies relate to professionalism, and school curricula could build these skills into the programs described in the devis ministériels.

Pedagogues have a responsibility to promote academic integrity. However, it can prove difficult to carry out that responsibility given the ambiguity of concepts such as copyright and equity in current and future contexts, as well as the development of new technologies. When learners work on laptops in college, ChatGPT may have different affordances from those present in their future workplace. To what degree is digital equity threatened when new digital divides become entrenched? What is the copyright status of AI-generated material? What student experiences may we design, in light of assigned work known to be easily doable, in whole or in part, with those tools? Is there double messaging when some teachers advocate for appropriate uses of tools while others ban them outright? We have been there before, though the situation is different.

To answer several of our questions, Quinn has asked teachers and professionals at CEGEP Champlain St. Lawrence for their insight based on their own classroom experiences.

Classroom practices

Although the exploitation of digital technology to cheat and plagiarize has a long history, generative AI has only recently garnered widespread media attention. What theorists in educational technology might describe as the perceived and actual affordances of generative AI for students can be summarized in a metaphor: students always had this Swiss army knife at home, but it was hidden under the couch. Now it is out in the open, in the hands of our students. Academic integrity has sprung to the forefront of discussion amongst teachers. And while we could talk for days about the broader implications of these tools, the question for educators becomes: what do we do about this semester? At CEGEP Champlain St. Lawrence, teachers and staff have expressed various concerns regarding ChatGPT and voiced related solutions.

Martin Thériault (Tourism)

I have limited concerns about the use of ChatGPT by our students. Tourism is a technical program with a limited number of students. In courses related to product development, event management or marketing, students are often working on semester-long projects. These projects are monitored on a weekly basis and students have to justify each step by making links with grids or models seen in class. In that context, I hardly see how students could use ChatGPT for cheating purposes.

For introductory courses, it might be more worrisome. For instance, I have no doubt that ChatGPT could easily suggest valid descriptions of the current distribution channels used in our industry, or that it could create appropriate packages for different clienteles. Again, I think that academic integrity can be ensured through proper monitoring of student progress and an appropriate choice of evaluation methods.

Jeffery Aubin (Humanities)

In my Philosophy courses, there is an emphasis on reading and comprehension skills. Instead of beginning my courses with class discussions based on simple texts, we dive straightaway into texts with more difficult language. The idea is to build reading competencies and foster the assimilation of complex ideas from the beginning so that later conversations can address the deeper issues that philosophers evoke. At first, I encourage students to build their ideas, and understand whatever they can from the text, without consulting external sources. When any student (or any reader for that matter) does not understand a word or phrase in a book, article, or magazine, they have 2 options:

  1. to find the answer immediately from an external resource
  2. to read further and build their own understanding progressively through contextualization

Though I wish to promote the latter option, I worry that the temptation to consult ChatGPT will cause more students to take the former. I am currently exploring strategies that will promote the added value of “making the text your own”. Not only do I hope to help students build a better understanding of the content, but I hope to better foster their growth as individuals.

Philippe Blouin (Humanities)

As a Humanities teacher, I found the sudden arrival of ChatGPT daunting, not to say downright terrifying, given that the core of my assessments is written assignments done at home, on questions that ChatGPT typically answers quite well. A simple solution would be to do the assignments in class, but that would come at the expense of the research component of the assignments, which I deem crucial. I want students to learn not just to think for themselves, but along with other reliable authors who speak relevantly to the question at hand. Thus, after much reflection, I decided to conduct a small experiment: instead of pretending ChatGPT did not exist or that students would not use it, I tested it in class with them, inputting the very question I had just given them for their latest assignment (“Do we live in a simulation?”). My purpose was to show them its strengths as well as its weaknesses. It was quite good at summarizing the key arguments for each side, as well as at pointing to some reliable sources in the field. But then I asked it to quote directly from these sources by pinpointing the crucial arguments, and to give the full references for these quotes. This led me to 2 “teaching opportunities”:

  1. The quotes were interesting but did not consist of actual arguments. Thus, the students still needed to read the source fully to make sense of the author’s reasoning and identify their key arguments before using the source in their paper.
  2. ChatGPT surprisingly gave me a page number for the quote it had identified. I had not expected this, but the page number did not seem right. I verified in class and, indeed, it was wrong. Thus, the students, again, could not rely uncritically on this tool.

These are valuable and important points: ChatGPT should be used as a tool to facilitate research, not as a substitute for one’s own critical judgment. Finally, when the essays came in, I was afraid that my little experiment would lead to a great reduction in the diversity of opinions or to similarity in overall form and content. That was not the case. The papers came with their usual mistakes, and I only had 2 clear cases of plagiarism from ChatGPT (which is probably the new norm, whether or not we introduce ChatGPT in class).

In short, I conclude from this small experiment that it is better in this case not to play the ostrich, but to be proactive and transparent with our students, and help them see the value and limitations of this new tool that is there to stay.

Patrick Savard (English)

As a teacher of literature, I am afraid ChatGPT and the like will undermine the students’ capacity to assess a text critically in an era when just reading something that exceeds the length of a text message is a challenge already.

I have been testing ChatGPT’s potential for weeks now, and I am somewhat less concerned than I was when I first heard of it. It proves to be incapable of quoting primary texts, of specifically comparing different texts, and of going further than the surface of literary works. Those are things I keep in mind as I design my evaluations now.

Notwithstanding this, I believe we must try to use ChatGPT in the classroom (there are some interesting uses to it depending on the subject) for 3 reasons:

  1. to show how it can be used effectively
  2. to start a discussion with students on academic integrity
  3. to show them that we know it exists, which may discourage them from using it later on

Another concern I have with AI is that I have spent much time and energy doing research on student-centered pedagogy (inclusive pedagogy). With the transformations one must make to guarantee academic integrity in the course, such as increasing the number of in-class evaluations, I am afraid that I may be leaving some student-centered practices behind. Hence, I wonder how much of an impact ChatGPT has on Adapted Services.

Academic integrity issues are becoming more and more complex. Because of this, dialogue is key. Emerging issues do not rest on the shoulders of teachers alone; they concern the entire institution. As teachers adapt to ever-evolving tools and the corresponding threats to academic integrity, they need to feel supported and to have individuals or groups at their college who can help them deal with these issues. In addition, institutions need to reassess policies and parameters relative to testing. For instance, should students be allowed to leave a test room? If so, for how long? Hence, support needs to come both in the form of strong trust and collaboration between faculty, staff, and administration and in the form of strong policies on which all employees can rely.

Shanelle Guitard (Librarian)

From a librarian’s perspective, my concerns are related to how generative AI will expand in the future. Students know that AI will play a role in their future, so we must accept that it is here to stay. Because access to ChatGPT is, to my knowledge, open, and because there is no sure-fire way to detect work produced by AI (without background knowledge of the students), anyone could submit an essay they did not write. This feels similar to when internet websites first became a source of information for research: instead of blocking access to them completely and asking students to only use physical books as sources, we integrated them into research practices. Ultimately, my concern is that new types of cheating will take form as new resources are at the students’ disposal.

To counter this, instead of banning these tools, we should integrate them as sources of information. Members of REBICQ (Regroupement des bibliothèques collégiales du Québec) [in French] have suggested that we include methods for citing ChatGPT as a source of information in written work. As such, there has been some discussion in REBICQ on how to cite ChatGPT in MLA format. In short, librarians are preparing ways to integrate these technologies with their institutional policies on academic integrity.
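For illustration, here is the general shape of such a citation, based on the MLA Style Center’s guidance for citing generative AI (a description of the prompt stands in for a title, ChatGPT is the container, and OpenAI is the publisher). The prompt wording, version label, and date below are hypothetical placeholders, not an entry from REBICQ’s discussions:

```text
“Describe the main threats to academic integrity posed by generative AI”
prompt. ChatGPT, 13 Feb. version, OpenAI, 1 May 2023,
chat.openai.com/chat.
```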

Other teachers

Other teachers from CEGEP Champlain St. Lawrence expressed concerns about how our reliance on generative AI may water down the content that we produce. The content produced by ChatGPT seems to be fairly generalized and follows a relatively narrow trend. The tool fails to replicate the evolution of ideas, creativity, or spontaneity that one would experience as they draft and re-draft texts. The appeal of these tools, as they become increasingly practical for students and teachers, may lead us to produce and accept work that follows the norms and omits the outliers.

Though this kind of tool may become useful in some circumstances, it is important to emphasize the value of following one’s own thought processes without consulting generative AI. One teacher’s idea is to have students complete short writing assignments in which they reflect on the course material they have covered. Guidance and support from the teacher during these sessions may build the confidence students need to follow their own ideas and instincts. Teachers may also offer options related to the media format in which students submit their work. Not only does this encourage students to think outside the box, but it dovetails with the guidelines of UDL (Universal Design for Learning). Teachers may also encourage or require students to work in stages to avoid the last-minute appeal of ChatGPT. This would have the added effect of scaffolding the development of organizational skills in classwork. Finally, the ambiguity of copyright with regard to AI, and ChatGPT’s own difficulties with academic integrity, can be addressed by requiring students to validate their references and by using tools such as Zotero.

A pedagogical heritage of academic integrity

To summarize, these concerns have been related to:

  • plagiarism and cheating, as the tool permits students to have written works produced immediately for them according to their requests
  • students not doing the research they are asked to do or misunderstanding the foundations of their own arguments
  • the reduced ability for individuals to think for themselves, as the tool greatly increases students’ ability to complete tasks without developing the targeted competencies

Solution paths are offered at different levels.

At the classroom level:

  • Offering supportive monitoring to the students
  • Being proactive and transparent with students
  • Encouraging ChatGPT to be used as a tool for research
  • Encouraging students to appropriate information for themselves
  • Fostering confidence in one’s own ideas
  • Having students work in stages
  • Choosing appropriate evaluation methods

At the inter-departmental level:

  • Having a Ped Day discussion on academic integrity

At the institutional level:

  • Updating policies to address threats to academic integrity
  • Creating a committee on academic integrity

The above ideas represent several possible solution paths, as this article addresses teachers from all disciplines. The challenge is to apply these ideas to one’s own teaching context, and all of them can be improved upon.

Our purpose and agency lie in helping the whole community.

We are all in the same boat. And we must remember that not everything rests on our shoulders. Some of these ideas, especially the question of equity, might require discussion at pedagogical, philosophical, and even political levels. We can ask ourselves how we can influence Silicon Valley startups to design tools that promote academic integrity more effectively. After all, UNESCO adopts recommendations that target equity and related principles, and Canada is a signatory. Considering the evolving and ambiguous challenges to academic integrity, we must stay tight-knit, develop our insights together, and share experiences with one another (community knowledge, collective responsibility).

As such, we hope that this article and the teachers’ testimonials offer food for future discussion and collaboration.

What about you? How could the above solution paths be applied to your own teaching context? What other concerns do you have regarding generative AI and what solution paths could address them? We invite you to share your answers to these questions in the comments section below!

About the authors

Alexandre Enkerli

Alexandre helps learning professionals make technology appropriate for their contexts, just like he did as a technopédagogue for Vitrine technologie-éducation from 2014 to 2016 and as a Technopedagogical Advisor for Collecto from 2021 to 2023. Alex comes back to this role after a few years in Ottawa (creating cybersecurity learning pathways and a Massive Open Online Learning Experience on public engagement) and in Saguenay-Lac-Saint-Jean (participatory-action research at COlab).

Quinn Johnson

Quinn Johnson is a former ESL teacher and a Université Laval alumnus in educational technology. He has worked with la Vitrine technologie éducation on its project to develop a portal for Open Educational Resources. He currently works as a pedagogical advisor at CEGEP Champlain St. Lawrence in Quebec City.

Jeffery Aubin

Jeffery Aubin is a Humanities teacher at CEGEP Champlain St. Lawrence. After graduating in classical studies, he obtained a master’s degree in Ancient Literature and a Ph.D. in Religious Studies in the philosophy of religion. As a philologist and a teacher, Jeffery believes that developing excellent reading skills is an essential requirement for critical thinking and independent thinking.

Martin Thériault

Martin Thériault is a teacher in the Tourism program at CEGEP Champlain St. Lawrence in Quebec City. For many years, he has been conducting various courses related to tourism product development and sustainable tourism.

Patrick Savard

Patrick Savard has been teaching English at the college level for 20 years. He has taught all the core courses as well as developed a number of complementary courses in the Arts and Literature program at CEGEP Champlain St. Lawrence.

Philippe Blouin

Philippe S. Blouin is a Humanities teacher at CEGEP Champlain St. Lawrence in Quebec City and a researcher in philosophy, specializing in Husserlian phenomenology.

Shanelle Guitard

Shanelle has been working as a documentation technician since 2018. She has worked at the CEGEP Champlain St. Lawrence library since 2022.
