February 6, 2025

A Student Certificate in AI Ethical Practices and Technical Insights

Last fall (2024), I implemented artificial intelligence (AI) training for students at Champlain College Saint-Lambert. I designed a comprehensive certificate program called Generative AI: Ethical Practices and Technical Insights that included 3 workshops.

The certificate program

For the 2024-2025 academic year, Champlain College Saint-Lambert supported my initiative to bring AI training to our students by granting me release time to develop and deliver structured AI workshops.

This initiative came at a critical time: a student survey revealed that 87% of our students are already actively using AI in their studies, largely without formal training. The survey also uncovered a concerning trend: students expressed significant anxiety and confusion about AI use in academic contexts, specifically requesting more guidance and support from their teachers.

After careful consideration of how best to implement this student training, I designed a comprehensive certificate program with ethics at its core. The result was Generative AI: Ethical Practices and Technical Insights, a 3-workshop series covering:

  • AI fundamentals
  • ethical considerations
  • effective prompting techniques

My main objective in creating this certificate was to change how students perceive and engage with AI tools, specifically focusing on:

  • AI Outputs
    Understanding AI-generated content as pattern-based and probabilistic rather than factual, fostering critical thinking about AI responses.
  • Prompting Skills
    Mastering structured prompting techniques that produce personalized, relevant, and contextually appropriate AI outputs.
  • Active Engagement
    Developing a prompting approach that incorporates personal insights and thorough analysis.
  • Intellectual Property
    Understanding that AI models are trained on data that may include the work of human authors without their consent. Creating personalized prompts that reduce reliance on patterns in this data.
  • Ethical Awareness
    Building awareness of AI’s broader societal and environmental implications to guide responsible decision-making when using AI.
  • Academic Use
    Using AI as a learning tool and collaborator while maintaining academic integrity and meeting teacher expectations.

The Fall 2024 semester marked the 1st time my CEGEP offered the certificate program to students, and the response exceeded all expectations. The session was fully booked within 2 hours.

Out of 84 registrants, I invited 46 students, and 34 attended the 1st workshop. My expectations of the participants were exceptionally high, particularly in terms of attendance, punctuality, and engagement. I aimed to work with students who demonstrated genuine commitment, as the program required work outside of class hours.

The program consisted of 3 workshops, held on consecutive Wednesdays during the free afternoon block. Out of the 34 students who attended the 1st workshop, 31 completed all the requirements of the 3 workshops and earned their certificate. These numbers are a clear indication that the 1st workshop had the desired impact in motivating students to continue the training.

The certificate program featured 3 workshops of 2 hours:

  • Workshop 1: Generative AI Basics
  • Workshop 2: AI Ethics
  • Workshop 3: Prompt Engineering

Certificate requirements involved:

  • attending all 3 workshops
  • actively participating in discussions and activities
  • submitting 2 documents:
    1. a statement outlining the student’s intentions for ethical AI use (from workshop 2)
    2. a structured prompt adhering to ethical AI principles (from workshop 3)

Workshop 1: Generative AI Basics

The 1st workshop, titled Generative AI Basics, focused on:

  • understanding how generative AI works
  • recognizing AI limitations
  • encouraging active engagement when using these tools

Understanding how generative AI works

First, I explained that generative AI tools don’t ‘know’ anything in the traditional human sense. Instead, these tools are trained on extensive datasets and produce content by replicating patterns identified within that data.

It was important to stress that these tools operate without awareness of their output. Responses are generated sequentially, 1 word at a time, based purely on linguistic probabilities rather than genuine understanding.
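To make this concrete, here is a minimal, hypothetical sketch of pattern-based generation: a toy model that counts word pairs in a tiny corpus and always picks the statistically most likely continuation. Real models use neural networks over subword tokens, but the principle is the same — probability, not understanding.

```python
from collections import Counter, defaultdict

# Toy training corpus: the model only "knows" these word sequences.
corpus = ("the capital of france is paris . "
          "the capital of france is paris . "
          "the capital of spain is madrid .").split()

# Count bigrams: how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev):
    """Return the continuation seen most often in training."""
    return bigrams[prev].most_common(1)[0][0]

def generate(start, length=4):
    """Produce text one word at a time, purely from probabilities."""
    words = [start]
    for _ in range(length):
        words.append(next_word(words[-1]))
    return " ".join(words)

print(generate("capital"))  # → capital of france is paris
```

The toy model "answers" correctly only because the pattern is frequent in its data; it has no concept of France or capitals.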

Recognizing AI limitations

I then showed students how AI can generate convincing outputs that are incorrect. Students learned that AI can’t be blindly trusted, as its outputs reflect patterns rather than factual knowledge. To illustrate this, I used conversations with ChatGPT as examples.

Prompt example: “How many Rs are in ‘strawberry’?”

This question highlights a common limitation of generative AI tools: due to the probabilistic nature of their design, they may confidently provide an incorrect answer, such as 2. Interestingly, if the AI initially answers correctly (3 Rs) but is then told that the correct answer is 2, it will readily accept this incorrect feedback and adjust its response, demonstrating its gullibility and lack of true understanding.

ChatGPT’s confusion regarding the number of Rs in ‘strawberry’ results from a lack of strong patterns in its training data for this specific question. This underscores why it is crucial to critically evaluate AI-generated content.

Conversation with ChatGPT using the prompt: How many Rs are there in the word strawberry? It reads: “How many Rs are there in the word strawberry? / The word strawberry contains three Rs. / No, there are 2 Rs / You’re absolutely right! The word strawberry contains 2 Rs. Thank you for catching that!”

Conversation with ChatGPT showing the limitations of AI.
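For reference, a tool with explicit logic answers this deterministically. The two-line check below is a useful contrast to show students: counting is trivial for a program that actually counts, but hard for a model that only predicts likely words.

```python
word = "strawberry"
r_count = word.lower().count("r")  # counts every occurrence of the letter
print(f"Number of Rs in '{word}': {r_count}")  # → 3
```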

However, when asked, “What is the capital of France?” ChatGPT confidently answers “Paris” because it has seen this pattern frequently in its training data. This time, I should not be able to make it change its mind: during training, the model was exposed to many sentences where the word ‘Paris’ appeared alongside ‘France’ and ‘capital’, so this sequence is very robust in terms of probabilities.

Encouraging active engagement

In the last part of the 1st workshop, I emphasized the importance of user input in making AI outputs more relevant and original. Without guidance and the user’s personal insights, AI will produce generic, pattern-based responses, but as our input becomes more specific and insightful, the output becomes increasingly relevant, unique and useful.

Workshop 2: AI Ethics

The 2nd workshop was the core of the certificate program. My goal here was not just to equip students with technical skills but to guide them towards becoming educated and responsible AI users. We looked at the darker aspects of AI and discussed how we can adapt our AI usage in ways that minimize negative impacts and promote ethical practices.

The main objectives were to have students:

  • explore ethical concerns related to generative AI tools
  • recognize biases in AI outputs
  • discuss practical ethical practices for using these tools

Exploring ethical concerns

We started by exploring some of the key ethical concerns surrounding AI. The Gamma presentation I designed guided our discussions on critical topics, including:

  • academic integrity
  • biases
  • environmental impact
  • intellectual property
  • misinformation
  • privacy
  • transparency
  • trust

Recognizing biases in AI outputs

One of the most memorable parts of the 2nd workshop was helping students recognize biases in AI-generated content. Biases can originate from both the AI’s training data and our own user input (our prompts).

I used 2 truly eye-opening online articles, “Humans are Biased. Generative AI is Even Worse” and “How AI Reduces the World to Stereotypes”, to teach students about biases. Researchers have found that AI-generated images can reinforce real-world biases, amplifying stereotypes related to gender and skin colour beyond what already exists in reality. For instance, high-paying jobs such as ‘engineer’ or ‘lawyer’ were overwhelmingly depicted as men, while women were predominantly shown in lower-paying roles such as ‘housekeeper’, ‘teacher’, and ‘cashier’.

Similarly, when prompted to generate images of Indian people, AI consistently defaulted to an image of an old man with a beard. These 2 articles had a profound impact on students, sparking critical reflection and meaningful discussions. To further illustrate biases in AI outputs, I shared examples of prompts that revealed how these biases can result from both the training data and the way we write our prompts:

Prompt example 1: “Generate an image of a group of CEOs.”

The image shows a professional meeting with men and one woman seated at a long wooden table in business attire, facing each other. They appear engaged in discussion, with a blurred cityscape in the background.

Image generated with Grok using the prompt: “Generate an image of a group of CEOs.”

This prompt highlights the biases present in Grok’s outputs, especially regarding leadership representation. The image portrays a group of individuals seated around a boardroom table, with notable uniformity in their attire and demeanor, aligning with traditional Western-centric stereotypes of corporate executives. While there is limited gender diversity with 1 woman among a group of men, racial and cultural diversity also appear minimal. This underscores how AI systems can reproduce systemic inequalities embedded in their training data. However, alternative image generation models, such as Copilot and ImageFX, often produce more diverse outputs from the same prompt, suggesting these models may incorporate measures to mitigate bias.

Prompt example 2: “Describe a person at work.”

This prompt offers a clear glimpse into potential biases in AI-generated outputs. For instance, the AI response consistently leans toward describing a white-collar professional in an office setting, focusing on roles like typing on a computer, attending meetings, or collaborating with colleagues. This depiction tends to overlook a broader spectrum of workers, such as manual laborers, artists, or other non-office-based professionals. Such patterns arise because training data often reflects texts that predominantly represent white-collar jobs in industrialized, Western contexts.

In the workshop, I encouraged students to design prompts that incorporate explicit diversity and inclusivity statements. By requiring the AI tool to depict varied work environments, industries, and cultural contexts, we can address inherent biases in the training data and generate more diverse outputs.

Prompt example 3: “Give me reasons why the government should not fund art.”

This prompt highlights how biases in our own prompts can influence the AI’s outputs. In this case, the AI provided a detailed list of points supporting the argument against government funding for art, without spontaneously offering a counter-argument. This is not because the AI lacks the capability to provide a balanced view, but because it followed the direction of the prompt, which framed the question from a 1-sided perspective.

This behaviour serves as a reminder that AI doesn’t inherently challenge us. Unless our input contains potentially harmful or blatantly immoral elements (e.g. “reasons why the government should sell human babies to fund the army”), the AI will generally produce an output that aligns with the direction of our prompt. In other words, the AI’s output acts as a natural and probable continuation of the sequence of words in our input, including any biases or societal stereotypes embedded within it.

To reduce the impact of biases in our prompts, students proposed creating prompts that invite more balanced outputs. For instance, instead of asking only for reasons why the government should not fund art, a more neutral prompt might be: “What are the arguments for and against government funding of art?” By explicitly asking for both sides, we can guide the AI to generate more comprehensive and nuanced responses, mitigating the risk of reinforcing biases.

Discussing practical ethical guidelines 

To wrap up the 2nd workshop, I introduced an ethical prompting guide, providing students with practical tips to support responsible AI practices:

Ethical Prompting Guide handout (workshop 2)

One main objective of the guide was to help students understand that basic prompts that lack personal insights and detailed instructions produce outputs based solely on patterns in the AI’s training data. This raises ethical concerns, as much of this data has been used without the original creators’ consent. By incorporating our unique perspective into our prompts (e.g., class notes, preliminary analyses, brainstorming notes), we reduce reliance on these patterns and contribute to more ethical and original content creation.

More ethical prompting = better AI output

Perhaps the most important takeaway from the whole certificate is that ethical prompting leads to better AI outputs. For instance, when users incorporate their own insights into their prompts, they move beyond relying solely on patterns and training data, resulting in more original and meaningful outputs. When users design neutral prompts, they generate outputs that are more diverse and inclusive, reflecting multiple viewpoints. When users apply critical thinking to evaluate the quality of the AI’s output, they ensure that the responses are accurate, relevant, and aligned with their goals.

However, to create meaningful AI-assisted work, students must first develop strong personal insights and critical thinking abilities without using AI. Without cultivating these foundational aspects of their intellectual development, their use of AI will only result in generic, pattern-based outputs with limited practical value.

I provided the example of using a simplistic prompt such as “Create an image of a man in a coffee shop in Manhattan.” This would likely result in a clean and fashionable, yet predictable and soulless image. However, I explained that by refining the prompt with their own insights (for example, “Make it black and white, with moody lighting, and focus on a distinctive man deep in thought, smoking”), they can guide the AI to produce a more interesting output that better reflects their personal vision and creativity.

The left image depicts a moody black-and-white coffee shop scene with a distinctive man smoking, conveying depth and emotion. The right image shows a clean, brightly lit coffee shop with a generic man, polished but predictable. Arrows link the right image to the left, accompanied by text. It reads: "With your insights, better AI output / more ethical, less reliance on training data."

2 AI-generated images showing the difference between using a simplistic prompt (on the right) and an insightful prompt (on the left).

Finally, I introduced a handout for submission that required participants to choose 3 values that mattered most to them (such as truth, trust, avoiding bias, etc.) and identify specific actions they can take to align their AI use with those values. For example, someone concerned about the environment might focus on limiting unnecessary AI use and avoiding the use of generative AI for tasks like web searches, as these tools generally consume more energy than traditional search engines. This activity helped participants connect ethical principles to their practical use of AI.

Example of a student’s copy of the handout “Three Ethical Concerns or Core Values that Matter to Me”. It reads: “Privacy / Why this matters to Me! / Respects individuals and their personal boundaries. Maintains appropriate social boundaries. Limits how much others know about us, thereby reducing their influence or control over our lives. Supports freedom of speech and choice by removing the pressure of constant surveillance. Provides comfort and security when using tools or platforms. / Trust (in social relations and tools) / Why this Matters to Me! / Forms the foundation of healthy social and societal relationships. The absence of trust can lead to emotional and psychological conflict. Enhances sympathy and promotes social reciprocity. Facilitates the development of beneficial, supportive relationships. Trust enables the building of relationships that give life meaning, fostering dedication, the use of talent, and personal energy. / Autonomy / Why this matters to Me! / Reduces reliance on others or on external tools. Encourages the development and maintenance of personal skills and competencies. Promotes critical thinking and emotional growth. Decreases feelings of helplessness and increases self-reliance. / Specific Actions I Can Take to Address these Concerns or Support these Values / Avoid inputting sensitive or personal information into large language models (LLMs) or AI tools. Regularly review and adjust privacy settings in AI tools, often manually, to ensure the highest (or most appropriate) level of privacy. Select AI platforms that emphasize privacy and data security, opting for those with stronger data protection measures. Carefully read and understand the privacy policy of an AI tool before using it to be aware of how your data is handled. Be transparent about using AI for academic, professional, or personal purposes, openly disclosing its role.
Check the accuracy of AI-generated content to train yourself to discern reliable information and develop a sense of when AI can be trusted. Assess the reasoning and processes behind AI-generated results to understand the tool’s limitations and capabilities. Maintain authenticity and ensure that AI use does not overshadow or replace human creativity and originality. Limit the use of AI to tasks where traditional tools (e.g., search engines) are insufficient or when efficiency and effectiveness are significantly enhanced. Use AI tools as a means to support and augment your thinking rather than to replace or dictate your thought processes. Establish a clear understanding of your goals before using AI, so the tool serves as a guide rather than the driver of the task. Comprehend the rationale behind AI outputs to ensure you maintain control over your decisions and do not become overly reliant on AI recommendations.

Example of the handout “Three Ethical Concerns or Core Values that Matter to Me” from a student

Handout on ethical choices (workshop 2)

Workshop 3: Prompt Engineering

During the 3rd workshop, I guided students in designing comprehensive, structured prompts that moved beyond simple task instructions. I introduced a template that requires students to slow down and analyze their objectives and context before engaging with AI. This deeply reflective process results in more original and valuable AI outputs that rely on the student’s own analytical insights rather than strictly on patterns in the AI’s training data.

Structured Prompting handout (workshop 3)

Structured prompting pushes students to go beyond 1-sentence prompts by asking themselves critical questions such as:

  • What role do I want to give the AI?
  • What is the AI’s task? Can I divide the task into a sequence of tasks?
  • What’s the context?
  • Why do I need this?
  • What information does the AI need to perform this task?
  • What format should the output have?
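The questions above can be turned into a simple fill-in template. The sketch below assembles the answers into one structured prompt; the field names are my own shorthand for the template's questions, not part of the workshop handout.

```python
def build_prompt(role, task, context, purpose, information, output_format):
    """Assemble a structured prompt from answers to the template's questions."""
    return "\n".join([
        f"Role: {role}",
        f"Task: {task}",
        f"Context: {context}",
        f"Purpose: {purpose}",
        f"Information to use: {information}",
        f"Output format: {output_format}",
    ])

# Hypothetical example of a student's filled-in template.
prompt = build_prompt(
    role="experienced writing tutor",
    task="give feedback on the thesis statement below",
    context="a first-year college English essay on comic books",
    purpose="to revise the thesis before submitting the draft",
    information="Thesis: 'Watchmen redefined the superhero genre.'",
    output_format="3 bullet points, each with a suggested rewrite",
)
print(prompt)
```

Filling in each field forces the reflection the workshop asks for: the student must articulate role, context, and purpose before the AI ever sees the task.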

In this final workshop, I stressed the importance of breaking the task into smaller steps, rather than asking the AI to generate a complete answer all at once. For example, instead of asking for a summary of a text right away, students could ask the AI to:

  • identify the main points in the text
  • evaluate which points are most important in the given context
  • generate a comprehensive summary based on the most relevant points

This step-by-step prompting makes the AI ‘think’ by forcing it to produce words before generating its final answer.
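The summary example above can be sketched as a simple chain, where each step's output is fed into the next prompt. The `ask` function below is a stand-in for any chat-model API call — here it just echoes, so the sketch runs without a model.

```python
def ask(prompt):
    """Placeholder for a real chat-model call; here it only echoes the request."""
    return f"[model response to: {prompt[:40]}...]"

def summarize_step_by_step(text):
    """Chain 3 smaller prompts instead of asking for a summary all at once."""
    points = ask(f"Identify the main points in this text:\n{text}")
    ranked = ask(f"Evaluate which of these points matter most in context:\n{points}")
    summary = ask(f"Write a comprehensive summary from these points:\n{ranked}")
    return summary

print(summarize_step_by_step("Generative AI tools are trained on large datasets..."))
```

With a real model, each intermediate response gives the student a checkpoint to evaluate and correct before the final answer is produced.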

To guide students in creating their 1st structured prompt, I provided a list of possible AI tasks that they could use as a starting point:

List of AI Tasks handout (workshop 3)

After designing their prompt for their chosen task, students tested it 3 times, refining it after each attempt to improve the quality of the output. Possible refinements included:

  • adjusting the instructions for clarity
  • adding constraints to specify what the AI should not do
  • providing examples of desired AI outputs

Students then submitted a document detailing their entire process from start to finish, addressing key questions such as:

  • What modifications did you make?
  • What’s your critical assessment of your final prompt and the output it produces?
  • What ethical concerns or principles did you consider during the design of your prompt?
  • What ethical concerns or actions should the user take into consideration when using this prompt?

Structured Prompt handout for submission (workshop 3)

This short certificate is a response to the growing demand from students for clear, effective, and ethical guidance on AI. Most students have already integrated AI into their studies, navigating its challenges largely on their own and calling for the guidance we can provide. By exposing the pattern-based nature of generative AI tools and showing the impact of user insights, we can transform our students from casual (and somewhat confused) users into skilled, informed, and responsible practitioners who leverage AI to support (not replace) their learning.

The framework I’ve shared is just a starting point. Feel free to adapt it to your own context. Have you started offering AI training to students at your college? What challenges or successes have you had? I’d love to hear your thoughts or stories in the comments. Let’s continue this important conversation.

About the author

Stéphane Paquet

Stéphane Paquet is an English teacher at Champlain College in St. Lambert, specializing in literature with a focus on comic books. With a background in computer science and over 20 years in education, Stéphane is dedicated to integrating AI into his teaching and actively shares this knowledge with colleagues to help them adapt to the new AI reality. Outside work, Stéphane enjoys listening to electronic music and playing the piano.
