Recently, I discovered a new term: “high structure course.” The expression has been around for about 10 years, but I hadn’t come across it before.
A high structure course is a course that incorporates active learning. In fact, it can also integrate flipped learning (which is often the only way to free up class time needed to substantially engage in active learning).
The difference between a high structure course and any other course that puts students into action is the frequency of formative or summative assessments and feedback from the teacher or peers. In a high structure course, there are typically assignments or tests every week or every class:
- before the class to check if students have completed the pre-class preparation activities (reading texts or watching videos)
- after the class to check their understanding
The intention behind the high structure course is that by providing more support to students, we increase the chances of success for those who lack the necessary tools to structure their studies independently.
Frequent assessments
Even though the expression “high structure course” is new to me, the concept isn’t. However, this expression allowed me to think about active learning specifically from the angle of assessment activities (formative or summative) that precede or follow each class session, somewhat like entrance and exit tickets.
I find it hard to see how there could be too many formative assessments in a course (except maybe for the grading workload they impose on the teacher!). Still, I’m not drawn to the idea of multiplying summative assessments that each count for only 1% or 2% of the final course grade.
I feel like grading an assignment worth 1% devalues the activity in the eyes of many students. Conversely, introducing an activity as formative seems to promote its intrinsic value: we don’t do the activity to get 1 point but to learn and develop the competency targeted by the course. However, I know very well that this doesn’t work for all students. Just as some people choose not to do assignments that count for less than 5% because they “are not worth the effort” (!), others do the bare minimum of formative assessment activities.
In addition, it seems that summative assessments with very low weighting often focus on memorization and basic understanding of concepts (through reading quizzes or short multiple-choice quizzes, for example) rather than on authentic demonstration of competencies. Although this seems like a drawback, one of the studies I read on high structure courses rationalized it:
If the benefit of highly structured courses is to help students gain higher-order cognitive skills, what role do reading quizzes play? By design, these exercises focus on Levels 1 and 2 of Bloom’s taxonomy—where active learning may not help. We concur with the originators of reading quizzes (Crouch and Mazur, 2001): Their purpose is to free time in class for active learning exercises that challenge students to apply concepts, analyze data, propose experimental designs, or evaluate conflicting pieces of evidence.
Thus, the authors of several studies I consulted are satisfied with the implementation of a large number of “low stakes” summative assessments, sometimes focusing only on pre-class preparation activities for weekly class sessions, sometimes on reviewing concepts covered during the week, and sometimes on both.
Occasionally, summative assessments even take place during active learning (points assigned for the accuracy of responses using clickers, for instance, or participation points for using clickers regardless of the accuracy of responses). Personally, I would not feel comfortable awarding points for questions students have to answer very quickly during the class session where they are supposed to be learning the concepts. Sure, the weighting can be very low, but I still think it adds stress that takes away some of the fun of learning… As for participation points, I tend to agree with those who believe the grade should (at least aim to) reflect the achievement of the competency, not “good behaviour” in class.
In fact, I believe that alternative grading practices could be a solution to this whole problem (how to value activities essential to student progress without having to turn them into summative assessments). With specifications grading, these activities could be included in the list of tasks to complete to master a specification. With ungrading, they could be evaluated holistically, with their weighting adjusted according to each student’s reality. Using a multiple grading scheme could be another interesting approach.
What do you think? If you implement flipped learning, how do you make sure students arrive well-prepared for class? Do you have trouble conveying to students the value of activities that are essential to their learning? Share your ideas in the comments section; I’m interested in learning more!
Adapting the approach to student needs
Among the readings I did to learn more about high structure courses, I particularly appreciated an article by Anne M. Casper, Sarah L. Eddy and Scott Freeman published in 2019. It discusses how Casper, the lead author, attempted to improve the success rate of her students in a biology course at a level relatively similar to biology courses offered in Natural Science in the Quebec college network.
- Her 1st attempt was to maintain a lecture format but to require the students to complete an online “practice exam” each week to ensure they were up to date with their studies and became familiar with the exam question format. It was a failure: the success rate and exam grades decreased.
- Her 2nd attempt involved active learning in her class, requiring the students to do pre-class readings followed by an online quiz before class. Even though Casper’s approach now mirrored her coauthors’ tried-and-tested approach, it proved disappointing: the success rate and exam grades returned to levels similar to those in a traditional course before Casper’s experiments began.
- Her 3rd attempt was the right one! Casper replaced the pre-class readings with preparation videos (where she explained course concepts in front of a whiteboard, as she would do in a traditional lecture course). The success rate and exam grades significantly increased.
Although Casper provided students with reading guides (reflection questions to answer while reading), a tool that had been shown to be effective in other studies (Lieu et al., 2017; Eddy and Hogan, 2014), she obtained negative results. She hypothesized that students from her institution were less skilled in reading than those from her coauthors’ institutions, limiting the effectiveness of pre-class readings.
Giving up on reading?
This experiment resonated with me since I had just read several texts about declining reading skills among students, including an article from The Chronicle of Higher Education. That text addresses some issues specific to U.S. schools, but others certainly cross the border. In several articles, some individuals deplored the “levelling down” associated with “giving up on” the reading requirements we may have for students. I believe that if we expect our students to read texts they don’t yet have the skills to understand, we have to teach them how to do so [in French]. But that takes time.
At the end of the winter semester of 2024, my colleagues shared many anecdotes about students struggling to decode simple sentences in exam contexts… It’s difficult to know where to set the bar in such a context.
I would also like to hear your opinion on this topic: Do you require specific readings from your students? Do you allow yourself to opt for “difficult” texts? Why? In either case, what are your strategies and successful approaches to make sure your students fully benefit from their readings?
Please share your experiences and suggestions in the comments section!