Article

The Past, Present, and Future of Clickers: A Review

by J. Bryan Henderson * and Elijah L. Chambers
Department of Physics and Mary Lou Fulton Teachers College, Arizona State University, Tempe, AZ 85287, USA
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(12), 1345; https://doi.org/10.3390/educsci14121345
Submission received: 25 October 2024 / Revised: 18 November 2024 / Accepted: 3 December 2024 / Published: 9 December 2024

Abstract

Classroom response technologies, commonly called "clickers," have been a popular teaching tool in many disciplines, and some courses even require them. Despite this excitement and corresponding investment in clicker technology, scholars disagree on the value of clickers. To help support teachers who utilize or are interested in using clickers, we explore the past, present, and future of clickers in education. This manuscript provides a literature review of how clickers are used, their benefits and challenges, and suggestions on the implementation of clicker technologies. Utilizing multiple research databases and a wide range of search terms, we found that two general trends for clicker use became apparent: traditional classrooms enhanced with clickers and classrooms integrating clickers with more novel pedagogical approaches. After separating the papers into groups based on the trend they follow, we identified and recorded the benefits and challenges of each. In turn, we summarize what research has to say regarding both teachers and students for each of these primary outcomes. Building off clicker research both past and present, this review then looks toward the future by providing suggestions for overcoming the challenges faced by students and teachers when using clickers. Furthermore, we recommend important directions for future research on clickers, including the need for more empirical studies of how different uses of clickers can benefit different learners in increasingly equitable ways.

1. Introduction

“Clickers” are a classroom response technology that allows teachers to collect student responses en masse, whether to polls or to questions. New technology has provided new ways to use clickers: voting, written responses, gathering numeric responses, gathering drawings, and more. Yet the basic definition holds: a prompt is given and student responses are collected electronically. In turn, clicker technologies often provide teachers with different ways of displaying tabulations and summaries of the student responses collected electronically. Showing students these results can serve as a context for class-wide and/or peer-to-peer discussion of the polling results (e.g., if there are disagreements, students can be asked to explore these differences with the goal of trying to reach a consensus). Traditionally, these devices, and by a similar token much of the literature reviewed in this work, focus on in-person learning. While in-person learning has been limited by recent events, clickers under this broad definition persist in the classroom in a variety of forms. Web-based systems like Kahoot! have grown during the COVID-19 pandemic [1,2]. These systems still use the model of posing a question and eliciting responses from students. Identifying best practices and models is crucial, irrespective of the format of the class and the type of clicker used: physical, app-based, or web-based. Clickers are not a new technology [3], but educational institutions are giving them increasing attention as a teaching tool. Initially, instructors used clickers as a direct substitute for hand-raising, but in more recent years, instructors have used clickers to promote interaction and discussion in the classroom setting [4,5]. Instructors currently use clickers in two main ways: (1) to enhance traditional classrooms or (2) in combination with novel pedagogies. In enhanced traditional classrooms, instructors add clickers but not novel pedagogies. Clickers have administrative value when used in this fashion (grading, collecting and distributing assessment material, attendance, etc.); they can ease the instructor’s burdens [4,6,7].
The use of clickers with novel pedagogies brings not only administrative value; it can also increase student learning [8,9,10,11,12,13]. Many novel pedagogies use clickers to facilitate discussion. Research shows learning gains for students and improved student perception with novel pedagogies. Still, students and instructors struggle with clickers in both categories of use. In enhanced classrooms, instructors are concerned about technology, clicker questions, content coverage, and the usage of time. Students may struggle with technology, financial burdens, and reticence to participate. Clicker research, in general, needs to build upon surveys of student and teacher perceptions toward understanding the effects of clickers. Perception data will aid in providing a fuller understanding of how teachers and students are impacted when working with clickers. Preliminary work has shown differential outcomes for specific student cohorts. Fostering more inclusive clicker classrooms demands a better understanding of this impact. This manuscript reviews the past and present of clicker use to explore potential best practices across the two discussed trends, and it concludes by discussing recommendations for future clicker use and research.

Different Names for Similar Technologies

Clickers are an educational technology for eliciting student responses (anonymous or identified) and then compiling, displaying, or recording those responses in real time. The manufacturing and distribution of clickers has become a multi-million-dollar industry with a growing presence in many classrooms [14]. More than two decades have passed since the National Research Council’s How People Learn [15] identified clickers as a promising new trend in education, and with clickers being more affordable than ever before, millions of students have now been exposed to the technology.
Kay and LeSage [4] found 26 different names for technologies that allow for the real-time recording and display of student response data. These names include student response systems [16,17], audience response systems [18], personal response systems [19], electronic response systems [3,20], electronic voting systems [21,22], and perhaps most commonly, clickers [23,24]. Hereinafter, we use the term clickers to refer to these systems generally. In addition, while the bulk of the studies we discuss were conducted before the advent of cloud-based clickers, which allow students to participate via web-enabled devices, many of the findings we present also apply to these newer clicker technologies.

2. Methods

The method for this literature review began with three highly cited publications involving clickers: Eric Mazur’s Peer Instruction [25], Kay and LeSage’s review of clicker literature [4], and David Banks’ Audience Response Systems in Higher Education: Applications and Cases [26]. These works were utilized to establish a set of keywords that would serve as the basis for the first search round (e.g., efficacy, learning gains, impacts, pros/cons, negative impacts, positive impacts). These keywords were entered with Boolean OR (|) and AND (+) operators into multiple Internet search engines and databases: Wiley Online Library, Arizona State University Library OneSearch, EBSCO Online Library, and Google Scholar. Starting with more recent articles, different names used in the literature for clickers (e.g., classroom response systems, audience response systems, student response systems, educational response systems) were identified and used in different search combinations with the original set of keywords. Again, starting with the most recent articles first, searches with these new keyword combinations revealed recurring keywords and phrases related to pedagogy (e.g., clickers active learning, clicker pedagogy, classroom response system pedagogy, clicker best practices). In turn, new literature searches with these keywords related to pedagogy took place in various combinations with the keywords utilized in previous search rounds. This resulted in a new round of search results with recurring themes that included ethnicity, gender, equity, culture, differential impact, inequality, and negative impacts.
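As an illustration of this iterative procedure, the short sketch below (our own construction; the actual searches were entered manually, and exact query syntax varies by database) shows how the clicker-name and keyword sets described above can be crossed into Boolean query strings:

```python
from itertools import product

# Keyword sets drawn from the search rounds described above (abbreviated).
clicker_names = [
    "clickers",
    "classroom response systems",
    "audience response systems",
    "student response systems",
]
outcome_terms = ["efficacy", "learning gains", "positive impacts", "negative impacts"]
pedagogy_terms = ["active learning", "pedagogy", "best practices"]

def build_queries(names, modifiers):
    """Cross every clicker name with every modifier using a Boolean AND."""
    return [f'"{name}" AND "{modifier}"' for name, modifier in product(names, modifiers)]

round_two = build_queries(clicker_names, outcome_terms)     # names x outcome keywords
round_three = build_queries(clicker_names, pedagogy_terms)  # names x pedagogy keywords

print(round_two[0])  # "clickers" AND "efficacy"
```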
Throughout the iterative search of the clicker literature, two general trends for clicker use became apparent: (1) traditional classrooms that are enhanced with clickers and (2) classrooms integrating clickers with more novel pedagogical approaches. For both these general trends of clicker use, articles focusing more on the benefits of clickers and articles focusing more on the challenges of using clickers were found. Table 1 in the Results Section presents a matrix organizing the results of this multi-round literature search.

3. Results

3.1. The Past: Clickers as a Passive Response Tool

3.1.1. A Brief History of Clicker Development

The introduction of clickers into higher education began when Stanford University secured funding to build the Stanford Center for Research and Development in Teaching in the 1960s. This building was equipped with the latest technologies, including a clicker at every desk [27]. While many of the technologies aimed to aid the instructor in presenting material, clickers offered a novel method for students to respond during class, receive feedback, and gauge their own understanding. The clickers also provided instructors with insight into students’ grasp of a subject [3]. At their inception, clickers replaced other modes of student response (e.g., polling, raising hands, cold calling), an innovation in classroom dynamics. The classroom was becoming more interactive, moving away from the traditional lecture format in which instructors only presented to students during the class period. As the use of clickers began to spread, scholars began studying the effects. Early work showed no additional learning gains when compared to a normal lecture classroom [28,29,30,31,32,33]. However, researchers found that students had a positive perception of clickers and their use in the classroom [28,31,32,33,34]. Researchers also commented on student engagement in class, since clickers allowed participation even in large group settings. Having class be more interactive and having students more engaged is part of the reason researchers saw potential in clickers.
Yu-Kuang Chu [33], in a report to the National Science Foundation, noted how clickers provided instructors the ability to generate discussion among students. Clickers gave students a chance to meet new people and collectively discuss their ideas. Chu also noted their use for collecting information on student opinion, deducing comfort levels with the material, and monitoring students’ personal study. With positive findings, more institutions bought and incorporated clicker devices into classrooms and learning environments [35].
Despite the excitement surrounding clickers, research on clickers sharply declined after their initial implementation. Judson and Sawada [3] attribute this “cool-down” to universities losing interest in clicker technology. Louis Abrahamson [35] recounts his experience in designing clickers and the struggles related to early devices. His account confirms that although clickers had promise, they were prohibitively expensive, required too much maintenance, and were limited in capability, and he cites early users saying that “[the clickers] never worked and [were] a total pain to use”. Despite these issues and the temporary decline in publishing on the topic of clickers, studies continued to show benefits from using clickers [21]. Modern technology has made clickers more cost-effective and accessible to students. Due to these gains, a renewal of excitement took place in the 1990s [35]. As instructors began to reintroduce clickers into classrooms, researchers, instructors, and students observed similar benefits. Furthermore, instructors and researchers began to look for ways to use clickers beyond simple polling.

3.1.2. Similar Technology Can Use Dissimilar Pedagogy

A key feature of clickers is instant feedback, providing instructors with a real-time gauge of student understanding [12,21,22,23]. Clickers can make both teachers and students privy to immediate feedback, which can be used for formative assessment [36,37]. This immediate feedback can be used to adjust teaching in the moment. Rather than waiting for a more formal assessment of knowledge at the end of a teaching cycle, teachers can use the data clickers collect to reteach as soon as a misconception or gap in knowledge is identified. This mode of teaching is not inconsequential. Yeh [38] argues for the implementation of formative assessment systems that track students’ math and reading performance two to five times per week, an approach found to be 124 times more cost-effective than class size reduction in promoting student learning.
However, as with many interventions, different instructors use feedback generated by clickers in different ways. When the first uses of clickers were documented in the 1960s, behaviorist stimulus-response principles were at their peak and, hence, clickers were used primarily to provide immediate response feedback on student performance [3]. More than half a century later, there are a growing number of constructivism-influenced uses of clicker feedback, the most common of which is the use of clickers to promote student discussion with both their instructors and their peers [3]. The difference in clicker pedagogy is not just a historical one. In today’s clicker classrooms, there is variation in how instructors pose clicker questions, the degree to which they interact with students during peer discussion sessions, and the norms of discussion [39].

3.2. The Present: Two Prevalent Trends in Clicker Use

3.2.1. Trend 1: Using Clickers to Enhance More Traditional Classrooms

With advances in clicker technology, companies, researchers, and instructors have developed more ways of using clickers to save time and effort. These include methods for administering tests, recording attendance, quizzing, grading, verifying reading completion, and tracking participation. These practices benefit instructors by decreasing the time spent on collection and grading, increasing efficiency in classroom practices, and decreasing paper waste. Moving forward, when we discuss this trend as the enhancement of a more traditional classroom, we are referring to instructors using clickers for increased efficiency in the more administrative aspects of teaching. This contrasts with utilizing clickers to transform the pedagogical orientation of the teaching itself, which we will discuss later as a second major trend in contemporary clicker use.
  • Student Perception of Clickers in More Traditional Classrooms
When instructors implemented clickers in traditional classrooms, researchers identified and measured the benefits. Kay and LeSage [4] examined sixty-seven peer-reviewed articles and book chapters assessing clicker use, with sixty-four from the year 2000 or later. Thirty-eight of these articles examined attitudes towards clickers, and thirty-six of those articles reported generally positive attitudes towards clickers by students and/or teachers. Kay and LeSage point out a wide range of speculation in the literature as to the potential benefits of using clickers in the classroom. We summarize some of those benefits now.
Attendance and Active Participation. Attendance increases as students become an essential part of the learning environment [40]. By assigning points to attendance, instructors encourage students to come to class through a grade-based reward for attending. Clickers have also proven useful in dealing with the large class sizes common at larger universities by allowing more students to be a part of classroom activities [40]. With more chances to participate, students tend to pay increased attention during lectures [21,41,42]. Clickers help in creating interactivity or active learning [43]. This dynamic is a highly discussed benefit of clickers, as students become contributors to the lecture rather than idle participants. An important note: participation and student engagement are complex aspects of the classroom, and many factors influence them. Clickers are not a “catch-all” solution for engaging learners. When used well, clickers assist in creating this environment by utilizing discussion, instant feedback, and thought-provoking questions to engage learners [7,12,21,23,40,41,42].
Anonymity Can Mitigate Stereotype Threat. Anonymity is a benefit irrespective of class size. When instructors poll or call on students, students can experience social pressure. By using clickers, students maintain anonymity from peers and can contribute without the worry of this social pressure [20]. Some clicker systems offer the option of anonymity from the instructor as well, but the effects of this were not explored in this work. It is likewise important to note that while students maintain anonymity from peers, responses can still be tied to students for the logistical purposes of the instructor.
Like social pressure, stereotype threat is another challenge that students may face in the classroom [44]. Stereotype threat is the risk one feels of conforming to a negative stereotype held about them in an activity they care about. When faced with a task in a domain where success is important to them, students belonging to groups with negative stereotypes in that domain perform worse than their counterparts not under the threat of stereotype [45]. Even if a student does not believe a stereotype about themselves to be true, the mere cognizance of not wanting to fall under that stereotype can produce unnecessary anxiety and cognitive effort directed away from success at the specific task at hand. Groups susceptible to stereotype threat in academia include minorities and women [46]. Researchers have found that some students tend to prefer clickers over traditional methods of classroom response because the anonymity of clickers can reduce the anxiety and potential stereotype threat that come from having their contributions judged by classmates [4,6,20].
Clickers Can Streamline Testing. Clickers can be particularly useful in testing scenarios. Clickers as a testing tool can eliminate the need for using scantrons and other paper instruments. This change helps move toward sustainable testing practices and reduces the time needed to distribute materials. Clickers can provide an instant grade and instant feedback on performance to students as well. Instant feedback allows students to better prepare for future exams, as they know instantly what areas they should focus on during future study [9].
  • Challenges Using Clickers in More Traditional Classrooms
Despite all the benefits of enhancing a traditional classroom with clickers, instructors and students face issues in these classrooms. Instructors have difficulties setting up the technology, using it during class, and developing activities. Students also face technological challenges and have concerns related to certain clicker practices [4,23]. While some practices benefit instructors, not all of them benefit students. We must sift through these practices and find the best ways to aid instructors while minimizing harm to students’ perception of a technology that has other beneficial uses. Predictably, instructors face a learning curve with novel technologies, and instructor training and capability with clickers can be a concern [47]. A lack of technical training can hinder the ability of instructors to use clickers effectively in the classroom [40,43,48]. A lack of knowledge about the implementation of educational technology leads to wasted time and effort while not reaping the benefits of clickers [49]. Aside from training, several other concerns are prevalent in the literature.
Content and Questions Balance. A common concern related to clicker technology is that instructors cover less overall content when using clickers in their courses [20,48,50,51]. This can be a major challenge for implementing clickers when instructors may not have the option of reducing the amount of material covered by their courses [52]. In any system or classroom that strays from the lecture-only model, there will be a logical decrease in delivered material as time is spent on activities other than content delivery. The benefits of these different modes can warrant the tradeoff.
Question Development Time. Instructors view the process of developing questions for clicker activities to be time-consuming and laborious [52,53,54,55,56]. While this may seem like a menial task, questions are a key ingredient for effective clicker implementation. Even if the questions are used simply to engage students and collect participation points, poor questions will yield poor results [57].
Student Challenges Using Clickers in More Traditional Classrooms. A study discussing student concerns with clickers found that “Students mentioned technical problems (25%), poor use of the technology (15%), and wasted class time in fixing problems (12%)” (as cited in Lantz, 2010, p. 557) [49]. Of those who responded positively to clickers, 47.8% reported issues with the technology. These concerns are reminiscent of the concerns raised by teachers who first worked with early clickers [35]. It takes time for students to become familiar with clicker technology, just as instructors need that time [43,52]. Even after students become accustomed to clickers, about 30% still do not approve of them [23]. We now summarize some of the challenges that might play a role in why nearly one-third of students may not approve of clickers.
Clickers Can be Associated with the Anxieties of Testing. The ease of collecting student responses may be a prime reason for instructors to use clickers to administer exams. Students, however, dislike using clickers on tests [23]. Among some students, test anxiety contributes to negative views of clickers. Such anxiety may lower students’ levels of achievement, decrease social functioning, and lower self-worth [58]. Associations are a factor in test anxiety [57]. When instructors use clickers for tests, students associate clickers with tests, even when instructors are using clickers for non-assessment activities. Basing a large percentage of the class grade on clicker participation can exacerbate this association [23].
Manipulating Attendance Numbers. Some students do not like instructors monitoring attendance via clickers [23]. Since students dislike using clickers for attendance, they find ways to undermine the data produced by clickers. Popular media threads contain multiple students openly admitting that they use other people’s clickers [59]. Clicker abuse has even made headlines. At Dartmouth College, instructors caught students using absent students’ clickers to falsify attendance. Clicker points represented 15% of students’ final grades, so students would use other students’ clickers to make them appear present and receive the points. Out of 272 students in the class, 64 (24%) were accused of cheating using this method [60]. The negative perception of clickers when used for attendance is particularly concerning, as preliminary research shows that negative views may generalize to other uses of clickers [23].
Clickers Can Add to Financial Burden. Clicker costs add to rising educational expenses [61]. While clickers have educational potential, instructors and institutions should consider the financial burden on students. Universities can distribute the costs of clickers in multiple ways. For example, some companies roll the expenses for set-up, maintenance, or upgrading of classroom components into the cost of clickers and student subscriptions. Schools and students could split these charges, but in many higher education institutions, students pick up the full cost. A common clicker device can cost upwards of USD 50, while a subscription can cost USD 24 a year [62]. Based on these costs, at USD 74 per student, the start-up cost for a 30-student class would be USD 2220. These calculations assume a single class using the clickers for the first time. If students use clickers in multiple classes, the cost per student per instructional hour decreases. Still, financial burdens may prohibit some students and institutions from purchasing clickers. There are clicker options, such as web-based clicker-type programs, that are free to use. These offer a valid alternative, but the decision to utilize them rests with teachers, schools, and administrators. Some major universities require students to purchase approved devices, which negates the benefit of these free alternatives.
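To make the arithmetic explicit, the sketch below reproduces the start-up figure above and illustrates how per-student cost per instructional hour falls as clickers are reused across classes. The USD 50 device and USD 24 subscription come from the text [62]; the 45 instructional hours per class is our own hypothetical assumption:

```python
DEVICE_COST = 50.0        # one-time clicker hardware cost per student (USD) [62]
SUBSCRIPTION_COST = 24.0  # annual subscription cost per student (USD) [62]

def startup_cost(n_students: int) -> float:
    """First-time cost of equipping a class: every student buys a device and subscription."""
    return n_students * (DEVICE_COST + SUBSCRIPTION_COST)

def cost_per_hour(n_classes: int, hours_per_class: float = 45.0) -> float:
    """Per-student cost per instructional hour when one device serves several classes."""
    return (DEVICE_COST + SUBSCRIPTION_COST) / (n_classes * hours_per_class)

print(startup_cost(30))            # 2220.0 USD, matching the figure above
print(round(cost_per_hour(1), 2))  # ~1.64 USD/hour for a single class
print(round(cost_per_hour(4), 2))  # ~0.41 USD/hour when reused across four classes
```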
Greater Reticence to Ask Questions Out Loud. Carnaghan and Webb [63] compared the number of questions and answers for two groups of students before and after the introduction of clickers. When averaging individual responses in a class session, both groups of students asked fewer questions during clicker use. This decrease may stem from students preferring the anonymity of clickers and not wanting to leave that safety by asking questions out loud. Some clicker technologies include the option for students to ask questions with their devices, thereby maintaining the safety of anonymity while still promoting questioning.

3.2.2. Trend 2: Using Clickers with Novel Pedagogies

  • Four Novel Clicker Pedagogies
We focus on four pedagogies that use clickers in novel ways, as opposed to merely improving the administrative efficiency of a more traditional classroom. This is by no means an exhaustive list, and many other pedagogical approaches can be combined with clickers. These are simply four possibilities that we identified as working well with clickers and that have studies showing the benefits of the pairing.
Cooperative Learning. The first of these is “cooperative learning”. Cooperative learning is predicated on the idea that learning is a group effort and that working together with other students can improve learning and the learning environment [10,64]. Students engage in group activities where they must complete structured tasks and assignments. This pedagogy has shown benefits to students in terms of retention, engagement, higher-level thinking, and increased grades [10,64,65]. One example of cooperative learning with clickers is cooperative quizzing. Cooperative quizzing has students complete both individual and group assessments. For group assessments, the instructor splits the class into small groups and encourages them to discuss answers and examine why some answers are right and others are wrong. Students work cooperatively to achieve the highest group score, rather than against each other, and can give and receive help from their peers. Although this may be unfamiliar to instructors and students alike [66], discussing and working together on quizzes helps students develop a better understanding of the topics. It is also more in tune with how society works outside of school: students start to learn skills they can implement as they later participate in teams, organizations, and groups in their professional careers. Beyond social learning, Zeilik and Morris [67] found that students had a 10% overall increase in average quiz scores after implementing cooperative quizzing. They cite a cooperative quizzing study by Byrd et al. [68], where quiz scores increased from 57% to 80% and the class GPA increased from 3.80 to 4.33. Pairing this pedagogy with clickers combines the benefits of clickers with the benefits of cooperative learning to improve the learning experience and corresponding learning gains [11,66,67,69].
Peer Instruction. Peer Instruction is another pedagogy that utilizes clickers to help improve student learning. Eric Mazur developed Peer Instruction at Harvard University in the 1990s [25]. In the basic method, the instructor presents a question for students to answer individually, using clickers to poll students on their answers. Depending on the percentage of students responding correctly, the instructor repeats the question and encourages students to discuss their thinking with fellow peers. Following peer-to-peer discussion, the instructor asks students to use their clickers to vote again on the same question. Clickers fit well with Peer Instruction as they speed up polling and ease the burden of recording student responses. Research suggests that Peer Instruction can benefit students and improve learning, sometimes dramatically [12,52,70,71].
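As a rough sketch of the decision logic behind one Peer Instruction question cycle: the 30–70% band below is a commonly cited rule of thumb rather than a prescription from Mazur [25], and instructors tune these thresholds in practice:

```python
def peer_instruction_step(votes, correct_answer, low=0.30, high=0.70):
    """Decide the next move in one Peer Instruction question cycle."""
    fraction_correct = votes.count(correct_answer) / len(votes)
    if fraction_correct < low:
        # Too few correct answers for peer discussion to converge; revisit the concept.
        return "reteach"
    elif fraction_correct < high:
        # The productive zone: pair up, discuss, then re-vote on the same question.
        return "discuss_and_revote"
    else:
        # Most students are already correct: give a brief explanation and move on.
        return "explain_and_move_on"

# Example: 12 of 20 students (60%) initially choose the correct answer "B".
votes = ["B"] * 12 + ["A"] * 5 + ["C"] * 3
print(peer_instruction_step(votes, "B"))  # -> "discuss_and_revote"
```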
Question-Driven Instruction. Beatty et al. [72] present Question-Driven Instruction, which also relies heavily on discussion. Question-Driven Instruction operates on the premise that students receive exposure to the material entirely outside of class and that the purpose of class is to follow the “question cycle”. To begin the question cycle, instructors present students with a question at the beginning of class. Students then break into small groups and argue over their various approaches to the question, utilizing the information they gleaned from the out-of-class materials. The instructor utilizes clickers to poll the students and display the responses. Then, the instructor moderates a class-wide discussion in which students volunteer to explain their reasoning behind differing responses. Depending on the outcome of this interaction, the instructor provides observations, feedback, a micro-lecture, or another question. A class can go through three or four question cycles in a 50-minute class period [72].
Gamification. Gamification provides students with an opportunity to compete, discuss, score points, and be challenged while learning objectives still dictate the goals of the class [73]. Gamification includes game show-style formats: being first to answer, game sounds, countdowns, scaling answers, point-based question selection, leaderboards, etc. [73,74,75]. Students can play by themselves or be a part of a group. Students say that they enjoy gamification in class and that it engages them [74]. Gamification has been shown to increase learning, motivation, participation, and attention [8,73,74,75]. Much of the innovation with gamification already includes clickers because they fit the pedagogy so well. Scoreboards, game-type question structures, recording answer selections, and countdowns are already a part of many clicker programs. Discussion is a prevalent part of this pedagogy as well, suggesting that student discussion is something successful clicker pedagogies have in common.
  • Student Discussion is a Common Thread in Successful Clicker Classrooms
Judson and Sawada [3], in a review of over three decades of clicker use, concluded that clickers are positively associated with student learning only when student discussion takes place. In a review of 76 papers surrounding clicker use, MacArthur and Jones [76] found student collaboration to be a common feature of all the studies that detected statistically significant learning gains. What is it about these student discussions in clicker classrooms that manifests in learning gains?
Crouch and Mazur [12] analyzed clicker votes for an entire semester of introductory college physics. In cases where students voted on the same clicker question both before and after peer discussion about that question, they found that students answering correctly prior to peer discussion tended to maintain their correct position while “the vast majority of students who revise their answers during discussion change from an incorrect answer to a correct answer”. Knight and Wood [77] reported a similar result with the use of clickers in an upper-division course in developmental biology over two consecutive years. They found that, “almost inevitably, when a second vote is taken after 3–4 min of discussion, more than 75% of the class chose the correct answer”. For reference, we will define this phenomenon as convergence.
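To make the convergence measure concrete, here is a minimal sketch (our own construction with hypothetical data, not the analysis code of Crouch and Mazur [12]) that tabulates how students move between answer states across the pre- and post-discussion votes:

```python
from collections import Counter

# One (before_correct, after_correct) pair per student for a single question.
# Hypothetical data chosen to mirror the pattern reported above.
vote_pairs = ([(True, True)] * 14 +   # correct before, stayed correct
              [(False, True)] * 9 +   # incorrect before, correct after discussion
              [(False, False)] * 6 +  # incorrect both times
              [(True, False)] * 1)    # correct before, incorrect after

transitions = Counter(vote_pairs)
revisers = transitions[(False, True)] + transitions[(True, False)]
print("incorrect -> correct:", transitions[(False, True)])
print("correct -> incorrect:", transitions[(True, False)])
# Share of answer changes that moved toward the correct answer ("convergence"):
print(f"convergence among revisers: {transitions[(False, True)] / revisers:.0%}")  # 90%
```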
Given convergence on the correct answer when student discussion takes place between clicker responses, why do many students who first answered incorrectly move to the correct response after discussion? Mazur [25] describes the discussion between clicker votes as an opportunity for students knowing the correct answer to convince those who do not. Furthermore, Mazur advises that, if the percentage of students voting correctly on a conceptual question is less than 50%, “there are too few students in the audience to convince others of the correct answer” (p. 12). Here, students who determine the correct answer impart their understanding to others. This “truth wins” perspective—which essentially characterizes the upper limit of group performance as set by the most able members of the group [78]—is a possible explanation for convergence. Under this view, if students change to a correct answer after being convinced by a peer who answered correctly, they may adopt the position of their discussion partner with little thought. This dynamic could be particularly common if they perceive their peer discussion partners to be knowledgeable on the topic at hand. For simplicity, we define this possible scenario henceforth as imitation, i.e., students in peer discussion groups passively and superficially changing their answer to that of a discussion partner they deem to be knowledgeable and trustworthy.
Smith et al. [79] provide evidence that convergence in peer discussion groups was not the result of imitation. In an introductory genetics course where students voted with clickers, instructors asked students to vote independently on a conceptual question (Q1) and then discuss that question with peers before individually re-voting on the same question after discussion (Q1ad). Then, instructors presented students with a second question (Q2) testing the same concept as Q1, only with different surface features. Smith et al. call these pairs of problems purporting to test the same concept “isomorphs”. Smith et al. found that when students were first incorrect on Q1 but then changed to the correct vote on the same question after discussion (Q1ad), 77% of these students went on to answer the isomorph question (Q2) correctly.
As Q2 had different answer choices and students answered Q2 without peer discussion, this result is not consistent with imitation. More specifically, if students changing to a correct answer on Q1ad were merely memorizing the number or letter of the answer selection suggested by a peer, then the percentage of the time they also answered correctly on Q2 should be more comparable to random guessing (e.g., 25% if there are four answer choices). Answering Q2 correctly at a rate of 77% after moving from a wrong answer on Q1 to a right answer on Q1ad suggests something deeper is taking place during the discussion between Q1 and Q1ad. Furthermore, Smith et al. [79] found that even when a student answered both Q1 and Q1ad incorrectly, 44% of the time they went on to answer Q2 correctly. This success rate is greater than what would be expected from random guessing, which is particularly noteworthy as the researchers deliberately withheld any feedback on the classroom voting for Q1 and Q1ad.
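A simple binomial-tail calculation shows why a 77% success rate on Q2 is inconsistent with imitation. The cohort size below is hypothetical (Smith et al. [79] report percentages for their own sample); the point is only that such a hit rate is astronomically unlikely under 25% random guessing:

```python
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more correct by guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical cohort: 50 students moved from incorrect (Q1) to correct (Q1ad),
# and 77% of them (about 38) then answered the isomorph Q2 correctly.
# With four answer choices, pure guessing gives p = 0.25.
print(p_at_least(38, 50, 0.25))  # vanishingly small (far below 1e-10)
```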
Smith et al. [79] found that when 328 students participated in an end-of-year survey, nearly half of them disagreed with the following statement: “When I discuss clicker questions with my neighbors, having someone in the group who knows the correct answer is necessary in order to make the discussion productive”. This result is consistent with the findings of Schwarz, Neuman, and Biezuner [80], who found that groups are capable of producing correct answers even when all of the group members were initially incorrect. Hence, when it comes to student discussion being a common thread in clicker studies that demonstrate learning gains, it is hard to say that the result is due merely to truth winning over students who imitate, especially considering evidence from multiple studies showing that truth can emerge even if no discussion partner originally understood that truth.
  • Challenges with Novel Clicker Pedagogies
Instructor Challenges with Novel Clicker Pedagogies. Kay and LeSage [4] mention that teachers, due to inexperience, may struggle to fully utilize the student feedback provided by clickers. While clicker companies and academic institutions may provide suggestions and training [9,81,82,83], there will still be a learning curve. Specifically, teachers must learn how to use the new feedback and the results of formative assessment to actively alter instruction throughout the class. This method is foreign to many instructors, and some have expressed concern over this instruction technique [35,84]. However, with appropriate execution, there is a gain in teaching efficacy [84].
Student Challenges with Novel Clicker Pedagogies. Some students might feel that the discussion stemming from clicker questions is confusing and uses more class time than necessary [21,85]. James and Willoughby [86] observed a large proportion of discussions surrounding clicker questions to be underproductive or off-topic. This detracts from the usefulness of discussion-based pedagogies and can lead students to think of the activities as a waste of time. While students may be encouraged to discuss a topic during a clicker exercise, this does not guarantee what the conversation will look like; some students may dominate conversations, others may not be able to participate, and discussion may drift off-topic [49,77,85,87,88]. In smaller classrooms, there may be an opportunity for an instructor to redirect the conversation back to the main topic, but in a classroom of 300, an instructor has no way to oversee all interactions. This poses a serious question about the efficacy of clickers and their subsequent discussions. With practice and experience, however, student discomfort with the discussion that accompanies clicker use does appear to diminish [89].

3.3. The Future: Recommendations for Future Research and Development of Clickers

3.3.1. The Four General Outcomes of Clicker Use

Depending on how instructors combine pedagogy and clicker use, four main outcomes can occur. Table 1 illustrates how research on clickers falls into these four broad combinations of clicker technology and pedagogical techniques.
If an instructor employs effective pedagogy and uses clickers, both the instructor and student can benefit (Table 1: Top right). Effective pedagogy without clickers may prompt learning gains among students but have no logistical value to the instructor (Table 1: Bottom right). If a classroom does not have effective pedagogy but uses clickers, there is value to the instructor for logistical reasons: attendance, ease of grading, etc. (Table 1: Top left). If there is no effective pedagogy in place and no clicker use, there are no learning gains for students and no value for the instructor (Table 1: Bottom left). The ideal outcome is when both effective pedagogy and clickers are present. However, many of the challenges we have summarized previously can make it difficult to achieve the ideal clicker classroom.
To help instructors move toward this goal, we provide some best practices and recommendations to address the challenges inherent in optimizing a clicker classroom.
Table 1. A matrix depicting how research on clickers falls into four possible combinations of clicker technology and novel pedagogy.

Clicker technology present, novel pedagogy not present:
  • Students’ experience: Positive Perception [28,32,33,34]; Immediate Feedback [24,34,53,90]; Anonymity [34]; Increased Participation [53]; Dislike of Clicker Practices [23]
  • Teachers’ experience: Logistical Benefits [34,53,90]; Content Balance [23]

Clicker technology present, novel pedagogy present:
  • Students’ experience: Learning Gains [8,12,23,72,85]; Flow Being Achieved [5]; Positive Perception [5,12,23,50,74,85,91]; Immediate Feedback [5,72,74,85,91]; Anonymity [12]; Increased Participation [21,23,50,72,74,85,91,92,93]; Unfamiliarity [85]; Misuse of Discussion [21,86]
  • Teachers’ experience: Logistical Benefits [50,68,74,92,93,94]; More Time and Effort [23,84]; Foreign Method [84]

Clicker technology not present, novel pedagogy not present:
  • Students’ experience: No Benefits [90,95,96]
  • Teachers’ experience: Convenience [90,98]

Clicker technology not present, novel pedagogy present:
  • Students’ experience: Learning Gains [12,25,52,70,71,97]; Positive Perception [12,52,70,71]; Increased Participation [12,52]
  • Teachers’ experience: Lack of Logistical Benefits [12,68,71,94,97]

3.3.2. Addressing Instructor Challenges

Instructors should not have clicker technology merely dropped into their classrooms. To capitalize on the potential learning affordances of clickers, instructors must see the purpose and benefits of using clicker technology. By addressing technological and pedagogical concerns together, we enter the intersection of technical competence and pedagogical content knowledge where the greatest benefits to students and teachers lie.
  • Clicker Questions
Questions help fuel positive pedagogical practices. However, instructors have concerns about the number of questions, the writing of questions, the asking of questions, and what to do with feedback. Caldwell [23] draws on the works of Beekes [92] and Wit [99] to formulate a list of ways to improve questions and methods of questioning. Caldwell recommends that instructors pair at least a few questions with the Peer Instruction pedagogy. Robertson [54] also provides a useful list of twelve tips for utilizing clickers, over half of which pertain to clicker questions. Clicker questions should be short, easy to read, and demand some level of confidence from students. Too often, students can guess the right answer after identifying and eliminating obvious distractors. While this is a well-known test-taking strategy, when instructors ask questions in class to gauge learning and identify what material needs revisiting, such student strategizing hinders that goal.
  • Balancing Clicker Time with Course Content
A way to prevent clicker use from reducing the amount of content originally intended to be covered in a course is to have students engage with the material before class [25,52,100]. As discussed by Knight and Wood [77], class time is not the time to introduce brand-new material. In this model, instructors assign students material before class, and class becomes a place to learn what students struggle with and to focus on those key areas of misunderstanding. However, this approach is not without concerns: recent studies suggest that only about 70% of students study the material before class [101,102].

3.3.3. Addressing Student Challenges

While instructors bring clickers into their classrooms, students experience the deployment of the technology, and, ultimately, their learning should be the reason behind classroom practices. Instructors and researchers should address students’ potential challenges with clicker classrooms.
  • Addressing Association of Clickers with Testing Anxiety
As noted earlier, some students associate clickers with the anxieties that come with testing. A practical approach to this issue would be to solicit student views before finalizing testing strategies. Instructors can poll students about their views on taking tests with clickers. Another approach would be to reassess the need for high-stakes exams. For example, the novel pedagogies we have presented emphasize using clickers for low-stakes, engaging activities rather than exams. Additional studies on how using clickers for testing affects students’ willingness to use, and preference for, clickers would also be valuable.
  • Addressing Manipulation of Attendance
Once students are familiar with clickers and know how to exploit them, cheating on attendance can occur. To mitigate this possibility, we recommend finding ways other than clicker points to reward students for coming to class. If there are discrepancies between visible attendance and response counts, instructors can perform random checks of students’ IDs to see which students responded but are not present. A physical sign-in may also help this process. These additional steps should only be taken if a problem with attendance becomes apparent, as extra preventative activities can detract from the benefits of using clickers.
  • Addressing Challenges with Discussion
One way to manage student discussion meandering away from course topics is to use “moving and guiding” during discussion periods. Having TAs or the instructor move through the class during discussion periods and monitor conversation helps focus students. Meaningful and engaging questions can also reduce off-topic discussion. Students who genuinely disagree because of a well-written question will be more inclined to discuss the topic; if questions fail to provoke disagreement, students will have more reason to discuss other things. When students become distracted, they may utilize devices intended for the classroom for other purposes [103,104]. Further development of applications to reduce this potential for distraction could prove beneficial.

4. Discussion

Clicker technology has matured from hard-wired systems intended to merely replace student hand-raising to dedicated handsets and mobile device apps with the potential to work in concert with novel pedagogies. Instructors fluent in both clicker technology and novel pedagogies see the most student learning gains when using clickers. Discussion is a common thread across pedagogies that promote student learning with clickers. Yet, instructor and student challenges with clickers persist. For instructors, these challenges include a lack of technical training/support in using clickers and a learning curve when it comes to writing thought-provoking clicker questions and managing the classroom discussions that well-written questions can promote. Meanwhile, students face challenges that include financial burden, a negative association of clickers with more traditional testing, and the reticence to ask questions aloud once it is possible to submit responses under the safety of anonymity.
Although instructors can address many of these challenges with pragmatic adjustments in how they use clickers, other issues call for additional research. Fies and Marshall [105] state that many clicker studies rely primarily on anecdotal evidence. They conclude that further focus on opinions and student perception about clickers is unneeded. While many published studies conclude that students and/or instructors have a favorable view of clickers, future studies should focus more on analyzing why/how this is the case. Studies should focus on learning gains compared across different clicker pedagogies and build upon the knowledge that discussion with clickers has positive benefits for students. Studies should identify measurable effects on student achievement across different settings and populations.
Fies and Marshall [105] also argue that current research superficially compares clickers and non-clicker classrooms. The research focuses on simply whether clickers are present or not, rather than assessing differing pedagogies and uses of clickers. Research suggests that certain clicker pedagogies improve student performance over more traditional instructor-centered classrooms, but how do different uses of clickers compare against each other? Moreover, what are the best practices that achieve the highest learning gains with clicker pedagogies? These questions are examples of gaps in our understanding that future research should explore.
Another potential area for future research centers on the current lack of longitudinal work in clicker studies. Longitudinal work could examine classrooms after the introduction of clickers, rather than merely the short-term impact of clickers. Much of the research that has been performed with clickers focuses on the short-term impacts of clickers in the classroom, ending studies with the last day of a class [24,42,67,74,106,107,108]. Work that does attempt to cover clickers over longer periods examines different classes over the years [4,12,56], failing to follow students through multiple years of working with clickers. By only having studies of these short-term impacts, we fail to see the long-term effects that students may be experiencing. For example, 70% of students enjoy clickers initially [23], and the teaching community interprets this reaction as a reason to use clickers. However, we lack knowledge about student perception after multiple years of working with clickers. At the undergraduate level, do freshmen new to clickers have a better perception of them than experienced seniors? Does the novelty and excitement wear off? More work that examines the effects of clickers on the same students over time would aid in furthering our understanding of how clickers affect students.
We mentioned the finding that around 30% of student participants consistently report not liking or not seeing positive benefits from clickers [23]. A limitation of the current literature is the inability to explain who makes up this 30%. Not enough is known about the specific effects of clickers on different demographic groups [4]. Some work suggests a differential impact of clickers on different demographic groups of students. The main body of this work focuses on gender, but some work suggests that looking at differential impact more broadly will be insightful. Angel Hoekstra [87] performed a socio-cultural analysis of the effects of clickers in higher education. Student preferences for clickers exhibited some trends based on gender. For example, Hoekstra’s findings suggest that during clicker activities male students are more likely to work independently, while female students prefer to work in groups for clicker questions. Women were more likely to float from group to group, rather than having a set group. Also, male interviewees were “much more likely to mention or emphasize appreciating clicker questions as an opportunity to self-test”. Other studies have noted that women are more likely to enjoy and benefit from clickers [109,110].
As with gender, there is research showing differential effects of clickers by ethnic background. Hoekstra [87] noted some differences in the ways students interact with clickers and the associated pedagogy based on their culture. Cultural norms can make aspects of clicker activities inappropriate or foreign. A related study examined the academic performance of white students and minority students, finding that minority students did not achieve as highly as the white students in the class using clickers [111]. As there are only a handful of studies examining possible interaction effects, further research is necessary to better understand clickers through the lens of inclusivity. This search for inclusivity does not end with gender or the visible identities of students. Every instructional tool and method used in a classroom will have a unique impact on each student for a variety of reasons: first-generation status, self-confidence, past educational experiences, and much more. Ideally, instructors need to be better trained and made aware of the impacts different uses of clickers have on students with a variety of identities and experiences. Therefore, researchers should focus on further developing and testing the different ways students are asked to use clickers. Indeed, shedding light on the differential ways in which different clicker pedagogies impact different kinds of learners might be one of the most important directions for future research on clicker technologies.

Limitations

We acknowledge some design flaws in terms of the methodology (broad coding and aged references), limitations in terms of an original study not being performed, and the possibility that the findings may seem “out of place” in the current education sphere. This study, like many others, ran into difficulty with the ongoing pandemic; the original scope, plans, and timeline had to be modified as events necessitated. Despite these changes, we still feel that the findings offer value, not only to the scientific community, but also to the professionals who use, are starting to use, or want to use clicker-type devices and programs in their instruction. As previously mentioned, changing classrooms have led to an increase in programs that still fall within our definition of clickers. While much of the reviewed literature does not reflect current events, an obvious limitation, the pedagogical methods and effective classroom strategies discussed may still prove useful for professionals.

Author Contributions

Conceptualization, J.B.H. and E.L.C.; methodology, E.L.C.; investigation, E.L.C. and J.B.H.; resources, J.B.H.; writing—original draft preparation, E.L.C. and J.B.H.; writing—review and editing, J.B.H. and E.L.C.; visualization, E.L.C.; supervision, J.B.H.; project administration, J.B.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hanoa, E. An Update from CEO Eilert Hanoa: Kahoot! Delivered Strong Growth in an Unprecedented Third Quarter. Available online: https://kahoot.com/blog/2020/10/29/update-ceo-eilert-hanoa-kahoot-delivered-strong-growth-third-quarter/ (accessed on 1 November 2021).
  2. Wang, A.I.; Tahir, R. The effect of using Kahoot! for learning–A literature review. Comput. Educ. 2020, 149, 103818. [Google Scholar] [CrossRef]
  3. Judson, E.; Sawada, D. Learning from past and present: Electronic response systems in college lecture halls. J. Comput. Math. Sci. Teach. 2002, 21, 167–182. [Google Scholar]
  4. Kay, R.H.; LeSage, A. Examining the benefits and challenges of using audience response systems: A review of the literature. Comput. Educ. 2009, 53, 819–827. [Google Scholar] [CrossRef]
  5. Buil, I.; Catalán, S.; Martínez, E. The influence of flow on learning outcomes: An empirical study on the use of clickers. Br. J. Educ. Technol. 2019, 50, 428–439. [Google Scholar] [CrossRef]
  6. Freeman, M.; Blayney, P.; Ginns, P. Anonymity and in class learning: The case for electronic response systems. Australas. J. Educ. Technol. 2006, 22. [Google Scholar] [CrossRef]
  7. Guse, D.M.; Zobitz, P.M. Validation of the audience response system. Br. J. Educ. Technol. 2011, 42, 985–991. [Google Scholar] [CrossRef]
  8. Barrio, C.M.; Muñoz-Organero, M.; Soriano, J.S. Can Gamification Improve the Benefits of Student Response Systems in Learning? An Experimental Study. IEEE Trans. Emerg. Top. Comput. 2016, 4, 429–438. [Google Scholar] [CrossRef]
  9. Bruff, D. Teaching with Classroom Response Systems. In Teaching with Classroom Response Systems: Creating Active Learning Environments; John Wiley & Sons: Hoboken, NJ, USA, 2009; p. 240. [Google Scholar]
  10. Cooper, M.M. Cooperative Learning: An Approach for Large Enrollment Courses. J. Chem. Educ. 1995, 72, 162. [Google Scholar] [CrossRef]
  11. Crossgrove, K.; Curran, K.L. Using Clickers in Nonmajors- and Majors-Level Biology Courses: Student Opinion, Learning, and Long-Term Retention of Course Material. CBE Life Sci. Educ. 2008, 7, 146–154. [Google Scholar] [CrossRef]
  12. Crouch, C.H.; Mazur, E. Peer Instruction: Ten years of experience and results. Am. J. Phys. 2001, 69, 970–977. [Google Scholar] [CrossRef]
  13. Goodwin, S.; Hoffman, C. A clicker for your thoughts: Technology for active learning. New Libr. World 2006, 107, 422–433. [Google Scholar] [CrossRef]
  14. Berkey, N. Top Hat Ranked Number 200 Fastest Growing Company in North America on Deloitte’s 2018 Technology Fast 500TM. Available online: https://tophat.com/press-releases/top-hat-ranked-number-200-fastest-growing-company-in-north-america-on-deloittes-2018-technology-fast-500/ (accessed on 4 May 2019).
  15. Bransford, J.D.; Brown, A.L.; Cocking, R.R. (Eds.) How People Learn: Brain, Mind, Experience, and School; National Academy Press: Washington, DC, USA, 1999.
  16. Penuel, W.R.; Boscardin, C.K.; Masyn, K.; Crawford, V.M. Teaching with student response systems in elementary and secondary education settings: A survey study. Educ. Technol. Res. Dev. 2007, 55, 315–346. [Google Scholar] [CrossRef]
  17. Trees, A.R.; Jackson, M.H. The learning environment in clicker classrooms: Student processes of learning and involvement in large university-level courses using student response systems. Learn. Media Technol. 2007, 32, 21–40. [Google Scholar] [CrossRef]
  18. Castillo-Manzano, J.I.; Castro-Nuño, M.; López-Valpuesta, L.; Sanz-Díaz, M.T.; Yñiguez, R. Measuring the effect of ARS on academic performance: A global meta-analysis. Comput. Educ. 2016, 96, 109–121. [Google Scholar] [CrossRef]
  19. Morling, B.; McAuliffe, M.; Cohen, L.; DiLorenzo, T.M. Efficacy of personal response systems (“clickers”) in large, introductory psychology classes. Teach. Psychol. 2008, 35, 45–50. [Google Scholar] [CrossRef]
  20. Freeman, M.; Bell, A.; Comerton-Forde, C.; Pickering, J.; Blayney, P. Factors affecting educational innovation with in class electronic response systems. Australas. J. Educ. Technol. 2007, 23, 149–170. [Google Scholar] [CrossRef]
  21. Draper, S.W.; Brown, M.I. Increasing interactivity in lectures using an electronic voting system. J. Comput. Assist. Learn. 2004, 20, 81–94. [Google Scholar] [CrossRef]
  22. Kennedy, G.E.; Cutts, Q.; Draper, S.W. Evaluating electronic voting systems in lectures: Two innovative methods. In Audience Response Systems in Higher Education: Applications and Cases; IGI Global: Hershey, PA, USA, 2006; pp. 155–174. [Google Scholar] [CrossRef]
  23. Caldwell, J.E. Clickers in the Large Classroom: Current Research and Best-Practice Tips. CBE—Life Sci. Educ. 2007, 6, 9–20. [Google Scholar] [CrossRef]
  24. Hatch, J.; Jensen, M.; Moore, R. Manna from Heaven or “Clickers” from Hell: Experiences with an Electronic Response System. J. Coll. Sci. Teach. 2005, 34, 36–39. [Google Scholar]
  25. Mazur, E. Peer Instruction: A User’s Manual; Prentice Hall: Upper Saddle River, NJ, USA, 1997. [Google Scholar]
  26. Banks, D. (Ed.) Audience Response Systems in Higher Education: Applications and Cases; IGI Global: Hershey, PA, USA, 2006. [Google Scholar]
  27. History. Stanford Graduate School of Education. Available online: https://ed.stanford.edu/about/history (accessed on 15 November 2018).
  28. Bapst, J.J. The Effect of Systematic Student Response upon Teaching Behavior; University of Washington: Seattle, WA, USA, 1971. [Google Scholar]
  29. Bessler, W.C. The Effectiveness of an Electronic Student Response System in Teaching Biology to the Non-Major Utilizing Nine Group-Paced, Linear Programs; Ball State University: Muncie, IN, USA, 1969. [Google Scholar]
  30. Bessler, W.C.; Nisbet, J.J. The Use of an Electronic Response System in Teaching Biology. Sci. Educ. 1971, 55, 275–284. [Google Scholar] [CrossRef]
  31. Brown, J.D. An Evaluation of the Spitz Student Response System in Teaching a Course in Logical and Mathematical Concepts. J. Exp. Educ. 1972, 40, 12–20. [Google Scholar] [CrossRef]
  32. Casanova, J. An instructional experiment in organic chemistry. The use of a student response system. J. Chem. Educ. 1971, 48, 453. [Google Scholar] [CrossRef]
  33. Chu, Y. Study and Evaluation of the Student Response System in Undergraduate Instruction at Skidmore College: Report to the National Science Foundation; Education Support Project of the General Electric Company. 1972; p. 21. Available online: https://files.eric.ed.gov/fulltext/ED076135.pdf (accessed on 30 November 2024).
  34. Garg, D.P. Experiments with a computerized response system: A favorable experience. In Proceedings of the Conference on Computers in the Undergraduate Curricula, Fort Worth, TX, USA, 16–18 June 1975; pp. 147–153. [Google Scholar]
  35. Abrahamson, L. A Brief History of Networked Classrooms: Effects, Cases, Pedagogy, and Implications. In Audience Response Systems in Higher Education: Applications and Cases; Banks, D.A., Ed.; Information Science Publishing: Hershey, PA, USA, 2006; pp. 1–25. [Google Scholar]
  36. Black, P.; Wiliam, D. Assessment and classroom learning. Assess. Educ. Princ. Policy Pract. 1998, 5, 7–74. [Google Scholar]
  37. Sadler, D.R. Formative assessment and the design of instructional systems. Instr. Sci. 1989, 18, 119–144. [Google Scholar] [CrossRef]
  38. Yeh, S.S. Class size reduction or rapid formative assessment?: A comparison of cost-effectiveness. Educ. Res. Rev. 2009, 4, 7–15. [Google Scholar] [CrossRef]
  39. Turpen, C.; Finkelstein, N.D. Understanding How Physics Faculty Use Peer Instruction. AIP Conf. Proc. 2007, 951, 204–207. [Google Scholar] [CrossRef]
  40. El-Rady, J. To Click or Not to Click: That’s the Question. Innov. J. Online Educ. 2006, 2, 1–5. Available online: https://www.learntechlib.org/p/104268/ (accessed on 30 November 2024).
  41. Preszler, R.W.; Dawe, A.; Shuster, C.B.; Shuster, M. Assessment of the Effects of Student Response Systems on Student Learning and Attitudes over a Broad Range of Biology Courses. CBE—Life Sci. Educ. 2007, 6, 29–41. [Google Scholar] [CrossRef]
  42. Walklet, E.; Davis, S.; Farrelly, D.; Muse, K. The impact of Student Response Systems on the learning experience of undergraduate psychology students. Psychol. Teach. Rev. 2016, 22, 35–48. [Google Scholar] [CrossRef]
  43. Siau, K.; Sheng, H.; Fui-Hoon Nah, F. Use of a Classroom Response System to Enhance Classroom Interactivity. IEEE Trans. Educ. 2006, 49, 398–403. [Google Scholar] [CrossRef]
  44. Steele, C.M.; Aronson, J. Stereotype threat and the intellectual test performance of African Americans. J. Personal. Soc. Psychol. 1995, 69, 797. [Google Scholar] [CrossRef] [PubMed]
  45. Forbes, J.D. You Will Do Better If I Watch: Anonymity, Identifiability and Audience Effects in a Stereotype Threat Situation. Ph.D. Thesis, University of KwaZulu-Natal, Pietermaritzburg, South Africa, 2009. [Google Scholar]
  46. Aronson, J.; Quinn, D.M.; Spencer, S.J. Stereotype Threat and the Academic Underperformance of Minorities and Women. In Prejudice; Swim, J.K., Stangor, C., Eds.; Academic Press: Cambridge, MA, USA, 1998; pp. 83–103. [Google Scholar] [CrossRef]
  47. Nielsen, K.L.; Hansen, G.; Stav, J.B. Teaching with student response systems (SRS): Teacher-centric aspects that can negatively affect students’ experience of using SRS. Res. Learn. Technol. 2013, 21, 18989. [Google Scholar] [CrossRef]
  48. Sharma, M.D.; Khachan, J.; Chan, B.; O’Byrne, J. An investigation of the effectiveness of electronic classroom communication systems in large lecture classes. Australas. J. Educ. Technol. 2005, 21, 137–154. [Google Scholar] [CrossRef]
  49. Lantz, M.E. The use of ‘Clickers’ in the classroom: Teaching innovation or merely an amusing novelty? Comput. Hum. Behav. 2010, 26, 556–561. [Google Scholar] [CrossRef]
  50. Burnstein, R.A.; Lederman, L.M. Using wireless keypads in lecture classes. Phys. Teach. 2001, 39, 8–11. [Google Scholar] [CrossRef]
  51. Dijk, L.A.V.; Berg, G.C.V.D.; Keulen, H.V. Interactive lectures in engineering education. Eur. J. Eng. Educ. 2001, 26, 15–28. [Google Scholar] [CrossRef]
  52. Fagen, A.P.; Crouch, C.H.; Mazur, E. Peer Instruction: Results from a Range of Classrooms. Phys. Teach. 2002, 40, 206–209. [Google Scholar] [CrossRef]
  53. Paschal, C.B. Formative assessment in physiology teaching using a wireless classroom communication system. Adv. Physiol. Educ. 2002, 26, 299–308. [Google Scholar] [CrossRef]
  54. Robertson, L.J. Twelve tips for using a computerized interactive audience response system. Med. Teach. 2000, 22, 237–239. [Google Scholar] [CrossRef]
  55. Horowitz, L. ARS Evolution: Reflections and Recommendations. In Audience Response Systems in Higher Education: Applications and Cases; Banks, D.A., Ed.; Information Science Publishing: London, UK, 2006; pp. 1–25. [Google Scholar]
  56. Aljaloud, A.; Gromik, N.; Billingsley, W.; Kwan, P. Research trends in student response systems: A literature review. Int. J. Learn. Technol. 2015, 10, 313. [Google Scholar] [CrossRef]
  57. Sapp, M. Test Anxiety: Applied Research, Assessment, and Treatment Interventions; University Press of America: Lanham, MD, USA, 2013. [Google Scholar]
  58. Jones, M.G.; Jones, B.D.; Hardin, B.; Chapman, L.; Yarbrough, T.; Davis, M. The Impact of High-Stakes Testing on Teachers and Students in North Carolina. Phi Delta Kappan 2000, 81, 199–203. [Google Scholar]
  59. Basquiat [@JerryNotGerry]. My Man Has 9 Clickers Lined Up and Ready. Available online: https://x.com/JerryNotGerry/status/986040921212051456 (accessed on 19 February 2019).
  60. Jacobs, P. Dartmouth Students Allegedly Used These “Clickers” to Cheat in a Sports-Ethics Class. Business Insider. 9 January 2015. Available online: https://www.businessinsider.com/dartmouth-students-used-clickers-to-fake-attendance-2015-1 (accessed on 19 February 2019).
  61. Martin, M.T.; Belikov, O.M.; Hilton, J., III; Wiley, D.; Fischer, L. Analysis of Student and Faculty Perceptions of Textbook Costs in Higher Education. Open Prax. 2017, 9, 79–91. [Google Scholar] [CrossRef]
  62. Pricing. iClicker. Available online: https://www.iclicker.com/pricing (accessed on 14 May 2020).
  63. Carnaghan, C.; Webb, A. Investigating the Effects of Group Response Systems on Student Satisfaction, Learning, and Engagement in Accounting Education. Issues Account. Educ. 2007, 22, 391–409. [Google Scholar] [CrossRef]
  64. Jensen, M.; Moore, R.; Hatch, J. Cooperative Learning: Part I: Cooperative Quizzes. Am. Biol. Teach. 2002, 64, 29–34. [Google Scholar] [CrossRef]
  65. Cortright, R.N.; Collins, H.L.; Rodenbaugh, D.W.; DiCarlo, S.E. Student retention of course content is improved by collaborative-group testing. Adv. Physiol. Educ. 2003, 27, 102–108. [Google Scholar] [CrossRef]
  66. Nonacs, P. Why I Let My Students Cheat on Their Exam. Available online: https://www.zocalopublicsquare.org/2013/04/15/why-i-let-my-students-cheat-on-the-final/ideas/nexus/ (accessed on 1 October 2019).
  67. Zeilik, M.; Morris, V.J. The Impact of Cooperative Quizzes in a Large Introductory Astronomy Course for Non-Science Majors. Astron. Educ. Rev. 2004, 3, 51–61. [Google Scholar] [CrossRef]
  68. Byrd, G.G.; Coleman, S.; Werneth, C. Exploring the Universe Together: Cooperative Quizzes with and without a Classroom Performance System in Astronomy 101. Astron. Educ. Rev. 2004, 3, 26–30. [Google Scholar] [CrossRef]
  69. Kelly, K.G. Student Response Systems (“Clickers”) in the Psychology Classroom: A Beginner’s Guide. Available online: https://silo.tips/download/student-response-systems-clickers-in-the-psychology-classroom-a-beginner-s-guide (accessed on 1 October 2019).
  70. Lasry, N.; Mazur, E.; Watkins, J. Peer instruction: From Harvard to the two-year college. Am. J. Phys. 2008, 76, 1066–1069. [Google Scholar] [CrossRef]
  71. Schell, J.A.; Butler, A.C. Insights From the Science of Learning Can Inform Evidence-Based Implementation of Peer Instruction. Front. Educ. 2018, 3, 33. [Google Scholar] [CrossRef]
  72. Beatty, I.D.; Gerace, W.J.; Leonard, W.J.; Dufresne, R.J. Designing effective questions for classroom response system teaching. Am. J. Phys. 2006, 74, 31–39. [Google Scholar] [CrossRef]
  73. Banfield, J.; Wilkerson, B. Increasing Student Intrinsic Motivation And Self-Efficacy Through Gamification Pedagogy. Contemp. Issues Educ. Res. 2014, 7, 291. [Google Scholar] [CrossRef]
  74. Pettit, R.K.; McCoy, L.; Kinney, M.; Schwartz, F.N. Student perceptions of gamified audience response system interactions in large group lectures and via lecture capture technology. BMC Med. Educ. 2015, 15, 92. [Google Scholar] [CrossRef] [PubMed]
  75. Sun, J.C.-Y.; Hsieh, P.-H. Application of a Gamified Interactive Response System to Enhance the Intrinsic and Extrinsic Motivation, Student Engagement, and Attention of English Learners. J. Educ. Technol. Soc. 2018, 21, 104–116. [Google Scholar]
  76. MacArthur, J.R.; Jones, L.L. A review of literature reports of clickers applicable to college chemistry classrooms. Chem. Educ. Res. Pract. 2008, 9, 187–195. [Google Scholar] [CrossRef]
  77. Knight, J.K.; Wood, W.B. Teaching More by Lecturing Less. Cell Biol. Educ. 2005, 4, 298–310. [Google Scholar] [CrossRef]
  78. Schwartz, D.L. The Emergence of Abstract Representations in Dyad Problem Solving. J. Learn. Sci. 1995, 4, 321–354. [Google Scholar] [CrossRef]
  79. Smith, M.K.; Wood, W.B.; Adams, W.K.; Wieman, C.; Knight, J.K.; Guild, N.; Su, T.T. Why Peer Discussion Improves Student Performance on In-Class Concept Questions. Science 2009, 323, 122–124. [Google Scholar] [CrossRef]
  80. Schwarz, B.B.; Neuman, Y.; Biezuner, S. Two Wrongs May Make a Right... If They Argue Together! Cogn. Instr. 2000, 18, 461–494. [Google Scholar] [CrossRef]
  81. Arizona State University. University Technology Office. Clickers @ ASU. 2018. Available online: https://uto.asu.edu/services/tools/clickers (accessed on 16 April 2019).
  82. CWSEI. Clicker Resources. Carl Wieman Science Education Initiative at the University of British Columbia. 2019. Available online: http://www.cwsei.ubc.ca/resources/clickers.htm (accessed on 9 May 2019).
  83. The Learning Center. Clicker Questions. Washington University in St. Louis. 2019. Available online: https://teachingcenter.wustl.edu/resources/active-learning/active-learning-with-clickers/clicker-questions/ (accessed on 16 April 2019).
  84. Hu, J.; Bertok, P.; Hamilton, M.; White, G.; Duff, A.; Cutts, Q. Wireless Interactive Teaching by Using Keypad-Based ARS. In Audience Response Systems in Higher Education: Applications and Cases; Banks, D.A., Ed.; Information Science Publishing: Hershey, PA, USA, 2006. [Google Scholar]
  85. Nicol, D.J.; Boyle, J.T. Peer Instruction versus Class-wide Discussion in Large Classes: A comparison of two interaction methods in the wired classroom. Stud. High. Educ. 2003, 28, 457–473. [Google Scholar] [CrossRef]
  86. James, M.C.; Willoughby, S. Listening to student conversations during clicker questions: What you have not heard might surprise you! Am. J. Phys. 2011, 79, 123–132. [Google Scholar] [CrossRef]
  87. Hoekstra, A.R. A Socio-Cultural Analysis of the Use of Clickers in Higher Education; ProQuest LLC: Ann Arbor, MI, USA, 2009. [Google Scholar]
  88. Vickrey, T.; Rosploch, K.; Rahmanian, R.; Pilarz, M.; Stains, M. Research-Based Implementation of Peer Instruction: A Literature Review. CBE—Life Sci. Educ. 2015, 14, es3. [Google Scholar] [CrossRef] [PubMed]
  89. Reay, N.W.; Bao, L.; Li, P.; Warnakulasooriya, R.; Baugh, G. Toward the effective use of voting machines in physics lectures. Am. J. Phys. 2005, 73, 554–558. [Google Scholar] [CrossRef]
  90. Heath, B.D. The Effects of Clicker Feedback on Student Success. Master’s Thesis, Angelo State University, San Angelo, TX, USA, 2009. [Google Scholar]
  91. Gok, T. An Evaluation of Student Response Systems from the Viewpoint of Instructors and Students. Turk. Online J. Educ. Technol. 2011, 10, 17. [Google Scholar]
  92. Beekes, W. The ‘Millionaire’ method for encouraging participation. Act. Learn. High. Educ. 2006, 7, 25–36. [Google Scholar] [CrossRef]
  93. Terrion, J.L.; Aceti, V. Perceptions of the effects of clicker technology on student learning and engagement: A study of freshmen Chemistry students. Res. Learn. Technol. 2012, 20, 16150. [Google Scholar] [CrossRef]
  94. Lasry, N. Clickers or Flashcards: Is There Really a Difference? Phys. Teach. 2008, 46, 242–244. [Google Scholar] [CrossRef]
  95. Pelton, L.F.; Pelton, T. Selected and constructed response systems in mathematics classrooms. In Audience Response Systems in Higher Education; Banks, D.A., Ed.; Information Science Publishing: Hershey, PA, USA, 2006; pp. 175–186. [Google Scholar]
  96. Webking, R.; Valenzuela, F. Using audience response systems to develop critical thinking skills. In Audience Response Systems in Higher Education; Banks, D.A., Ed.; Information Science Publishing: Hershey, PA, USA, 2006; pp. 127–139. [Google Scholar]
  97. Miller, C.J.; McNear, J.; Metz, M.J. A comparison of traditional and engaging lecture methods in a large, professional-level course. Adv. Physiol. Educ. 2013, 37, 347–355. [Google Scholar] [CrossRef]
  98. Bostock, S.J.; Hulme, J.A.; Davys, M.A. CommuniCubes: Intermediate technology for interaction with student groups. In Audience Response Systems in Higher Education; Banks, D.A., Ed.; Information Science Publishing: Hershey, PA, USA, 2006; pp. 321–333. [Google Scholar]
  99. Wit, E. Who wants to be… The Use of a Personal Response System in Statistics Teaching. MSOR Connect. 2003, 3, 14–20. [Google Scholar] [CrossRef]
  100. d’Inverno, R.; Davis, H.; White, S. Using a personal response system for promoting student interaction. Teach. Math. Its Appl. Int. J. IMA 2003, 22, 163–169. [Google Scholar] [CrossRef]
  101. Brost, B.D.; Bradley, K.A. Student Compliance with Assigned Reading: A Case Study. J. Scholarsh. Teach. Learn. 2006, 6, 101–111. [Google Scholar]
  102. Gooblar, D. They Haven’t Done the Reading. Again. ChronicleVitae. 24 September 2014. Available online: https://chroniclevitae.com/news/719-they-haven-t-done-the-reading-again (accessed on 24 September 2014).
  103. Bunce, D.M.; Flens, E.A.; Neiles, K.Y. How Long Can Students Pay Attention in Class? A Study of Student Attention Decline Using Clickers. J. Chem. Educ. 2010, 87, 1438–1443. [Google Scholar] [CrossRef]
  104. Duncan, D.K.; Hoekstra, A.R.; Wilcox, B.R. Digital Devices, Distraction, and Student Performance: Does In-Class Cell Phone Use Reduce Learning? Astron. Educ. Rev. 2012, 11, 1–4. [Google Scholar] [CrossRef]
  105. Fies, C.; Marshall, J. Classroom Response Systems: A Review of the Literature. J. Sci. Educ. Technol. 2006, 15, 101–109. [Google Scholar] [CrossRef]
  106. Freeman, M.; Blayney, P. Promoting interactive in-class learning environments: A comparison of an electronic response system with a traditional alternative. In Proceedings of the Eleventh Australasian Teaching Economics Conference: Innovation for Students Engaged in Economics, Sydney, Australia, 11–12 July 2005; pp. 23–34. [Google Scholar]
  107. Mankowski, A. Do “Clickers” Improve Student Engagement and Learning in Secondary Schools? Master’s Thesis, Portland State University, Portland, OR, USA, 2011. [Google Scholar] [CrossRef]
  108. Martyn, M. Clickers in the Classroom: An Active Learning Approach. Available online: https://er.educause.edu/articles/2007/4/clickers-in-the-classroom-an-active-learning-approach (accessed on 28 May 2019).
  109. King, D.B.; Joshi, S. Gender Differences in the Use and Effectiveness of Personal Response Devices. J. Sci. Educ. Technol. 2008, 17, 544–552. [Google Scholar] [CrossRef]
  110. Kang, H.; Lundeberg, M.; Wolter, B.; delMas, R.; Herreid, C.F. Gender differences in student performance in large lecture classrooms using personal response systems (‘clickers’) with narrative case studies. Learn. Media Technol. 2012, 37, 53–76. [Google Scholar] [CrossRef]
  111. Roberts, H.; Diaz-Rainey, I. Educational Performance, Clicker Engagement and Ethnicity: Evidence from Finance 101. J. Financ. Educ. 2018, 44, 12–33. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
