Papers describe an educational research project, classroom experience, teaching technique, curricular initiative, or pedagogical tool in the computing content domain. All papers submitted to the SIGCSE TS should be original work that complies with the ACM authorship policies. SIGCSE TS considers papers in three distinct tracks, each with its own expectations. See further details below.

Paper Tracks

Please ensure that you submit your paper to the correct paper track by reading the Reviewing Guidelines. Papers will be reviewed for the track to which they are submitted and will not be moved between tracks. Any submission made to more than one track will be desk rejected from both tracks.

  • Computing Education Research. The primary purpose of Computing Education Research (CER) papers is to advance what is known about the teaching and learning of computing. Papers should adhere to rigorous standards, describing their applicable theoretical/analytical lenses, research questions, contexts, methods, results, and limitations. These normally focus on topics relevant to computing education with emphasis on educational goals and knowledge units/topics; methods or techniques; evaluation of pedagogical approaches; studies of the many populations engaged in computing education, including (but not limited to) students and instructors; and issues of gender, diversity, and underrepresentation. CER papers are reviewed relative to the clarity of the research questions posed, the relevance of the work in light of prior literature and theory, the soundness of the methods to address the questions posed, and the overall contribution. Both qualitative and quantitative research are welcomed, as are replication studies and papers that present null or negative results.

  • Experience Reports and Tools. Experience Reports and Tools (ERT) papers are observational in nature: they should carefully describe the development and use of a computing education approach, intervention, or tool, the context of its use (including any formative data collected), and provide a rich reflection on what did or didn’t work, and why. ERT contributions should be motivated by prior literature and should highlight the novelty of the experience or tool presented. ERT papers differ from CER papers in that they frame their contributions to enable adoption by other practitioners, rather than focusing on the generalizability or transferability of findings, or threats to validity. This track accepts experience reports, teaching techniques, and pedagogical tools. All papers in this track should provide enough detail for adoption by others.

  • Position and Curricula Initiative. The primary purpose of Position and Curricula Initiative (PCI) papers is to present a coherent argument about a computing education topic, including, but not limited to, curriculum or program design, practical and social issues facing computing educators, and critiques of existing practices. PCI papers should substantiate their claims using evidence in the form of thorough literature reviews, analysis of secondary data collected by others, or another appropriate rhetorical approach. Position papers should engender fruitful academic discussion through a defensible opinion about a computing education topic substantiated with evidence. Curricula Initiative papers discuss new and revised curricula, programs, and degrees, and should describe the motivating context before the new initiative was undertaken, what it took to put the initiative into place, the impact, and suggestions for others wishing to adopt it. In contrast to CER papers, PCI papers need not present original data or adhere to typical qualitative or quantitative research methods. PCI papers differ from ERT papers in that they do not necessarily report on individual experiences, programs, or tools, but rather may focus on broader concerns of the community.

Papers submitted to all tracks should address one or more computing content topics. Authors will be asked to select between 3 and 7 topics from this list at the time of submission. Papers deemed outside the scope of the symposium by the program chairs will be desk rejected without review.

Authors submitting work to SIGCSE TS 2026 are responsible for complying with all applicable conference authorship policies and those articulated by ACM. If you have questions about any of these policies, please contact program@sigcse2026.sigcse.org for clarification prior to submission.

ACM has made a commitment to collect ORCID iDs from all published authors (https://authors.acm.org/author-resources/orcid-faqs). All authors on each submission must have an ORCID iD (https://orcid.org/register) in order to complete the submission process. Please obtain your ORCID iD in advance of submitting your work.

Presentation Modality

Papers at SIGCSE TS 2026 can be presented either in person in a traditional paper session or online via a limited number of synchronous Zoom sessions with Q&A. Online Zoom presenters will be scheduled to present synchronously during the conference days, just like the in-person presenters. Authors of accepted submissions must commit to one of these two presentation modalities in a timely manner to facilitate conference planning: due to additional costs, there will be a limited number of rooms set up with cameras and the necessary Zoom session licenses. Registration rates for online presenters are likely to be comparable to those for in-person attendees and higher than those for online-only attendees, which will help offset the additional costs of supporting online presentation. Further instructions and information will be provided in acceptance notifications. Pre-recorded videos will NOT be required.

Dates

This program is tentative and subject to change.


Thu 19 Feb

Displayed time zone: Central Time (US & Canada)

10:40 - 12:00
Student Experiences with AI (Papers) at Meeting Room 100
10:40
20m
Talk
Capturing Student Reasoning with Low-Cost AI: An Early Experience in a Data-Structures Course
Papers
Kwabena Bamfo Ashesi University, Olaf Hall-Holt St. Olaf College, Oluwakemi Ola University of British Columbia, Govindha Yeluripati Ashesi University, Dennis Owusu Ashesi University
11:00
20m
Talk
Exploring Student Choice and the Use of Multimodal Generative AI in Programming Learning
Papers
Xinying Hou University of Michigan, Ruiwei Xiao Carnegie Mellon University, Runlong Ye University of Toronto, Michael Liut University of Toronto Mississauga, John Stamper Carnegie Mellon University
11:20
20m
Talk
If You Can’t Beat ‘Em, Conscript ‘Em: Experiences Requiring the Use of AI in a Capstone Course
Papers
David Levine University of Southern Maine
10:40 - 12:00
Teaching Ethics Across the Computing Curriculum (Papers) at Meeting Room 101
10:40
20m
Talk
Experience Report: Teaching Computer Science Ethics using Science Fiction Across Multiple Institutions and Course Types
Papers
Emanuelle Burton College of Engineering, University of Illinois at Chicago, Judy Goldsmith University of Kentucky, Nicholas Mattei Tulane University, Matthew Spradling University of Michigan-Flint, Alan Tsang Carleton University, Nanette Veilleux Simmons University
11:00
20m
Talk
How AI Ethics is Taught: Insights from a Syllabus-Level Review of U.S. Computing Courses
Papers
Wajdi Aljedaani Saudi Data & Artificial Intelligence Authority, Parthasarathy PD BITS Pilani KK Birla Goa Campus
11:20
20m
Talk
Impacts of Adding Ethics Modules to Individual Computing Courses
Papers
11:40
20m
Talk
Student/Faculty Partnerships to Teach Computing Ethics Beyond the Computer Science Classroom
Papers
Elshaddai Muchuwa Franklin and Marshall College, Jason Wilson Franklin and Marshall College
10:40 - 12:00
TAs and Tutors (Papers) at Meeting Room 102
10:40
20m
Talk
A Replication Study on Student Expectations on CS Tutors: Understanding Roles and Labors of Tutors
Papers
Yubin Kim UC Irvine, Computer Science Department, Edward X. Chen University of California, Irvine, Sofia Caston Rose-Hulman Institute of Technology, Jeffrey Fairbanks University of California, Irvine, Sophie Russ California Polytechnic State University, Jett Spitzer University of California, Irvine, Duong Hoang Thuy Vu University of California, Irvine, James Andro-Vasko University of Nevada, Las Vegas, Wolfgang Bein University of Nevada, Las Vegas, Daniel Frishberg California Polytechnic State University, Stephen Tsung-Han Sher Rose-Hulman Institute of Technology, Michael Shindler University of California, Irvine
11:00
20m
Talk
Behind the Scenes of Delivering a Large Computing Course: The Experience of a TA Managing Logistics
Papers
Rachel S. Lim University of California San Diego, Philip Guo University of California San Diego
11:20
20m
Talk
Exploring Undergraduate Computing Tutors’ Pedagogical Practices
Papers
Esse Ciego University of Florida, Skyler Steiert University of Florida, Amanpreet Kapoor University of Florida, USA
11:40
20m
Talk
How Shared Gender Identity with Teaching Assistants Relates to Student Outcomes in an Undergraduate Algorithms Course
Papers
Alex Chao University of California, San Diego, Janet Jiang Duke University, Kristin Stephens-Martinez Duke University
10:40 - 12:00
Assessing Collaboration: Tools, Practices, and Student Dynamics (Papers) at Meeting Room 103-104
10:40
20m
Talk
A Pedagogy for Assessing Individual Contributions to Team-Based Software Projects
Papers
Yolanda Reimer University of Montana, Chris Hundhausen Oregon State University, USA, Ananth Jillepalli Washington State University, Olusola Adesope Washington State University
11:00
20m
Talk
Competing or Collaborating? The Role of Hackathon Formats in Shaping Team Dynamics and Project Choices
Papers
Sadia Nasrin Tisha pc, Md Nazmus Sakib University of Maryland, Baltimore County, Sanorita Dey University of Maryland, Baltimore County
11:20
20m
Talk
From Data to Action: Empowering Students to Assess and Improve Teamwork with Cross-Tool Log Data
Papers
Yifan Song University of Illinois Urbana-Champaign, Ritika Vithani University of Illinois Urbana-Champaign, Wenxuan Wendy Shi University of Illinois Urbana-Champaign, Brian Bailey University of Illinois Urbana-Champaign
11:40
20m
Talk
Improving Professional Dispositions in Computing Curriculum Using Sequential Peer Assessment
Papers
Dhaval K Patel Ahmedabad University, Ayush N. Patel Northeastern University, USA, Raj N. Dave Northwestern University
10:40 - 12:00
Automated Content Generation and Grading (Papers) at Meeting Room 105
10:40
20m
Talk
AI-Supported Grading and Rubric Refinement for Free Response Questions
Papers
Victor Zhao University of Illinois, Urbana-Champaign, Max Fowler University of Illinois, Yael Gertner University of Illinois Urbana-Champaign, Seth Poulsen Utah State University, Matthew West University of Illinois at Urbana-Champaign, Mariana Silva University of Illinois at Urbana-Champaign
11:00
20m
Talk
Creating Exercises with Generative AI for Teaching Introductory Secure Programming: Are We There Yet?
Papers
Leo St. Amour Virginia Tech, Eli Tilevich Virginia Tech
11:20
20m
Talk
Improving LLM-Generated Educational Content: A Case Study on Prototyping, Prompt Engineering, and Evaluating a Tool for Generating Programming Problems for Data Science
Papers
Jiaen Yu University of California, San Diego, Ylesia Wu UC San Diego, Gabriel Cha University of California San Diego, Ayush Shah University of California San Diego, Sam Lau University of California at San Diego
11:40
20m
Talk
Measuring Students’ Perceptions of an Autograded Scaffolding Tool for Students Performing at All Levels in an Algorithms Class
Papers
Yael Gertner University of Illinois Urbana-Champaign, Brad Solomon University of Illinois Urbana-Champaign, Hongxuan Chen University of Illinois at Urbana-Champaign, Eliot Robson University of Illinois Urbana-Champaign, Carl Evans University of Illinois Urbana-Champaign, Jeff Erickson University of Illinois Urbana-Champaign
10:40 - 12:00
Building Capacity for K–12 CS and AI Education (Papers) at Meeting Room 260-267
10:40
20m
Talk
Piloting a Vignettes Assessment to Measure K-5 CS Teacher Proficiencies and Growth [K12]
Papers
Joseph Tise Institute for Advancing Computing Education, Monica McGill Institute for Advancing Computing Education, Vicky Sedgwick Visions by Vicky, Laycee Thigpen Institute for Advancing Computing Education, Amanda Bell Computer Science Teachers Association
11:00
20m
Talk
Transforming Confusion into Diffusion: Advancing Machine Learning Education via Bottom-Up Instruction
Papers
Carlos Cotrini ETH Zürich, Sverrir Thorgeirsson ETH Zurich, Jesus Solano ETH Zürich, Zhendong Su ETH Zurich
11:20
20m
Talk
The Impact of Misalignment between Student and Teacher Evaluation of Student Skills on Middle School Student Motivation in Computer Science [K12]
Papers
Sheila Foley University of Nebraska - Lincoln, Leen-Kiat Soh University of Nebraska-Lincoln, Colby Lamb University of Nebraska - Lincoln, Wendy Smith University of Nebraska - Lincoln
11:40
20m
Talk
Exploring K–12 Teacher Motivation to Engage with AI in Education [K12]
Papers
Ethel Tshukudu San Jose State University, Katharine Childs University of Glasgow, Gaokgakala Alogeng CSEdBotswana, Emma R. Dodoo University of Michigan, Douglas R. Case San Jose State University, Tebogo Videlmah Molebatsi Kgale Hill Junior Secondary School
13:40 - 15:00
Assessment and Feedback (Papers) at Meeting Room 100
13:40
20m
Talk
Assessing Student Proficiency in Foundational Developer Tools Through Live Checkoffs
Papers
Connor McMahon pc, Lauren Feldman University of North Carolina at Chapel Hill
14:00
20m
Talk
Understanding Student Interaction with AI-Powered Next-Step Hints: Strategies and Challenges
Papers
Anastasiia Birillo JetBrains Research, Aleksei Rostovskii JetBrains Research, Yaroslav Golubev JetBrains Research, Hieke Keuning Utrecht University
14:20
20m
Talk
Personalized Exam Prep (PEP): Scaling No-Stakes, No-LLM Dialogue-Based Assessments in a Large CS Course
Papers
Kelly Cochran pc, Chris Piech Stanford University
14:40
20m
Talk
Fine-Tuning Open-Source Models as a Viable Alternative to Proprietary LLMs for Explaining Compiler Messages
Papers
Lorenzo Lee Solano University of New South Wales, Sydney, Charles Koutcheme Aalto University, Juho Leinonen Aalto University, Alexandra Vassar University of New South Wales, Sydney, Jake Renzella University of New South Wales, Sydney
13:40 - 15:00
Improving Learning at Scale: Practice, Assessment, and Support in Large Computing Courses (Papers) at Meeting Room 102
13:40
20m
Talk
Developing Problem-Solving Competency in Data Science: Exploring A Case-Based Approach
Papers
Lujie Karen Chen University of Maryland, Baltimore County, Maryam M. Alomair University of Maryland - Baltimore County, Muhammad Ali Yousuf University of Maryland, Baltimore County, Shimei Pan UMBC
14:00
20m
Talk
Encouraging Learning Through Repetition: Effects of Multiple Practice Opportunities in a Large Intro Programming Course
Papers
Jordan Elise Tate pc, Supriya Naidu University of Colorado at Boulder
14:20
20m
Talk
Improving the Reliability of Grading Written-Response Coding Questions in a Large CS1 Course
Papers
Wei Jin Georgia Gwinnett College, Xin Xu Georgia Gwinnett College, Hyesung Park Georgia Gwinnett College, Evelyn Brannock Georgia Gwinnett College, Tacksoo Im Georgia Gwinnett College
14:40
20m
Talk
When Support Isn’t Enough: Understanding and Redesigning Student Support Systems in Large Computing Courses
Papers
Teresa Luo University of California, Berkeley, Chenkun Sheng University of California, Berkeley, Lisa Yan UC Berkeley
13:40 - 15:00
Rethinking Data Learning: From Databases to Dodgeballs (Papers) at Meeting Room 103-104
13:40
20m
Talk
Clause-Driven Automated Grading of SQL’s DDL and DML Statements
Papers
Benard Wanjiru Radboud University Nijmegen, Patrick van Bommel Radboud University Nijmegen, Djoerd Hiemstra Radboud University Nijmegen
14:00
20m
Talk
Integrating Hands-On Data Collection Experience in an Introductory Programming Class for Non-CS Majors [K12]
Papers
Shuyin Jiao North Carolina State University, Warren Jasper North Carolina State University
14:20
20m
Talk
Reflecting on Thematic Analysis in Computer Science Education Research: A Field Guide for Researchers and Reviewers
Papers
Aadarsh Padiyath University of Michigan, Tamara Nelson-Fromm
14:40
20m
Talk
SportSense for Data Literacy: Applying Sports and Movement for Authentic and Personal Data Interactions in Elementary Schools [K12]
Papers
Ashley Quiterio Northwestern University, Megan Butler Northwestern University, Arianna Montas Northwestern University, Sara Bouftas Northwestern University, Marcelo Worsley Northwestern University
13:40 - 15:00
AI-Enhanced Tools, Training, and Equity in Computing Education (Papers) at Meeting Room 105
13:40
20m
Talk
CS Teaching Assistant Perceptions on LLM-Generated Faded Worked Examples for Feedback Training
Papers
Justin Gonzaga University of New South Wales, Alexandra Vassar University of New South Wales, Sydney, Yuchao Jiang UNSW
14:00
20m
Talk
EduLint: a Versatile Tool for Code Quality Feedback
Papers
Anna Rechtackova Masaryk University Brno, Radek Pelánek Masaryk University Brno
14:20
20m
Talk
A Call for Critical Technology to Enable Innovative and Alternative Grading Practices [K12]
Papers
Adrienne Decker University at Buffalo, Stephen Edwards Virginia Tech, Bob Edmison Virginia Tech, Manuel A. Pérez-Quiñones University of North Carolina Charlotte, Audrey Rorrer UNC Charlotte
14:40
20m
Talk
Fighting Fire with Fire: LLM-Assisted Grading of Handwritten CS Assessments [K12]
Papers
Jared Apillanes University of California, Irvine, Jason Weber University of California, Irvine, Sergio Gago-Masague University of California, Irvine, Jennifer Wong-Ma University of California, Irvine, Thomas Yeh University of California, Irvine
13:40 - 15:00
Accessible Computing (Papers) at Meeting Room 260-267
13:40
20m
Talk
Debugging Support for Students with Blindness and Visual Impairments on Notebook-based Programming Environments
Papers
God'Salvation Oguibe The University of Texas at San Antonio, Lauryn Castro The University of Texas at San Antonio, Katherine Holloway University of Texas at San Antonio, Kathy Ewoldt The University of Texas at San Antonio, Leslie Neely The University of Texas at San Antonio, Taslima Akter UTSA, Wei Wang University of Texas at San Antonio, USA
14:00
20m
Talk
Where are the Disabled Students?: A Systematic Literature Review of Disability Inclusion in Computing Education Research
Papers
Isabela Figueira University of California, Irvine, Josahandi Cisneros University of California, Irvine, Jason Weber University of California, Irvine, Wendy Sanka University of California, Irvine, Karen Phan University of California, Irvine, Jennifer Wong-Ma University of California, Irvine, Stacy Branham University of California, Irvine
14:20
20m
Talk
Students with Disabilities in CS Principles: An Examination of Capacity, Access, and Participation
Papers
Sara Frey Pennsylvania Training and Technical Assistance Network, Hannah Williams University of Nevada, Las Vegas, Andreas Stefik University of Nevada at Las Vegas, USA
14:40
20m
Talk
Examining Inclusive Computing Education for Blind Students in India
Papers
Akshay Kolgar Nayak pc, Yash Prakash pc, Md Javedul Ferdous Old Dominion University, Sampath Jayarathna Old Dominion University, Hae-Na Lee Michigan State University, Vikas Ashok Old Dominion University
13:40 - 15:00
13:40
20m
Talk
Boosting Student Motivation through Game-based Learning in Programming Education with Gamify-IT [K12] [Online]
Papers
Niklas Meissner Institute of Software Engineering, University of Stuttgart, Sandro Speth Institute of Software Engineering, University of Stuttgart, Niklas Krieger University of Stuttgart (student), Steffen Becker University of Stuttgart
14:00
20m
Talk
Examining Discourse in a Large Online Education Program: A Machine-in-the-Loop Approach [Online]
Papers
Erik Goh Georgia Institute of Technology, Xuan Wang Georgia Institute of Technology, David A. Joyner Georgia Institute of Technology, Ana Rusch Georgia Institute of Technology
14:20
20m
Talk
Experiences with the ChCore Experimental Operating System Kernel [Online]
Papers
Haibo Chen Shanghai Jiao Tong University, Yubin Xia Shanghai Jiao Tong University, China, Jinyu Gu Shanghai Jiao Tong University
14:40
20m
Talk
miniK8s: A Pedagogical Cloud-Native System [Online]
Papers
Dong Du pc, Mingyu Wu Shanghai Jiao Tong University, Haibo Chen Shanghai Jiao Tong University, Binyu Zang Shanghai Jiao Tong University
13:40 - 15:00
Decolonize, Diversify, Define Computing Ethics Education (Papers) at Meeting Room 274
13:40
20m
Talk
Developing a Decolonial Mindset for Indigenising Computing Education [MSI] [In-Person & Online]
Papers
Jianhua Li, Yin Paradies Deakin University, Trina Myers Deakin University, Robin Doss Deakin University, Armita Zarnegar Swinburne University of Technology, Jack Reis Baidam Solutions
14:00
20m
Talk
Mapping Required Ethics Education in Computer Science: Insights from 100 U.S. Programs [In-Person & Online]
Papers
Grace Barkhuff Georgia Institute of Technology, Ellen Zegura Georgia Institute of Technology
14:20
20m
Talk
Starting with DEI and Ethics - A New First-Year College Computer Science Introduction [In-Person & Online]
Papers
Scott Leutenegger pc, Stephen Hutt University of Denver, Andrew Hannum University of Denver, Sanchari Das George Mason University, Alannah Oleson University of Denver, Alexandria Leto University of Colorado Boulder, Sunny Shrestha University of Denver
14:40
20m
Talk
Where are the faculty? The missing perspective on teaching socio-ethical competencies in computer science [In-Person & Online]
Papers
Tommaso Carraro University of Trento, Maurizio Marchese University of Trento, Lorenzo Angeli University of Trento
15:40 - 17:00
Entangling K-12 Teachers, Undergrads, and Cyber Pros in Quantum Concepts (Papers) at Meeting Room 100
15:40
20m
Talk
Introducing Quantum Computing to K-12 Teachers through a Professional Development Workshop [K12]
Papers
David Gonzalez-Maldonado University of Chicago, Emily Edwards Duke University, Diana Franklin University of Chicago
16:00
20m
Talk
PQCIP: A Post-Quantum Cryptography Educational Program for Cybersecurity Professionals
Papers
Ron Steinfeld Monash University, Muhammed F. Esgin Monash University, Nikai Jagganath Monash University, Amin Sakzad Monash University, Carsten Rudolph Monash University, James Boorman Monash University
16:20
20m
Talk
Quandray: Student Conceptions of Quantum Concepts from a Gameworld
Papers
David Gonzalez-Maldonado University of Chicago, Grace Williams University of Chicago, Emily Edwards Duke University, Danielle Harlow University of California at Santa Barbara, Diana Franklin University of Chicago
16:40
20m
Talk
QuantAid: A Quiz-based Quantum Learning Platform for High-school and Undergraduate Students [K12]
Papers
15:40 - 17:00
Struggling Students (Papers) at Meeting Room 101
15:40
20m
Talk
The Cost of Catching Up: Investigating the Impact of Late Enrollment on Student Success in a CS0 Course
Papers
Victoria Phelps UC Berkeley, Sahana Bharadwaj University of California, Berkeley, Aananya Lakhani UC Berkeley, Zihao Huang UC Berkeley, Heidy Hernandez UC Berkeley, Oindree Chatterjee University of California, Berkeley, Jordan Schwartz UC Berkeley, Stacey Yoo UC Berkeley, Daniel Garcia University of California Berkeley
16:00
20m
Talk
Using In-Class Exercise Data for Early Support of Struggling Students
Papers
Kritish Pahi The University of Memphis, Vinhthuy Phan The University of Memphis
16:20
20m
Talk
“Why Do I Feel Like a Fraud?”: Understanding Imposter Phenomenon in Computing Students through Ecological Momentary Assessment
Papers
Yetunde Okueso University of Maryland, Baltimore County, Sydnee Angus University of Maryland, Baltimore County, Sri Kavya Penta University of Maryland, Baltimore County, Lujie Karen Chen University of Maryland, Baltimore County
15:40 - 17:00
From Mastery to Mayhem: Managing Students’ Relationships with GenAI (Papers) at Meeting Room 102
15:40
20m
Talk
Pacing for Mastery: Optimizing LLM Interactions for Learning
Papers
Karena Tran University of California, Irvine, Ge Gao University of California, Irvine, Angela Lombard University of California, Irvine, Tyler Yu University of California, Irvine, Haoning Jiang UC Irvine, Thomas Yeh University of California, Irvine
16:00
20m
Talk
Scaffolding genAI for Critical Reflection: A Transformative Approach to Diverging Assessments in IT Forensics
Papers
Amin Sakzad Monash University, Judy Sheard Monash University, Tahmine Ghorbaniandehkordi Monash University, Mikaela E. Milesi Monash University, Monica Whitty Monash University
16:20
20m
Talk
Talking to Our Students about Generative AI
Papers
15:40 - 17:00
Accessibility: the Main Quest Instead of a Side Quest (Papers) at Meeting Room 103-104
15:40
20m
Talk
Evaluating the Impact of Accessibility Testing Tool Usage Across the Software Development Lifecycle in Student Projects
Papers
Wajdi Aljedaani Saudi Data & Artificial Intelligence Authority, Parthasarathy PD BITS Pilani KK Birla Goa Campus, Swaroop Joshi BITS Pilani KK Birla Goa Campus
16:00
20m
Talk
Walking the Walk: Centering Students with Disabilities in Accessibility Education
Papers
Yasmine Elglaly Western Washington University, David Engebretson Western Washington University, Jesse Leaman Clemson University, Erin Howard Western Washington University
16:20
20m
Talk
You're on the Ball: Using Games to Explore Accessibility for Neurodivergent Users
Papers
Rachel F. Adler University of Illinois Urbana-Champaign, Bryan Rivera Brooklyn College, City University of New York, Devorah Kletenik pc
15:40 - 17:00
Student Behaviors and Reasoning (Papers) at Meeting Room 105
15:40
20m
Talk
Analogical Reasoning in Undergraduate Algorithms
Papers
Jonathan Liu University of Chicago, Erica Goodwin University of Chicago, Diana Franklin University of Chicago
16:00
20m
Talk
Choosing Their Own Way: Guided Self-Placement for Students in an Introductory Programming Sequence
Papers
Brett Wortzman pc, Melissa Chen Northwestern University, Miya Natsuhara pc, Eleanor O'Rourke Northwestern University
16:20
20m
Talk
Investigating Answer Choice Bias within a College-Level Introductory Computing Assessment
Papers
Miranda Parker University of North Carolina Charlotte, Sin Yu Ciou University of Washington, Yale Quan University of Washington, He Ren University of Washington, Chun Wang University of Washington, Min Li University of Washington
16:40
20m
Talk
Performance and Start-Time Trends in Asynchronous Computer-Based Assessments
Papers
Iris Xu University of British Columbia, Romina Mahinpei Princeton University, Steve Wolfman University of British Columbia, Firas Moosvi University of British Columbia Okanagan

Fri 20 Feb

Displayed time zone: Central Time (US & Canada)

10:40 - 12:00
Scaling Feedback and Assessments Without Losing Your Sanity (or Your Servers) (Papers) at Meeting Room 100
10:40
20m
Talk
Aligning Small Language Models for Programming Feedback: Towards Scalable Coding Support in a Massive Global Course
Papers
Charles Koutcheme Aalto University, Juliette Woodrow Stanford University, Chris Piech Stanford University
11:00
20m
Talk
Designing and Implementing Skill Tests at Scale: Frequent, Computer-Based, Proctored Assessments with Minimal Infrastructure Requirements
Papers
Anastasiya Markova University of California San Diego, Anish Kasam University of California San Diego, Bryce Hackel University of California San Diego, Marina Langlois University of California San Diego, Sam Lau University of California at San Diego
11:20
20m
Talk
Scaling Engagement: Leveraging Social Annotation and AI for Collaborative Code Review in Large CS Courses
Papers
Raymond Klefstad University of California, Irvine, Susan Klefstad Independent Researcher, Vincent Tran University of California, Irvine, Michael Shindler University of California, Irvine
10:40 - 12:00
Belonging and Becoming: Self-Efficacy in CS From Prisons to Campuses (Papers) at Meeting Room 101
10:40
20m
Talk
CS Ed. in Prisons and Jails: Evidence of Computer Programming Self-Efficacy Growth Across Multiple Course Offerings
Papers
Andrew Fishberg MIT, Marisa Gaetz MIT, Martin Nisser University of Washington, Carole Cafferty MIT, Lee Perlman MIT, Raechel N. Soicher MIT, Joshua Long University of Massachusetts, Lowell
11:00
20m
Talk
Developing a Survey Instrument for Sense of Belonging in Computing Courses [MSI]
Papers
Morgan Fong University of Texas at Austin, Andrea Watkins University of Illinois, Geoffrey Herman University of Illinois at Urbana-Champaign
11:20
20m
Talk
Relative Self-Efficacy in Computer Science Courses
Papers
Joseph Ditton, John Edwards Utah State University
10:40 - 12:00
Career Paths in Tech: What Students Do, Post, and Code to Get Hired (Papers) at Meeting Room 102
10:40
20m
Talk
Extracurricular Activities Predict CS Internship Attainment
Papers
Christopher Perdriau University of Illinois at Urbana-Champaign, Bridget Agyare University of Illinois Urbana-Champaign, Colleen M. Lewis University of Illinois Urbana-Champaign
11:00
20m
Talk
Navigating Computing Careers: TikTok’s Potential Role as an Informal Resource [K12]
Papers
Emily Martinez Temple University, Yashi Patel Temple University, Adyan Chowdhury Temple University, Noel Chacko Temple University, Francisco Castro New York University, Stephen MacNeil Temple University
11:20
20m
Talk
The Open Source Resume: How Open Source Contributions Help Students Demonstrate Alignment With Employer Needs
Papers
Utsab Saha Computing Talent Initiative, Jeffrey D'Andria Computing Talent Initiative, Tyler Menezes CodeDay
11:40
20m
Talk
Why Learn This? Visualizing Pathways Between CS Courses and Careers to Engage Students
Papers
Stacey Levine Georgia State University, Anu G. Bourgeois Georgia State University
10:40 - 12:00
Teaching Computing Through Play: From Pointers to Parallelism (Papers) at Meeting Room 103-104
10:40
20m
Talk
DeliverC: Teaching Pointers through GenAI-Powered Game-Based Learning
Papers
Wyatt Petula Pennsylvania State University, Anushcka Joshi Pennsylvania State University, Peggy Tu Pennsylvania State University, Amrutha Somasundar Pennsylvania State University, Suman Saha PC
11:00
20m
Talk
Gamified Learning and Instructional Analogies for Theory of Computing Courses
Papers
Robert Belcher United States Military Academy, Wesley Yeatman United States Military Academy, Ryan Dougherty United States Military Academy
11:20
20m
Talk
Parallel X: Redesigning of a Parallel Programming Educational Game with Semantic Foundations and Transfer Learning
Papers
Devon McKee University of California at Santa Cruz, Zhiyu Lin University of California Santa Cruz, Boyd Fox Independent, Jiahong Li University of California Santa Cruz, Jichen Zhu IT University of Copenhagen, Magy Seif El-Nasr University of California Santa Cruz, Tyler Sorensen Microsoft Research and University of California at Santa Cruz
10:40 - 12:00
Visualization and Simulation (Papers) at Meeting Room 105
10:40
20m
Talk
University of Washington Web-Based Simulators for Visualizing Cache and Virtual Memory Concepts
Papers
Justin Hsia University of Washington, Seattle
11:00
20m
Talk
ChartCode: A Tool for Visual Coding, Simulation, and Targeted Formative Evaluation
Papers
Guangming Xing Western Kentucky University, Gongbo Liang Texas A&M University-San Antonio, Tawfiq Salem Purdue University
11:20
20m
Talk
Enhancing Computer Network Education for High School Students with an Educational Simulator Visualizing Packet Retransmission and Routing [CC]
Papers
Yuki Kitamura The University of Osaka, Tomonari Kishimoto Otemon Gakuin University, Hiroyuki Nagataki The University of Osaka, Susumu Kanemune Osaka Electro-Communication University, Shizuka Shirai The University of Osaka
11:40
20m
Talk
HyProf: A Profiler for Programming Students that Offers Hypotheses about Performance Bugs
Papers
Hope Dargan MIT CSAIL, Adam J. Hartz MIT EECS, Robert Miller MIT
10:40 - 12:00
AI in the Classroom: Reflection, Help-Seeking, and Other Miracles Papers at Meeting Room 260-267
10:40
20m
Talk
A “watch your replay videos” Reflection Assignment on Comparing Programming without versus with Generative AI: Learning about Programming, Critical AI Use and Limitations, and Reflection
Papers
Sarah Magz Fernandez University of Maine, Greg L Nelson University of Maine
11:00
20m
Talk
Closing the Loop: An Instructor-in-the-Loop AI Assistance System for Supporting Student Help-Seeking in Programming Education
Papers
Tung Phung MPI-SWS, Heeryung Choi University of Minnesota, Mengyan Wu University of Michigan - Ann Arbor, Christopher Brooks University of Michigan, Sumit Gulwani Microsoft, Adish Singla Max Planck Institute for Software Systems
11:20
20m
Talk
Enhancing Student Engagement and Learning in Database Programming Through Active Learning Strategies
Papers
Ignacio Marco-Pérez Universidad de La Rioja, Beatriz Pérez Universidad de La Rioja
11:40
20m
Talk
Owlgorithm: Supporting Self-Regulated Learning in Competitive Programming through LLM-Driven Reflection
Papers
Juliana Nieto Universidad Nacional de Colombia, Erin Kramer Purdue University, Peter Kurto Purdue University, Ethan Dickey Purdue University, Andres Mauricio Bejarano Posada Purdue University
10:40 - 12:00
Broadening Participation Papers at Meeting Room 274
10:40
20m
Talk
Creating a Second Pathway to the Computing Major MSI In-Person & Online
Papers
Ashley Pang UC Riverside, Paea LePendu pc, Mariam Salloum BCOE/Computer Science, Neftali Watkinson Medina University of California, Riverside, Carla Brodley Northeastern University, Center for Inclusive Computing
11:00
20m
Talk
Exploring the Relationship Between Department Characteristics and Computer Science Student Diversity in the US MSI In-Person & Online
Papers
Max Fowler University of Illinois, Mariam Saffar Perez University of Illinois at Urbana-Champaign, Marcella Todd Harvey Mudd College, Rachel Perley Harvey Mudd College, Paul Bruno University of Illinois at Urbana-Champaign, Colleen M. Lewis University of Illinois Urbana-Champaign
11:20
20m
Talk
Teaching Authentic Programming Applications to Novices: Purpose-first Tutorials in a General Education Computing Course In-Person & Online
Papers
Mehmet Arif Demirtas University of Illinois Urbana-Champaign, Claire Zheng University of Illinois Urbana-Champaign, Kathryn Cunningham University of Illinois Urbana-Champaign
11:40
20m
Talk
Understanding Software Engineering Practices and Tools in Undergraduate Mechanical Engineering Students In-Person & Online
Papers
Prisha Bhatia Olin College of Engineering, Ramzey Burdette Olin College of Engineering, Titilayo Oshinowo Olin College of Engineering, Michelle Jarvie-Eggart Michigan Technological University, Stephanos Matsumoto Olin College of Engineering
13:40 - 15:00
Testing, Teaching, and Hacking: Openness in CS Education Papers at Meeting Room 100
13:40
20m
Talk
Comparing Student Performance on Un-Proctored Online Exams and Proctored In-Person Exams in a CS0 Course
Papers
Ben Stephenson University of Calgary
14:00
20m
Talk
Enabling Open Educational Resource Adoption through Integrated Sharing in PrairieLearn
Papers
Seth Poulsen Utah State University, Geoffrey Herman University of Illinois at Urbana-Champaign, Mariana Silva University of Illinois at Urbana Champaign, Max Fowler University of Illinois, David Smith Virginia Tech, Leo Porter University of California San Diego, Nico Ritschel University of Illinois Urbana-Champaign, Craig Zilles University of Illinois at Urbana-Champaign, Matthew West University of Illinois at Urbana-Champaign
14:20
20m
Talk
Open Cybersecurity Education: Five Years of pwn.college
Papers
Connor Nelson pc, Robert Wasinger Arizona State University, Adam Doupé Arizona State University, Yan Shoshitaishvili Arizona State University
13:40 - 15:00
From Minor to Major Accessibility in Computing Papers at Meeting Room 101
13:40
20m
Talk
Earning a CS Minor: A Not-So-Minor Feat. A Survey of Accessibility and Structure of 120 Computer Science Minors
Papers
Albert Lionelle Khoury College of Computer Sciences, Northeastern University, Anya Amin Center for Inclusive Computing, Northeastern University, Megan Giordano Northeastern University, Center for Inclusive Computing, Catherine Gill Northeastern University
14:00
20m
Talk
Toward Accessible Parsons Problems on Mobile Platforms
Papers
Timothy Kluthe University of Nevada, Las Vegas, Gabriel Contreras University of Nevada, Las Vegas, William Allee University of Nevada, Las Vegas, Wilfredo Robinson Saint Louis University, Namrata Roy Saint Louis University, Hannah Williams University of Nevada, Las Vegas, Alex Hoffman Belmont University, Derrick Smith Auburn University at Montgomery, Brianna Blaser University of Washington, Jenna Gorlewicz Saint Louis University, Nicholas Giudice University of Maine, Andreas Stefik University of Nevada at Las Vegas, USA
14:20
20m
Talk
Virtual Reality-Based, Gamified Accessibility Education: An Experience Report
Papers
Wajdi Aljedaani Saudi Data & Artificial Intelligence Authority, Parthasarathy PD BITS Pilani KK Birla Goa Campus, Xin Tong Hong Kong University of Science and Technology (Guangzhou), Kyrie Zhixuan Zhou University of Illinois Urbana-Champaign
13:40 - 15:00
Learning in the Browser: IDEs, Collaboration, and eTextbooks Papers at Meeting Room 102
13:40
20m
Talk
WebTigerPython: A Low-Floor High-Ceiling Python IDE for the Browser
Papers
Clemens Bachmann ETH Zurich, Alexandra Maximova Department of Computer Science, ETH Zurich, Tobias Kohn Karlsruhe Institute of Technology, Dennis Komm ETH Zurich
14:00
20m
Talk
Rooms of Their Own: Structured Small-Group Learning in a Realtime Browser-Based IDE
Papers
Jacob Roberts-Baca Stanford University, Joshua Delgadillo Stanford University, Chris Piech Stanford University
14:20
20m
Talk
Students’ Evaluation of a Free and a Paid Interactive eTextbooks for Computing Education
Papers
Audria Montalvo University of California, San Diego, Anya Chernova University of California, San Diego, Vinod Vairavaraj University of California, San Diego, Gerald Soosairaj University of California, San Diego, Liam Hardy University of California, San Diego
13:40 - 15:00
Computational Thinking and Capstone Papers at Meeting Room 103-104
13:40
20m
Talk
Boosting Coding Confidence in Elementary Students: The Impact of ELA-Integrated Computational Thinking Curriculum MSI K12
Papers
Leiny Garcia pc, Yvonne Kao WestEd, Sharin Jacob Digital Promise, Clare Baek University of California, Irvine, Dana Saito-Stehberger University of California, Irvine, Diana Franklin University of Chicago, Mark Warschauer University of California, Irvine
14:00
20m
Talk
Bridging Computational Thinking, Science, and Storytelling: Reflections on an Interdisciplinary Learning Approach K12
Papers
Jessica Vandenberg North Carolina State University, Andy Smith North Carolina State University, Robert Monahan North Carolina State University, James Minogue NC State University, Kevin Oliver North Carolina State University, Aleata Hubbard Cheuoua WestEd, Cathy Ringstaff WestEd, Bradford Mott North Carolina State University
14:20
20m
Talk
Effects of Project Type on CS Capstone Courses
Papers
Ananth Jillepalli Washington State University, David Rice Washington State University, James Crabb Washington State University, Assefaw Gebremedhin Washington State University
14:40
20m
Talk
Examining the Impact of Instructor-Client Mentoring Models in CS Capstone Courses at a Public University
Papers
Ananth Jillepalli Washington State University, James Crabb Washington State University, David Rice Washington State University, Assefaw Gebremedhin Washington State University
13:40 - 15:00
From Unplugged Activities to LLM Insights: Rethinking How We Teach Intro Computing Courses Papers at Meeting Room 275
13:40
20m
Talk
CS Unplugged in Gateway Computing Courses: A Collaborative, Active Learning Approach in Introductory Computing
Papers
Hyesung Park Georgia Gwinnett College, Evelyn Brannock Georgia Gwinnett College, Tacksoo Im Georgia Gwinnett College, Wei Jin Georgia Gwinnett College, Sunae Shin Georgia Gwinnett College, Xin Xu Georgia Gwinnett College, David Kerven Georgia Gwinnett College
14:00
20m
Talk
Deriving Instructional Insights from Human–LLM Co-Evaluation of Student Collaboration in Data-Centric Programming
Papers
Marshall An Carnegie Mellon University, Christine Kwon Carnegie Mellon University, Yoonjae Lee Seoul National University, Ji-Hyeon Hur Seoul National University, Dongho LEE Dalhousie University, Vincent Huai Carnegie Mellon University, Barry Zheng Carnegie Mellon University, Matthew Yu Carnegie Mellon University, Joana Liu Carnegie Mellon University, Jenny Pugh Carnegie Mellon University, Gahgene Gweon Graduate School of Convergence Science and Technology, Seoul National University, John Stamper Carnegie Mellon University
14:20
20m
Talk
Repetition Meets Context: Teaching CS1 Through Two Scientific Domains
Papers
Meiying Qin York University, Jade Atallah York University, Hovig Kouyoumdjian York University, Jonatan Schroeder York University, Larry Yueli Zhang York University, May Haidar York University
14:40
20m
Talk
Systems for Scaling Accessibility Efforts in Large Computing Courses
Papers
Ritesh Kanchi Harvard University, Miya Natsuhara pc, Matt Wang University of Washington
15:40 - 17:00
Natural Language, Code, and Usability Papers at Meeting Room 100
15:40
20m
Talk
Describing Functionality in Natural Language May Improve Decomposition Behaviors
Papers
Matthew Burns Utah State University, Wesley Edwards Utah State University, John Edwards Utah State University
16:00
20m
Talk
HeuristicBuilder: An Interactive Multimodal Approach to Teaching Usability Heuristics
Papers
Wajdi Aljedaani Saudi Data & Artificial Intelligence Authority, Marcelo Medeiros Eler University of São Paulo, Parthasarathy PD BITS Pilani KK Birla Goa Campus, Will Witherspoon University of North Texas, Andrew Pamer University of North Texas
16:20
20m
Talk
You Don't Need a Data Center to Explain in Plain English! Comparing Open-Source and Proprietary LLMs for EiPE Grading
Papers
Eddy Jiang University of Illinois Urbana-Champaign, Max Fowler University of Illinois
16:40
20m
Talk
Systematically Thinking about the Complexity of Code Structuring Exercises at Introductory Level
Papers
Georgiana Haldeman Colgate University, Peter Ohmann College of St. Benedict / St. John's University, Paul Denny The University of Auckland
15:40 - 17:00
Study Abroad and Student Transitions Papers at Meeting Room 101
15:40
20m
Talk
Defamiliarizing Data: An Education Abroad Course in Human-Centered Computing and Information Science
Papers
Amy Voida University of Colorado Boulder, Stephen Voida University of Colorado Boulder, Julia Dean CU Boulder, Will Schermer University of Colorado Boulder
16:00
20m
Talk
Bridging Prerequisite Gaps: When, How, and How Much?
Papers
Lisa Zhang University of Toronto Mississauga, Alice Gao University of Toronto, Jessica Wen University of Toronto Mississauga, Alisha Hasan University of Toronto Mississauga
16:20
20m
Talk
The Development of Intercultural Competence through Information Science Education Abroad
Papers
Amy Voida University of Colorado Boulder, Stephen Voida University of Colorado Boulder, Will Schermer University of Colorado Boulder, Julia Dean CU Boulder, Ishita Pradhan University of Colorado Boulder
16:40
20m
Talk
Exploring transitions of graduates from an online master's in computer science program to doctoral programs
Papers
Alex Greenhalgh Georgia Institute of Technology, Brian Yu Georgia Institute of Technology, Patrick Deng Georgia Institute of Technology, David A. Joyner Georgia Institute of Technology, Nicholas Lytle Georgia Institute of Technology
15:40 - 17:00
15:40
20m
Talk
API Can Code: Laying the Computational Foundations of Data Science in High School Classrooms K12
Papers
Rotem Israel-Fishelson University of Maryland, David Weintrop University of Maryland
16:00
20m
Talk
“It Wasn’t As Bad As I Thought”: Exploring K-12 Students' Experiences with Real-Time and Pre-Recorded Physiological Data K12
Papers
Vincent Ingram University of Alabama, Myles Lewis University of Alabama, Wesley Junkins University of Alabama, Chris Crawford University of Alabama
16:20
20m
Talk
Psychometric Analysis of a Teacher Readiness and Concerns Scale in K-5 Computer Science Education K12
Papers
Yiwen Yang pc, Ziyu Fan University of Texas at Austin, Miriam Jacobson The University of Texas at Austin, Zhuoying Wang The University of Texas at Austin, Judy Lau University of Texas at Austin, Texas Advanced Computing Center (TACC)
16:40
20m
Talk
Words Matter: Integrating Adaptive Cybersecurity Phraseology in K-12 Education Subjects to Improve Cyber Hygiene K12
Papers
Timothy Crisp The University of Tulsa Oklahoma Cyber Innovation Institute, John Hale University of Tulsa
15:40 - 17:00
CS Curriculum Changes Papers at Meeting Room 260-267
15:40
20m
Talk
Algorithmic Arts: Attracting a New Type of Student to Computing - The Algorithm is the Medium
Papers
Erik Brunvand University of Utah, Bill Manaris College of Charleston
16:00
20m
Talk
Is It Time to Remove Data Structures? A Critical Look at Requirements and Curricular Placement.
Papers
Albert Lionelle Khoury College of Computer Sciences, Northeastern University
16:20
20m
Talk
Rethinking How We Discuss the Guidance of Student Researchers in Computing
Papers
Shomir Wilson Pennsylvania State University
16:40
20m
Talk
Towards a Computer Science Topics Ontology
Papers
Joshua Barron pc, Russell Feldhausen Kansas State University, Nathan H. Bean Kansas State University
15:40 - 17:00
15:40
20m
Talk
AI See What You Did There – The Prevalence of LLM-Generated Answers in MOOC Responses Online
Papers
Petteri Nurmi University of Helsinki, Musfira Khan University of Helsinki, Zahra Safaei University of Helsinki, Ngoc Thi Nguyen University of Helsinki, Fatemeh Sarhaddi University of Helsinki, Mika Tompuri University of Helsinki, Henrik Nygren University of Helsinki, Päivi Kinnunen University of Helsinki, Agustin Zuniga University of Helsinki
16:00
20m
Talk
Like parsley in Greek food: Elementary set theory and the case for DM1 Online
Papers
Siddharth Bhaskar University of Southern Denmark
16:20
20m
Talk
Teachers as Learners, Teachers as Teachers: Culturally Relevant Computational Thinking Professional Development for K-12 In-Service Teachers MSI K12 Online
Papers
Hao Yue pc, Jingyi Wang San Francisco State University, Ilmi Yoon San Francisco State University, Qiang Hao Western Washington University
15:40 - 17:00
Software EngineeringPapers at Meeting Room 275
15:40
20m
Talk
Teaching Software Documentation through an Asynchronous Module: An Experience Report
Papers
Arist Alfred Bravo University of Toronto, Jonathan Calver University of Toronto
16:00
20m
Talk
A Framework to Detect, Classify, and Prioritise Student Quality Defects
Papers
Shiman Cui The University of Auckland, Paul Denny The University of Auckland, Andrew Luxton-Reilly The University of Auckland
16:20
20m
Talk
Turning Insight into Action: Evaluating Targeted Interventions for a Software Engineering Course Informed by Student Reflections
Papers
Sandra Wiktor University of North Carolina at Charlotte, Mohsen Dorodchi University of North Carolina Charlotte
16:40
20m
Talk
Benchmarking AI Tools for Software Engineering Education: Insights into Design, Implementation, and Testing
Papers
Nimisha Roy Georgia Institute of Technology, Oleksandr Horielko Georgia Institute of Technology, Fisayo Omojokun Georgia Institute of Technology

Sat 21 Feb

Displayed time zone: Central Time (US & Canada)

10:40 - 12:00
CS Teachers Papers at Meeting Room 100
10:40
20m
Talk
A Longitudinal Pilot Study Exploring the Impacts of Coaching for Equity on Computer Science Teachers K12
Papers
Jennifer Rosato University of Minnesota, Joseph Tise Institute for Advancing Computing Education, Monica McGill Institute for Advancing Computing Education, Megan Deiger Consultant
11:00
20m
Talk
A Structured Inventory of Tools to unveil Teachers' Computer Science Knowledge K12
Papers
Agnese Del Zozzo University of Trento, Luca Lamanna University of Milan, Violetta Lonati University of Milan, Alberto Montresor University of Trento
11:20
20m
Talk
Praxis Prep: Supporting Secondary Career & Technical Education Teachers Pursuing CS Licensure K12
Papers
Jon Stapleton CodeVA, Chris Mayfield James Madison University, Lesley Frew Fairfax County Public Schools, Ebonie Campbell Norfolk Public Schools, Shelita Hodges Richmond Public Schools, Buffie Holley Albemarle High School, Natalie Rice CodeVA, Perry Shank CodeVA, Debra Bernstein TERC, James K.L. Hammerman TERC
10:40 - 12:00
Inclusive Computing Papers at Meeting Room 101
10:40
20m
Talk
Connecting Computing Students' External Help Resource Preferences and Internal Help Resource Usage: 2021-2025
Papers
Shao-Heng Ko Duke University, Kristin Stephens-Martinez Duke University
11:00
20m
Talk
“I Felt Dumb” vs. “It Suddenly Clicked”: Exploring Emotional Highs and Lows in Undergraduate Computing by Gender
Papers
Shelly Engelman Custom EduEval LLC, Andrew Watkins Case Western Reserve University, Rachelle Hippler Baldwin Wallace University, Dave Reed Capital University, Patricia Opong Columbus State Community College, Natalie Nurse Cuyahoga Community College
11:20
20m
Talk
Leveraging Collective Impact Principles for Identity-Inclusive Computing Education through AiiCE
Papers
Leiny Garcia pc, Shaundra Daily Duke University, Alicia Nicki Washington Duke University
11:40
20m
Talk
Occupation-Oriented Success: Fostering Competencies for Computing Careers with Hispanic or Latiné Students
Papers
Stephanie Lunn Florida International University, Edward Dillon University of Maryland, Baltimore County, Ashmita Thapaliya Florida International University, Krystal Williams University of Georgia
10:40 - 12:00
Programming Languages Papers at Meeting Room 260-267
10:40
20m
Talk
An Innovative Approach to Parsons Problems for Teaching and Learning Functional Programming
Papers
Jacob Bell Grinnell College, Anna Deschamps Grinnell College, Eva Kapoor Grinnell College, Salyan Karki Grinnell College, Julian Kim Grinnell College, Nicole Moreno Gonzalez Grinnell College, William Pitchford Grinnell College, Elene Sturua Grinnell College, Charles Wade Grinnell College, Samuel A. Rebelsky Grinnell College
11:00
20m
Talk
Blocks or Text: Who Struggles, Who Thrives? K12
Papers
Rafael Fernandes ETH Zürich, Alexander Wiß University of Trier, Andreas Schneider University of Trier, Angélica Herrera Loyo ETH Zurich, Dennis Komm ETH Zurich, Jacqueline Staub University of Trier
11:20
20m
Talk
Providing Choice of Programming Language: Student Outcomes in an Algorithms Course
Papers
John R. Hott University of Virginia
11:40
20m
Talk
Students' Understanding of (Delimited) Continuations
Papers
Filip Strömbäck Linköping University, Youyou Cong Institute of Science Tokyo, Kazuki Ikemori Tokyo Institute of Technology
10:40 - 12:00
10:40
20m
Talk
Rethinking the Future of Data Science Education: A Case for Thoughtful Design to Integrate AI into the College Classroom Online
Papers
Louise Yarnall SRI International, Hui Yang SRI International, Sophia Ouyang SRI International, Lujie Karen Chen University of Maryland, Baltimore County
11:00
20m
Talk
Investigating Student Belonging, Engagement, and Self-Efficacy in Online and In-Person Learning Environments MSI Online
Papers
Vijayalakshmi Ramasamy Georgia Southern University, Hagit Kornreich-Leshem Florida International University, Maria Reid Florida International University, Sharon Tuttle Cal Poly Humboldt, Tiana Solis Florida International University, Md Ali Rider University, Edward Jones Florida A&M University, Peter J. Clarke Florida International University, USA
11:20
20m
Talk
The Impostor Phenomenon and the Confidence Gap Online
Papers
Arto Hellas Aalto University, Andrew Petersen University of Toronto Mississauga
11:40
20m
Talk
Teaching Algorithmic Thinking to Elementary Students in an Unplugged Environment K12
Papers
Maggie Vanderberg Southern Oregon University, Dylana Garfus-Knowles Ashland School District, Gladys Krause William & Mary, Eva Skuratowicz Southern Oregon University, Eping Hung Southern Oregon University
10:40 - 12:00
AI For Everyone (K-12) Papers at Meeting Room 274
10:40
20m
Talk
AI for Everyone: Engaging Middle Schoolers through Collaborative, Ethical, and Multimodal AI Learning In-Person & Online K12
Papers
Kayleigh Stallings, Nicole Tian University of Texas at San Antonio, Elif Yayla Ercek University of Texas at San Antonio, Haven Kotara University of Texas at San Antonio, Devin Marinelli University of Texas at San Antonio, Pragathi Durga Rajarajan University of Texas at San Antonio, Daniel Schumacher pc, Ismaila Temitayo Sanusi University of Eastern Finland, Fred Martin University of Texas at San Antonio
11:00
20m
Talk
A Research Course to Develop AI Tools for K–12 Learning In-Person & Online K12
Papers
Ismaila Temitayo Sanusi University of Eastern Finland, Deepti Tagare University of Texas at San Antonio, Fred Martin University of Texas at San Antonio
11:20
20m
Talk
On Teaching Image Recognition to Children at a Summer Camp In-Person & Online K12
Papers
Pragathi Durga Rajarajan University of Texas at San Antonio, Fred Martin University of Texas at San Antonio
11:40
20m
Talk
Think Like AI: Hands-On Exploration of Sampling Parameters and Prompts for Middle School Students’ Generative AI Literacy In-Person & Online K12
Papers
Saniya Vahedian Movahed University of Texas at San Antonio
13:40 - 15:00
CS Teachers Papers at Meeting Room 100
13:40
20m
Talk
How Teacher Educators Adapt Debugging Instruction for Non-CS Teachers in K-12 Professional Development Practices K12
Papers
Tamara Nelson-Fromm, Aadarsh Padiyath University of Michigan, Mark Guzdial University of Michigan
14:00
20m
Talk
Programmatic, Policy, and Communication Challenges in Developing CS Teacher Preparation Pathways K12
Papers
Erin Anderson Georgia State, Yin-Chan Liao Georgia State University, Ijeoma Mbaezue Georgia State University, Jennifer Rosato University of Minnesota, Michelle Friend University of Nebraska Omaha
14:20
20m
Talk
Supporting K-12 CS Teacher Identity Development through Peer Mentoring K12
Papers
Aleata Hubbard Cheuoua WestEd, Portia Morrell Computer Science Teachers Association, Bryan Twarek Computer Science Teachers Association, Tonya Davis CSTA Black Affinity Group, Kathleen Effner CSTA New Jersey, Amy Fetherston CSTA Wisconsin Dairyland, Kevin Jala CSTA New Jersey, Linnea Logan CSTA Wisconsin Dairyland
13:40 - 15:00
Detecting AI Generated Code Papers at Meeting Room 102
13:40
26m
Talk
Detecting AI-Generated Code in Introductory Programming Courses
Papers
Aryan Ramachandra pc, Suhani Chaudhary University of California, Riverside, Justin Tran University of California, Riverside, Riti Desai University of California, Riverside, Ashley Pang UC Riverside, Mariam Salloum BCOE/Computer Science
14:06
26m
Talk
LLM-Based Explainable Detection of LLM-Generated Code in Python Programming Courses
Papers
Jeonghun Baek The University of Tokyo, Tetsuro Yamazaki University of Tokyo, Akimasa Morihata University of Tokyo, Junichiro Mori The University of Tokyo, Yoko Yamakata The University of Tokyo, Kenjiro Taura The University of Tokyo, Shigeru Chiba The University of Tokyo
14:33
26m
Talk
AI in the Eyes of Middle Schoolers: Perceptions, Attitudes, and Literacy K12
Papers
Maria Kasinidou Open University of Cyprus, Styliani Kleanthous Open University of Cyprus, Jahna Otterbacher Open University of Cyprus
13:40 - 15:00
Culturally Responsive Computing Education Papers at Meeting Room 103-104
13:40
20m
Talk
Culturally Responsive Computer Science and Social Studies Integration in Middle School MSI K12
Papers
Mengying Jiang Utah State University, Kristin Searle Utah State University, Michaela Harper Utah State University
14:00
20m
Talk
For TAs, With TAs: A Responsive Pedagogy Co-Design Workshop
Papers
Ian Pruitt Georgia State University, Grace Barkhuff Georgia Institute of Technology, Vyshnavi Namani Georgia Institute of Technology, Ellen Zegura Georgia Institute of Technology, William Gregory Johnson Georgia State University, Rodrigo Borela pc, Benjamin Shapiro Georgia State University, Anu G. Bourgeois Georgia State University
14:20
20m
Talk
Sustaining K-8 Computer Science Instruction with Indigenous Communities K12
Papers
Kathryn M. Rich American Institutes for Research, Heather Cunningham Boot Up Professional Development, Joseph Wilson American Institutes for Research, Alberta Oldman Wyoming Indian Schools, Taralee Suppah Wyoming Indian Schools, Elena Singer Wyoming Indian Schools, Lara M. Lock Fort Washakie School, Amanda LeClair-Diaz Fort Washakie School, Claudette C'Bearing Arapahoe Schools, Wilfred J. Ferris III Arapahoe Schools, Veronica E. Miller Arapahoe Schools, Marissa Spang American Institutes for Research, Emily Kern Partner to Improve
14:40
20m
Talk
Why Some Students Still Opt Out of CS: Student Perspectives in a Culturally Responsive Program
Papers
Bridget Agyare University of Illinois Urbana-Champaign, Skyla Jin University of Illinois Urbana-Champaign, Diana Arreola Scripps College, Colleen M. Lewis University of Illinois Urbana-Champaign
13:40 - 15:00
Peer Instruction Papers at Meeting Room 105
13:40
20m
Talk
A Multi-Institutional Study on Peer Instruction: Evaluating Text-Chat with Assigned Group Members vs Verbal Discussion
Papers
Xingjian Gu University of Michigan, Barbara Ericson University of Michigan, Zihan Wu University of Michigan, Margaret Ellis Virginia Tech, Janice Pearce Berea College, Susan Rodger Duke University, Yesenia Velasco Duke University
14:00
20m
Talk
Overcoming Barriers to Adopting Peer Instruction
Papers
Xingjian Gu University of Michigan, Memuna Tariq University of Michigan, Zihan Wu University of Michigan, Barbara Ericson University of Michigan
14:20
20m
Talk
Supporting Peer-to-Peer Learning with LLMs: Investigating Smarter Student Solution Recommendations
Papers
Sandra Wiktor University of North Carolina at Charlotte, Aileen Benedict University of North Carolina at Charlotte, Mohsen Dorodchi University of North Carolina Charlotte
13:40 - 15:00
Teaching Faculty Papers at Meeting Room 260-267
13:40
20m
Talk
CS Teaching-Track Faculty Recruiting in the USA: An Experience Report From the University of Washington
Papers
Justin Hsia University of Washington, Seattle
14:00
20m
Talk
Partnering with Community College Faculty to Co-Design Intelligent Tutoring Systems for Cybersecurity Workforce Training
Papers
Marshall An Carnegie Mellon University, Mahboobeh Mehrvarz Carnegie Mellon University, Leah Teffera Carnegie Mellon University, Matthew Kisow Community College of Allegheny County, Bruce M. McLaren Carnegie Mellon University, Christopher Bogart Carnegie Mellon University
14:20
20m
Talk
Scaling Large CS Courses via Full-time Teaching Support Staff
Papers
Alex Chao University of California, San Diego, Yesenia Velasco Duke University

Accepted Papers

Title
A Call for Critical Technology to Enable Innovative and Alternative Grading Practices K12
Papers
A Code-Free, Direct-Manipulation Interface for Constructing Boolean Expressions
Papers
A Framework to Detect, Classify, and Prioritise Student Quality Defects
Papers
AI for Everyone: Engaging Middle Schoolers through Collaborative, Ethical, and Multimodal AI Learning In-Person & Online K12
Papers
AI in the Eyes of Middle Schoolers: Perceptions, Attitudes, and Literacy K12
Papers
AI See What You Did There – The Prevalence of LLM-Generated Answers in MOOC Responses Online
Papers
AI-Supported Grading and Rubric Refinement for Free Response Questions
Papers
Algorithmic Arts: Attracting a New Type of Student to Computing - The Algorithm is the Medium
Papers
Aligning Small Language Models for Programming Feedback: Towards Scalable Coding Support in a Massive Global Course
Papers
A Longitudinal Pilot Study Exploring the Impacts of Coaching for Equity on Computer Science Teachers K12
Papers
Alternatives in Compiler Construction Pedagogy
Papers
A Multi-Institutional Study on Peer Instruction: Evaluating Text-Chat with Assigned Group Members vs Verbal Discussion
Papers
Analogical Reasoning in Undergraduate Algorithms
Papers
An Innovative Approach to Parsons Problems for Teaching and Learning Functional Programming
Papers
A Pedagogy for Assessing Individual Contributions to Team-Based Software Projects
Papers
API Can Code: Laying the Computational Foundations of Data Science in High School Classrooms K12
Papers
A Replication Study on Student Expectations on CS Tutors: Understanding Roles and Labors of Tutors
Papers
A Research Course to Develop AI Tools for K–12 Learning In-Person & Online K12
Papers
Assessing Student Proficiency in Foundational Developer Tools Through Live Checkoffs
Papers
A Structured Inventory of Tools to unveil Teachers' Computer Science Knowledge K12
Papers
A “watch your replay videos” Reflection Assignment on Comparing Programming without versus with Generative AI: Learning about Programming, Critical AI Use and Limitations, and Reflection
Papers
Behind the Scenes of Delivering a Large Computing Course: The Experience of a TA Managing Logistics
Papers
Benchmarking AI Tools for Software Engineering Education: Insights into Design, Implementation, and Testing
Papers
Blocks or Text: Who Struggles, Who Thrives? K12
Papers
Boosting Coding Confidence in Elementary Students: The Impact of ELA-Integrated Computational Thinking Curriculum MSI K12
Papers
Boosting Student Motivation through Game-based Learning in Programming Education with Gamify-IT K12 Online
Papers
Bridging Computational Thinking, Science, and Storytelling: Reflections on an Interdisciplinary Learning Approach K12
Papers
Bridging Prerequisite Gaps: When, How, and How Much?
Papers
Capturing Student Reasoning with Low-Cost AI: An Early Experience in a Data-Structures Course
Papers
ChartCode: A Tool for Visual Coding, Simulation, and Targeted Formative Evaluation
Papers
Choosing Their Own Way: Guided Self-Placement for Students in an Introductory Programming Sequence
Papers
Clause-Driven Automated Grading of SQL’s DDL and DML Statements
Papers
Closing the Loop: An Instructor-in-the-Loop AI Assistance System for Supporting Student Help-Seeking in Programming Education
Papers
Codeless Modules for Parallel and Distributed Computing in Early Computing Curriculum
Papers
Comparing Student Performance on Un-Proctored Online Exams and Proctored In-Person Exams in a CS0 Course
Papers
Competing or Collaborating? The Role of Hackathon Formats in Shaping Team Dynamics and Project Choices
Papers
Connecting Computing Students' External Help Resource Preferences and Internal Help Resource Usage: 2021-2025
Papers
Creating a Second Pathway to the Computing Major MSI In-Person & Online
Papers
Creating Exercises with Generative AI for Teaching Introductory Secure Programming: Are We There Yet?
Papers
CS Ed. in Prisons and Jails: Evidence of Computer Programming Self-Efficacy Growth Across Multiple Course Offerings
Papers
CS Teaching Assistant Perceptions on LLM-Generated Faded Worked Examples for Feedback Training
Papers
CS Teaching-Track Faculty Recruiting in the USA: An Experience Report From the University of Washington
Papers
CS Unplugged in Gateway Computing Courses: A Collaborative, Active Learning Approach in Introductory Computing
Papers
Culturally Responsive Computer Science and Social Studies Integration in Middle School MSI K12
Papers
Debugging Support for Students with Blindness and Visual Impairments on Notebook-based Programming Environments
Papers
Defamiliarizing Data: An Education Abroad Course in Human-Centered Computing and Information Science
Papers
DeliverC: Teaching Pointers through GenAI-Powered Game-Based Learning
Papers
Deriving Instructional Insights from Human–LLM Co-Evaluation of Student Collaboration in Data-Centric Programming
Papers
Describing Functionality in Natural Language May Improve Decomposition Behaviors
Papers
Designing and Implementing Skill Tests at Scale: Frequent, Computer-Based, Proctored Assessments with Minimal Infrastructure Requirements
Papers
Detecting AI-Generated Code in Introductory Programming Courses
Papers
Developing a Decolonial Mindset for Indigenising Computing EducationMSIIn-Person & Online
Papers
Developing a Survey Instrument for Sense of Belonging in Computing CoursesMSI
Papers
Developing Problem-Solving Competency in Data Science: Exploring A Case-Based Approach
Papers
Earning a CS Minor: A Not-So-Minor Feat. A Survey of Accessibility and Structure of 120 Computer Science Minors
Papers
EduLint: a Versatile Tool for Code Quality Feedback
Papers
Effects of Project Type on CS Capstone Courses
Papers
Enabling Open Educational Resource Adoption through Integrated Sharing in PrairieLearn
Papers
Encouraging Learning Through Repetition: Effects of Multiple Practice Opportunities in a Large Intro Programming Course
Papers
Enhancing Computer Network Education for High School Students with an Educational Simulator Visualizing Packet Retransmission and RoutingCC
Papers
Enhancing Student Engagement and Learning in Database Programming Through Active Learning Strategies
Papers
Evaluating the Impact of Accessibility Testing Tool Usage Across the Software Development Lifecycle in Student Projects
Papers
Examining Discourse in a Large Online Education Program: A Machine-in-the-Loop ApproachOnline
Papers
Examining Inclusive Computing Education for Blind Students in India
Papers
Examining the Impact of Instructor-Client Mentoring Models in CS Capstone Courses at a Public University
Papers
Experience Report: Teaching Computer Science Ethics using Science Fiction Across Multiple Institutions and Course Types
Papers
Experiences with the ChCore Experimental Operating System KernelOnline
Papers
Exploring K–12 Teacher Motivation to Engage with AI in EducationK12
Papers
Exploring Student Choice and the Use of Multimodal Generative AI in Programming Learning
Papers
Exploring the Relationship Between Department Characteristics and Computer Science Student Diversity in the US MSIIn-Person & Online
Papers
Exploring transitions of graduates from an online master's in computer science program to doctoral programs
Papers
Exploring Undergraduate Computing Tutors’ Pedagogical Practices
Papers
Extracurricular Activities Predict CS Internship Attainment
Papers
Fighting Fire with Fire: LLM-Assisted Grading of Handwritten CS AssessmentsK12
Papers
Fine-Tuning Open-Source Models as a Viable Alternative to Proprietary LLMs for Explaining Compiler Messages
Papers
For TAs, With TAs: A Responsive Pedagogy Co-Design Workshop
Papers
From Data to Action: Empowering Students to Assess and Improve Teamwork with Cross-Tool Log Data
Papers
Gamified Learning and Instructional Analogies for Theory of Computing Courses
Papers
HeuristicBuilder: An Interactive Multimodal Approach to Teaching Usability Heuristics
Papers
How AI Ethics is Taught: Insights from a Syllabus-Level Review of U.S. Computing Courses
Papers
How Shared Gender Identity with Teaching Assistants Relates to Student Outcomes in an Undergraduate Algorithms Course
Papers
How Teacher Educators Adapt Debugging Instruction for Non-CS Teachers in K-12 Professional Development PracticesK12
Papers
HyProf: A Profiler for Programming Students that Offers Hypotheses about Performance Bugs
Papers
“I Felt Dumb” vs. “It Suddenly Clicked”: Exploring Emotional Highs and Lows in Undergraduate Computing by Gender
Papers
If You Can’t Beat ‘Em, Conscript ‘Em: Experiences Requiring the Use of AI in a Capstone Course
Papers
Impacts of Adding Ethics Modules to Individual Computing Courses
Papers
Improving LLM-Generated Educational Content: A Case Study on Prototyping, Prompt Engineering, and Evaluating a Tool for Generating Programming Problems for Data Science
Papers
Improving Professional Dispositions in Computing Curriculum Using Sequential Peer Assessment
Papers
Improving the Reliability of Grading Written-Response Coding Questions in a Large CS1 Course
Papers
Integrating Hands-On Data Collection Experience in an Introductory Programming Class for Non-CS MajorsK12
Papers
Introducing Quantum Computing to K-12 Teachers through a Professional Development WorkshopK12
Papers
Investigating Answer Choice Bias within a College-Level Introductory Computing Assessment
Papers
Investigating Student Belonging, Engagement, and Self-Efficacy in Online and In-Person Learning EnvironmentsMSIOnline
Papers
DOI
Is It Time to Remove Data Structures? A Critical Look at Requirements and Curricular Placement.
Papers
“It Wasn’t As Bad As I Thought”: Exploring K-12 Students' Experiences with Real-Time and Pre-Recorded Physiological DataK12
Papers
Leveraging Collective Impact Principles for Identity-Inclusive Computing Education through AiiCE
Papers
Like parsley in Greek food: Elementary set theory and the case for DM1Online
Papers
LLM-Based Explainable Detection of LLM-Generated Code in Python Programming Courses
Papers
Mapping Required Ethics Education in Computer Science: Insights from 100 U.S. ProgramsIn-Person & Online
Papers
Measuring Students’ Perceptions of an Autograded Scaffolding Tool for Students Performing at All Levels in an Algorithms Class
Papers
miniK8s: A Pedagogical Cloud-Native SystemOnline
Papers
Navigating Computing Careers: TikTok’s Potential Role as an Informal ResourceK12
Papers
Occupation-Oriented Success: Fostering Competencies for Computing Careers with Hispanic or Latiné Students
Papers
On Teaching Image Recognition to Children at a Summer CampIn-Person & OnlineK12
Papers
Open Cybersecurity Education: Five Years of pwn.college
Papers
Overcoming Barriers to Adopting Peer Instruction
Papers
Owlgorithm: Supporting Self-Regulated Learning in Competitive Programming through LLM-Driven Reflection
Papers
Pacing for Mastery: Optimizing LLM Interactions for Learning
Papers
Parallel X: Redesigning of a Parallel Programming Educational Game with Semantic Foundations and Transfer Learning
Papers
Partnering with Community College Faculty to Co-Design Intelligent Tutoring Systems for Cybersecurity Workforce Training
Papers
Performance and Start-Time Trends in Asynchronous Computer-Based Assessments
Papers
Personalized Exam Prep (PEP): Scaling No-Stakes, No-LLM Dialogue-Based Assessments in a Large CS Course
Papers
Piloting a Vignettes Assessment to Measure K-5 CS Teacher Proficiencies and GrowthK12
Papers
PQCIP: A Post-Quantum Cryptography Educational Program for Cybersecurity Professionals
Papers
Praxis Prep: Supporting Secondary Career & Technical Education Teachers Pursuing CS LicensureK12
Papers
Programmatic, Policy, and Communication Challenges in Developing CS Teacher Preparation PathwaysK12
Papers
Providing Choice of Programming Language: Student Outcomes in an Algorithms Course
Papers
Psychometric Analysis of a Teacher Readiness and Concerns Scale in K-5 Computer Science EducationK12
Papers
Quandray:Student Conceptions of Quantum Concepts from a Gameworld
Papers
QuantAid: A Quiz-based Quantum Learning Platform for High-school and Undergraduate StudentsK12
Papers
Reflecting on Thematic Analysis in Computer Science Education Research: A Field Guide for Researchers and Reviewers
Papers
Relative Self-Efficacy in Computer Science Courses
Papers
Repetition Meets Context: Teaching CS1 Through Two Scientific Domains
Papers
Rethinking How We Discuss the Guidance of Student Researchers in Computing
Papers
Rethinking the Future of Data Science Education: A Case for Thoughtful Design to Integrate AI into the College ClassroomOnline
Papers
Rooms of Their Own: Structured Small-Group Learning in a Realtime Browser-Based IDE
Papers
Scaffolding genAI for Critical Reflection: A Transformative Approach to Diverging Assessments in IT Forensics
Papers
Scaling Engagement: Leveraging Social Annotation and AI for Collaborative Code Review in Large CS Courses
Papers
Scaling Large CS Courses via Full-time Teaching Support Staff
Papers
SportSense for Data Literacy: Applying Sports and Movement for Authentic and Personal Data Interactions in Elementary SchoolsK12
Papers
Starting with DEI and Ethics - A New First-Year College Computer Science IntroductionIn-Person & Online
Papers
Student/Faculty Partnerships to Teach Computing Ethics Beyond the Computer Science Classroom
Papers
Students’ Evaluation of a Free and a Paid Interactive eTextbooks for Computing Education
Papers
Students' Understanding of (Delimited) Continuations
Papers
Students with Disabilities in CS Principles: An Examination of Capacity, Access, and Participation
Papers
Supporting K-12 CS Teacher Identity Development through Peer MentoringK12
Papers
Supporting Peer-to-Peer Learning with LLMs: Investigating Smarter Student Solution Recommendations}
Papers
Sustaining K-8 Computer Science Instruction with Indigenous CommunitiesK12
Papers
Systematically Thinking about the Complexity of Code Structuring Exercises at Introductory Level
Papers
Systems for Scaling Accessibility Efforts in Large Computing Courses
Papers
Media Attached
Talking to Our Students about Generative AI
Papers
Teachers as Learners, Teachers as Teachers: Culturally Relevant Computational Thinking Professional Development for K-12 In-Service TeachersMSIK12Online
Papers
Teaching Algorithmic Thinking to Elementary Students in an Unplugged EnvironmentK12
Papers
Teaching Authentic Programming Applications to Novices: Purpose-first Tutorials in a General Education Computing CourseIn-Person & Online
Papers
Teaching Probabilistic Machine Learning in the Liberal Arts: Empowering Socially and Mathematically Informed AI Discourse
Papers
Teaching Software Documentation through an Asynchronous Module: An Experience Report
Papers
The Cost of Catching Up: Investigating the Impact of Late Enrollment on Student Success in a CS0 Course
Papers
The Development of Intercultural Competence through Information Science Education Abroad
Papers
The Impact of Misalignment between Student and Teacher Evaluation of Student Skills on Middle School Student Motivation in Computer ScienceK12
Papers
The Impostor Phenomenon and the Confidence GapOnline
Papers
The Linux Luminarium: Learning Linux by Leveraging Lightweight Labs and Ludicrous Lessons
Papers
The Open Source Resume: How Open Source Contributions Help Students Demonstrate Alignment With Employer Needs
Papers
Theory Is Cool But So Are Microcontrollers: Computer Science Student Reactions to Arduino TinyML
Papers
Think Like AI: Hands-On Exploration of Sampling Parameters and Prompts for Middle School Students’ Generative AI LiteracyIn-Person & OnlineK12
Papers
Toward Accessible Parsons Problems on Mobile Platforms
Papers
Towards a Computer Science Topics Ontology
Papers
Transforming Confusion into Diffusion: Advancing Machine Learning Education via Bottom-Up Instruction
Papers
Turning Insight into Action: Evaluating Targeted Interventions for a Software Engineering Course Informed by Student Reflections
Papers
Understanding Software Engineering Practices and Tools in Undergraduate Mechanical Engineering StudentsIn-Person & Online
Papers
Understanding Student Interaction with AI-Powered Next-Step Hints: Strategies and Challenges
Papers
University of Washington Web-Based Simulators for Visualizing Cache and Virtual Memory Concepts
Papers
Using In-Class Exercise Data for Early Support of Struggling Students
Papers
Virtual Reality-Based, Gamified Accessibility Education: An Experience Report
Papers
Walking the Walk: Centering Students with Disabilities in Accessibility Education
Papers
WebTigerPython: A Low-Floor High-Ceiling Python IDE for the Browser
Papers
When Support Isn’t Enough: Understanding and Redesigning Student Support Systems in Large Computing Courses
Papers
Where are the Disabled Students?: A Systematic Literature Review of Disability Inclusion in Computing Education Research
Papers
Where are the faculty? The missing perspective on teaching socio-ethical competencies in computer scienceIn-Person & Online
Papers
“Why Do I Feel Like a Fraud?”: Understanding Imposter Phenomenon in Computing Students through Ecological Momentary Assessment
Papers
Why Learn This? Visualizing Pathways Between CS Courses and Careers to Engage Students
Papers
Why Some Students Still Opt Out of CS: Student Perspectives in a Culturally Responsive Program
Papers
Words Matter: Integrating Adaptive Cybersecurity Phraseology in K-12 Education Subjects to Improve Cyber HygieneK12
Papers
You Don't Need a Data Center to Explain in Plain English! Comparing Open-Source and Propriety LLMs for EiPE Grading
Papers
You're on the Ball: Using Games to Explore Accessibility for Neurodivergent Users
Papers

Deadlines and Submission

Papers submitted to SIGCSE TS 2026 follow a two-step submission process. The first step requires that authors submit all paper metadata and a plain text abstract in EasyChair no later than Thursday, 26 June 2025. This data is used to allow reviewers to bid on potential papers to maximize the match of reviewer expertise to paper content. To help the bidding and reviewing process, please submit an abstract that is as close to the finished version as possible. The Program Chairs reserve the right to desk reject abstracts that do not contain content that can help a reviewer during bidding.

The second step of the paper submission process is to upload the final anonymized PDF of the full paper for review. This must be completed no later than Thursday, 3 July 2025. Authors who fail to submit an abstract by the first deadline will not be permitted to submit a full PDF.

Important Dates

Abstract Due Date: Thursday, 26 June 2025
Abstract Due Time: 23:59 AoE (Anywhere on Earth, UTC-12h)
Paper Due Date: Thursday, 3 July 2025
Paper Due Time: 23:59 AoE (Anywhere on Earth, UTC-12h)
Submission Limits: 6 pages + 1 page only for references
Notification to Authors: Monday, 15 September 2025 (tentative)
Submission Link: https://easychair.org/conferences/?conf=sigcsets2026
Session Duration: 15 minutes

Authors should carefully choose the appropriate track and review the authorship policies. Authors may also find it useful to read the Instructions for Reviewers and the Review Forms to understand how their submissions will be reviewed. Also note that when submitting, you will need to select between 3 and 7 related topics from the Topics list under Info.

Abstracts

All papers must have a plain-text abstract of up to 250 words. Abstracts should not contain subheadings or citations. The abstract should be submitted in EasyChair along with paper metadata, and the same text should be included in the PDF version of the full paper at the appropriate location.

Submission Templates

SIGCSE TS 2026 is NOT participating in the new ACM TAPS workflow, template, and production system.

All paper submissions must be in English and formatted using the 2-column ACM SIG Conference Proceedings format and US letter size pages (8.5 x 11 inch or 215.9 x 279.4 mm). You must NOT alter any of the templates (e.g., regarding font size, margins, etc.).

Here is a Sample Paper Submission with Notes that has some notes/tips and shows the required sections.

Page Limits: Papers are limited to a maximum of 6 pages of body content (including all titles, author information, abstract, main text, tables and illustrations, acknowledgements, and supplemental material). One additional page may be included, which contains only references. If included, appendix materials MUST NOT be present on the optional references page.

Requirements for Double-Anonymous Review Process: At the time of submission, all entries must include blank space for all author information (or anonymized author name, institution, location, and email address), followed by an abstract, keywords, CCS Concepts, placeholders for the ACM Reference Format and copyright blocks, and references. For anonymized submissions, all blank space necessary for author information must be reserved under the title in the form of fully anonymized text (e.g., 4 lines containing Author1, Author1Institution, Author1Location, anon1@university.edu, with 3 author blocks per row). Do not simply leave space on the sixth page: that makes it impossible for chairs to verify whether you adhered to the page limit, and it may lead to a desk reject. In addition, please leave enough blank space for what you intend to include in the Acknowledgements, but do not include the text itself, especially names, university or lab names, and granting agencies and grant numbers. Acknowledgements must be included in the first 6 pages (not on page 7).

Other requirements: You must provide a separate block for each author, including name, email, institution, location, and country, even if authors share an institution (i.e., 4 lines per author).

Desk Rejects: Papers that do not adhere to page limits or formatting requirements will be desk rejected without review.

Accessibility: SIGCSE TS 2026 authors are strongly encouraged to prepare submissions using these templates in such a manner that the content is widely accessible to potential reviewers, track chairs, and readers. Please see these resources for preparing an accessible submission.

MS Word Authors: Please use the interim Word template provided by ACM.

  • NOTE: The default interim Word template text shows appendix materials following the references. SIGCSE TS 2026 does not permit appendices on the optional page allotted for references. Authors must include all relevant content within the 6 body pages of the paper. References are the ONLY thing that can be added on page 7.

LaTeX Authors:

  • Overleaf provides a suitable two-column sig conference proceedings template.
  • Do not use the anonymous document class option, as counter-intuitive as that sounds. Full author blocks with sufficient space (4 lines per author, 3 author blocks per row) have to appear in the submission right under the title, and the ‘anonymous’ option removes them.
  • Other LaTeX users may alternatively use the ACM Primary template, adding the “sigconf” format option to the documentclass to obtain the 2-column format. (ACM has recently changed the ACM template and we have not yet had a chance to verify that the new version works correctly.)
  • NOTE: The default LaTeX template text shows appendix materials following the references. SIGCSE TS 2026 does not permit appendices on the optional page allotted for references. Authors must include all relevant content within the 6 body pages of the paper. References are the ONLY thing that can be added on page 7.

Double Anonymized Review

Authors must submit ONLY an anonymized version of the paper. The goal of the anonymized version is to, as much as possible, provide the author(s) of the paper with an unbiased review. The anonymized version must have ALL mentions of the authors removed (including author’s names and affiliation plus identifying information within the body of the paper such as websites or related publications). However, authors are reminded to leave sufficient space in the submitted manuscripts to accommodate author information either at the beginning or end of the paper.

LaTeX/Overleaf users should be cautious when using the ‘anonymous’ option. Full author blocks with sufficient space (4 lines per author, 3 author blocks per row) have to appear in the submission right under the title (not just at the end of page 6), and that option removes them. Thus, authors have to use anonymized placeholder text in the author information block, e.g., “Author 1”, “Affiliation 1”, etc. Word users also have to adhere to this requirement by including a block for each author (4 lines per author, 3 author blocks per row).

Self-citations need not be removed if they are worded so that the reviewer cannot tell whether the writer is citing themselves. That is, instead of writing “We reported on our first experiment in 2017 in a previous paper [1]”, the writer might write “In 2017, an initial experiment was done in this area, as reported in [1].”

As per ACM guidelines, authors may distribute a preprint of their work on ArXiv.org. However, to ensure the anonymity of the process, we ask that you not publish your work until after you receive the accept/reject notice. If particular aspects of your paper require earlier distribution of the preprint, please consider changing the title and abstract so that reviewers do not inadvertently discover your identity.

Submissions to the Papers tracks are reviewed with a double-anonymous review process. The reviewers and meta-reviewers (i.e., associate program chairs or APCs) are unaware of the author identities, and reviewers and APCs are anonymous to each other and to the authors.

The reviewing process includes a discussion phase after initial reviews have been posted. During this time, the reviewers and APC can examine all reviews and privately discuss the strengths and weaknesses of the work in an anonymous manner through EasyChair. Following discussion, the APC shall draft a meta-review that holistically captures the group position on the paper, incorporating views raised in the reviews and during the discussion phase.

The SIGCSE TS 2026 review process does not have a rebuttal period for authors to respond to comments, and all acceptance decisions are final.

ACM Policies

By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects (https://www.acm.org/publications/policies/research-involving-human-participants-and-subjects). Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy. Please see the Authorship Policies page for details.

ORCID ID

ACM has made a commitment to collect ORCiD IDs from all published authors (https://authors.acm.org/author-resources/orcid-faqs). All authors on each submission must have an ORCiD ID (https://orcid.org/register) in order to complete the submission process. Please make sure to get your ORCiD ID in advance of submitting your work. (If EasyChair does not request the ORCiD ID for your coauthors, you do not need to find a way to enter one.)

Be aware of reviewing guidelines for each track

Once submitted, a paper will not be moved between the three paper tracks. If your paper does not fit with the reviewing criteria below for the track that you chose, it is probable that it will receive lower scores.

Authors should check the following review guidelines to see in which track their paper best fits.

There are three different paper types at SIGCSE TS: Computing Education Research (CER), Experience Reports and Tools (ERT), and Position and Curricula Initiative (PCI). Reviewers are assigned to a specific paper track (e.g., a reviewer in the CER track will only be assigned to review papers in that track). This avoids confusion and lets reviewers become familiar with the guidelines for their specific paper track.

All papers will be considered relative to criteria for motivation, use of prior/related work, approach, evidence, contribution/impact, and presentation. Each track has guidance about how reviewers should consider these criteria relative to the goal of the track, and each paper must be evaluated using the criteria for the track to which it is submitted.

The following criteria describe how reviewers interpret each review dimension for the three paper tracks: Computing Education Research (CER), Experience Reports & Tools (ERT), and Position & Curricula Initiative (PCI). For convenience, you may also download a PDF copy of the paper review criteria.
Motivation

Evaluate the submission's clarity of purpose and alignment with the scope of the SIGCSE TS.

CER:
  • The submission provides a clear motivation for the work.
  • The submission states a set of clear Research Questions or Specific Aims/Goals.

ERT:
  • The submission provides a clear motivation for the work.
  • Objectives or goals of the experience report are clearly stated, with an emphasis on contextual factors that help readers interpret the work.
  • ERT submissions need not be framed around a set of research questions or theoretical frameworks.

PCI:
  • The submission provides a clear motivation for the work.
  • Objectives or goals of the position or curricula initiative are clearly stated, and speak to issues beyond a single course or experience.
  • Submissions focused on curricula, programs, or degrees should describe the motivating context before the new initiative was undertaken.
  • PCI papers may or may not ground the work in theory or research questions.
Prior and Related Work

Evaluate the use of prior literature to situate the work, highlight its novelty, and interpret its results.

CER:
  • Discussion of prior and related work (e.g., theories, recent empirical findings, curricular trends) to contextualize and motivate the research is adequate.
  • The relationship between prior work and the current study is clearly stated.
  • The work leverages theory where appropriate.

ERT:
  • Discussion of prior and related work to contextualize and motivate the experience report is adequate.
  • The relationship between prior work and the experience or tool is clearly stated.

PCI:
  • Discussion of prior and related work to contextualize and motivate the position or initiative is adequate.
  • The relationship between prior work and the proposed initiative or position is clearly stated.
Approach

Evaluate the transparency and soundness of the approach used in the submission relative to its goals.

CER:
  • Study methods and data collection processes are transparent and clearly described.
  • The methodology described is a valid/sound way to answer the research questions posed or address the aims of the study identified by the authors.
  • The submission provides enough detail to support replication of the methods.

ERT:
  • For tool-focused papers: Is the design of the tool appropriate for its stated goals? Is the context of its deployment clearly described?
  • For experience report papers: Is the experience sufficiently described to understand how it was designed/executed and who the target learner populations were?
  • For all papers: To what extent does the paper provide reasonable mechanisms of formative assessment about the experience or tool?

PCI:
  • The submission uses an appropriate mechanism to present and defend its stated position or curriculum proposal (this may include a scoping review, secondary data analysis, or program evaluation, among others).
  • As necessary, the approach used is clearly described.
  • PCI papers leveraging a literature-driven argument need not necessarily use a systematic review format, though it may be appropriate for certain types of claims.
Evidence

Evaluate the extent to which the submission provides adequate evidence to support its claims.

CER:
  • The analysis & results are clearly presented and aligned with the research questions/goals.
  • Qualitative or quantitative data is interpreted appropriately.
  • Missing or noisy data is addressed.
  • Claims are well supported by the data presented.
  • The threats to validity and/or study limitations are clearly stated.

ERT:
  • The submission provides rich reflection on what did or didn’t work, and why.
  • Evidence presented in ERT papers is often descriptive or narrative in format, and may or may not be driven by explicit motivating questions.
  • Claims about the experience or tool are sufficiently scoped within the bounds of the evidence presented.

PCI:
  • PCI papers need not present original data collection, but may leverage other forms of scholarly evidence to support the claims made.
  • Evidence presented is sufficient for defending the position or curriculum initiative.
  • Claims should be sufficiently scoped relative to the type of evidence presented.
Contribution & Impact

Evaluate the overall contribution to computing education made by this submission.

CER:
  • All CER papers should advance our knowledge of computing education.
  • Quantitative research should discuss generalizability or transferability of findings beyond the original context.
  • Qualitative research should add deeper understanding about a specific context or problem.
  • For novel projects, the contribution beyond prior work is explained.
  • For replications, the contribution includes a discussion of the implications of the new results (even if null or negative) when compared to prior work.

ERT:
  • Why the submission is of interest to the SIGCSE community is clearly explained.
  • The work enables adoption by other practitioners.
  • The work highlights the novelty of the experience or tool presented.
  • The implications for future work/use are clearly stated.

PCI:
  • The work presents a coherent argument about a computing education topic, including, but not limited to, curriculum or program design, practical and social issues facing computing educators, and critiques of existing practices.
  • The submission offers new insights about broader concerns to the computing education community or offers guidance for adoption of new curricular approaches.
Presentation

Evaluate the writing quality with respect to expectations for publication, allowing for only minor revisions prior to final submission.

All tracks (CER, ERT, and PCI):
  • The presentation (writing, graphs, or diagrams) is clear.
  • Overall flow and organization are appropriate.

Example papers

There are many resources for writing high-quality papers for submission to the SIGCSE Technical Symposium. We encourage authors to read and evaluate papers from a prior SIGCSE Technical Symposium, especially those designated as best papers, which were selected for both their content and the quality of their reporting.

Here are the best papers from SIGCSE TS 2023 as examples that showcase the difference between the three paper tracks.

Computing Education Research (CER)

  • Geoffrey L. Herman, Shan Huang, Peter A. Peterson, Linda Oliva, Enis Golaszewski, and Alan T. Sherman. 2023. Psychometric Evaluation of the Cybersecurity Curriculum Assessment. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 228–234. https://dl.acm.org/doi/10.1145/3545945.3569762

  • Rachel Harred, Tiffany Barnes, Susan R. Fisk, Bita Akram, Thomas W. Price, and Spencer Yoder. 2023. Do Intentions to Persist Predict Short-Term Computing Course Enrollments: A Scale Development, Validation, and Reliability Analysis. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 1062–1068. https://dl.acm.org/doi/10.1145/3545945.3569875

  • Eric J. Mayhew and Elizabeth Patitsas. 2023. Critical Pedagogy in Practice in the Computing Classroom. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 1076–1082. https://dl.acm.org/doi/10.1145/3545945.3569840

Experience Reports and Tools (ERT)

  • Bailey Flanigan, Ananya A. Joshi, Sara McAllister, and Catalina Vajiac. 2023. CS-JEDI: Required DEI Education, by CS PhD Students, for CS PhD Students. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 87–93. https://dl.acm.org/doi/10.1145/3545945.3569733
  • Gloria Ashiya Katuka, Yvonika Auguste, Yukyeong Song, Xiaoyi Tian, Amit Kumar, Mehmet Celepkolu, Kristy Elizabeth Boyer, Joanne Barrett, Maya Israel, and Tom McKlin. 2023. A Summer Camp Experience to Engage Middle School Learners in AI through Conversational App Development. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 813–819. https://dl.acm.org/doi/10.1145/3545945.3569864
  • Lisa Zhang, Bogdan Simion, Michael Kaler, Amna Liaqat, Daniel Dick, Andi Bergen, Michael Miljanovic, and Andrew Petersen. 2023. Embedding and Scaling Writing Instruction Across First- and Second-Year Computer Science Courses. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 610–616. https://dl.acm.org/doi/10.1145/3545945.3569729

Position and Curricula Initiative (PCI)

  • Brett A. Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, James Prather, and Eddie Antonio Santos. 2023. Programming Is Hard - Or at Least It Used to Be: Educational Opportunities and Challenges of AI Code Generation. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 500–506. https://dl.acm.org/doi/10.1145/3545945.3569759
  • Muwei Zheng, Nathan Swearingen, Steven Mills, Croix Gyurek, Matt Bishop, and Xukai Zou. 2023. Case Study: Mapping an E-Voting Based Curriculum to CSEC2017. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 514–520. https://dl.acm.org/doi/10.1145/3545945.3569811

Additional Resources

Below, we list additional resources that you may find useful as you write your papers, especially computing education research papers.

Language Editing Assistance

ACM has partnered with International Science Editing (ISE) to provide language editing services to ACM authors. ISE offers a comprehensive range of services for authors including standard and premium English language editing, as well as illustration and translation services. Editing services are at author expense and do not guarantee publication of a manuscript.

Additional details are in the instructions for authors.

Getting ready

  • Read over the tab called “Choosing a Track” to be certain that you have chosen the appropriate track for submission of your paper. Refer to the track descriptions on the About tab.
  • Make sure that all authors have obtained an ORCID identifier. These identifiers may be required for paper submission.
  • Check the author list carefully now and review with your co-authors. The authors on the submission must be the same as the authors on the final version of the work (assuming the work is accepted). Authors may not be added or removed after submission and must also appear in the same order as in the submission.
  • Identify at least one author who is willing to review for the symposium. Have that author or those authors sign up to review at https://tinyurl.com/review-sigcse25. (If they’ve done so already, there is no need to fill out the form a second time.) Researchers listed as co-authors on three or more submissions must volunteer to review. (Undergraduate co-authors are exempt from this requirement.)
  • Download the appropriate template. Check the Sample Paper Submission with Notes, which includes tips and shows the required sections.
  • Review the additional resources for the track.
  • Review the instructions for reviewers and the review forms to see what reviewers will be looking for in your submission.
  • Look at the list of topics in the Info menu on this site or on EasyChair and pick 3-7 appropriate topics for your submission. This helps in matching reviewers’ expertise with submissions and is different from the next item.
  • Make certain that you have entered CCS concepts in your paper by choosing them from the ACM Computing Classification System site.
  • Look at the EasyChair submission page to make sure you’ll be prepared to fill everything out. Note that you are permitted to update your submission until the deadline, so it is fine to put draft information there as you get ready.

The abstract on EasyChair

Note: EasyChair does not let you save incomplete submission forms. Please fill out all of the fields in one sitting and save them. After that, you can continue to update the information in the fields and your submission until the deadline.

  • Select the appropriate paper track for your paper
  • Submit a 250-word abstract by 11:59 p.m. AOE, Thursday, 26 June 2025.
  • IMPORTANT: As you enter the author names in EasyChair, consider the order. Author lists can NOT be modified after submission (this includes adding, removing, or reordering authors).

The paper on EasyChair

This page captures the reviewing policies of the paper tracks at SIGCSE TS. Please email the Program Chairs at program@sigcse2026.sigcse.org with comments or questions.

There are three different paper types at SIGCSE TS: Computing Education Research (CER), Experience Reports and Tools (ERT), and Position and Curricula Initiative (PCI). When submitting a paper, authors must select one of these three paper types.

Timeline

Reviewing Phase                 Start Date               End Date
Bidding                         Friday, 27 June 2025     Monday, 14 July 2025
Reviewing                       Friday, 18 July 2025     Monday, 4 August 2025
Discussion & Recommendations    Tuesday, 5 August 2025   Sunday, 10 August 2025

Note: Associate Program Chair (APC) Recommendation and Meta-Review Deadline: 11:59 p.m. Monday, 11 August 2025 anywhere on earth (AOE)

EasyChair

The review process for SIGCSE TS 2026 will be done using the EasyChair submission system (https://easychair.org/my/conference?conf=sigcsets2026).

Preparing to Bid in EasyChair

Reviewers will be invited to join or log in to EasyChair. Once you have accepted your invitation, you should update your profile, select the topics you are most qualified to review, and identify your conflicts of interest.

Selecting topics: Select SIGCSE TS 2026 > Conference > My topics from the menu and select about five topics you are most qualified to review. Please select no more than seven topics; more topics make it harder for the EasyChair system to make a good set of matches.

Conflicts of interest: Reviewers also identify their Conflicts of Interest by selecting SIGCSE TS 2026 > Conference > My Conflicts.

Reviewing in EasyChair

To review a paper:

  • Log in to EasyChair.
  • In the EasyChair menu, select “My Recent Roles”
  • Select your PC member role (e.g., “PC member (Paper - Experience Reports and Tools)”) in the SIGCSE TS 2026 area
  • Select Reviews > Assigned to Me
  • Click on the Adobe PDF icon that corresponds to the paper to access it.
  • Make sure you’ve looked over the review criteria.
  • Click on the “Information” link (the I in a blue circle) associated with the paper.
  • Click on “Add Review” in the upper-right-hand corner
  • Draft your answers in a text editor or word processor, since EasyChair sessions can time out.
  • Copy your answers over to EasyChair.

To update your review:

  • Select Reviews > Assigned to Me
  • Click on the appropriate link in the “Update Review” column.

You might also want to click on “Show Reviews”, where you can see other reviews and comments, update your review, and add a comment.

Roles in the Review Process

  • Reviewers write reviews of their assigned submissions, evaluating them against the review criteria.
  • Associate Program Chairs (APCs) write a meta-review for each of their assigned submissions and provide a recommendation (accept/reject) and feedback to the Program Chairs.
  • Program Chairs make the final decisions on the program based on recommendations from the APCs (for papers) and from track chairs (for other tracks).

SIGCSE TS has three Program Chairs, each of whom serves a two-year term. Nominations for Program Chairs are solicited by the SIGCSE TS steering committee, which makes recommendations to the SIGCSE Board. Program Chairs are appointed by the SIGCSE board.

The Program Chairs invite and appoint the Reviewers and APCs. The number of submissions per Reviewer/APC depends on the number of volunteers and the size of the submissions pool.

The goal is for each paper submission to receive at least three reviews and a meta-review. All reviews are submitted through the submission system. In EasyChair, Reviewers are considered “Ordinary PC members” and APCs are considered “Senior PC members”.

Paper Reviewing Guidelines (CER, ERT, and PCI)

There are three different paper types at SIGCSE TS: Computing Education Research (CER), Experience Reports and Tools (ERT), and Position and Curricula Initiative (PCI). Reviewers are assigned to a specific paper track (e.g., a reviewer in the CER track will only be assigned papers in that track). This avoids confusion and lets reviewers become familiar with the guidelines for their specific paper track.

All papers will be considered relative to criteria for motivation, use of prior/related work, approach, evidence, contribution/impact, and presentation. Each track has guidance about how reviewers should consider these criteria relative to the goal of the track, and each paper must be evaluated using the criteria for the track to which it is submitted. A paper will not be moved between the three paper tracks.

The following guidance explains how to interpret the review criteria for each of the three paper tracks. Please refer to it to better understand the emphases and characteristics of the track for which you will be reviewing. For convenience, you may also download a PDF copy of the paper review criteria.

Each criterion below is interpreted separately for Computing Education Research (CER), Experience Reports & Tools (ERT), and Position & Curricula Initiative (PCI) papers.
Motivation

Evaluate the submission's clarity of purpose and alignment with the scope of the SIGCSE TS.

CER:

  • The submission provides a clear motivation for the work.
  • The submission states a set of clear Research Questions or Specific Aims/Goals.

ERT:

  • The submission provides a clear motivation for the work.
  • Objectives or goals of the experience report are clearly stated, with an emphasis on contextual factors that help readers interpret the work.
  • ERT submissions need not be framed around a set of research questions or theoretical frameworks.

PCI:

  • The submission provides a clear motivation for the work.
  • Objectives or goals of the position or curricula initiative are clearly stated, and speak to issues beyond a single course or experience.
  • Submissions focused on curricula, programs, or degrees should describe the motivating context before the new initiative was undertaken.
  • PCI papers may or may not ground the work in theory or research questions.
Prior and Related Work

Evaluate the use of prior literature to situate the work, highlight its novelty, and interpret its results.

CER:

  • Discussion of prior and related work (e.g., theories, recent empirical findings, curricular trends) to contextualize and motivate the research is adequate.
  • The relationship between prior work and the current study is clearly stated.
  • The work leverages theory where appropriate.

ERT:

  • Discussion of prior and related work to contextualize and motivate the experience report is adequate.
  • The relationship between prior work and the experience or tool is clearly stated.

PCI:

  • Discussion of prior and related work to contextualize and motivate the position or initiative is adequate.
  • The relationship between prior work and the proposed initiative or position is clearly stated.
Approach

Evaluate the transparency and soundness of the approach used in the submission relative to its goals.

CER:

  • Study methods and data collection processes are transparent and clearly described.
  • The methodology described is a valid/sound way to answer the research questions posed or address the aims of the study identified by the authors.
  • The submission provides enough detail to support replication of the methods.

ERT:

  • For tool-focused papers: Is the design of the tool appropriate for its stated goals? Is the context of its deployment clearly described?
  • For experience report papers: Is the experience sufficiently described to understand how it was designed/executed and who the target learner populations were?
  • For all papers: To what extent does the paper provide reasonable mechanisms of formative assessment about the experience or tool?

PCI:

  • The submission uses an appropriate mechanism to present and defend its stated position or curriculum proposal (this may include a scoping review, secondary data analysis, or program evaluation, among others).
  • As necessary, the approach used is clearly described.
  • PCI papers leveraging a literature-driven argument need not necessarily use a systematic review format, though it may be appropriate for certain types of claims.
Evidence

Evaluate the extent to which the submission provides adequate evidence to support its claims.

CER:

  • The analysis & results are clearly presented and aligned with the research questions/goals.
  • Qualitative or quantitative data is interpreted appropriately.
  • Missing or noisy data is addressed.
  • Claims are well supported by the data presented.
  • The threats to validity and/or study limitations are clearly stated.

ERT:

  • The submission provides rich reflection on what did or didn’t work, and why.
  • Evidence presented in ERT papers is often descriptive or narrative in format, and may or may not be driven by explicit motivating questions.
  • Claims about the experience or tool are sufficiently scoped within the bounds of the evidence presented.

PCI:

  • PCI papers need not present original data collection, but may leverage other forms of scholarly evidence to support the claims made.
  • Evidence presented is sufficient for defending the position or curriculum initiative.
  • Claims should be sufficiently scoped relative to the type of evidence presented.
Contribution & Impact

Evaluate the overall contribution to computing education made by this submission.

CER:

  • All CER papers should advance our knowledge of computing education.
  • Quantitative research should discuss generalizability or transferability of findings beyond the original context.
  • Qualitative research should add deeper understanding about a specific context or problem.
  • For novel projects, the contribution beyond prior work is explained.
  • For replications, the contribution includes a discussion of the implications of the new results (even if null or negative) when compared to prior work.

ERT:

  • Why the submission is of interest to the SIGCSE community is clearly explained.
  • The work enables adoption by other practitioners.
  • The work highlights the novelty of the experience or tool presented.
  • The implications for future work/use are clearly stated.

PCI:

  • The work presents a coherent argument about a computing education topic, including, but not limited to, curriculum or program design, practical and social issues facing computing educators, and critiques of existing practices.
  • The submission offers new insights about broader concerns to the computing education community or offers guidance for adoption of new curricular approaches.
Presentation

Evaluate the writing quality with respect to expectations for publication, allowing for only minor revisions prior to final submission.

All tracks (CER, ERT, and PCI):

  • The presentation (writing, graphs, or diagrams) is clear.
  • Overall flow and organization are appropriate.

Review Process Steps

Step 1: Authors submit Abstracts of Papers

Authors submit a title and abstract one week prior to the full paper deadline. Authors are allowed to revise their title, abstract, and other information before the full paper submission deadline.

Step 2: Reviewers and APCs Bid for Papers

Reviewers and APCs select topics they feel most qualified to review. This helps the system prioritize papers for bidding.

Reviewers and APCs are then asked to select a set of papers for which they have sufficient expertise (we call this “bidding”). The Program Chairs assign papers based on these bids. The purpose of bidding is NOT to express interest in papers you want to read. It is to express your expertise and eligibility for fairly evaluating the work. These are subtly but importantly different purposes. We ask reviewers and APCs to select more papers than they plan to review so that we can best ensure that every paper has at least three reviewers.

  • Make sure to specify all of your Conflicts of Interest.
  • Bid on all of the papers you believe you have sufficient expertise to review.
  • Do NOT bid on papers about topics, techniques, or methods that you oppose.

Step 3: Authors submit Full Papers

Submissions of the full papers are due one week after the abstracts are due. As indicated in the Instructions for Authors, submissions are supposed to be sufficiently anonymous so that the reviewer cannot determine the identity or affiliation of the authors. The main purpose of the anonymous reviewing process is to reduce the influence of potential (positive or negative) biases on reviewers’ assessments. You should be able to review the work without knowing the authors or their affiliations. Do not try to find out the identity of authors. When in doubt, please contact the Program Chairs.

Step 4: Program Chairs Decide on Desk Rejects

The Program Chairs will quickly review each paper submission to determine whether it violates anonymization requirements, length restrictions, or plagiarism policies. Authors of desk-rejected papers will be notified immediately. The Program Chairs may not catch every issue. If you see something during the review process that you believe should be desk rejected, contact the Program Chairs at program@sigcse2026.sigcse.org before you write a review. The Program Chairs will make the final judgment about whether something is a violation and give you guidance on whether, and if so how, to write a review. Note that Program Chairs with conflicts of interest are excluded from deciding on desk-rejected papers, leaving the decision to the other Program Chairs.

Step 5: Program Chairs Assign Reviewers and APCs

Based on the bids and their judgment, the Program Chairs will collaboratively assign at least three Reviewers and one APC (meta-reviewer) to each paper submission. The Program Chairs are advised by the submission system's assignment algorithm, which depends on all bids being high quality. For reviewer assignments to be fair and effective, bids should be based only on expertise and eligibility; interest alone is not sufficient for bidding on a paper. Reviewing assignments can only be made by a Program Chair without a conflict of interest.

Step 6a: Reviewers Review Papers

Assigned Reviewers submit their anonymous reviews by the review deadline, reviewing each of their assigned submissions against the Paper Reviewing Guidelines (CER, ERT, and PCI). We strongly recommend that you prepare your rationale in a separate document; EasyChair has been known to time out.

Note that Reviewers must NOT include accept or reject decisions in their review text. (They will indicate accept/reject recommendations separately.)

Due to the internal and external (publication) deadlines, we generally cannot give reviewers or APCs extensions. Note that reviewers, meta-reviewers, and Program Chairs with conflicts cannot see any of the reviews of the papers for which they have conflicts of interest during this process.

Step 6b: APCs and Program Chairs Monitor Review Progress

APCs and Program Chairs periodically check in to ensure that progress is being made. If needed, reminders are emailed to the reviewers with the expectations and timelines. The Program Chairs recruit emergency reviewers if any submissions lack a sufficient number of reviews, if there is substantial variability in the reviews, or if an expert review is needed.

Step 7: Discussion between Reviewers and APCs

The discussion period provides the opportunity for the Reviewers and the APCs to discuss the reviews and reach an agreement on the quality of the submission relative to the expectations for the track to which it was submitted. The APCs are expected to take a leadership role and moderate the discussion. Reviewers are expected to engage in the discussion, using the Comments feature of EasyChair, when prompted by other Reviewers and/or by the APCs.

During the discussion period, Reviewers may revise their reviews but are NOT required to do so. It is important that at no point do Reviewers feel forced to change their reviews, scores, or viewpoints. The APC can disagree with the Reviewers and communicate this to the Program Chairs if needed. Everyone is asked to do the following:

  • Read all the reviews of all papers assigned (and re-read your own reviews).
  • Engage in a discussion about sources of disagreement.
  • Use the Paper Reviewing Guidelines (CER, ERT and PCI) to guide your discussions.
  • Be polite, friendly, and constructive at all times.
  • Be responsive and react as soon as new information comes in.
  • Remain open to shifting your judgments in response to other reviewers.
  • Explicitly state any clarifying questions that could change your evaluation of the paper.

At the end of the discussion period, the APCs should have enough feedback so that they can make a recommendation for acceptance or rejection to the Program Chairs. This recommendation should be based on their own reading of the reviews and discussion, not simply on the overall score.

Step 8: APCs Write Meta-Reviews

Toward the end of the discussion period, APCs use the reviews, the discussion, and their own evaluation of the submission to write a meta-review and to make a recommendation to the Program Chairs. A meta-review should summarize the key strengths and weaknesses of the submission in light of the Paper Reviewing Guidelines (CER, ERT, and PCI) and explain how these led to the recommendation. APCs are encouraged to also include their own review/feedback in the meta-review. The summary and explanation should help the authors revise their work where appropriate. The meta-review must constructively summarize all reviews and the discussion, as well as any open questions and doubts. A generic meta-review (“After long discussion, the reviewers decided that the paper is not up to standards, and therefore rejected the paper”) is not sufficient.

APCs do not include their recommendation for acceptance or rejection in the meta-review text itself, because they only see a small portion of the submitted papers. Instead, APCs make a recommendation of accept or reject to the Program Chairs via the submission system. If, however, the Reviewers had differing views and a consensus could not be reached, the APC captures the essence of all reviews and leaves their recommendation as neutral; the submission is then further discussed by the Program Chairs.

Recommendations should NOT be based only on scores. For example, an APC may decide to recommend rejection for a paper with three weak accepts, but recommend acceptance for a paper with two accepts and one strong reject (or vice versa).

Step 9: Program Chairs Make Decisions & Notify Authors

Before announcing decisions, the Program Chairs go through all the submissions and read all the reviews and meta-reviews to ensure, as much as possible, clarity and consistency with the review process and its criteria. This is done via synchronous meetings of the Program Chairs. APCs are consulted if needed. The Chairs make decisions based on the recommendations and their own expertise, as well as a desire to provide an appropriately varied program.

The Program Chairs then notify all authors of the decisions about their papers via the submission system.

Step 10: Evaluation

The Evaluation Chairs send out surveys to authors, reviewers, and APCs. Please take the time to respond to these surveys, as they inform processes and policies for future SIGCSE Technical Symposia.

The Program Chairs also request feedback from the APCs on the quality of reviews as a metric to be used for future invitations to review for the SIGCSE Technical Symposium.

We will do our best to identify a small set of exceptional reviewers who will receive reviewing awards at the symposium.

Conflicts of Interest

SIGCSE TS takes conflicts of interest, both real and perceived, quite seriously. The conference adheres to the ACM conflict of interest policy (https://www.acm.org/publications/policies/conflict-of-interest) as well as the SIGCSE conflict of interest policy (https://sigcse.org/policies/COI.html). These state that a paper submitted to the SIGCSE TS is a conflict of interest for an individual if at least one of the following is true:

  • The individual is a co-author of the paper
  • A student of the individual is a co-author of the paper
  • The individual identifies the paper as a conflict of interest, i.e., that the individual does not believe that they can provide an impartial evaluation of the paper.

The following policies apply to conference organizers:

  • The Program Chairs are not allowed to submit to any track.
  • The chairs of any track are not allowed to submit to that specific track.
  • All other conference organizers are allowed to submit to any track.
  • All reviewers (PC members) and meta-reviewers (APC members) are allowed to submit to any track.

No reviewer, meta-reviewer, or chair with a conflict of interest in the paper will be included in any evaluation, discussion, or decision about the paper. It is the responsibility of the reviewers, meta-reviewers, and chairs to declare their conflicts of interest throughout the process. The corresponding actions are outlined below for each relevant step of the reviewing process. It is the responsibility of the chairs to ensure that no reviewer or meta-reviewer is assigned a role in the review process for any paper for which they have a conflict of interest.

Recalcitrant Reviewers

Reviewers who don’t submit reviews, have reviews with limited constructive feedback, do not engage effectively in the discussion phase, or submit inappropriate reviews will be removed from the reviewer list (as per SIGCSE policy). Recalcitrant reviewers will be informed of their removal from the reviewer list. Reviewers with repeated offenses (two within a three-year period) will be removed from SIGCSE reviewing for three years.

Frequently Asked Questions

These are some of the more common questions (and categories of questions) the Program Chairs have received. Please look over these questions before reviewing. If you have a question not covered here (or even a question about the questions covered here), please reach out to the Program Chairs at program@sigcse2026.sigcse.org.

Anonymity

General principle: We expect authors to make a good-faith effort to make their papers anonymous. We will not reject papers because it is possible with some sleuthing to discover their authors. If you do discover the authors’ identities, do your best to ignore them.

I was looking up aspects of this paper on the Web and found a copy of the paper on arXiv (or another online archive) with the authors’ names listed. Does this break anonymity?
No. ACM Policy indicates that authors may submit works under review to online archives.
I see the name “Trovato” in the header of the document. Is this the name of one of the authors, thereby breaking anonymity?
No. “Trovato” is one of the default names in the ACM LaTeX template. You can feel free to ignore the name.
The name of the institution is mentioned in the text of the paper. Does this break anonymity?
Yes. Do your best to review the paper as if you did not know the authors and include a confidential comment in your review indicating this issue.
The name of the project is mentioned in the text of the paper. When I searched for the project on the Web so that I could better understand it, I discovered who the authors were.
Ideally, the authors should have anonymized the name of the project. Do your best to review the paper as if you did not know the authors and include a confidential comment in your review indicating this issue.

Human Subject Protection

General principle: We expect members of our community to follow high standards for the protection of human subjects. In particular, all authors submitting to SIGCSE TS are responsible for adhering to the ACM Publications Policy on Research Involving Human Participants and Subjects.

However, different countries and different institutions have different policies or interpretations of policies. For example, US policies recently changed to exempt normal classroom activities as long as they do not affect student learning. Different institutions have interpreted that exemption in multiple ways and have different processes for obtaining that exemption.

In general, reviewers should assume that authors are telling the truth when they indicate that a study is exempt from review by the local IRB/ethics board. Nonetheless, if a reviewer has any concern about human subjects protection in a study, they should reach out to their APC and the Program Chairs.

The paper includes a study of students but indicates that “the work does not directly involve human participants”. Is that okay?
A study that involves students might be exempt from review, but it does involve human participants. Please reach out to your APC and the Program Chairs so that we can clarify this issue. Then review the paper as if the work is acceptable.
The authors note that the project is exempt from review, but I have trouble believing that. It certainly wouldn’t be at my institution.
Policies vary between countries and their interpretation varies between institutions. Nonetheless, if you are worried, please reach out to your APC and the Program Chairs so that we can clarify this issue. Then review the paper as if the work is acceptable.
I am concerned that this study could cause harm to the participants.
Please reach out to your APC and the Program Chairs.

Plagiarism, including self plagiarism

I am concerned that this paper is too close to another paper I have seen (or am reviewing).
Please reach out to your APC and the Program Chairs so that we can explore the issue. Then review the paper as if the work is acceptable. Note that authors can (confidentially) tell the Program Chairs about potential overlaps between papers, and it is the Program Chairs’ responsibility to determine whether such overlaps are acceptable.
The running header for this paper appears to be for an earlier iteration of the SIGCSE Technical Symposium. I am concerned that this work may be recycled.
Authors are certainly permitted to update and resubmit papers that were not accepted. Authors have also been known to reuse prior submissions as a template. In some such cases, they neglect to update the header. We find that these are the most common reasons we see headers that indicate a prior conference. However, if you are concerned that the content is recycled from an accepted paper, please reach out to your APC and the Program Chairs.

Generative AI

Projects involving generative AI
The authors built a system using data that they did not generate. They do not seem to have obtained permission to do so. What should I do?
SIGCSE TS requires that authors obtain permission to use other people’s data and to explicitly indicate this in the acknowledgements section. Please reach out to your APC and the Program Chairs so that we can clarify this issue. Then review the paper as if the work is acceptable.
Authors’ use of generative AI
I believe that the authors used ChatGPT, Google Translate, or another tool in writing this paper. However, they have not acknowledged this use.
ACM policy permits authors to use generative AI tools but requires that they acknowledge the use of such tools. Please reach out to your APC and the Program Chairs so that we can clarify this issue. Then review the paper as if the work is acceptable. You may also indicate your concerns in the review, but please clarify that your concerns did not affect your overall rating (unless the tools led to poor writing).
I am worried that some of the references are fake, perhaps generated by a tool.
Please reach out to your APC and the Program Chairs ASAP. Then review the paper as if the work is acceptable.
Reviewers’ use of generative AI

Please refer to the general AI policies for some background.

May I use ChatGPT or another generative AI tool to write my reviews?
No. It is a violation of ACM policies to feed the authors’ text into an online tool.
Can I use the Google “Help Me Write” tool, Microsoft writing tools, Grammarly, or other similar software?
Yes.
I’m worried that some text in this paper plagiarizes text from elsewhere. May I use TurnItIn or similar software to check?
No. If you have such concerns, please reach out to your APC and the Program Chairs.

Track choice

This paper was submitted to XXX but I think it belongs in YYY.
We do not switch papers between tracks. Please review the paper according to the criteria of the track the authors selected. You may certainly raise your concern about choice of track during the discussion and you can also include a comment in the confidential notes to the Program Chairs.

Concerns about other reviewers

I think another reviewer’s comments are overly harsh.
Please mention that during the discussion. (Please be polite in doing so.) If the other reviewer does not respond, reach out to the APC.
I think another reviewer’s reviews were generated by ChatGPT or other tool.
Please reach out to your APC or the Program Chairs. Do not accuse another reviewer directly.
I am concerned that another reviewer’s comments are inappropriate.
If you are comfortable doing so, please mention that during the discussion. If not, please reach out to the APC or the Program Chairs, who can then raise the issue with the other reviewer.

The approximate text from the review form follows.

Note that not all reviewer responses are available to authors.

Common Introductory Fields

Summary: Please provide a brief summary of the submission, its audience, and its main point(s), with respect to the review criteria of this track. Refer to the Table on the current SIGCSE TS website (i.e., Instructions for Reviewers) to familiarize yourself with the review criteria for the appropriate track: (1) Computing Education Research, (2) Experience Reports and Tools, and (3) Position and Curricula Initiatives.

Familiarity: Rate your personal familiarity with the topic area of this submission in relation to your research or practical experience.

  • None - I have never reviewed or written a paper or otherwise have experience in this area
  • Low - I have read papers or otherwise have slight experience in this area
  • Medium - I have reviewed papers or otherwise have some experience in this area
  • High - I have written and reviewed papers or otherwise have moderate experience in this area
  • Expert - I have written and reviewed many papers or otherwise have extensive experience in this area

Computing Education Research

Motivation (CER): Evaluate the submission’s clarity of purpose and alignment with the scope of the SIGCSE TS.

  • The submission provides a clear motivation for the work.
  • The submission states a set of clear Research Questions or Specific Aims/Goals.

Prior and Related Work (CER): Evaluate the use of prior literature to situate the work, highlight its novelty, and interpret its results.

  • Discussion of prior and related work (e.g., theories, recent empirical findings, curricular trends) to contextualize and motivate the research is adequate.
  • The relationship between prior work and the current study is clearly stated.
  • The work leverages theory where appropriate.

Approach (CER): Evaluate the transparency and soundness of the approach used in the submission relative to its goals.

  • Study methods and data collection processes are transparent and clearly described.
  • The methodology described is a valid/sound way to answer the research questions posed or address the aims of the study identified by the authors.
  • The submission provides enough detail to support replication of the methods.

Evidence (CER): Evaluate the extent to which the submission provides adequate evidence to support its claims.

  • The analysis & results are clearly presented and aligned with the research questions/goals.
  • Qualitative or quantitative data is interpreted appropriately.
  • Missing or noisy data is addressed.
  • Claims are well supported by the data presented.
  • The threats to validity and/or study limitations are clearly stated.

Contribution & Impact (CER): Evaluate the overall contribution to computing education made by this submission.

  • All CER papers should advance our knowledge of computing education.
  • Quantitative research should discuss generalizability or transferability of findings beyond the original context.
  • Qualitative research should add deeper understanding about a specific context or problem.
  • For novel projects, the contribution beyond prior work is explained.
  • For replications, the contribution includes a discussion of the implications of the new results (even if null or negative) when compared to prior work.

Presentation (CER): Evaluate the writing quality with respect to expectations for publication, allowing for only minor revisions prior to final submission.

  • The presentation (e.g., writing, grammar, graphs, diagrams) is clear.
  • Overall flow and organization are appropriate.

Experience Reports and Tools

Motivation (ERT): Evaluate the submission’s clarity of purpose and alignment with the scope of the SIGCSE TS.

  • The submission provides a clear motivation for the work.
  • Objectives or goals of the experience report are clearly stated, with an emphasis on contextual factors that help readers interpret the work.
  • ERT submissions need NOT be framed around a set of research questions or theoretical frameworks.

Prior and Related Work (ERT): Evaluate the use of prior literature to situate the work, highlight its novelty, and interpret its results.

  • Discussion of prior and related work to contextualize and motivate the experience report is adequate.
  • The relationship between prior work and the experience or tool is clearly stated.

Approach (ERT): Evaluate the transparency and soundness of the approach used in the submission relative to its goals.

  • For tool-focused papers: Is the design of the tool appropriate for its stated goals? Is the context of its deployment clearly described?
  • For experience report papers: Is the experience sufficiently described to understand how it was designed/executed and who the target learner populations were?
  • For all papers: To what extent does the paper provide reasonable mechanisms of formative assessment about the experience or tool?

Evidence (ERT): Evaluate the extent to which the submission provides adequate evidence to support its claims.

  • The submission provides rich reflection on what did or didn’t work, and why.
  • Evidence presented in ERT papers is often descriptive or narrative in format, and may or may not be driven by explicit motivating questions.
  • ERT papers may include small-scale studies, but they need not be statistically significant.
  • Claims about the experience or tool are sufficiently scoped within the bounds of the evidence presented.

Contribution & Impact (ERT): Evaluate the overall contribution to computing education made by this submission.

  • Why the submission is of interest to SIGCSE community is clearly explained.
  • The work enables adoption by other practitioners.
  • The work highlights the novelty of the experience or tool presented.
  • The implications for future work/use are clearly stated.

Presentation (ERT): Evaluate the writing quality with respect to expectations for publication, allowing for only minor revisions prior to final submission.

  • The presentation (e.g., writing, grammar, graphs, diagrams) is clear.
  • Overall flow and organization are appropriate.

Position Papers and Curricular Initiatives

Motivation (PCI): Evaluate the submission’s clarity of purpose and alignment with the scope of the SIGCSE TS.

  • The submission provides a clear motivation for the work.
  • Objectives or goals of the position or curricula initiative are clearly stated, and speak to issues beyond a single course or experience.
  • Submissions focused on curricula, programs, or degrees should describe the motivating context before the new initiative was undertaken.
  • PCI papers may or may not ground the work in theory or research questions.

Prior and Related Work (PCI): Evaluate the use of prior literature to situate the work, highlight its novelty, and interpret its results.

  • Discussion of prior and related work to contextualize and motivate the position or initiative is adequate.
  • The relationship between prior work and the proposed initiative or position is clearly stated.

Approach (PCI): Evaluate the transparency and soundness of the approach used in the submission relative to its goals.

  • The submission uses an appropriate mechanism to present and defend its stated position or curriculum proposal (this may include things like a scoping review, secondary data analysis, program evaluation, among others).
  • As necessary, the approach used is clearly described.
  • PCI papers leveraging a literature-driven argument need not necessarily use a systematic review format, though it may be appropriate for certain types of claims.

Evidence (PCI): Evaluate the extent to which the submission provides adequate evidence to support its claims.

  • PCI papers need not present original data collection, but may leverage other forms of scholarly evidence to support the claims made.
  • Evidence presented is sufficient for defending the position or curriculum initiative.
  • Claims should be sufficiently scoped relative to the type of evidence presented.

Contribution & Impact (PCI): Evaluate the overall contribution to computing education made by this submission.

  • The work presents a coherent argument about a computing education topic, including, but not limited to curriculum or program design, practical and social issues facing computing educators, and critiques of existing practices.
  • The submission offers new insights about broader concerns to the computing education community or offers guidance for adoption of new curricular approaches.

Presentation (PCI): Evaluate the writing quality with respect to expectations for publication, allowing for only minor revisions prior to final submission.

  • The presentation (e.g., writing, grammar, graphs, diagrams) is clear.
  • Overall flow and organization are appropriate.

Common Text: Recommendation

Overall evaluation: Please provide a detailed justification that includes constructive feedback that summarizes the strengths & weaknesses of the submission and clarifies your scores. Both the score and the review text are required, but remember that the authors will not see the overall recommendation score (only your review text). You should NOT directly include your preference for acceptance or rejection in your review.

Presentation Details

In-person presentations

TL;DR: Each talk is in a session containing four talks. Please check the schedule in the Program menu for when and where your talk will be presented. Please arrive at the beginning of the session. You will need to bring your own laptop and an HDMI connector (e.g., an HDMI dongle for your laptop).

Your talk should be 15 minutes with 3 minutes for questions.

Presentation Room & Technology

All presentation rooms will have a podium with a microphone, a 16:9 (aspect ratio) projector and screen, a single HDMI cable for video, and speakers.

You must bring your own laptop or plan to use someone else’s; the symposium will NOT provide one for you. Please bring the appropriate dongle to connect your laptop to HDMI.

Due to technical limitations in the convention center, paper presentations on-site will not be live streamed for virtual attendance. Nor will those attending the symposium virtually be able to present live in a physically scheduled paper session.

Presentation Session

There will be four paper presentations in each of the in-person paper sessions. Each paper presentation is an 18-minute block: a 15-minute presentation followed by 3 minutes for questions and answers (and for the next presenter to set up their computer). The session chair will introduce the session, and then prior to each paper presentation, will introduce you (the presenter), keep track of time, and provide you with five-minute, two-minute, and one-minute warnings before the question and answer period begins. Please note that the full paper presentation has an 18-minute limit.

Speaker Ready Room

In-person authors will have access to a speaker ready room throughout the conference. This is a quiet space for you to grab a cup of coffee, meet with your co-authors, prepare for your presentation, or log in to a Zoom call without going back to your hotel room. The room number is available in an email sent only to paper authors.

Online presentation modality

There are limited slots for presenting a paper online. Requests for presenting over Zoom will be considered only during a short window after paper acceptance. If you indicate you will present in person or do not request an online presentation, then one author must present at the conference venue.

The authors of Online Papers will present their papers ONLINE over a Zoom session, which will be streamed live. Presentations of Online Papers can therefore be attended by both In-person Attendees (in rooms at the venue) and Online Attendees (over Zoom).

The Zoom links will be sent to the online paper presenters on the day of their presentation.

Format of Online Presentation Sessions

There will be four paper presentations in each of the Online Paper sessions. Each paper presentation is an 18-minute block: a 15-minute presentation followed by 3 minutes for questions and answers. A Session Chair will manage the session with the help of a Student Volunteer.

Session Chair and Student Volunteer Responsibilities:

There will be a wired laptop logged into Zoom at the front of the room. The Session Chair, the Student Volunteer, and possibly a Hybrid Chair will also be physically in the room and logged into Zoom to make sure that the online audience is muted and that the online presenters are made co-hosts and can share their screens.

The Session Chair will introduce the presenter(s) before their presentations. To help presenters manage their time effectively, Session Chairs will use the Zoom Chat option to provide five-minute, two-minute, and one-minute warnings before the question and answer period begins. Please note that each full paper presentation has an 18-minute limit, and this is a hard stop time.

Session Chairs and Student Volunteers will ensure that questions from in-person attendees are relayed to the online presenters. The online audience can ask questions by unmuting themselves or through the Zoom chat. In-person attendees must relay their questions through the Student Volunteer or the Zoom chat.


Presentation Modality: Due TBD

Authors for all accepted papers must select a mode for presenting at the symposium (online or in-person). The first corresponding author on each paper should receive a survey by email shortly after acceptance notifications are sent. This survey should be completed only once per accepted paper.

Presentation modality selection is required by TBD. If authors do not submit a modality choice by the deadline, the paper will default to the in-person presentation modality and will not be assigned to an online session.

Registration:

In order for your paper to be presented at the symposium and included in the proceedings, at least one author must register for the conference by Friday, February 7. Please let us know immediately if you or your co-authors are unable to present your paper at the symposium so we can withdraw it.

Camera-Ready: Due 5 November 2025

Authors should carefully consider the reviews when preparing final CAMERA-READY submissions. A camera-ready PDF must be submitted to Sheridan Communications for inclusion in the conference proceedings.

  • Authors can find initial instructions for preparing final camera-ready documents here: TBD
  • We also remind authors to review the accessibility tips to ensure the symposium content is widely usable for all parties.

Optional Video Presentations

Authors opting to provide the OPTIONAL video for the ACM DL, as described in the camera-ready instructions, must check “YES” for being recorded on the ACM rights review form. If that option is not checked, the video will not be included in the ACM DL. Those who check “YES” will be asked to provide a video file for inclusion in the ACM DL alongside the conference proceedings.