Working groups are formed by participants with a common interest in a topic related to the subject matter of the conference. The groups of 5 to about 10 participants work together electronically before the start of the conference. Working groups convene on the Sunday evening before the conference, and start face-to-face work in their sessions the following day, Monday, at 9am. Group members are expected to work together for the whole of Monday and Tuesday, and continue their work throughout the conference, which runs from Wednesday to Friday. However, members are able to attend some conference sessions and the Thursday afternoon excursion if they wish.

Every working group member must register for and be present at the conference in order to be considered a contributor to the final report. Participants present their preliminary results to conference attendees at a special working group presentation session, and submit a final report after the conference concludes. Final reports are refereed and, if accepted, are published in the ACM Digital Library.

The proposed working groups:

  • WG 1: Choosing code segments to exclude from code similarity detection
  • WG 2: Capturing and Characterising Notional Machines
  • WG 3: Toward High-Performance Computing Education
  • WG 4: Assessing how pre-requisite skills affect learning of advanced concepts
  • WG 5: Developing a Model Augmented Reality Curriculum
  • WG 6: Assessing Programming Assignments
  • WG 7: Cloud Computing Curriculum: Developing Exemplar Modules for General Course Inclusion
  • WG 8: Meaningful Assessment at Scale: Helping Instructors to Assess Online Learning
  • WG 9: Reviewing Computing Education Papers
  • WG 10: Developing Competency Statements for Computer Science Curricula: The Way Ahead

Applying to join a working group

Interested researchers may apply for working group membership up to the deadline: March 26. Working group membership decisions are generally made shortly after the deadline. If some working groups have too many applications and others are still lacking members, the working group chairs will try to facilitate moves between working groups, respecting the applicant’s and the working group leaders’ wishes.

Applications should be emailed to the nominated leaders of the working group in question (see below), and should include the following information (unless other/additional details are requested by the specific group):

  • your name, institution, country, and email address;
  • an explanation of your interest in the working group;
  • your experience relevant to the goals of the working group;
  • any further information requested in the description of the particular working group;
  • an assurance of your availability and willingness to take active part in the work of the working group before, during, and after the conference;
  • an assurance of your intention to register for and attend ITiCSE (this is a condition of working group membership).

WG 1: Choosing code segments to exclude from code similarity detection    

WG 1 Leaders:

Simon
University of Newcastle
Australia
simon@newcastle.edu.au

Judy Sheard
Monash University
Australia
judy.sheard@monash.edu

Oscar Karnalim
University of Newcastle
Australia
oscar.karnalim@uon.edu.au

When student programs are compared for similarity, certain segments of code are always sure to be similar. Some of these segments are boilerplate code — public static void main (String [] args) and the like — and some will be code that was provided to students as part of the assessment specification. The purpose of this working group is to explore what other code is expected to be reasonably common in student assessments, and should therefore be excluded from similarity checking. The answers will presumably vary with programming language and with level of assessment item.
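The idea can be illustrated with a minimal sketch (ours, not the group's tooling; the boilerplate patterns and the line-based similarity measure are illustrative assumptions). It strips expected-common lines, such as Java's main signature and instructor-provided code, before computing a similarity score:

```python
import difflib
import re

# Hypothetical patterns for code expected to be common to most Java
# submissions; a real tool would tailor these to language and task.
BOILERPLATE_PATTERNS = [
    re.compile(r"public\s+static\s+void\s+main\s*\(\s*String\s*\[\]\s*\w+\s*\)"),
    re.compile(r"^\s*import\s+java\."),
    re.compile(r"^\s*package\s+\w"),
]

def strip_common_code(source, provided_code=frozenset()):
    """Drop lines that match boilerplate patterns or that were provided
    to students as part of the assessment specification."""
    kept = []
    for line in source.splitlines():
        if line.strip() in provided_code:
            continue  # instructor-provided scaffolding
        if any(p.search(line) for p in BOILERPLATE_PATTERNS):
            continue  # expected boilerplate
        kept.append(line)
    return kept

def similarity(a, b, provided_code=frozenset()):
    """Line-based similarity ratio (0..1) of two submissions after
    excluding expected-common code."""
    return difflib.SequenceMatcher(
        None,
        strip_common_code(a, provided_code),
        strip_common_code(b, provided_code),
    ).ratio()
```

Excluding expected-common lines lowers the similarity score of otherwise unrelated submissions that share only boilerplate, which is precisely the effect the group aims to characterise across languages and assessment levels.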

Working group members will be required to collect assessment submissions from their own or their colleagues’ students, and it is hoped that these solutions will together encompass a good variety of assessment tasks in a good variety of programming languages.

The working group aims to deliver clear guidelines as to what code can reasonably be excluded from automatic code similarity detection in various circumstances. It also aims to deliver a summary of what sort of code lecturers tend to provide for students when setting an assigned task, and why they provide that code.  

WG 2: Capturing and Characterising Notional Machines    

WG 2 Leaders:

Sally Fincher
School of Computing University of Kent
Canterbury, Kent, UK
S.A.Fincher@kent.ac.uk

Johan Jeuring
Department of ICS
Utrecht University
The Netherlands
J.T.Jeuring@uu.nl

Craig S. Miller
School of Computing
DePaul University
Chicago, IL, USA
cmiller@cs.depaul.edu

A notional machine is a pedagogic device to assist the understanding of some aspect of programs or programming. It is typically used to support explaining a programming construct, or the user-understandable semantics of a program. For example, a variable is like a box with a label, and assignment copies or moves a value into that box.
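The box metaphor can be shown directly in code. The snippet below is our illustration (not a product of the working group), expressed in Python, where the metaphor predicts behaviour correctly for simple immutable values:

```python
# Under the "variable as a labelled box" notional machine:
x = 5        # put the value 5 into the box labelled x
y = x        # copy the value from box x into box y
x = 7        # replace the contents of box x; box y is unchanged
print(y)     # prints 5, as the box metaphor predicts
```

For mutable objects the same metaphor can mislead, since two "boxes" may share one object; clarifying the scope within which a notional machine holds is part of what makes characterising them worthwhile.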

This working group will capture examples of notional machines from actual pedagogic practice, as expressed in textbooks (or other teaching materials) or used in the classroom. We will interview at least 30 teachers about their experience with, and perceptions of, the use of notional machines in teaching. Using the interviews, we will work on devising and refining a form to characterise essential features of notional machines. We will also attempt to relate them to each other to describe potential learning sequences or progressions. The working group report will contain descriptions of notional machines used at different levels in education, in different countries, by many teachers.

The resulting catalogue of notional machines will allow a teacher to select a machine for a particular use, permit comparison between them, and provide a starting point for further categorization and analysis of notional machines.

Additionally, we will undertake more theoretical explorations. We argue that the creation and use of notional machines is potentially a signature pedagogy for computing, and that creating and using notional machines represents a level of pedagogic sophistication that might be an indicator of pedagogical content knowledge (PCK).

WG 3: Toward High-Performance Computing Education 

WG 3 Leaders:

Rajendra K. Raj
Rochester Institute of Technology
USA
rkr@cs.rit.edu

Allen Parrish
Mississippi State University
USA
aparrish@research.msstate.edu

John Impagliazzo
Hofstra University
USA
john.impagliazzo@hofstra.edu

High-performance computing (HPC) is the ability to process data and perform complex calculations at extremely high speeds. Using real or virtual supercomputers, HPC systems can perform quadrillions of calculations per second. The past three decades have witnessed a vast increase in the use of HPC across scientific, engineering, and business communities: for example, in sequencing the genome, predicting climate change, designing modern aerodynamics, and establishing customer preferences. Although HPC has been well incorporated into curricula in sciences such as bioinformatics, the same cannot be said for most computer science programs.

This working group will thus explore opportunities for HPC to make inroads into computer science education, from the undergraduate to post-graduate levels. The group will address research questions such as how HPC can be adapted and adopted into CS education across the board, what obstacles and barriers inhibit HPC acceptance and how they can be handled, and how HPC can enhance applied critical thinking and problem-solving skills. Initially contemplated are four deliverables: (1) a catalog of core HPC educational elements, (2) HPC curricula for contemporary computer science needs, such as artificial intelligence, cyberanalytics, data science and engineering, or the internet of things, (3) possible infrastructures for implementing HPC coursework, and (4) HPC-related feedback to the CC2020 project. The working group leaders will give priority to applicants with educational or practitioner experience in HPC environments or solutions.

WG 4: Assessing how pre-requisite skills affect learning of advanced concepts  

WG 4 Leaders:

Greg L. Nelson
University of Washington
Paul G. Allen School of Computer
Science & Engineering
Seattle, Washington
glnelson@uw.edu

Filip Strömbäck
Linköping University
Department of Computer and Information Science
Linköping, Sweden
filip.stromback@liu.se

Ari Korhonen
Aalto University
Department of Computer Science
Espoo, Finland
archie@cs.hut.fi

Advanced computing courses are hard, and comparatively little work studies why. It seems that some learners do not master even the most basic concepts, such as program tracing, or forget them between courses. If so, remedial practice could improve learning, but instructors rightly will not spend scarce time on it without strong evidence.

To investigate this, our group will create theory-based assessments and plan a multi-national study of how tracing knowledge affects learning of advanced topics such as data structures, algorithms, and concurrency. Our working group will identify relevant concepts in advanced courses, then conceptually analyze their pre-requisites and where an imagined student with some tracing difficulties would encounter barriers. We will use this theory to create instructor-usable assessments for advanced topics that also identify issues caused by poor pre-requisite knowledge. We plan later to use these at the start and end of advanced courses, and to dig into the aforementioned barriers by qualitatively following a small set of students with varying initial tracing skills.

WG 5: Developing a Model Augmented Reality Curriculum  

WG 5 Leaders:

Mikhail Fominykh
Department of Logistics
Molde University College
Molde, Norway
mikhail.fominykh@himolde.no

Fridolin Wild
Department of Computing and Communications Technologies
Oxford Brookes University
Oxford, United Kingdom
wild@brookes.ac.uk

Ralf Klamma
Department of Information Systems
RWTH Aachen University
Aachen, Germany
klamma@dbis.rwth-aachen.de

The evolution of Augmented Reality (AR) technology is accelerating, and the rapidly growing market requires an increasing number of skilled professionals. Finding a university course or school that teaches AR development skills currently poses a challenge. Students interested in a career in this field are usually advised to choose a technical degree such as engineering or design, and to learn AR-specific skills independently or in the workplace. The educational offering is fragmented, usually combined with Human-Computer Interaction and Virtual Reality, as for example in the ACM and IEEE Computer Society curricula recommendations for Computer Science.

In this working group, we will build upon the work of the ongoing project Augmented Reality in Formal European University Education (AR-FOR-EU). Within that project, we collected the currently available educational offerings in the field of AR and analyzed the job market to identify the skills most valued by employers. Moreover, we created a preliminary curriculum and evaluated it in two pilots. The objectives of the working group include:

  • Develop learning objectives based on the survey of Augmented Reality development skills
  • Develop a Model Augmented Reality curriculum
  • Describe the Augmented Reality curriculum within the framework of the ACM and IEEE Computer Society curricula recommendations for Computer Science
  • Review and outline further development of Augmented Reality teaching resources, including the open Augmented Reality Teaching book, video lectures and tutorials, and an online course initiated in AR-FOR-EU.  

WG 6: Assessing Programming Assignments  

WG 6 Leaders:

Michelle Craig
University of Toronto
Toronto, Canada
mcraig@cs.toronto.edu

Paul Gries
University of Toronto
Toronto, Canada
pgries@cs.toronto.edu

Briana B. Morrison
University of Nebraska Omaha
Omaha, Nebraska, United States
bbmorrison@unomaha.edu  

What makes an exemplary programming assignment? The goal of this working group is to define criteria for evaluating the scholarship of instructional materials—those materials that describe work to be completed by students. This includes assignments, tutorials, labs, projects, and in-class activities. The working group will produce a set of categories on which the instructional materials can be evaluated. Each category will have an associated set of guiding questions designed to help a reviewer judge the quality, appropriateness, novelty, and usefulness of the instructional materials.

The criteria and questions for evaluating the instructional materials will be elicited from community knowledge through semi-structured interviews and additional research in other STEM disciplines. Using prior work (such as the reviewing criteria for Nifty Assignments) and research in other STEM disciplines, the organizers will develop an initial set of interview questions. During an initial teleconference meeting, working group members will refine these questions and develop a common interview protocol.

Before the face-to-face meetings at ITiCSE, each group member will conduct at least five 15–30 minute interviews with a broad range of experienced computer science faculty following a standard protocol. Each group member will be responsible for transcribing their own interview audio recordings. During our time at ITiCSE, the group will develop tags and code the transcribed data. We will then develop a draft set of criteria and conduct pilot reviews using the draft tool. Finally, we will revise the criteria and release them to the community in the form of a working group paper.  

WG 7: Cloud Computing Curriculum: Developing Exemplar Modules for General Course Inclusion  

WG 7 Leaders:

Joshua Adams
Donald R. Tapia College of Business
CS Department
Saint Leo University
Saint Leo, FL, USA
joshua.adams03@saintleo.edu

Laurie White
Google LLC
Seattle, WA, 98103
USA
lauriewhite@google.com

Brian Hainey
Department of Computing
Glasgow Caledonian University
Glasgow
Scotland
b.hainey@gcu.ac.uk

A 2018 Working Group created a report that, among other things, described fourteen Knowledge Areas (KAs) applicable when teaching cloud concepts. Building on that, a 2019 Working Group surveyed the existing landscape of publicly available courses and created a mechanism for faculty to share teaching materials.

This working group will continue the work of the previous two working groups by developing teaching material for exemplar modules to provide faculty with the ability to introduce cloud materials into their courses.

In particular, it will:

  • Collect materials from existing cloud courses and modules, both from academia and cloud providers.
  • Create a platform-neutral set of topics, along with detailed notes with references. Give ways to present these materials in sample syllabi.
  • Identify and map suggested course outcomes (e.g., virtualization, elasticity, security) to course modules.
  • Provide teaching materials (vendor-independent) to include slides, in-class exercises (closed labs), assignments, quizzes, tests, major projects, and instructor notes.
  • Build on the work of the 2018 and 2019 WGs in the creation of these materials and in aligning the KAs with specific jobs.
  • Establish a process for peer review of teaching materials and updates as necessary.
  • Improve usability of the sharing site by the community.
  • Explore the viability of finding funding (NSF grants, etc.) to create an academic cloud computing consortium connecting industry and academia. An example can be seen here: https://nationalcyberwatchcenter.wildapricot.org

WG 8: Meaningful Assessment at Scale: Helping Instructors to Assess Online Learning  

WG 8 Leaders:

Nickolas Falkner
The University of Adelaide
Australia
nickolas.falkner@adelaide.edu.au

Rebecca Vivian
The University of Adelaide
Australia
rebecca.vivian@adelaide.edu.au

Katrina Falkner
University of Adelaide
Australia
katrina.falkner@adelaide.edu.au

Increased opportunities for online learning, including growth in Massive Open Online Courses (MOOCs), are changing our education environments, increasing access and flexibility in how students engage with education. However, there are still many questions regarding how we engage with students effectively in these environments, in particular through assessment.

Assessment within online environments can vary with the available technology and pedagogical approach. However, forms of assessment in these environments must satisfy additional constraints: they must scale to potentially massive cohorts, cope with minimal learner interaction, and accommodate a range of learner intentions. At the same time, there are unique opportunities due to the accessibility of rich learning analytics and learner data. Understanding effective assessment and assessment feedback at scale has broader implications as we cope with growing CS enrolments and interest in technology.

This working group aims to explore assessment within CS MOOCs, as a specific example of the online learning environment, identifying engaging and effective assessment exemplars that reflect both the constraints and opportunities of this context. The working group will (1) identify and survey existing literature on formative and summative assessment in Computer Science MOOCs, (2) clarify how assessment may be considered meaningful for students, (3) identify key features of assessment that assist an instructor in evaluating the nature and quality of learning, and (4) identify case studies that explore innovative and effective assessments providing a rich experience for students and detailed feedback to teaching staff. The outcome of this working group will be a report.

WG 9: Reviewing Computing Education Papers  

WG 9 Leaders:

Marian Petre
The Open University
Milton Keynes, UK
m.petre@open.ac.uk

Kate Sanders
Rhode Island College
Providence, RI
ksanders@ric.edu

Robert McCartney
University of Connecticut
Storrs, CT, USA
robert@engr.uconn.edu

Peer review is a mainstay of academic publication — indeed, it is the peer-review process that provides much of a publication's credibility. This working group shall examine the ways peer review is used in various computing education venues, and use this examination to articulate community standards for peer review in the discipline.

We shall examine peer review from three viewpoints, those of community, process, and expectations:

  1. The community of reviewers: How are reviewers recruited and selected? How are they trained? What feedback do they get? What roles are there in the review process? What titles correspond to these roles?
  2. Process: from review to decision: How are reviewers matched to papers? How are the reviews used in making accept/reject decisions? What are the possible decisions? Are there multiple rounds of reviewing? What information is shared among reviewers, program chairs, and authors?
  3. Expectations: What is a reviewer expected to do when reviewing a paper? What are the reviews intended to accomplish, and who is the intended audience? What are the ethical rules associated with reviewing? How is a reviewer supposed to deal with issues of broken anonymity (if applicable)?

This examination shall be based on multiple data sources: published standards and descriptions from computing education conferences and journals; interviews with people who have served in various editorial and reviewing roles; papers and stated review principles from other disciplines. The WG report shall present the compiled standards and guidance for mentoring reviewers.  

WG 10: Developing Competency Statements for Computer Science Curricula: The Way Ahead  

WG 10 Leaders:

Alison Clear
School of Computing
Eastern Institute of Technology
Auckland, New Zealand
aclear@eit.ac.nz

Tony Clear
School of Engineering
Computer and Mathematical Sciences
Auckland University of Technology
Auckland, New Zealand
tony.clear@aut.ac.nz

Abhijat Vichare
Independent Consultant
Pune, India
abhijatv@gmail.com

This Working Group aims to take the current approved Computer Science curriculum document, CS2013, and redevelop it into competency statements. The CC2020 project has designed and built a prototype of a visualization tool to compare and contrast current computing curricula. Three basic approaches were taken to portray the base data used by the tool: the first based on expert-defined competencies, the second based on mining, and the third based on expert-defined knowledge areas. The visualization tool takes competency statements from each of the current approved computing curricula and represents them visually.

Using competency to frame curricula and describe educational outcomes in computing is not new. Since the CC2005 report was published, several additional curricula have appeared, and the information technology, information systems, and software engineering communities have developed three approaches to defining computing competency in the context of their curricula reports. The CC2020 report advocates that, in future, all new curricula be written as competency statements. Currently, the CS2013 curriculum is expressed in learning outcomes rather than competency statements, so it is essential to be able to express Computer Science curricula in these new terms to accommodate the new direction and to represent Computer Science in the new visualization tool.

Sponsors

SIGCSE