
2025 Working Group Proposals

Below are descriptions of the nine Working Groups (WGs) open for membership applications. Please read the descriptions below and, if you find one or more WGs you are interested in, use the link below to apply to participate. During each round of applications, you will submit only one application, for your preferred working group.

Schedule

1) Membership application form #1 opens Mon 10 Feb
2) Membership application form #1 closes Sun 2 Mar
3) Notifications of first-choice selections Mon 10 Mar
You will also be notified if you were not selected for your first choice of WG
4) Membership application form #2 opens Thu 6 Mar
Note that only groups (if any) that are not full will be available for selection
5) Membership application form #2 closes Sun 16 Mar
6) Notifications of second-choice selections Mon 24 Mar

Groups that attract enough members to be viable will proceed and work on their project from March to September to produce an extended abstract and a final research report. All members are expected to contribute to the shape and direction of the working group and to the abstract published at the time of the conference (Vols 1-2), as per the ACM authorship policy. Revisions in response to the external review of the final report, along with the camera-ready manuscript, are due in November. Pending satisfactory reviews of the report, it will be published in the Conference Proceedings (Vol 3: ITiCSE WGR 2025).

Participation

As a reminder, all members are expected to (a) participate in WG activities from March to November, as needed by their WG, (b) register for the conference (including the WG fee), and (c) be present at the 2025 ITiCSE Conference in Nijmegen, Netherlands. Note that there is a separate WG conference fee that provides the workspace (not lodging) for WGs in Nijmegen, as well as lunch and morning and afternoon breaks during the weekend.

Working Group Travel Grants

The SIGCSE Board has announced a grant scheme to support participation in Working Groups at SIGCSE conferences. The Working Group Travel Grant will be available for new Working Group participants at ITiCSE 2025. The aim of the grant is to encourage participation in working groups from countries and regions that have traditionally had very few participants. See WG Travel Grants for more information.

Working Group Proposal List

  • WG1 – Paradigms, Methods, and Outcomes, Oh My!: Refining and Evolving a Research Knowledge Development Activity for Computer Science Education
  • WG2 – Refactoring Computing Courses to Leverage GenAI
  • WG3 – Notional Machines for Databases
  • WG4 – Exploring Effective Early Research Exposure for Broadening Participation in Computing Science
  • WG5 – Investigating How Course Features Correlate with Student Perceptions of Two-Stage CS Exams
  • WG6 – Fairness in Student Allocation and Group Formation
  • WG7 – Investigating Challenges in Assessing Team-Based Capstone Projects
  • WG8 – Ethical and Societal Impacts of Generative AI in Higher Computing Education: An ACM Task Force Working Group to Develop a Landscape Analysis
  • WG9 – Developing an AI Concept Inventory for Non-Experts

Applications

Application Form #1 link: Closed

Application Deadline #1: 2 March

Application Form #2 link: https://forms.gle/iYWN5HrSL4Pcfuyf6

Application Deadline #2: 16 March

All membership decisions are expected by: 23 March

Please email (potential) WG Leaders to ask questions about a specific WG (as listed below).

Please email WG Co-Chairs with any/all questions about the process: iticse2025wg@easychair.org 

Archie & Ellie

ITiCSE 2025 WG co-chairs


WG1 – Paradigms, Methods, and Outcomes, Oh My!: Refining and Evolving a Research Knowledge Development Activity for Computer Science Education

Leaders:

Motivation:

Computer Science Education Research (CSER) combines the frequently quantitative approaches of computer science, engineering, and mathematics with the often more qualitative techniques seen in psychology, sociology, behavioural science, and education. It can be challenging to select appropriate research methods in effective and efficient ways. 

Inspired by the use of card-based techniques in the classroom, the Research Alternatives Exercise (RAE) is a pack of 105 cards introducing a wide range of possible research approaches. RAE offers alternatives to a participant’s current research plans through new random lenses, leading to the sketch of a new research design. The participant re-examines their own design through the lens of a randomly drawn card, working out how well it fits, informs, or improves what they have done.

The initial version of the card deck and examples of play won the best paper/demo award at Koli Calling 2024.
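To make the draw mechanic concrete, here is a minimal illustrative sketch, not the published deck or rules: the card labels below are invented placeholders, while the four categories follow the proposal.

```python
import random

# Illustrative placeholder cards only: the real RAE deck has 105 cards
# spanning paradigms, methodologies, outcomes, and methods.
deck = [
    ("paradigm", "interpretivist"),
    ("methodology", "design-based research"),
    ("outcome", "assessment instrument"),
    ("method", "think-aloud protocol"),
    ("method", "survey"),
]

def draw_card(deck, rng=random):
    """Draw a single random 'lens' card from the deck."""
    return rng.choice(deck)

category, card = draw_card(deck)
print(f"Re-examine your research design through: {card} ({category})")
```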

Goals:

  • review and modify the existing deck through collaboration in the WG,
  • develop a version of the deck that can be shared and used widely across the CSER community, and
  • develop a concise support glossary for the cards.

Methodology:

The current deck will be shared with participants, to support targeted literature review, research, and consultation to:

  1. refine the terminology used for categories, which are currently paradigms, methodologies, outcomes, and methods,
  2. refine the components within categories, 
  3. review the existing rules for suitability, 
  4. develop the first draft of the support glossary, and
  5. develop different decks and play approaches for specific purposes.

Following kickoff at the end of March, we will work on Items 1 and 3, aiming for completion by the start of May. Once categories are finalized, we will undertake Item 2, with members working in small groups to review each category. Findings will be presented to the whole group by the beginning of June for further discussion and collaboration. Each sub-group will be responsible for the glossary elements of its contribution, to be completed and reviewed before the start of the in-person WG time. Each working group member will be asked to share the deck with colleagues to gather feedback.

Member Selection:

We seek 8-10 individuals so that the required work can be shared manageably.

We are looking for participants with at least one of:

  • Experience with a wide variety of research methodologies,
  • Experience in supervising graduate students, 
  • Interest and knowledge in using game-based and facilitated techniques, or
  • Experience with research skills development.

We actively invite applications from disciplines beyond computing for diversity in research skills development experience. We seek a diversity of experience, background, and culture, to ensure that the feedback encompasses the full range of CSER community experience. We also welcome student applications.

Successful applicants will:

  • Attend fortnightly 60-90 minute online progress meetings, held from mid-to-late March to the end of June,
  • Register for ITiCSE 2025,
  • Physically attend the full duration of the working group, and
  • Make significant contributions during the pre- and post-ITiCSE Working Group activities (3-4 hours a week).

WG2 – Refactoring Computing Courses to Leverage GenAI

Leaders:

Summary:

The rise of always-available GenAI has changed the way professionals work. Consequently, educators should prepare students for this new working-world reality. The availability of GenAI doesn’t just change what students do; it should also change what and how we teach. This working group seeks to gather computing educators to explore this new reality and:

  • Reconsider course goals, learning objectives, and learning activities for upper-level computing courses
  • Identify appropriate uses of GenAI by students and how the availability of GenAI changes the expected knowledge of CS graduates on each specific topic.

Motivation:

Preparing students for a GenAI-driven professional environment necessitates targeted educational strategies. Students need to know (a) effective tool use (e.g., prompt engineering and interaction modes), (b) validation techniques (e.g., unit tests, integration tests), and (c) AI’s limitations.

As such, traditional learning goals for a variety of CS courses (e.g., Introductory Programming, Object-Oriented Programming, Databases, Software Engineering) need to be adjusted. Since LLMs will have different impacts on different courses, it is of the utmost relevance to identify those specific impacts and adapt each course’s learning goals accordingly.

The arrival of GenAI is recognized in recent educational research publications. The very title of “The Robots are Here: Navigating the Generative AI Revolution in Computing Education”, one of the early comprehensive reports on the impact on computing education, suggests two important points: (1) this is a problem to address now, and (2) educators need help navigating this revolution.

As we see it, there are three gaps between the current educational research and the typical computing educator: (1) very little has been written about what students should know about GenAI limitations, (2) most recent publications focus on introductory courses, and (3) specific recommendations for changes in course activities and goals are missing.

Goals:

This proposal has the following top-level goals:

  1. Identify specific learning goals with respect to understanding GenAI usage – in particular, the knowledge needed to appreciate the limitations of GenAI;
  2. Identify course goals, learning outcomes, and educational activities for computing courses that prepare graduates for employment in a world where GenAI is ubiquitous.

In particular, the plan is to focus on courses other than introductory programming. The leaders are particularly interested in pursuing ‘GenAI in Database courses’, but will let the interests and experiences of the WG members determine which upper-level courses to focus on. The group will likely focus on more than one upper-level course, in sub-groups.

Methodology:

  • Literature survey – building on the work of the 2024 ITiCSE working group, extending it to include recent publications, and narrowing it to focus on publications that detail the operationalization of a computing course where teaching includes and/or is aided by GenAI
  • Our experiences – we will collect and report experiences (positive and negative outcomes) of WG members’ use of GenAI in upper-level computing courses 
  • Our ‘experiments’ – we will collect and report on the results of using GenAI to solve, or assist in solving, learning activities and/or assessments in our existing upper-level computing courses (a minimal illustrative sketch follows this list)
  • Community input – input will be solicited from the community to augment our own investigations.
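As a flavour of what one such ‘experiment’ might look like in code, here is a minimal sketch; it assumes the OpenAI Python SDK and an API key in the environment, and the model name and assessment question are illustrative placeholders, not choices made by the group.

```python
from openai import OpenAI

# Minimal sketch of one 'experiment': submit an existing assessment item
# to an LLM and record its answer for later grading against the rubric.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY set in the
# environment; the model name and question are placeholders.
client = OpenAI()

question = (
    "Given tables Student(sid, name) and Enrolled(sid, cid, grade), "
    "write a SQL query listing the names of students enrolled in no courses."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[{"role": "user", "content": question}],
)

print(response.choices[0].message.content)
```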

Expectations of Members:

  • we anticipate online meetings about twice a month
  • members will contribute by (a) reporting their experiences, (b) reviewing literature, (c) conducting GenAI experiments, (d) contributing to the writing and editing of the reports produced by the group, and (e) undertaking other activities the group decides to pursue

Member Selection Criteria:

Any computing educator is welcome; however, we will prioritize applicants with any amount of experience in teaching upper-level courses (e.g., databases, software engineering, operating systems). We are particularly interested in applications from educators who have incorporated GenAI into learning activities.


WG3 – Notional Machines for Databases

Leaders:

Motivation: 

Database education is a cornerstone underpinning many of the more popular topics in computer science, such as machine learning and visualization. Research on the practice of teaching databases, namely on teachers’ educational materials and explanations, can help us build a foundation for more theoretical research on data education. This working group aims to collect and present notional machines of different types for a wide range of database subtopics. These materials offer an updated context from which database educators can design their courses, as well as opening up pathways for further research into database education.

Goals: 

This working group has two goals. The first is to develop a taxonomy/categorization of the notional machines identified through empirical methods. The second is to evaluate to what extent the concept of a notional machine, as currently defined for programming education [1], can be applied to database education contexts.

Methodology: 

We will undertake a multi-part study, starting by updating the literature review of Fincher et al. [1]. Further work depends on the number of interested candidates for the group; based on this number, we will scale both the types and number of materials, and scope the number of database subtopics. These are some of the materials we are considering analyzing:

  • Textbooks
  • Educational materials such as handouts and visualization tools
  • MOOCs or other video lectures
  • Popular tutorials on YouTube
  • Blogs or other internet sources

As for the topics for analysis, we must scope the subtopics, as most textbooks go far beyond introductory material. We will order textbook topics according to the findings in [2].

If possible, we would like to wrap up by surveying/interviewing teachers to get their input on identified notional machines.

Expectations for Group Members: 

We estimate contributions of three to four hours per week from mid-March to the end of June, including weekly online meetings, for the pre-conference work. We expect to finish the literature review by the end of April, and notional machine identification before the conference starts. We expect all members to attend the pre-conference working days on-site (July 5th to July 7th). As with the pre-conference work, we expect all members to work three to four hours per week from July to September to wrap up the work.

We are looking for members who have experience with one of the following:

  • Teaching a data systems course
  • Running a systematic literature review
  • Directed content analysis

We welcome group members with various backgrounds: researchers, educators, junior and senior academics alike. We hope that members bring their expertise, enthusiasm, and commitment to data systems so that we can achieve meaningful results together.

[1] Fincher, S., Jeuring, J., Miller, C. S., Donaldson, P., du Boulay, B., Hauswirth, M., Hellas, A., Hermans, F., Lewis, C., Mühling, A., Pearce, J. L., & Petersen, A. (2020). Notional Machines in Computing Education: The Education of Attention. Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, 21–50. https://doi.org/10.1145/3437800.3439202 

[2] Miedema, D., Taipalus, T., Ajanovski, V. V., Alawini, A., Goodfellow, M., Liut, M., Peltsverger, S., & Young, T. (2024). Curriculum Analysis for Data Systems Education. Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 2 (ITiCSE 2024), 761–762. https://doi.org/10.1145/3649405.3659529


WG4 – Exploring Effective Early Research Exposure for Broadening Participation in Computing Science

Leaders:

Motivation:

There is extensive evidence in support of including research experiences for undergraduate computer science students as a means of broadening participation in computing. The details of the effective design and delivery of such research exposure programs are less explored. This study aims to investigate how to design and deliver undergraduate research programs effectively.

Goals:

Based on a review of relevant research, we will expand on and explore several factors in effectiveness, including but not limited to cultural relevance, the presence of a cross-disciplinary high-level view, task assignment, entry points, and the support elements of a program. We ask:

RQ1: What are students’ perceptions and preconceptions about Computing Science research?
RQ2: What elements or conditions are necessary for a Computing Science research exposure program to be effective?
RQ3: How can research exposure programs attract, engage, and retain students in Computing Science?
RQ4: How can research exposure programs help broaden the participation of historically marginalized students in Computing Science?

We will employ mixed-methods analysis through five phases, each with a deliverable milestone:

  1. Initial data collection (surveys, interviews, and inquiries from past programs), with study design and refinements based on the data
  2. Further data collection
  3. Data analysis and interpretation
  4. Reporting
  5. Feedback and revisions

Member Selection:

We welcome scholars with an interest and experience in undergraduate research experiences, initiatives on broadening participation, computing science education research, data-driven research, and a motivation for performing high-quality research to join us.

We plan for a time commitment of 3-4 hours per week from March to July from all members. We expect all members to attend the pre-conference intensive working days in person. All members are expected to be involved in data collection and analysis, and to obtain the required unified ethics approval and/or approval from their host institutions.


WG5 – Investigating How Course Features Correlate with Student Perceptions of Two-Stage CS Exams

Leaders:

Motivation and Goals: 

Two-stage exams (TSEs), also known as collaborative exams, are a form of assessment that uses a team-based learning approach to turn summative assessments into formative learning experiences. Students first write an exam by themselves and then rewrite the same exam (or a similar one) with a group of peers. TSEs are often considered a form of ‘active learning’ and can increase student engagement, improve comprehension, and encourage critical thinking within a group.

This working group explores how instructor and student perceptions of TSEs (and student performance on them) in post-secondary computer science (CS) courses correlate with course environment dimensions. We will use the Collaborative Active Learning Inventory (CALI), which measures course environments across three dimensions: course structure, course sociality, and course inclusiveness. The research questions we aim to answer are:

  1. How do student perceptions about TSEs vary with respect to CALI dimensions?
  2. How do course environment dimensions relate to individual versus group performance on TSEs (are there smaller differentials between individual scores and group scores when there is more active and collaborative learning in the course)?
  3. How do exam delivery differences (such as group size, group formation, timing) impact individual and group performance on TSEs?
  4. How do instructor perceptions and observations of TSE effectiveness vary based on course environment dimensions? 

By collecting data from students across many institutions taking part in post-secondary CS TSEs, we can begin to understand how TSEs are experienced across diverse populations of students and different types of pedagogical environments. This research aims to contribute insights that can help inform best practices for the implementation of TSEs in post-secondary CS classes.

Methodology and Member Expectations: 

Leaders (Latulipe and McIntyre) will provide members with an Ethics Protocol for the project that can be submitted (with institution-specific modifications) to their own institutional ethics review board for approval prior to data collection.

Members will either directly use TSEs in a CS course they are teaching and collect data directly from their students, or will work with an instructor in their department who will implement a TSE for their course. The course instructor will self-assess their course environment using the CALI inventory. Students will complete the TSE and answer a survey about their experience. Instructors will also complete a one-page report on their experiences with organizing, facilitating and grading the TSE, as well as observations about the students taking the TSE. Data will be collected by each member at their institution, then anonymized and uploaded to a secure data repository where all the data will be combined for analysis. 

Quantitative and qualitative analysis will be conducted to examine correlations between student perceptions, TSE performance and course environments. All group members will contribute to writing the final report, with optional further analysis on the data for future papers. 
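As a purely hypothetical illustration of the quantitative side, the RQ2 differential analysis could start along these lines; the column names and numbers below are invented, not the project’s actual schema or data.

```python
import pandas as pd

# Invented example rows; columns are assumptions, not the project's schema.
df = pd.DataFrame({
    "individual_score": [62, 75, 58, 80, 70, 66],
    "group_score":      [78, 84, 75, 88, 82, 79],
    "cali_sociality":   [3.1, 4.2, 2.8, 4.5, 3.9, 3.4],  # course-level CALI rating
})

# RQ2: is the individual-to-group differential smaller in courses with
# more active and collaborative environments?
df["differential"] = df["group_score"] - df["individual_score"]
print(df["differential"].corr(df["cali_sociality"]))  # Pearson correlation
```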

Tentative Schedule:

March: Intro & planning, and member institution ethics protocol submission

April: Logistics planning

April-May: Data collection

May-June: Data organization & analysis, and initial report drafting

At Conference: In-person working group to continue data analysis and writing

July-September: Continued analysis & writing 

Member Selection Criteria: 

Members must be willing to implement a two-stage exam in a post-secondary CS course this term, or to work with an instructor in their department to have a two-stage exam conducted in a CS course. Additionally, all members will participate in both data analysis and report writing. Members need not have previous experience running two-stage exams. We will also consider selecting one or two members who are interested in the research and have prior experience with two-stage exams, but are unable to collect and contribute data.


WG6 – Fairness in Student Allocation and Group Formation

Leaders:

Motivation: 

Allocating students to projects and groups is a commonplace task in computing education. These decisions underpin student-supervisor pairing and a variety of collaborative pair and group assignments. Allocation quality, e.g. gender or ethnicity balance, critically impacts individual students’ learning outcomes and the success of collaborative education.

Prior allocation research focuses on workload distribution and the complementarity of student skillsets. However, few studies consider the issue of fairness. Furthermore, the fairness definitions that are considered deal with allocation satisfaction alone, ignoring the benefits of positive discrimination, such as meeting the needs of marginalised students and addressing the adversities they face.

Despite the critical importance of these allocation choices, we see little consensus on how they are implemented: educators may rely on institutional norms or develop their own solutions, through Excel spreadsheets, Python scripts, or even by hand. The allocation task can be challenging and time consuming, and the fairness of allocations can be difficult to assess. A lack of transparency in the allocation process may also erode students’ trust.
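As an illustration of the kind of home-grown script alluded to above, here is a minimal sketch of a naive single-attribute balancing heuristic on an invented roster; a genuinely fairness-aware method would need to go well beyond this.

```python
import random
from collections import defaultdict

# Invented roster; one attribute per student for simplicity.
students = [
    ("Ada", "F"), ("Ben", "M"), ("Cam", "M"), ("Dee", "F"),
    ("Eli", "M"), ("Fay", "F"), ("Gus", "M"), ("Hana", "F"),
]

def form_groups(students, n_groups, seed=0):
    """Shuffle within each attribute value, then deal students round-robin
    so every group receives a roughly even mix."""
    rng = random.Random(seed)
    by_attr = defaultdict(list)
    for name, attr in students:
        by_attr[attr].append(name)
    groups = [[] for _ in range(n_groups)]
    i = 0
    for attr in sorted(by_attr):
        rng.shuffle(by_attr[attr])
        for name in by_attr[attr]:
            groups[i % n_groups].append(name)
            i += 1
    return groups

for g, members in enumerate(form_groups(students, 2), start=1):
    print(f"Group {g}: {members}")
```

Even this toy version shows why fairness is hard to audit: balance on one attribute says nothing about satisfaction, workload, or intersecting needs.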

Goals:

We will bridge a critical gap between research and educational practice, where existing allocation methods fail to capture pedagogic needs, and educational practice does not benefit from existing research. We will develop and support adoption of pedagogically informed allocation methods through the creation of an open-source tool for educators. We aim to answer these research questions: 

RQ1: What student allocation and group formation approaches are common in computing education practice, and why? 
RQ2: How should we define fairness in the student allocation and group formation context? 
RQ3: How can participatory methods support the development of fair allocation approaches? 

Methodology:

Our mixed-methods approach will begin with a systematic literature and tool review. Findings from local educator interviews and focus groups will inform a larger-scale educator survey. We will develop theoretically grounded definitions of fairness and produce a pedagogically grounded software tool along with representative benchmark datasets to perform fairness-aware allocations. Finally, we will produce accessible infographics and best practice guidelines on pedagogy for educators. We are committed to the sustainability, impact and broader adoption of the work.  

Expectations of Members:

We expect members to contribute 3-4 hours per week across the course of the Working Group. Full-group meetings will happen fortnightly, with project sub-groups collaborating more frequently. Meetings will be scheduled to accommodate members’ commitments and time zone differences. We expect members to bring their enthusiasm, expertise and a desire to learn from one another.  

Member Selection Criteria:

We aim to attract a diverse and multicultural membership for the Working Group. We encourage the participation of international and/or early-career researchers, and first-time working group participants. 

We are particularly interested in attracting members with student and project allocation experience, and those willing to run educator interviews and focus groups at their institutions. We will help you with standardised protocols for these, and provide mentoring and co-facilitation support. 

If you are unsure about applying, please reach out to the co-leads by email. We are happy to discuss any questions you may have, and we do hope you will join us!  


WG7 – Investigating Challenges in Assessing Team-Based Capstone Projects

Leaders:

Motivation:

Team-based capstone projects are a key component of computing education. They play a crucial role in preparing students for real-world challenges by fostering teamwork, communication, and skills in self-learning and self-reflection. However, assessing such projects presents significant challenges, such as aligning academic evaluation with stakeholders’ expectations, assessing individual contributions within teams, addressing the diverse skills required for successful team projects, and determining the appropriate involvement of external partners in the assessment process. Given the high stakes associated with capstone projects, developing robust and equitable assessment methods that support student learning and academic integrity is essential. Our Working Group will investigate these challenges by examining instructor, student, and stakeholder perspectives across multiple institutions and contexts. Our objective is to develop evidence-based recommendations for improving the assessment of team-based capstone projects in computing disciplines.

Goals:

Our Working Group has three main goals:

  • Review and analyze current assessment methodologies in computing capstone courses, considering their effectiveness and fairness.
  • Investigate challenges in assessing team-based capstone projects, including instructor concerns, student perceptions of fairness, the impact of generative AI tools on fair assessment, and the role of external stakeholders.
  • Develop evidence-based solutions and best practices to improve assessment strategies in computing capstone courses.

Methodology:

We will use a mixed methods approach to investigate capstone project assessment practices, as described below: 

  1. Literature review of current practices.
  2. Quantitative Data Collection: Surveys will be distributed to instructors, students, and other stakeholders to gather broad insights into assessment practices, challenges, and perceptions of fairness.
  3. Qualitative Data Collection: Semi-structured interviews with instructors and students will be conducted to explore specific challenges and experiences in more detail.
  4. Data Analysis and Synthesis: The findings from both data collection phases will be analyzed to identify common themes and inform the development of assessment strategies and recommendations.

Expectations of Members:

Participation in this Working Group requires active engagement in various research activities, including:

  • Obtaining ethical approval (IRB) for research involving human participants.
  • Assisting in the literature review process.
  • Participating in survey design and deployment efforts to collect quantitative data.
  • Conducting interviews with selected participants to gain qualitative insights.
  • Analyzing collected data and contributing to the synthesis of findings.
  • Attending regular virtual meetings (twice monthly from March to June 2025) to discuss progress, coordinate tasks, and refine research outcomes.
  • Collaborating on preparing a final report or publication summarizing the findings.
  • Attending ITiCSE 2025 in person in Nijmegen, Netherlands, to contribute to final discussions and present results.

The research activities will be distributed among WG members based on expertise, availability, and institutional contexts.

Member Selection Criteria:

We invite applications from individuals with relevant experience and interest in capstone course assessment. Specifically, we seek:

  • Academics and educators who have taught or are currently teaching team-based capstone courses in Computer Science (CS) or related fields.
  • Researchers in computing education who are interested in assessment, pedagogy, or curriculum design.
  • Academics who can facilitate access to survey and interview participants, including capstone instructors and students from their institutions.
  • Industry professionals with experience engaging with academic capstone projects, who can provide insight into assessment expectations from an external stakeholder perspective.
  • Applicants should be willing to actively contribute to the research process and attend ITiCSE 2025 in person to finalize the Working Group’s outcomes.

Meeting Frequency & Timeline:

  • Online meetings: Twice a month from March to June 2025 for research planning, data collection, and writing.
  • Milestone deadlines: We will set clear timelines and task assignments to ensure steady progress.
  • Final report preparation: Data analysis and a preliminary report draft will be completed before the in-person work at ITiCSE 2025.
  • Conference participation: Members must attend ITiCSE 2025 in Nijmegen, Netherlands, in person and be present for the entire conference, including the designated working group pre-conference days.

WG8 – Ethical and Societal Impacts of Generative AI in Higher Computing Education: An ACM Task Force Working Group to Develop a Landscape Analysis

Leaders:

Motivation:

Generative AI has a wide range of impacts on how we access and use information, particularly in educational settings. These impacts extend to society, including effects on intellectual and creative works and the potential infringement of authorship. Differences in institutional GenAI policies (and in funding) may create unequal access to AI tools and disparities in students’ knowledge of those tools: how to use them responsibly, the ethical questions they raise, and their benefits and limitations. Generative AI also introduces questions concerning academic integrity, bias, and data provenance. The source, reliability, veracity, and trustworthiness of the training data may be in doubt, creating broader societal concerns about the output of Generative AI models.

This ACM working group aims to investigate the ethical and societal impacts of Generative AI tools within the higher computing education landscape.  This working group will conduct a landscape analysis on ethical questions related to the use of Generative AI tools in higher education contexts, identifying promising principles, challenges, and ways to navigate the implementation of Generative AI in ethical and principled ways. 

Goals:

This working group aims to contribute a landscape study to guide ACM as an organization in its work with higher education institutions and the research community. 

  1. Assess the ethical and societal impacts of Generative AI in higher education.
  2. Analyse how Generative AI affects the socio-technical dynamics of higher education institutions. 
  3. Identify the challenges, opportunities, and limitations of integrating Generative AI in higher education.

Methodology:

This working group aims to analyse and review the literature on ethical and societal questions around using Generative AI tools such as ChatGPT in higher education teaching and learning. 

  1. Derive a set of challenges and recommendations through the systematic analysis of universities’ policies and guidelines on using Generative AI in Computer Science education.
  2. Hold community webinars on different case studies on implementing Generative AI guidelines, policies, and evaluation frameworks in higher education institutions.
  3. Provide findings and recommendations to inform ACM’s Code of Ethics and institutional policies on Generative AI.

Expectations of Members:

Participation in this Working Group requires active engagement in various research activities, including:

  1. Contribute to the literature scoping review. The working group will have members in two subgroups: (a) academic literature review – identify and examine literature on GenAI approaches and strategies applied in higher education environments; (b) policy – identify and examine literature on GenAI policies and trends, particularly related to ethical and societal impacts.
  2. Attend regular virtual meetings (monthly from March to October 2025) to discuss progress, coordinate tasks, and refine the final working group manuscript.
  3. Applicants should be willing to actively contribute to the research process and attend ITiCSE 2025 in person to finalize the Working Group’s outcomes.

Member Selection Criteria:

We welcome group members with various backgrounds: researchers, educators, and junior and senior academics alike. We hope every member will bring their expertise, enthusiasm, and commitment to co-writing this working group landscape report, so that we achieve meaningful results together. We seek working group members who are:

  1. Academics and educators interested in this topic who have taught, are currently teaching, or are interested in teaching computing courses.
  2. Academics who are currently teaching/researching or have taught/researched GenAI in higher education.
  3. Researchers with interests in the ethical and societal impacts of GenAI.
  4. Industry professionals willing to contribute their expertise and insights on GenAI in education.
  5. Representative of diverse geographical regions.

Meeting Frequency & Timeline:

  1. Synchronous and asynchronous biweekly pre-ITiCSE 2025 meetings (estimate 2-3 hours/week). 
  2. In-person work during ITiCSE 2025 in Nijmegen, Netherlands, including the designated working group pre-conference days.
  3. Synchronous and asynchronous biweekly post-ITiCSE 2025 meetings to finalise the working group report/paper (estimate 2-3 hours/week).

Please reach out to the group leaders with any questions – we welcome your contributions!


WG9 – Developing an AI Concept Inventory for Non-Experts

Leaders:

Motivation and goals:

Artificial Intelligence (AI) is increasingly shaping the world around us, yet misconceptions about its capabilities and limitations persist, particularly among non-experts. As AI literacy becomes a crucial component of public understanding and education, there is a pressing need for reliable tools to assess how people conceptualize AI. 

We invite researchers from diverse backgrounds to join our interdisciplinary working group at ITiCSE 2025, where we will develop a research-based AI Concept Inventory (AI CI) designed to assess non-experts’ understanding of foundational AI concepts. This effort will contribute to AI education by identifying core concepts, uncovering common misconceptions, and creating a validated assessment tool that can inform teaching, policy, and public engagement. 

The working group offers an opportunity for interdisciplinary collaboration to contribute to the development of a foundational tool for AI literacy. We will systematically identify key AI concepts for non-experts and explore how these concepts are understood – or misunderstood – by the general public. By integrating literature reviews, expert consultations, and empirical studies, our working group will: 

  • Establish a list of essential AI concepts for non-expert audiences. 
  • Identify and analyze common misconceptions about AI. 
  • Design and validate an assessment tool that accurately measures non-expert understanding of AI. 
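To give a sense of what validating such an inventory can involve quantitatively, here is a small sketch of two classical test theory statistics (item difficulty and corrected item-total discrimination) on invented response data; the numbers are placeholders, and the group’s actual validation approach may differ.

```python
import numpy as np

# Invented response matrix: rows = respondents, columns = inventory items,
# 1 = correct, 0 = incorrect.
responses = np.array([
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
])

totals = responses.sum(axis=1)

# Item difficulty: proportion of respondents answering each item correctly.
difficulty = responses.mean(axis=0)

# Corrected item-total (point-biserial) discrimination per item.
discrimination = np.array([
    np.corrcoef(responses[:, i], totals - responses[:, i])[0, 1]
    for i in range(responses.shape[1])
])

print("difficulty:    ", np.round(difficulty, 2))
print("discrimination:", np.round(discrimination, 2))
```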

Expected Outcomes:

By the end of the project, we aim to deliver: 

  • A research-backed list of key non-expert AI concepts and misconceptions. 
  • A first version of an AI Concept Inventory tool suitable for educators, policymakers, and the general public. 
  • Insights into AI literacy across different demographic groups. 
  • A publication summarizing our findings and methodology. 

Work Plan and Commitment:

The working group will be active from March to September 2025, including phases such as literature review, concept identification, survey development, and validation. Meetings will be held online every 2-3 weeks, complemented by asynchronous collaboration. Participants are expected to contribute to the development of the concept inventory, engage in discussions, and support data collection in their respective countries. We will meet for an in-person working session the weekend before the conference. 

Expectations of Working Group Members:

We welcome researchers from diverse fields, including but not limited to: 

  • AI and computer science education 
  • Cognitive science and psychology 
  • Human-computer interaction and UX research 
  • Social sciences and AI ethics 

Previous experience in AI is not required – what matters is an interest in interdisciplinary collaboration and a commitment to supporting AI-related education. We particularly encourage early-career researchers and doctoral students to apply, as this working group provides an excellent environment for networking and knowledge exchange.

How to Apply:

If you are interested in joining our working group, please email the leaders a short statement of interest outlining your background and how you can contribute. We look forward to building a diverse and engaged team to advance AI literacy research.

For more information, feel free to contact us.