Designing Courses for Liberal Arts and Sciences Students Contextualized Around Creative Expression and Social Justice: SIGCSE 2025 Experience Report
I am sorry to miss the SIGCSE 2025 Technical Symposium this week. I so want to hear Mitchel Resnick’s keynote for his Outstanding Contribution award, and to cheer on Manuel Pérez-Quiñones for his Distinguished Service award. There’s going to be a great session on “Ethics, Power, and Persistence” with papers from ACM TOCE, including a paper by Aadarsh Padiyath (see his recent blog post here) whom I co-advise with Barbara Ericson, and a paper by Noelle Brown, on whose dissertation committee I served.
Barbara and I are in New Zealand on sabbatical. We are Visiting Erskine Fellows at the University of Canterbury in Christchurch.
Tamara Nelson-Fromm and I have an Experience Report at SIGCSE 2025, which she will be presenting, “Designing Courses for Liberal Arts and Sciences Students Contextualized Around Creative Expression and Social Justice.” Preprint available here.
Readers of this blog have read some of the story in this paper. We point out that early computer science scholars wanted everyone to learn programming, for reasons other than getting a job (a story I tell in this blog post). Most of what we have done in computer science education research has been focused on preparing students for jobs as software developers. We are still learning to develop design processes for alternative endpoints (see this blog post). In a paper that Gus Evrard and I presented at the Koli Calling conference in 2023 (see paper here), we tell the story of learning what liberal arts and sciences faculty want from computing education and then developing the Program in Computing for the Arts and Sciences (PCAS). We describe the participatory design process for the first two courses in PCAS: COMPFOR 111: Computing’s Impact on Justice: From Text to the Web and COMPFOR 121: Computing for Creative Expression. I told some of that story, and showed the results of the participatory design process, in this blog post.
This new paper adds an evaluation of the design. Tamara in her dissertation is looking at the motivations and process of students in these classes, so she had terrific interviews with the students that addressed the design of these classes. We had put questions into the students’ course evaluations to ask about our design. We used all of these to investigate what worked and what didn’t in the class.
I’ll summarize the results here:
- Students felt that the classes were useful, and not “overwhelming.” I was surprised that students used that word when talking about the class. We aimed to create undergraduate courses in programming that students would perceive as easy, but students still commented on the workload and how they had expected it to be “overwhelming.” It’s hard to overestimate just how scary students find coding classes.
- Students felt that they were now able to talk about programming and to learn more programming. Helping students become conversational programmers was an explicit goal for the class. The Runestone ebooks, which show students code in Python, SQL, HTML, and Processing, help here.
- Snap! works well, but students need some convincing. Students are amazed at the “cool things” they can make in Snap!. But some students said that it wasn’t “real” programming. One student said that it was “demeaning” at first to be asked to use blocks-based programming, but then they realized that they were actually learning a lot about “general computer science.”
Beyond the paper
The Computer Science Division here at the University of Michigan wrote a very nice article about PCAS here. Perhaps the most convincing evidence that the PCAS courses are working is the enrollment numbers. We just surpassed 500 students enrolled in our courses this summer. Students want these classes, even though they’re not required of anyone.

We’re able to scale because there are a bunch of us teaching PCAS courses. My physicist colleague on the task force and in directing PCAS, Gus Evrard, created and is teaching our 101 course, The Transistor Disruption: How a tiny tool transforms science and society. We hired a lecturer, Brian Miller (PhD in Music), who has been teaching the Creative Expression and Social Justice courses — and doing a great job developing them. There’s now an exciting data visualization unit in the Expression class, and due to his efforts, the Justice class has been approved for the Race and Equity requirement in LSA — the first computing course at Michigan to receive that approval. We just hired a lecturer this semester, Donovan Colquitt (PhD in Engineering Education), to teach the Expression class and the MediaComp Python class. A bunch of PCAS courses are taught by faculty across LSA: Sara Veatch from Biophysics is teaching the Python for scientists course, Andy Marshall from Anthropology developed an R for scientists course, and Andrew McInnerney of Linguistics now teaches the “Alien Anatomy” course on Generative AI. We also have terrific staff — Tyrone Stewart as our Academic Program Manager, and Kelly Campbell, who is our chief administrator and who helps us figure out all the bureaucracy of creating a new program.
Today, I’m the only person teaching COMPFOR courses who has a computer science graduate degree. That may be the secret sauce to making PCAS scale — it’s not about computer science (field, major, or department). It’s about liberal arts and sciences faculty teaching the computing that their students need to succeed.
How K-12 CS teachers support students for whom English is not their first language
Emma Dodoo is an Engineering Education Research PhD student working with me. She was born in the United States but grew up in Ghana. Her world growing up was multi-lingual and multi-cultural. When Emma took her first computer science classes, she was surprised to find that the curriculum was not only dominated by English but that it was virtually impossible to learn programming without also learning English terminology. She won an NSF GRFP to explore how to support students for whom English isn’t their first language (emerging bilinguals (EB)) taking CS courses.
Over the next two weeks, she’s presenting two papers from her first study. She interviewed US-based K-12 computer science teachers who have EB students in their classes, to ask them about the challenges that they saw EB students facing, the teachers’ strategies for dealing with those challenges, and their programming tool choices based on the EB students’ needs. She showed them a variety of programs (in blocks, in teaspoon languages, in different text programming languages) as design probes, for them to say what they liked and disliked.
At the SIGCSE 2025 Technical Symposium (program here), she is presenting the challenges and strategies that the teachers identified. (Pre-print of her paper available here.) Here’s my over-simplified take on what she learned. (Be sure to read the whole paper to get the important nuances that the student reports and the advisor glosses over.) The cognitive load for EB students in CS classes is enormous — learning CS and learning English at the same time. They’re going to get lost. There’s no way to avoid it. So teachers build in touchpoints for the EB students to synch back up with the class. The teachers emphasize color (e.g., for keywords in the IDE) and pictures (e.g., in diagrams) to provide non-linguistic ways for EB students to figure out what’s going on.
At the PLATEAU 2025 workshop, she is presenting a paper on how K-12 CS teachers make programming tool choices with the needs of EB students in mind. (Pre-print available here.) The trade-offs here are much more complicated. The teachers told her that block-based programming languages are a huge win — colors, non-linguistic information in the form of block shapes, and the ability to localize the terms in the blocks. BUT, the CS teachers are concerned for their EB students as immigrants to the United States. The teachers want students to be able to have job skills as soon as possible, because it’s important for their and their family’s success. So many of the high school teachers emphasize Python and software engineering skills.
This is such a hard trade-off. Nobody gets a job programming in a block-based programming language. Text programming languages can scare students off. Teaspoon languages or block-based programming languages could create a welcoming on-ramp to programming that could lead to more CS classes later. Balancing these trade-offs in an instructional design is what Emma is working on next in her dissertation.
Media Computation in Python running in Google Colab Notebooks
Here’s why I decided to work on yet another implementation of a Python API that we first developed in 2001, how to get the implementation, and how it’s different.
Why we needed Python MediaComp in the browser
In 2002, we created a course at Georgia Tech called “Introduction to Media Computation.” Georgia Tech required all liberal arts majors to take a course in computer science, but the withdrawal-or-failure rate was high (about 50% for some majors). The core idea of Media Computation (MediaComp) is to program on the digital media in their lives, at different levels of abstraction: pixels in pictures, samples in sounds, frames in video. Students succeeded at the MediaComp course at a much higher rate (about 85%).
We developed MediaComp, studied it, and disseminated it. Barbara Ericson and I wrote textbooks using MediaComp (in Python and Java), and Barb developed “Picture Lab” for AP CS A. I presented a paper at ICER 2012 that summarized the research on MediaComp in the first decade of its use, “Exploring Hypotheses about Media Computation.” The effect on retention had been documented at multiple institutions, and the qualitative data suggested that it was because the focus on media made the course seem more “relevant” and thus more motivating, which increased retention. We saw it as a form of contextualized computing education.
I complained the last time I blogged on MediaComp (in 2021) that it was hard to teach MediaComp in Python today. We had created the JES IDE (Jython Environment for Students), but that required Java (which has been increasingly hard to install) and Jython (which was no longer being actively developed). I said in 2021 that the best bet I knew for teaching MediaComp in Python was the JES4Py library.
I started teaching COMPFOR 121: Computing for Creative Expression at the University of Michigan in Fall 2022 based on participatory design with groups of liberal arts faculty. It’s a course that uses teaspoon languages, mostly Snap!, and some work in Python in a Runestone ebook (see ITiCSE 2023 papers on this progression). It covers everything we did in MediaComp at Georgia Tech, and a bunch more. (Snap! is so cool.) The course works (more on that in a blog post in a couple of weeks), but not all liberal arts faculty were happy about us using non-mainstream languages. I mostly heard calls for Unity or Python. I decided to create a MediaComp Python course for PCAS, but instead of being an introduction to computer science, the goal was to provide an introduction to digital media to arts and humanities students, with Python as the manipulation language.
I taught COMPFOR 221: Digital Media in Python in Winter 2024 based on the JES4Py library and the Thonny IDE. The library is great, stitching together other Python multimedia libraries. The contributors have done a marvelous job of replicating the IDE that we had in JES. But it requires students to install Python, and that became a showstopper. I learned that many liberal arts students at the University of Michigan don’t update their OS, and keep very little disk space free. We spent a huge amount of time figuring out which version of which library could actually run on their OS, and then which other libraries had to be rolled back to previous versions because of interactions. (Many thanks to Ben Shapiro at U. Washington Seattle and Apple who helped me debug some of these and figure out workarounds.) About a third of the class couldn’t do everything we asked for in the Final Project because we just couldn’t get all the libraries to work on their computers.
We needed a browser-based solution. So I devoted a chunk of my time in Summer 2024 to getting a version of JES4Py to work in Google Colab notebooks. I picked Google Colab because it offered a way for me to make audio players in the notebook (to hear digital sounds) and it easily connected to students’ Google Drives, so that they could use their own photos by simply copying them to the right directory in their Drive. We used it in Fall 2024, and it worked well! There were few technical problems, and students could do a lot of MediaComp Python. A new lecturer in PCAS is teaching with it now in Winter 2025.
Side note: This was my first time developing code in the post-ChatGPT world. What a wonderful tool! It had read all the documentation, and gave me examples of each thing I needed. Every generated example was wrong somewhere, but it was exactly the right example to tell me how to do what I wanted. It was great for a more experienced (but infrequent) developer like me who could interpret the results, but oof, I’d worry about students being led astray.
Using JES4Py Colab
Here’s a zip file with code and some sample media: https://computinged.wordpress.com/wp-content/uploads/2025/01/mediacomp.zip.
The assumption in the class is that you would have a folder in your Google Drive (“COMPFOR221” for us, mediacomp for the demo below) in which there were two folders: code and mediasources. Whenever students wanted to add another picture or sound (or image sequence for movies), they just copied it to mediasources.
The first cell in all our notebooks looks like this:
from google.colab import drive
drive.mount('/content/drive')  # ask permission to access the student's Drive
import sys
sys.path.append('/content/drive/My Drive/mediacomp/jes4py_colab')  # where the library lives
from jes4py import *
setMediaPath("/content/drive/My Drive/mediacomp/mediasources/")  # where the media lives
This code mounts the student’s Google Drive, modifies the sys.path to access the code directory, then imports the jes4py library. Then it sets the media folder to point to the mediasources folder. When you execute it, Google asks you (several times) if you’re really, really sure that you want to use a notebook to manipulate your Google Drive.
Here’s a demo notebook to see how this works. Remember that you’ll need to download the zip file and expand it in your Google Drive before the notebook will work for you, but you can open the notebook without the zip file — you just won’t be able to execute any of the code.
The “Hello World” sequences from JES work fine.

We can manipulate pixels.
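To give a flavor of what “manipulating pixels” means in a MediaComp class, here is the classic decrease-red loop sketched in plain Python over a list of RGB tuples. This is not the jes4py code itself (in class, the loop body would call getPixels(picture), getRed(pixel), and setRed(pixel, value) on a real Picture object); it is just the loop pattern, runnable without any installation:

```python
# A JES-style decreaseRed, sketched over a plain list of (r, g, b)
# tuples instead of a jes4py Picture, so the pattern is visible.
def decrease_red(pixels, factor=0.5):
    result = []
    for (r, g, b) in pixels:
        # Scale down only the red channel of each pixel.
        result.append((int(r * factor), g, b))
    return result

tiny_picture = [(200, 10, 10), (100, 50, 50)]
print(decrease_red(tiny_picture))  # [(100, 10, 10), (50, 50, 50)]
```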

And we can manipulate samples (shifting sounds up and down in frequency).
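The frequency-shifting idea is simple resampling. Here is a sketch in plain Python over a bare list of sample values (in the course this is done with jes4py’s getSamples and setSampleValueAt on a real Sound object):

```python
# Dropping every other sample halves the sound's duration, which
# doubles the perceived frequency (shifts the sound up an octave).
def double_frequency(samples):
    return samples[::2]

# Repeating each sample doubles the duration, halving the frequency.
def halve_frequency(samples):
    out = []
    for s in samples:
        out.extend([s, s])
    return out

print(double_frequency([10, 20, 30, 40]))  # [10, 30]
print(halve_frequency([10, 20]))           # [10, 10, 20, 20]
```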

What’s different in JES4Py Colab
The biggest difference from JES4Py (or JES) is that there are no file dialog boxes. So functions like pickAFile() and pickAFolder() don’t work.
The second big difference is that I couldn’t get sound to play as a background process. Instead, audio objects get returned to the Colab notebook, and that generates a player. Functions like blockingPlay() don’t work, since there is no background process to block. Audio took the biggest effort. I had to re-work the way that WAV files were read in so that they could match to the audio object that Colab notebooks understand.
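The flavor of that re-working can be seen with Python’s standard-library wave module. To be clear, this is not the actual JES4Py Colab code, just a self-contained sketch of turning a WAV file into plain integer samples, the kind of array-like data a notebook audio player (such as IPython’s Audio display object) can consume:

```python
import io
import math
import struct
import wave

# Write a 0.1-second 440 Hz tone: 16-bit mono at an 8000 Hz sample rate.
rate, n = 8000, 800
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)       # 2 bytes per sample = 16-bit audio
    w.setframerate(rate)
    frames = b"".join(
        struct.pack("<h", int(20000 * math.sin(2 * math.pi * 440 * i / rate)))
        for i in range(n))
    w.writeframes(frames)

# Read the frames back and unpack them into plain Python ints --
# a representation a notebook player can render.
buf.seek(0)
with wave.open(buf, "rb") as w:
    raw = w.readframes(w.getnframes())
samples = list(struct.unpack("<%dh" % (len(raw) // 2), raw))
print(len(samples), min(samples), max(samples))
```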
Pictures work really well. I didn’t have to change much from JES4Py. I mostly had to fix the show() function. It doesn’t open a new window — it shows the picture in the notebook. Window-oriented JES functions like repaint() don’t work.
Movies work, but they are kind of clunky. An image sequence (a directory of picture files numbered consecutively) can be played as a Movie object, but basically, the notebook just fetches each image and displays it, trying to achieve the given frame rate. Depending on the size of the picture objects and the speed of the network, it can work. To make an image sequence, you create a folder in your mediasources directory to store the image sequence, then generate the images there. Then you can play it back in the notebook. For my students, this was enough to do debugging, and then they used the frames in some other tool (like Adobe Premiere) to generate movie files from the image sequence.
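The consecutive-numbering convention is the only fiddly part of making an image sequence. Here is a sketch of generating the frame filenames; the saving step itself would use a jes4py call like writePictureTo, so it is only hinted at in a comment:

```python
# Zero-padded, consecutively numbered filenames keep the frames in
# order when the Movie object fetches them one by one.
def frame_names(prefix, count):
    return ["%s%03d.jpg" % (prefix, i) for i in range(1, count + 1)]

names = frame_names("frame", 3)
print(names)  # ['frame001.jpg', 'frame002.jpg', 'frame003.jpg']
# For each name, you would generate a picture and save it into the
# image-sequence folder, e.g. writePictureTo(pic, folder + name).
```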

I took a PDF of the JES help files for pictures and sounds and marked them up for changes in JES4Py Colab. They’re available here: Pictures and Sounds.
Disclaimer: The best way to release this would be to set up a GitHub repository, and let people grab it and share revisions. I’m not interested in championing another open source project. I plan to keep this running for our class at the University of Michigan, and I invite anyone else to use the code without warranty or guarantee. I welcome someone else to set up a project and put this in a repository, if they’re interested in being the library owner.
Dr. Bahare Naimipour defended her dissertation
I’m on sabbatical this semester, so I finally have time to catch up on some long overdue blog posts.

Bahare at her defense with her committee: From left, Barbara Ericson, Shanna Daly, James Holly Jr., Bahare, me, and Tammy Shreiner.
Dr. Bahare Naimipour successfully defended her Engineering Education Research dissertation in August 2024, Supporting Social Studies Data Literacy Education: Design of Technology Tools and Insights from Expert Teachers and Teacher Change Journeys.
I’ve posted about Bahare’s work over the years. She had a poster in 2019 about our first participatory design sessions aimed at understanding what social studies teachers wanted in data visualization tools (see post here). She has been working on the NSF grant that Tammy Shreiner and I received in 2020 to study how social studies teachers adopted data literacy (announcement of that grant here). Bahare had a paper at FIE 2020 (presented virtually, as that was during the pandemic) on how social studies teachers interacted with programming-based data visualization tools (post here). She compared programming and non-programming tools at SITE 2021 (post here). The tool that we created, DV4L, was the first of what we later called teaspoon languages — here is the post where we talked about a couple of teaspoon languages for social studies education.
Bahare’s dissertation is made up of three related studies. The abstract from her dissertation is below. Here’s my quickie summary of the three studies, framed for a computing education audience.
First, Bahare describes the long process of developing DV4L — across multiple participatory design sessions, both in-person in Tammy’s pre-service classes before the pandemic, and on-line with in-service teachers during the pandemic. She articulates the features of DV4L which are specific to social studies teachers and describes how they were developed in response to teacher needs. This chapter has been accepted as a paper in J-PEER.
Second, Bahare followed three teachers for two years as they (slowly) developed data literacy plans for their classrooms that used technology. This is such a rich story. Bahare frames it in terms of Guskey’s Model of Teacher Change. Guskey said that teachers don’t change because of professional development. They have to have some interest in change, or they wouldn’t be taking the professional learning opportunities seriously. They actually change when they try something in the classroom and the students’ response convinces the teacher that a new approach might work. Bahare watched that happen, but found that it was even more iterative than Guskey describes. Her teachers took multiple professional development sessions before they might even try something. She saw teachers try something…and get it wrong, and with some encouragement from Bahare, try again. This study really gives you a sense for what it’s going to take to achieve CS for All across the curriculum.
Finally, Bahare interviewed exemplary social studies teachers (selected by some pretty tough criteria) and asked them how they implemented data literacy in their classrooms. Bahare saw patterns across what the teachers were doing, and those data literacy design patterns are going to feed into future professional learning opportunities. The amazing thing for this audience is that almost none of them used any computational tools. They liked our tools when Bahare demonstrated them, and maybe some might adopt — but I doubt it. They are excellent teachers recognized for their skill, and they got there without computation. Why would they change now? Maybe if we showed them how much more they could do with computational tools. Maybe if we showed them how easy it could be. Those are possibilities for future studies.
All told, Bahare has written a remarkable dissertation. It’s about data literacy in social studies education, but more, it’s about the challenges that face us as we bring the power of computing beyond the STEM classroom.
Abstract
This dissertation aims to contribute to the K-12 engineering education literature in a social studies context. Data literacy (DL) is the ability to understand and interpret what data means by drawing conclusions from patterns, trends, and correlations in data visualizations (DVs). DL is part of K-12 U.S. social studies standards, making it relevant for engineering education researchers since it intersects both engineering and social studies. All K-12 students take social studies classes, yet most people are not data literate. Research suggests that social studies teachers have insufficient resources for teaching DL, so not all social studies teachers teach it. The goal of this dissertation is to shed light on the topic of K-12 DL in social studies by exploring three research questions:
- When designing engineering tools for non-STEM social studies teachers, what design considerations should be met?
- How do K-12 social studies teachers choose to explore data literacy in their pedagogy after participating in a data literacy professional learning opportunity (PLO)?
- How do expert social studies teachers use and explore DVs in their pedagogy, describe their data literacy pedagogical strategies, and explore/use technology tools to support their data literacy pedagogy?
To answer my first research question (Study 1), a participatory design (PD) approach was used to learn what social studies teachers (both pre-service and in-service) want in their classrooms by testing the usability of real tools with participants. Through three design phases, pre and in-service teacher groups informed the design and development of learning tools for social studies DL. Using a Social Construction of Technology lens, I describe the scaffolding embedded in the resulting tool DV4L by considering: 1) teachers’ perceptions of usefulness and usability in the DL tools they explored, and 2) how PD sessions with pre- and in-service teacher groups evolved over time beginning with their interactions with existing tools and leading to our current DV4L prototype tools.
I addressed my second research question through a longitudinal study (Study 2) that delved into how three K-12 social studies teachers explored DL during and after a PLO. Narrative methods were used to describe how three social studies teachers changed their DL practices. The journeys began with the teachers as they explored a DL-focused PLO, incorporated DL in their lesson plan(s), and included their reflections after implementing the lesson(s) in their classrooms. I used Guskey’s Model for Teacher Change as my analytical lens to understand each teacher’s DL journey.
My experiences in Study 1 and Study 2 made me wonder how expert teachers were meeting their DL learning goals. I used Shulman’s Pedagogical Content Knowledge framework to design Study 3 and address my third research question. I looked at how expert teachers explored DVs and described their DL pedagogical strategies and technology uses through a think aloud and semi-structured interview. Findings describe how five expert teachers made meaning of data and DVs through the practices and strategies they used or described using in their pedagogy.
This dissertation informs the design of curriculum, PLOs, and technology tools to support social studies teachers in reaching their DL learning goals. It has already informed the design of two socially constructed DL tools for K-12 social studies. Such tools give teachers pedagogical power in their graphing activities in ways that support their DL learning goals while also promoting engineering skills and thinking.
Do I Have a Say in This, or Has ChatGPT Already Decided for Me?: Guest Blog Post from Aadarsh Padiyath
Hi all! This is Aadarsh Padiyath, a PhD Candidate at the University of Michigan, advised by Barbara Ericson and Mark Guzdial. At ICER and in ACM XRDS this year, I presented research that challenges a pervasive narrative in our field.
In discussions about ChatGPT in computing education, I frequently see claims that “LLMs are here to stay,” from both dejected colleagues and ecstatic researchers… proclamations that we need to “embrace LLMs or face being left behind…” that “since students used ChatGPT they must’ve found it beneficial…” also “Since ChatGPT is free, students will inevitably use it…” and “Prompts First Finally: natural language programming was where we were always headed.”
I see these pieces as exemplifying technological determinism – the belief that technology inevitably shapes society in predictable ways. It’s the reductive “if X, then Y” thinking that assumes new technologies will have predetermined outcomes on our lives and institutions, with humans playing a more passive role – regardless of the countless human choices and social influences that exist.
Our research, presented at ICER 2024, tells a different story. We found ChatGPT adoption in computing education was not a one-time choice nor a foregone conclusion, but an ongoing active negotiation shaped by social factors and individual experiences.
Through surveys, interviews, and midterm performance data from a Python programming course, we found students’ actual use of ChatGPT to challenge determinist narratives:
- Social factors, not just ChatGPT’s capabilities, shape adoption: Students’ perceptions of ChatGPT’s role in their future careers and their beliefs about their peers’ usage (often overestimated) significantly influenced their decisions.
- ChatGPT adoption varies widely – there’s no one-size-fits-all approach: We found diverse approaches, from full embrace to strategic application to complete rejection on moral grounds, often tied to students’ personal learning goals and values.
- Approaches change over time – there’s a feedback loop between ChatGPT use and learning outcomes: Students often initially internalized prevailing determinist narratives, but after experiencing ChatGPT’s impact on their learning (sometimes through receiving helpful guidance and sometimes through a lower midterm performance than they expected) many changed their approach, demonstrating agency and intentionality in their learning process.
These findings challenge the idea that ChatGPT’s impact on CS education is predetermined; LLMs are not necessarily “here to stay.” Instead, they highlight the complex, dynamic relationship between students and this tool.
In light of this, I wrote a XRDS article speaking directly to students about intentionality in their use of ChatGPT. The article encourages students to make active decisions rather than passive ones when incorporating ChatGPT into their learning process. I argue that students should explicitly define their relationship with ChatGPT in their learning process, align their usage with their goals and values, and continually reflect on this relationship. By doing so, they’re less likely to find themselves in unexpected or undesired situations regarding their learning outcomes or skill development.
As educators and researchers, I see our role as helping students make these informed decisions about their relationships with tools like ChatGPT. We need to help students understand the social influences at play, correct misconceptions about ChatGPT’s capabilities and usage, and guide them in aligning their use of tools like ChatGPT with their personal learning goals and values.
By moving beyond deterministic narratives, we can develop more nuanced, student-centered approaches to addressing AI in computing education. For more details on this research and its implications, check out our ICER paper, “Insights from Social Shaping Theory: The Appropriation of Large Language Models in an Undergraduate Programming Course” and my XRDS article “Do I Have a Say in This, or Has ChatGPT Already Decided for Me?”
For the reasons liberal arts and sciences majors should learn to program, AI’s not really that big a deal
“It’s the end of Computer Programming as We Know It,” wrote Farhad Manjoo in the NYTimes last June.
Jensen Huang, CEO of Nvidia, said earlier this year that programming is no longer important:
“It is our job to create computing technology such that nobody has to program. And that the programming language is human,” Jensen Huang told the summit attendees. “Everybody in the world is now a programmer. This is the miracle of artificial intelligence.”
And just this last week, the NYTimes asked “Will A.I. Kill Meaningless Jobs? And is that so bad?” And their examples were mostly about…programming.
I read these pieces and saw a narrow definition of programming. They describe coding as a drudgery of carefully transforming requirements into stable, robust applications. But there’s another side of programming — the side that predates the drudgery. Programming is also a tool to think and learn with.
Back in 2009, I’d written a blog post, “There will always be friction” inspired by an essay by Michael Mateas. I remembered that the argument then was why natural language programming would never quite work (and the argument pretty much still stands). But then I re-read Michael’s essay and realized that he was saying something much deeper than that. He was describing the reasons to program the way that the LSA faculty who talked to our Computing Education Task Force explained how they thought about and used programming.
There are four reasons to learn programming that we heard liberal arts and sciences faculty asking for and which are reflected in Michael’s essay — reasons that ChatGPT doesn’t change:
- Computational scientists use code as a way to describe models, and they need the exactness and interpretability of code.
- Computational artists need to know code to use it as a medium, in the same way that artists get to know other media (e.g., watercolors, oil painting, sculpture) to develop their skill and their potential to express with the medium.
- Critical computing scholars have to understand code to reason about its effects.
- Software studies scholars study code the way that earlier humanities scholars study text manuscripts, and it’s hard to study something you don’t understand.
I decided to write a Medium essay about those reasons, grounded in Michael’s essay but updated with an AI-framing — see link here. I decided to try Medium for this because it’s a pretty long essay and many subscribers here get these posts via email. I also want to explore other platforms, to see if maybe Medium is better for commenters than WordPress.
There are other reasons that LSA faculty wanted their students to learn programming, but I left one key reason out of the essay: to become a “conversational programmer.” Some students take programming courses because they want to understand software development as an activity and to improve their communication with professional programmers. LSA faculty liked the idea of the “conversational programmer” for their students.
I left it out because that one likely will get changed by ChatGPT, Co-Pilot, and all the other LLMs. If AI changes the task of professional programming, and conversational programmers want to understand professional programming, then yeah, of course, AI is going to change things.
But AI doesn’t change programming for people who want to think with, express with, and reason about code.
Assessing Student Understanding of Computing: Self-Efficacy, Non-CS Majors, and ChatGPT
Assessment is a hot topic in computing education research right now.
I’m sharing below a workshop announcement from Nell O’Rourke and Melissa Chen. They want to help students make accurate self-assessments, because (as Nell’s group has found in the past, with one paper described here) students tend to have inflated views of what they should be able to do, and when they can’t achieve those lofty goals, there is a negative impact on their self-efficacy.
We just received notice that our panel for SIGCSE Virtual 2024 has been accepted, on the topic of “Assessments for Non-CS Major Computing Classes” from Jinyoung Hur, Parmit Chilana, Katie Cunningham, Dan Garcia and me. I’ll give away here my position on this panel: We get assessments for non-CS majors wrong because we think about them as CS majors. Calling a non-major’s introductory computing course “CS0” is making the assumption that it’s the starting point for a sequence that goes on to CS1 and so on. Mastery learning is a good idea, but only when the skills to be mastered are appropriate for the student. Asking non-CS majors to master the skills of a CS1 is holding them to the standards of the CS major. There is more than one kind of “knowing how to code.” There are conversational programmers, computational artists and scientists, and others in our CS1 classes who need to code or to understand the process of coding, but don’t need or want the skills of a professional software developer. Assessment for non-CS majors has to recognize alternative endpoints for computing education.
Side note: Everything that we say about non-CS majors computing education applies to K-12 computing education. We should not assume that K-12 students are being prepared for software development jobs. Not all K-12 students will be CS majors, and there are other uses for programming in other careers besides software development.
Finally, ChatGPT is showing up everywhere in computing education research these days. We computing teachers have typically assessed understanding of computing by evaluating proficiency with textual programming. Now ChatGPT can be as proficient at the textual languages as the average CS1 student. Assessing understanding becomes harder when we can’t use proficiency as a proxy — the LLMs can make students appear proficient without any actual understanding.
We have a lot to do in assessment as computing education expands and LLMs can perform more of the programming tasks.
——————————
Do you teach an undergraduate introductory computing or programming course and want to help your students make accurate judgements about their programming ability?
We are researchers from Northwestern University interested in co-designing curricular and policy interventions with instructors to help students more accurately assess their programming abilities and develop higher self-efficacy.
Sign up here to learn more about our research on student self-assessments and to collaboratively design interventions at our two-day workshop on August 7 and August 8, 12-3 PM Central Time! Registration will close 3 days prior to the first session. More information about this workshop is available on the workshop website.
To be eligible for this workshop, you must teach an undergraduate-level introductory course and be 18 years of age or older. This study has been approved by the Northwestern University IRB (STU00222017: “Designing interventions to support student motivation and self-efficacy”). The PI for this study is Professor Eleanor O’Rourke.
If you have any questions, please email melissac@u.northwestern.edu.
Best,
Dr. Eleanor O’Rourke
Melissa Chen
Northwestern University
Everyday Equitable Data Literacy is Best in Social Studies, far more than in STEM
We want students to be data literate. We want them to be able to understand, critique, and argue with data, both visualizations and tables. A new book edited by Colby Tofel-Grehl and Emmanuel Schanzer (Improving Equity in Data Science: Re-Imagining the Teaching and Learning of Data in K-16 Classrooms) focuses on developing equitable data literacy — making sure that all students develop these skills and can use them in culturally responsive contexts.
Tammy Shreiner and I have a chapter in the book where we are arguing with all the STEM-focused chapters in the book, “Everyday Equitable Data Literacy is Best in Social Studies: STEM Can’t Do What We Can Do.” If you want students to develop equitable data literacy skills that they can use in their everyday life, then those skills should be taught in social studies classes.
Here’s a sketch of the argument. We make four main points:
- Data literacy makes social studies more effective. All 50 states and the District of Columbia include data literacy as part of their social studies standards.
- Social studies teachers are explicitly taught to take diverse perspectives on data for the purposes of equity and social justice. That’s what the social studies are about. For STEM (especially in CS), it’s something new being brought to the table. If you want equitable data literacy, then have the social studies teachers do it because that’s their strength.
- A historical lens on data literacy is different than a STEM lens. I learned this from working with Tammy. Her first question about any data set is, “Where did it come from?” Data aren’t free. Someone paid a price (e.g., time and effort) to gather these data, and they had an agenda. What was that agenda, and how might that have biased the data? That’s not the first question that most data scientists ask about data.
- Most of us don’t read Science or Nature. Data visualizations that most of us see in our everyday lives are from opinion polls or about prices or taxes — they’re about social science topics (economics, government/civics, history, geography). If you want the knowledge to transfer, teach it in the context to which you want it to transfer. Many students avoid STEM classes. We talk in the paper about Tammy Clegg’s insightful work on how Division 1 athletes use data literacy to communicate with their coaches and trainers about their workouts and diet. Division 1 athletes mostly major in social studies and business, not STEM subjects. If you want to help develop the data literacy skills of these students who are actively using data, then put it in social studies.
Then we discuss what supports social studies teachers need in order to teach data literacy. Most data tools are designed for STEM, and as we’ve found through a participatory design process, social studies teachers have different needs. Just creating new tools isn’t enough. We have to support teachers in learning, adapting, and adopting new practices.
A submitted version of the manuscript is linked here, and the book’s website is here.
A Purpose-First Theory of Transfer of Knowledge from Programming
One of the persistent research questions in computing education research (CER) is, “If you learn something in programming, can you use that something somewhere else?” Learning scientists call this “knowledge transfer.” In the first few decades of CER, the question was whether problem-solving skills that you developed in programming could be used elsewhere. (Spoiler: only if they were taught for transfer — see that story in this blog post.) Then the question was “What about programming itself? If you learn one programming language, is it easier to learn the next one?” Spoiler: Yes, to some extent. Ethel Tshukudu did her dissertation on transfer between Java and Python, where it works and where it doesn’t (see her ICER 2020 paper). Felienne Hermans is designing Hedy explicitly to support transfer (see her ICER 2020 paper).
Tamara Nelson-Fromm and I had a paper in this year’s PLATEAU Workshop (link to workshop) that has now been posted in the repository (link to paper), “A purpose-first theory of transfer through programming in a general education computing course.” We propose a different perspective on how to think about knowledge transfer from programming. If you use programming to learn X, does X transfer to a new programming context? Here’s our theory: For most people, the most common kind of knowledge transfer from programming is transfer of their purpose for programming, far more than of the programming language itself. We call this “purpose-first” in reference to Katie Cunningham’s “purpose-first programming” dissertation work (see blog posts here and also here).
Our paper frankly describes an accident. Calling it an experiment is absolutely wrong, and it’s only a study because of post-hoc analysis. It comes out of our Computing for Creative Expression class in PCAS. Here’s the set-up.

I gave a lecture on digital representations of color, pixels, and image manipulations, like posterizing, negating an image, and generating grayscale. I always use peer instruction questions in my classes, and in this class, I built on the forward testing effect. Before I taught any programming, I gave the students example programs in Snap! and in the Pixel Equations teaspoon language (link to a blog post on Pixel Equations), and just asked them “What does this do?” The idea is that they wouldn’t do well, but they’d be thinking about these things going into Pixel Equations. Then, before going on to talk about building image filters in Snap!, we did another forward-testing quiz with Snap! examples (and some Pixel Equations, which was no longer forward testing).
Here’s the accident part. I always collect their names during peer instruction so that I can give them participation credit, but on these, I forgot. So, I had a big mess of anonymized data, from the “pre” programming stage, and “mid” between Pixel Equations and Snap! (but without explicit pre-mid links). Our human subjects review board gave Tamara and me permission to analyze those two piles of data.
We looked for a variety of kinds of transfer. Did they focus more on procedure, e.g. describe the moment-by-moment process? Did they describe the structure of the code? What really popped for us is that the mid-quiz had so much more image processing language. Suddenly, they’re talking about luminance and posterizing, when they weren’t on the pre-quiz. We suggest that they’re transferring their purpose for programming (image processing), without any sign of transferring structural or behavioral knowledge.
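The coding scheme itself isn’t reproduced in this post, but as a rough sketch of the kind of post-hoc analysis involved (the term list and the sample responses here are invented for illustration), tallying image-processing vocabulary across the two anonymized piles might look like:

```python
from collections import Counter

# Hypothetical vocabulary list; the paper's actual coding scheme differed.
IMAGE_TERMS = {"pixel", "luminance", "posterize", "grayscale", "negate", "filter"}

def domain_term_counts(responses):
    """Tally image-processing vocabulary across a pile of anonymized responses."""
    counts = Counter()
    for response in responses:
        for word in response.lower().split():
            word = word.strip(".,!?;:")
            if word in IMAGE_TERMS:
                counts[word] += 1
    return counts

pre = ["It loops over the numbers and changes them somehow"]
mid = ["It computes the luminance of each pixel to posterize the image"]

print(domain_term_counts(pre))  # little or no domain vocabulary
print(domain_term_counts(mid))  # luminance, pixel, and posterize all appear
```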
Here’s a metaphor to explain what’s happening. Imagine that you’re taking a music class here in the United States, where they’re teaching the class in English. Then, you study abroad, where you’re taking another music course, but now it’s in Spanish or German. The first language that you’re going to want to pick up in the study abroad course is for the music. You’ll want to figure out how to talk about “rhythm,” “notes,” and “time signatures.” You’re working at transferring your knowledge of music. Now, if you’re a student of language, maybe you’re also interested in how Spanish and English are similar and different, or maybe you’re noticing the syntax and semantics of English and German. Maybe you’re a student of education, and you’re interested in how English, Spanish, or German support (or detract) from the discussion of music (and there may very well be differences in how the languages interact with the learning of music). But those are the unusual cases. Mostly, you’re a music student and you want to talk and learn about music.
Most people don’t want to learn programming for its own sake. Even if programming is a great way to explore math, science, expression, and music, the focus is on the purpose for the programming. This is likely the most common case. I work with a biologist who does data manipulation and modeling in Python, and is now doing her statistics in R. She sometimes moves code between Python and R (with a lot of ChatGPT help), but she’s not really interested in learning about either Python or R. She transfers her scientific purpose. She knows what she’s learning in each. She’s doing science, and programming is just a tool for her purpose. Python is a good tool for her modeling, and R is a good tool for her statistics. Maybe she thinks about why each is good for each purpose — but from our discussions, she mostly doesn’t.
Most transfer of knowledge between programming experiences is not about syntax or semantics. It’s about the purpose for programming. That comes first.
PCAS Expansion, Growth, Research, and SIGCSE 2024 Presentations
The ACM SIGCSE Technical Symposium is March 20-23 in Portland (see website here). I rarely blog these days, but the SIGCSE TS is a reminder to update y’all with what’s going on in the College of Literature, Science, & the Arts (LSA) Program in Computing for the Arts and Sciences (PCAS). PCAS is my main activity these days. Here’s the link to the PCAS website, which Tyrone Stewart and Kelly Campbell have done a great job creating and maintaining. (Check out our Instagram posts on the front page!)
PCAS Expansion
I’ve blogged about our first two courses, COMPFOR (COMPuting FOR) 111 “Computing’s Impact on Justice: From Text to the Web” and COMPFOR 121 “Computing for Creative Expression.” Now, we’re up to eight courses (see all the courses described here). As I mentioned at the start of PCAS, we think about computing in LSA in three themes: Computing for Discovery, Expression, and Justice. Several of these courses are collaborations with other departments, like our Discovery classes with Physics, Biophysics, Ecology and Evolutionary Biology, and Linguistics.

This semester, I’m teaching two brand new courses. That means that I’m creating them just ahead of the students. I did this in Fall 2022 for our first two courses (see links to the course pages here with a description of our participatory design process), and I hope to never do this again. It’s quite a sprint to always be generating material, all semester long, for about a hundred students.
One course is like the Media Computation course I developed at Georgia Tech, but in Python 3: COMPFOR 221: Digital Media with Python. The course title has changed. When we first offered it, we called it “Python Programming for Digital Media,” and at the end of registration, we had only five students enrolled! We sent out some surveys and found that we’d mis-named it. Students read “Python Programming” and skipped the rest. The class filled once we changed our messaging to emphasize Digital Media first.

When we taught Media Computation at Georgia Tech, we used Jython and our purpose-built IDE, JES. Today, there’s jes4py that provides the JES media API in Python 3. I had no idea how hard it is to install libraries in Python 3 today! I’m grateful to Ben Shapiro at U-W who helped me figure out a bunch of fixes for different installation problems (see our multi-page installation guide).
The second is more ambitious. It’s a course on Generative AI, with a particular focus on how it differs from human intelligence. We call it Alien Anatomy: How ChatGPT Works. It’s a special-topics course this semester, but in the future, it’ll be a 200-level (second year undergraduate) course with no pre-requisites open to all LSA students, so we’re relying on teaspoon languages and Snap! with a little Python. I’m team-teaching with Steve Abney, a computational linguist. Steve actually understands LLMs, and I knew very little. He’s been a patient teacher and a great partner on this. I’ve had to learn a lot, and we’re relying heavily on the great Generative AI Snap! projects that Jens Mönig has been creating, SciSnap from Eckart Modrow, and Ken Kahn’s blocks that provide an API to TensorFlow.

As of January, we are approved to offer two minors: Computing for Expression and Computing for Scientific Discovery. We have about a half dozen students enrolled so far in the minors, which is pretty good for three months in.

PCAS Growth
When I offered the first two courses in Fall 2022, we had 11 students in Expression and 14 in Justice. Now, we’re up to 308 students enrolled. That’s probably our biggest challenge — managing growth and figuring out how to sustain it.

Research in PCAS
We’re starting to publish some of what we’re learning from PCAS. Last November, Gus Evrard and I published a paper at the Koli Calling International Conference on Computing Education Research about the process that we followed co-chairing the LSA Computing Education Task Force to figure out what LSA needed in computing education. That paper, Identifying the Computing Education Needs of Liberal Arts and Sciences Students, won a Best Discussion Paper Award. Here’s Gus and me at the conference banquet when we got our award.

Tamara Nelson-Fromm just presented a paper at the 2024 PLATEAU Workshop on evidence we have suggesting transfer of learning from teaspoon languages into our custom Snap! blocks. I’ll wait until those papers are released to tell you more about that.
SIGCSE 2024 Presentations
We’re pretty busy at SIGCSE 2024, and almost all of our presentations are connected to PCAS.
Thursday morning, I’m on a panel led by Kate Lehman on “Re-Making CS Departments for Generation CS” 10:45 – 12:00 at Oregon Ballroom 203. This is going to be a hardball panel. Yes, we’ll talk about radical change, but be warned that Aman and I are on the far end of the spectrum. Aman is going to talk about burning down the current CS departments to start over. I’m going to talk about giving up on traditional CS departments ever addressing the needs of Generation CS (because they’re too busy doing something else) and that we need more new programs like PCAS. I’m looking forward to hearing all the panelists — it’ll be a fun session.

Thursday just after lunch, Neil Brown and I are presenting our paper, Confidence vs Insight: Big and Rich Data in Computing Education Research 13:45 – 14:10 at Meeting Room D135. It’s an unusual computing education research paper because we’re making an argument, not offering an empirical study. We’re both annoyed at SIGCSE reviewers who ask for contextual information (Who were these students? What programming assignments were they working on? What was their school like?) from big (millions of points) data, and then complain about small sample sizes from rich data with interviews, personal connections, and contextual information. In the paper, we make an argument about what are reasonable questions to ask about each kind of data. In the presentation, the gloves come off, and we show actual reviews. (There are also costumes.)
We don’t really get into why SIGCSE reviewers evaluate papers with criteria that don’t match the data, but I have a hypothesis. SIGCSE reviewers are almost all CS teachers, and they read a paper asking, “Does this impact how I teach? Does it tell me what I need to do in my class? Does it convince me to change?” Those questions are too short-sighted. We need papers that answer those questions to help us with our current problems, but we also need to have knowledge for the next set of problems (like when we start teaching entirely new groups of students). The right question for evaluating a computing education research paper is, “Does this tell the computing education research community (not you the reviewer, personally, based on your experience) something we didn’t know that’s worth knowing, maybe in the future?”

At the NSF Project Showcase Thursday 15:45 – 17:00 at Meeting Rooms E143-144, Tamara Nelson-Fromm is going to show where we are on our Discrete Mathematics project. She’ll demonstrate and share links to our ebooks for solving counting problems with Python and with one of our teaspoon languages, Counting Sheets.

In the “second flock” of Birds of a Feather sessions Thursday 18:30 – 19:20 at Meeting Room D136, we’re going to be a part of Zach Dodds’s group on “Computing as a University Graduation Requirement”. There’s a real movement towards building out computing courses for everyone, not just CS majors, as we’re doing in PCAS. Zach is pushing further, for a general education requirement. I’m excited for the session to hear what everyone is doing.

On Saturday afternoon 15:30 – 18:30 at Meeting Room B113, I’ll offer a three-hour workshop on how we teach in our PCAS courses for arts and humanities students, with teaspoon languages, custom Snap! blocks, and ebooks. Brian Miller has been teaching these courses this year, and he’s kindly letting me share the materials he’s been developing — he’s made some great improvements over what I did. This workshop was inspired by a comment from Joshua Paley in response to our initial posts about how we’re teaching, where he asked if I’d do a SIGCSE workshop on how we’re teaching PCAS. I will, on Saturday!

What Humanities Scholars Want Students To Know About the Internet: Alternative Paths for Alternative Endpoints
After we got the go-ahead to start developing PCAS (see an update on PCAS here), I had meetings with a wide range of liberal arts and sciences faculty. I’d ask faculty how they used computing in their work and what they wanted their students to know about computing. Some faculty had suggested that I talk to history professor LaKisha Michelle Simmons. I met with her in January 2022, and she changed how I thought about what we were doing in PCAS.
I told her that I’d heard that she built websites to explain history research to the general public, and she stopped me. “No, no — my students build websites. I don’t build websites.” I asked her what she would like her students to know about the Internet. “I could teach them about how the Internet works with packets and IP addresses. I could explain about servers and domain names.”
She said no. She was less interested in how the Internet worked. She had three specific things she wanted me to teach students.
- She wanted students to know that there are things called databases.
- That databases, if they are designed well, are easy to index and to find information in.
- Databases could be used to automatically generate Web pages.
Her list explains a huge part of the Web, but was completely orthogonal to what I was thinking about teaching. She wasn’t asking me to teach tools. She wanted me to teach fundamental concepts. She wanted students to have an understanding of a set of technologies and ideas, and the students really didn’t need IP addresses and packets to understand them.
The important insight for me was that the computing that she was asking for was a reasonable set, but different from what we normally teach. These are advanced CS ideas in most undergraduate programs, typically coming after a lot of data structures and algorithms. From her perspective, these were fundamental ideas. She didn’t see the need for the stuff we normally teach first.
I put learning objectives related to her points on the whiteboards in my participatory design sessions. This showed up in the upper-right hand corner of the Justice class whiteboard — the most important learning objective. LaKisha gave me the learning objectives, and the humanities scholars who advised me supported what she said. This became a top priority for our class Computing’s Impact on Justice: From Text to the Web.

Figuring out how
During the summer of 2022, Tamara Nelson-Fromm (a PhD student working with me), a group of undergraduate assistants, and I figured out how to achieve these goals: how to have students work toward LaKisha’s three learning objectives without complicated tools. We were committed to having students program and construct things — we didn’t want this to be a lecture and concepts-only class.
We were already planning on using Snap, and it has built-in support for working with CSV files. Undergraduate Fuchun Wang created a great set of blocks explicitly designed to look like SQL for manipulating CSV files. We used these blocks to talk about queries and database design in the class.
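Fuchun’s blocks live in Snap!, but the underlying idea translates directly to text. Here is a minimal Python sketch (with invented data and a simplified `select` helper, not the actual blocks) of a SQL-style query over CSV rows:

```python
import csv
import io

# A tiny CSV "table" standing in for the files students load in class.
DATA = """name,company,net_worth_billions
Bill Gates,Microsoft,105
Steve Ballmer,Microsoft,80
Jeff Bezos,Amazon,114
"""

def select(rows, columns, where=lambda row: True):
    """SQL-flavored query: SELECT columns FROM rows WHERE the predicate holds."""
    return [{col: row[col] for col in columns} for row in rows if where(row)]

rows = list(csv.DictReader(io.StringIO(DATA)))
microsoft = select(rows, ["name"], where=lambda r: r["company"] == "Microsoft")
print(microsoft)  # [{'name': 'Bill Gates'}, {'name': 'Steve Ballmer'}]
```

Talking about which columns to keep and which rows match is enough to get at query design, with no database server in sight.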

Tamara and I talked a lot about how to make the HTML part work. I had promised our advisors that we would not require LSA students to install anything on their computers in the intro courses. We talked about the possibility of building a teaspoon language for Web page development and for use as templating tool for databases, but I was worried that we were already throwing so many languages at the students.
Then it occurred to us that we could do this all with Snap. We built a set of blocks to represent the structure of an HTML page, like in this example. Since we could define our own control structures in Snap, we could present the pieces of a Web page nested inside other blocks, to mirror the nested structure of the tags.

Those last two blocks were key. The view webpage block displays in the stage the first 50 lines of the input HTML. That’s important so that students see the mapping from blocks to HTML. The open HTML page block opens a browser window and renders the HTML into it. (That was a tricky hack to get working.)
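The blocks themselves are in Snap!, but the idea can be sketched in Python, where nested calls mirror nested tags; the two helpers below are hypothetical stand-ins for the view webpage and open HTML page blocks:

```python
import tempfile
import webbrowser

def tag(name, *children):
    """Nest children inside <name>...</name>, mirroring blocks inside blocks."""
    inner = "\n".join(children)
    return f"<{name}>\n{inner}\n</{name}>"

page = tag("html",
           tag("head", tag("title", "My Page")),
           tag("body",
               tag("h1", "Hello"),
               tag("p", "Built from nested pieces.")))

def view_webpage(html, lines=50):
    """Like the view webpage block: show the first lines of the HTML text."""
    print("\n".join(html.splitlines()[:lines]))

def open_html_page(html):
    """Like the open HTML page block: render the HTML in a browser window."""
    with tempfile.NamedTemporaryFile("w", suffix=".html", delete=False) as f:
        f.write(html)
    webbrowser.open("file://" + f.name)

view_webpage(page)
```

Seeing the same nesting in both the builder calls and the emitted HTML is the point: hierarchy is visible in two notations at once.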
This was enough for us to talk about building Web pages in Snap, viewing the HTML, then rendering the HTML in the browser. Here’s a slide from the class. In deciding what computer science ideas to emphasize, I used the work of Tom Park who studied student errors in HTML and CSS, and found that ideas of hierarchy and encapsulation were a major source of error. Those are important ideas across computing, so I used those as themes across the CS instruction — and the structure we could build in the Snap block helped to present those ideas.

All of that together is enough to build Web pages from database queries. Here’s an example — querying the billionaires database from Forbes for those from Microsoft, then creating a Web page form letter asking them for money.
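Sketched in Python with invented stand-in data (the real class uses the Snap! blocks above), the pattern is: query for the matching rows, then fill a page template for each one:

```python
import csv
import io

# Invented stand-in for the Forbes billionaires data used in class.
DATA = """name,company
Bill Gates,Microsoft
Steve Ballmer,Microsoft
Jeff Bezos,Amazon
"""

LETTER = """<p>Dear {name},</p>
<p>As a {company} billionaire, would you consider a small donation?</p>"""

rows = csv.DictReader(io.StringIO(DATA))
letters = [LETTER.format(**row) for row in rows
           if row["company"] == "Microsoft"]  # the query
print(letters[0])
```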

We use these blocks in both of our classes:
- In the Justice class, students use the HTML blocks to create a resume for a fictional or historical character in a homework assignment. In a bigger project, students design their own database of anything they want, then create two queries. One should return 1-3 items and generate a detail page for each of those items. The second query should return several items and generate an overview page for that set of items.
- In the Expression class, building an HTML page is the last homework. They use style rules and have to embed a Snap project so that there’s interactivity in the page. Here’s a slide from the class where we’re showing how adding style rules changes the look-and-feel of an HTML page.

Alternative Paths to Alternative Endpoints
Mike Tissenbaum, David Weintrop, Nathan Holbert, and Tammy Clegg have a paper that I really like called “The case for alternative endpoints in computing education” (BJET link, UIUC repository link). They argue “for why we need more and diverse endpoints to computing education. That many possible endpoints for computing education can be more inclusive, just and equitable than software engineering.” I strongly agree with them, but I learned from this process that there are also alternative paths.
Computer science sequences don’t usually start with databases, HTML, and building web pages from database queries, but that’s what my humanities scholars advisors wanted. Computer science usually starts from algorithms, data structures, and writing robust and secure code, which our scholars did not want. Our PCAS courses are certainly about alternative endpoints — we’re not preparing students to be professional software developers. We’re also showing that we can start from a different place, and introduce “advanced” ideas even in the first class. Computing education isn’t a sequence — it’s a network.
A Scaffolded Approach into Programming for Arts and Humanities Majors: ITiCSE 2023 Tips and Techniques Papers
I am presenting two “Tips and Techniques” papers at the ITiCSE 2023 conference in Turku, Finland on Tuesday July 11th. The papers are presenting the same scaffolded sequence of programming languages and activities, just in two different contexts. The complete slide deck in Powerpoint is here. (There’s a lot more in there than just the two talks, so it’s over 100 Mb.)
When I met with my advisors on our new PCAS courses (see previous blog post), one of the overarching messages was “Don’t scare them off!” Faculty told me that some of my arts and humanities students will be put off by mathematics and may have had negative experiences with (or perceptions of) programming. I was warned to start gently. I developed this pattern as a way of easing into programming, while showing the connections throughout.
The pattern is:
- Introduce computer representations, algorithms, and terms using a teaspoon language. We spend less than 10 minutes introducing the language, and 30-40 minutes total of class time (including student in-class activities). It’s about getting started at low-cost (in time and effort).
- Move to Snap! with custom blocks explicitly designed to be similar to the teaspoon language. We design the blocks to promote transfer, so that the language is similar (surface level terms) and the notional machine is similar. Students do homework assignments in Snap!.
- At the end of the unit, students use a Runestone ebook, with a chapter for each unit. The ebook chapter has (1) a Snap! program seen in class, (2) a Python or Processing program which does the same thing, and (3) multiple choice questions about the textual program. These questions were inspired by discussions with Ethel Tshukudu and Felienne Hermans last summer at Dagstuhl where they gave me advice on how to promote transfer — I’m grateful for their expertise.
I always teach with peer instruction now (because of the many arguments for it), so steps 1 and 2 have lots of questions and activities for students throughout. These are in the talk slides.
Digital Image Filters
The first paper is “Scaffolding to Support Liberal Arts Students Learning to Program on Photographs” (submitted version of paper here). We use this unit in this course: COMPFOR 121: Computing for Creative Expression.
Step 1: The teaspoon language is Pixel Equations which I blogged about here. You can run it here.
Students choose an image to manipulate as input, then specify their image filter by (a) writing a logical expression describing the pixels that they want to manipulate and (b) writing equations for how to compute the red, green, and blue channels for those pixels. Values for each channel are 0 to 255, and we talk about single byte values per channel. The equation for specifying the channel change can also reference the previous values of the channels, using the variables red, green, blue, rojo, verde, or azul.
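A Python sketch of those semantics (this is an assumed reading of Pixel Equations, not its actual implementation) treats a filter as a condition plus one equation per channel, applied to (red, green, blue) triples:

```python
def apply_filter(pixels, condition, equations):
    """For each (red, green, blue) pixel satisfying the condition, recompute
    each channel with the corresponding equation, clamped to 0-255."""
    out = []
    for red, green, blue in pixels:
        if condition(red, green, blue):
            new = [max(0, min(255, int(eq(red, green, blue))))
                   for eq in equations]
            red, green, blue = new
        out.append((red, green, blue))
    return out

# Negation: for every pixel, each channel becomes 255 minus its old value.
negate = [lambda red, green, blue: 255 - red,
          lambda red, green, blue: 255 - green,
          lambda red, green, blue: 255 - blue]

print(apply_filter([(10, 200, 255)], lambda red, green, blue: True, negate))
# [(245, 55, 0)]
```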

Step 2: The latest version of the pixel microworld for Snap is available here. Click See Code to see all the examples — I leave lots of worked examples in the projects, as a starting point for homework and other projects.
Here’s what negation looks like:

Here’s an example of replacing a green background with the Alice character so that Alice is standing in front of a waterfall.


The homework assignment here involves creating their own image filters, then generating a collage of their own images (photographed or drawn) in their original form and filtered.
Step 3: The Runestone ebook chapter on pixels is here.

Questions after the Python code include “Why do we have the for loop above?” And “What would happen if we changed all the 255 values to 120? (Yes, it’s totally fair to actually try it.)”
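The ebook’s actual Python isn’t reproduced in this post, but a sketch in its spirit (using plain (red, green, blue) tuples rather than the jes4py picture API) shows what both questions are probing:

```python
def negate(pixels, top=255):
    """Negate each channel. The for loop is what visits every pixel;
    without it, at most one pixel would change."""
    result = []
    for (red, green, blue) in pixels:
        result.append((top - red, top - green, top - blue))
    return result

pixels = [(10, 20, 30), (100, 110, 120)]
print(negate(pixels))           # [(245, 235, 225), (155, 145, 135)]
print(negate(pixels, top=120))  # [(110, 100, 90), (20, 10, 0)]
```

With top=120 the result is a darker “negative,” and any channel value above 120 would go negative, which is exactly the kind of surprise worth trying for yourself.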
Recognizing and Generating Human Language
The second paper is “Scaffolding to Support Humanities Students Programming in a Human Language Context” (submitted version here). I originally developed this unit for this course COMPFOR 111: Computing’s Impact on Justice: From Text to the Web because we use chatbots early on in the course. But then, I added chatbots as an expressive medium to the Expression course, and we use parts of this unit in that course, too.
Step 1: I created a little teaspoon language for sentence recognition and generation — the first time I’ve created a teaspoon language with myself as the teacher, because I needed one for my own course context. The language is available here (you switch between recognition and generation from a link in the upper left corner).
The program here is a sentence model. It can use five word categories: noun, verb, adverb, adjective, and article. Above the sentence model is the dictionary, or lexicon. Sentence generation creates 10 random sentences from the model. Sentence recognition takes an input sentence, then tries to match the elements in the model to the input sentence. I explain the recognition behavior like this:

This is very simple, but it’s enough to create opportunities to debug and question how things work.
- I give students sentences and models to try. Why is “The lazy dog runs to the student quickly” recognized as “noun verb noun” but “The lazy dog runs to the house quickly” not recognized? Because “house” isn’t in the original lexicon. As students add words to the lexicon, we can talk about program behavior being driven by both algorithm and data (which sets us up for talking about the importance of training data when creating ML systems later).
- I give them sentences in different English dialects and ask them to explore how to make models and lexicons that can match all the different forms.
- For generation, I ask them: Which leads to better generated sentences? Smaller models (“noun verb”) or larger models (“article adjective noun verb adverb”)? Does adding more words to the lexicon help, or does tuning the words that are in the lexicon?
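To make those debugging questions concrete, here's a rough Python sketch of the model-plus-lexicon idea (my names and my reading of the matching rule, not the tool's actual implementation):

```python
import random

# A sketch of the sentence-model idea (my naming, not the tool's).
# A model is a list of word categories; the lexicon maps each
# category to its known words.
lexicon = {
    "article": ["the", "a"],
    "adjective": ["lazy", "curious"],
    "noun": ["dog", "student"],
    "verb": ["runs", "sleeps"],
    "adverb": ["quickly"],
}

def generate(model, n=10):
    """Generate n random sentences by filling each category with a word."""
    return [" ".join(random.choice(lexicon[cat]) for cat in model)
            for _ in range(n)]

def recognize(model, sentence):
    """Match model elements to the sentence in order, skipping words that
    don't fit the category being looked for (one plausible reading of the
    matching rule described above)."""
    words = sentence.lower().split()
    i = 0
    for cat in model:
        while i < len(words) and words[i] not in lexicon[cat]:
            i += 1
        if i == len(words):
            return False  # ran out of words before matching the whole model
        i += 1
    return True

print(recognize(["noun", "verb", "noun"],
                "The lazy dog runs to the student quickly"))  # True
print(recognize(["noun", "verb", "noun"],
                "The lazy dog runs to the house quickly"))    # False
```

Note that adding “house” to the lexicon’s nouns flips the second answer — the same algorithm-versus-data point the bullets above make.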
Step 2: Tamara Nelson-Fromm built the first set of blocks for language recognition and generation, and I’ve added to them since. These include blocks for language recognition.


And language generation.


The examples in this section are fun. We create politically biased bots that tweet something negative about one party, listen for responses about their own party, then retort with something positive about their own party.

We create scripts that generate Dr. Seuss-like rhymes.

The homework in this section is to generate haiku.
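One way students could approach the haiku homework (a sketch; the syllable-tagged lexicon here is my own invention, not the course materials'): tag each word with its syllable count, then fill each line's 5-7-5 syllable budget with random words.

```python
import random

# A toy lexicon where each key is a syllable count and each value is a
# list of words with that many syllables (counts and words are my own).
lexicon = {
    1: ["dog", "moon", "frog", "pond", "leaps"],
    2: ["quiet", "ripple", "autumn"],
    3: ["wandering", "suddenly"],
}

def line(syllables):
    """Pick random words until exactly `syllables` syllables are used."""
    words, left = [], syllables
    while left > 0:
        # Only consider word lengths that still fit in the budget.
        count = random.choice([c for c in lexicon if c <= left])
        words.append(random.choice(lexicon[count]))
        left -= count
    return " ".join(words)

def haiku():
    """Three lines with 5, 7, and 5 syllables."""
    return "\n".join(line(n) for n in (5, 7, 5))

print(haiku())
```

The poems are nonsense, of course, which itself is a nice prompt for discussing what generation-from-a-model can and can't capture.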
Step 3: The Runestone ebook chapter for this unit is here. The chapter starts with the sentence generator, then Snap! blocks that do the same thing, and then two different Python programs that do the same thing. We ask questions like “Which of the following is a sentence that could NOT be produced from the code above?” and “Let’s say that you want to make it possible to generate ‘A curious boat floats.’ Which of the lines below do you NOT have to change?”

Where might this pattern be useful?
We don’t use this whole three-step pattern for every unit in these classes. We do something similar for chatbots, but that’s really it. Teaspoon languages in these classes are about getting started, about getting past the “I like computers. I hate coding” stage (as described by Paulina Haduong in a paper I cite often). We use the latter two steps in the pattern more often — each class has an ebook with four or five chapters. The Snap-to-Python steps are about increasing the authenticity of the block-based programming and developing students’ confidence that they can transfer their knowledge.
I developed this pattern to give non-STEM (arts and humanities) students a gradual, scaffolded approach to programming, but it could be useful in other contexts:
- We originally developed teaspoon languages for integrating computing into other subjects. The first two steps in this process might be useful in non-CS classes to create a path into Snap programming.
- The latter two steps might be useful to promote transfer from block-based into textual programming.
Participatory Design to Set Standards for PCAS Courses
My main activity for the last year has been building two new courses for our new Program in Computing for the Arts and Sciences (PCAS), which I’ve blogged about recently here (with video of a talk about PCAS) and here where I described our launch. Here are the detailed pages describing the courses (e.g., including assignments and examples of students’ work):
- COMPFOR 121: Computing for Creative Expression
- COMPFOR 111: Computing’s Impact on Justice: From Text to the Web
When we got the go-ahead to start developing PCAS last year, the first question was, “Well, what should we teach?” The ACM/IEEE Computing Curriculum volumes weren’t going to be much help. They’re answering the question “What do CS, Software Engineering, Information Technology, etc. majors need to know?” They’re not answering the question, “What do students in liberal arts and sciences need to know about Computing for Discovery, for Expression, and for Justice?”
My starting place was the Computing Education Task Force (CETF) report (see link here) which summarized dozens of hours of interviews and survey results from over 100 faculty. We decided that the first two courses would be on Expression and Justice. There already were classes that introduced programming in a Discovery framing in some places on campus (and my colleague, Gus Evrard, has taken that even farther now — but that’s another blog post). There was nothing for first year students to introduce them to coding in an Expression or Justice context.
When faced with a design problem, I often think “WWBD” — “What Would Betsy Do.” I learned about participatory design working with Betsy DiSalvo at Georgia Tech, and now I reach for those methods often. I created participatory design activities so that Expression and Justice faculty in our College of Literature, Science, and the Arts (LSA) could set the standards for these courses.
I created three Padlets, shared digital whiteboards. A group of people edit a whiteboard, and everyone can see everyone else’s edits.
- One of them was filled with about 20 learning goals derived from the CETF report. These aren’t well-formed learning goals (e.g., not always framed in terms of what students should be able to do). These were what people said when we asked them “What should students in LSA learn about computing?” I wasn’t particularly thorough about this — I just grabbed a bunch that interested me when I reviewed the document and thought about what I might teach.
- I created two more Padlets with possible learning activities for students in these classes. Yvette Granata had recommended several books to me on coding in Expression and Justice contexts, so a lot of the project ideas came out of those. These were things that I was actively considering for the courses.
I ran two big sessions (with some 1:1 discussions afterwards with advisors who couldn’t make the big sessions), one for Expression and one for Justice. These were on-line (via Zoom) with me, Aadarsh Padiyath (PhD student working with me and Barbara Ericson), and a set of advisors. The advisors were faculty who self-identified as working in Computing for Expression or Computing for Justice. The design sessions had the same format.
- I gave the advisors a copy of the learning goals Padlet. (Each session started with the same starting position.) I asked them as a group to move to the right those learning goals they wanted in the class and to move to the left those learning goals that they thought were less important. They did this activity over about 20-30 minutes, talking through their rationale and negotiating placement left-to-right.
- I then gave the advisors a copy of the learning activities Padlet. Again, I asked them to sort: right is more important, left is less important. Again, about 20-30 minutes with lots of discussion.
We got transcripts from the discussion, and Aadarsh produced a terrific set of notes from each session. These were my standards for these courses. This guided me in deciding what goes in and what to de-emphasize in the courses.
Below are the end states of the shared whiteboards. There’s a lot in here. Three things I find interesting:
- Notice where the computer science goals like “Write secure, safe, and robust code” end up.
- Notice what’s in the upper-right corner — I was surprised in both cases.
- Notice that building chatbots is right-shifted for both Expression and Justice. Today, you’d say “Well, of course! ChatGPT!” But I held these sessions in February of 2022. The classes have a lot about chatbots in them, and that put us in a good place for integrating discussions about LLMs this last semester.
Expression Learning Goals
(To see the full-res version, right-click/control-click on the picture, and open the image in a new tab.)

Justice Learning Goals

Expression – Learning Activities

Justice – Learning Activities

These are Standards
The best description of how I used these whiteboards and the discussion notes is that these are my standards. My advisors said very clearly during the sessions that there are too many learning objectives and activities for one course here. Things on the left are not unimportant. They’re just not as important. I can’t possibly get to everything on these whiteboards in a single semester class designed for arts and humanities students (as the primary audience) with no programming background and with some hesitancy about mathematics.
My advisors were designing in a vacuum. They weren’t going to actually teach this course. Most of them had never seen a course that tried to achieve these objectives for this student audience. So they told me (for Justice), “Yeah, use Jupyter notebooks, and teach HTML and databases and code, all in one semester. And don’t make students install anything on their computers — do it all in the browser.” But they didn’t really have an idea how this might work, or if it was possible. They also didn’t articulate, “You’ll probably need to teach about data and iteration and conditionals in here, too.”
It was my job to use these standards as priorities, cover what I could, and fill in the computer science knowledge needed to make these do-able. We are also using these standards to inform the next classes we make for PCAS. You can compare these whiteboards to the course pages linked at the top of this post to decide how well we did.
Overall, we use participatory design methods a lot as we design for PCAS, to get the input of faculty outside of CS, because these aren’t computer science classes. They are not CS0, CS1, or CS0.5, all of which presume a linear progression towards the goal of being a CS major. Yes, we’re teaching computer science knowledge and skills, but these are classes in Computing for Expression and Computing for Justice. The faculty in those areas are the authorities in what we should teach. They decide what’s important.
Side note: I’ve had these data for over a year, and even presented some of them in a poster at ITiCSE last year. I have been trying to figure out how to share them. Maybe this could have been a peer-reviewed publication (conference or journal)? I don’t know. It’s a design activity, and I learned a lot from it, but I don’t know how to write about it as scholarship. I finally decided to write this blog post so that I could share the whole big Padlet whiteboards. Traditional publication venues would be unlikely to let me put these big pictures out there, but I can in a blog post.
My many thanks to my advisors for these classes: Yvette Granata, Catherine Griffiths, M. Remi Yergeau, Tony Bushner, Justin Schell, Jan Van den Bulck, Justin Joque, Sarita Schoenebeck, Nick Henricksen, Maggie Frye, Anne Cong-Huyen, and Matt Bui.
Putting a Teaspoon of Programming into Other Subjects (May 2023 Communications of the ACM): About Teaspoon Languages
In May, my students and I published a paper in Communications of the ACM, “Putting a Teaspoon of Programming into Other Subjects” (see link here) about our work with teaspoon languages. (The submitted, non-paywalled version is here.) It’s a short Viewpoint, but we were able to squeeze into our 1800-word limit a description of a couple of teaspoon languages, a definition of them, a description of our participatory design process for them, and some of the research questions we’re exploring with them, like what drives teacher adoption of teaspoon languages, how multilingual keywords can engage emerging bilingual students, and what challenges arise with even our simplified notions of programming.
My students helped me to be consistent with our language in this piece, which was so helpful. I’ve been talking about teaspoon languages for a while, and my language has likely changed over that time. They’re challenging me to be more exact about what I mean.
For example, we use the phrase “teaspoon languages” and not “teaspoon programming languages.” The term “teaspoon” comes from the shorthand “TSP” for “Task-Specific Programming.” So, the “programming” bit is already in there. But in particular, I don’t want to generate the reaction, “But, hey, that doesn’t look like a real programming language…”
Programming languages are used to create software — preferably, software that is reliable, robust, safe, and secure. The programming languages research community works to make programming more effective for people who are using those languages to create software. Programming as an activity can also be used to solve problems and explore domains. We’re building languages for that latter purpose. Much of the programming that scientists and others do to solve problems and explore domains happens to be in programming languages that can also be used to create software (e.g., Python, R, Mathematica, MATLAB). Teaspoon languages (so far) can’t really be used to create software for someone else to execute. They’re not general. I don’t think any of the teaspoon languages that we have created are Turing complete. But teaspoon languages are used to define the behavior of a computational agent. It’s still programming.
Another question we hear quite a bit is “Isn’t this just a domain-specific language?” We tried to answer that in the piece. Yes, teaspoon languages are a kind of domain-specific language, but for a very small domain — a single task. The most critical part of teaspoon languages is “They can be used by students for a task that is useful to a teacher.” DSLs are so much bigger than teaspoon languages. Maybe we can use DSL tools one day to make teaspoon languages, but so far, we’ve built unique user interfaces and unique languages for each one. The focus is on meeting the need now, and we’ll see if we ever get to generalizability and tools later.
The issues we study in our research with teaspoon languages don’t have much overlap with the programming languages research community. I don’t have good answers to questions like, “How do you support type safety?” or “Why can’t I define a lambda in Pixel Equations?” So, we’ll just call them “teaspoon languages” — and let the “programming” word be silent in there.