Posts tagged ‘computing for everyone’
England: Time to replace Computer Science with Computing
This is policy wonk stuff, but I find policy fascinating. As a researcher, it’s hard to figure out “How are most people (students, faculty, whatever) in this field thinking about X?” Policy-makers have to figure that out, too, and then have to respond. A change in policy is like a research paper that says, “We found that the status quo wasn’t working anymore.”
The English government has just conducted an independent review of all their school curricula (see report here). The review is critical of how Computer Science is working in English schools today. They say that Computing now pervades all disciplines and “digital literacy” should be taught in an integrated manner. I recommend reading the report — it’s accessible and covers a bunch of important issues, like who is taking CS and where there’s a split between policy and practice.
One of the explicit recommendations is that the government:
Replaces GCSE Computer Science with a Computing GCSE which reflects the full breadth of the Computing curriculum and supports students to develop the digital skills they need.
The government response (linked here) agrees:
We agree with the Review that the computing curriculum should be the main vehicle for teaching about digital literacy, and we are confident that delivering the computing recommendations will provide more pupils with valuable digital skills that are essential for the future.
It is also clear that, in some subjects, digital methods now influence the content and how it is taught. We will work with experts to assess the validity of digital practice in these subjects, the evidence of whether this can be done robustly and whether it merits inclusion in the new curriculum. Where it does, we will include a requirement for the relevant digital content in those subjects’ programmes of study and we will ensure that it aligns with the computing curriculum, to reduce the risk of duplication.
We will also replace the computer science GCSE with a broader offer that reflects the entirety of the computing curriculum whilst continuing to uphold the core principles of computer science such as programming and algorithms, and explore the development of a level 3 qualification in data science and AI.
Bottom line: CS just isn’t the thing anymore. Computing, and computing across the curriculum, is what is needed.
As a director of a Program in Computing for the Arts and Sciences, and someone who spent 25 years in a College of Computing, I wholly endorse this change and welcome it. As I described in a blog post from a couple of years back, “computer science” was originally invented to be a broad subject to be taught to everyone. Over the last 60 years, “computer science” has become more narrow (e.g., overly emphasizing algorithms while de-emphasizing building, creativity, and social impacts, as Sue Sentance describes in this blog post), while “computing” represents a broader perspective. When we think about what should be taught to everyone in secondary school, Computing (and digital literacy, as the reports suggest) is more appropriate than what we now mean when we say Computer Science.
Defining Learner-Centered Design of Computing Education: What I did on my sabbatical
My planned activity during my sabbatical was to revise my 2015 book “Learner-Centered Design of Computing Education.” One of the fixes I wanted to make was a better definition of what “learner-centered design” was. In the new edition, I wrote some formal defining stuff, and then I wrote the below — an extended metaphor to make distinctions between different kinds of “centering” in education. I’m sharing that section here (in its pre-reviewed and pre-edited state). It comes right after defining what the Zone of Proximal Development is and what student performance means.
There are many different kinds of teaching activity that can help a student reach a more sophisticated level of performance. A teacher can model successful performance. The teacher can give feedback on the student’s performance. The teacher can coach or guide a student while the student attempts a task. They can set expectations in the class which create a social context for success. They can use teaching methods that have a proven research record in promoting engagement and student performance.

Figure 1: A metaphor for teaching contrasting learner-centered and standards-centered
Consider teaching from the top or bottom of the ZPD. Here is a metaphor that distinguishes between two kinds of support in order to create a geography of teaching. Imagine the ZPD as a climbing wall (Figure 1). The student is at the bottom and wants to reach the top. The figure depicts, in grayscale, two ways a teacher might support the student in scaling this wall:
- The supporter at the bottom can help the student get started, giving them a “boost” or “leg up.”
- The supporter at the top can reach down, and get them the rest of the way to the top of the wall.
The supporter at the bottom is more flexible than the one at the top. She can move to where the student is actually standing. She can help the student scale different parts of the wall or even reach different goals along the wall. She can bend even further if the student is shorter.
But a disadvantage for the supporter at the bottom is that she cannot be absolutely sure that the learner reaches the top. She can meet the student where they are when they first face the wall. She can help them get started on whatever path they choose on the wall.
The supporter at the top can help students who are almost at the top of the wall. He can be sure that students actually reach the learning objective. When he is reaching down, he is in a fixed position. He can only help the student reach the objective where he stands, at the level that he has already achieved. He can also be sure when a student does not reach this standard – he can see the students who fall, or who do not make it to his level. He is in a better position to decide whether the student is going to achieve the desired objectives.
The supporter at the bottom is more learner-centered. The supporter at the top is more standards-centered. Neither supporter is particularly strong at helping the student in the middle, when the student is challenged to persist, to stay engaged, and to maintain motivation. If the student is not particularly interested in reaching the top of the wall, and is satisfied making it part-way to the objective, then the learner-centered teacher has the most to offer.
Learner-centered teaching is concerned with helping students where they are, helping them to get started, and getting them engaged and motivated to tackle the mid-part. Low enrollment and high withdrawal or failure rates (sometimes called WDF rates) are issues that learner-centered teaching addresses. Learner-centered teaching also addresses issues of diversity, with the goal that all kinds of students can succeed in the class — even those who think that they cannot succeed or do not have the prior background to succeed.
Standards-centered teaching is concerned about making sure that students have what they need to go on, in their studies or in their career. A student who fails the second class because they did not learn enough in the first class is an issue for standards-centered teaching. Talking to industry partners about the desired outcomes is standards-centered. Concern about what graduates can do and achieve is a standards-centered teaching issue.
(I’m skipping some text here about teacher-centered, classroom-centered, and other forms of structuring education.)
I am splitting hairs a bit between child-centered and learner-centered. Learner-centered education also starts from the students’ interests and considers the learner’s needs, and is very much about student construction of knowledge in their own minds, since that is how learning takes place. As described in Chapter 2, the knowledge to be learned in learner-centered education is defined by the community of practice. That is external to the learner.
Within the metaphor, I am describing three kinds of teaching: Learner-centered (supporter at the bottom), standards-centered (supporter at the top), and maintaining motivation and engagement (in the middle). Of course, teachers and students have to address all these issues, but it is sometimes useful to focus on one part. Consider this metaphor: If you have heart problems, it is important to go to a cardiovascular specialist. That does not mean that you do not need to care about your skeleton, digestion, and skin; you need all of those, but sometimes you can address critical issues or fix problems by specializing. I focus on the first one because it is the most important. I like the way my colleagues Amy Bruckman and Betsy diSalvo put it:
Computer science is not that difficult, but wanting to learn it is.
Dr. Tamara Nelson-Fromm defends her dissertation: What Debugging Looks like in Alternative Endpoints
In May, Tamara Nelson-Fromm defended her dissertation “A Qualitative Exploration of Programming Instruction for Alternative Endpoints in Post-Secondary Computing Education.”
I’ve talked about Tamara’s work a few times in this blog.
- One of her early projects was a teaspoon language to help history teachers to build history timelines (blog post).
- At PLATEAU 2024, she presented our paper suggesting that there was transfer from the Pixel Equations teaspoon language into building Image filters in Snap! (blog post).
- She presented our paper at SIGCSE 2025 on how we designed the PCAS courses oriented towards creative expression and social justice (blog post). Tamara worked with me on that design process, particularly on how to meet the justice scholars’ desire for their students to learn about databases, HTML, and SQL (blog post) and on helping students to understand how a computer might generate language (blog post).
Tamara has published a lot more than that during her PhD work in part because she became an expert on reflexive thematic analysis. She worked with several other students on using RTA. At SIGCSE 2026, she and Aadarsh Padiyath will present their paper on how to use RTA for computing education research. I’ve read the paper and loved it — I have been recommending it widely.

Tamara with her committee: Valerie Barr (on Zoom), (from right) Nikola Banovic, Barry Fishman, Tamara, and me
I want to tell you about her dissertation, but I don’t want to divulge too much — only the first study has been published so far. The big idea that drives her work is alternative endpoints. She and I have talked a lot about the paper by Mike Tissenbaum and his colleagues. The big question that she’s helping to answer is “What will CS education look like as we move beyond producing more software developers?”
Study #1: New CS Teachers learning Debugging: Her first study investigated how we develop new CS teachers. From the start of her PhD, she has been interested in how students learn to debug. Her method was novel (and hard to get past reviewers). Instead of studying new CS teachers and how they learned debugging, she interviewed expert teachers of new CS teachers. She interviewed the people who run professional training, summer workshops, and many of the other ways that teachers learn CS. Rather than track individuals (who might not struggle with debugging, or who might not be representative of new teachers), she talked to people who have been doing this for years. What do they do to teach debugging?
Here was the amazing answer: Avoid it. In hindsight, it makes all the sense in the world. Imagine: You’ve got a teacher new to CS in your workshop. In the first workshop (which is often all you get with teachers), you want them to succeed. You want them to come back for more workshops. So, you do all that you can to avoid bugs. Since bugs will still happen, you provide checklists and “Here’s what to look for if it doesn’t work” guidance.
Of course, really learning to debug comes later…or does it? Tamara raises the intriguing possibility that maybe that’s enough. Maybe for what these teachers are doing (especially in primary school), maybe it’s enough to just have checklists. Again, it’s about alternative endpoints — what does a K-12 teacher need to know about debugging? The paper on her first study will appear at SIGCSE 2026 in February.
Studies #2 and #3: PCAS Students: Her second and third studies involved PCAS students. In her second study, she looked at why arts, sciences, and humanities students would want to take courses involving programming. In her third study, she returned to the theme of the first study — how do PCAS students debug?
I don’t want to say too much about these studies, but I do want to tell one story from Study #3 that connects strongly to the story about teachers in Study #1. One of the ways that Tamara saw PCAS students debugging was the way that your modern mechanic fixes your car.
Mechanics today do not need to know how your car actually works. Instead, they plug it into the diagnostic machine, and they get a code. The code tells the mechanic where the problem is. The mechanic then follows a procedure or (more likely) replaces a part — whatever the manufacturer guidance is for that code. They then try it again.
That’s how some of the PCAS students debugged. Each assignment for the arts and humanities classes was open-ended, and I gave them completely working examples. The students would write their programs and try them. If they didn’t work, they’d check that they didn’t make a simple mistake. If they couldn’t figure it out, they would go back to one of the worked examples and copy-paste the part that worked and did about the same thing. Then they’d test again. If they still couldn’t get it to work, they’d explore changing what they were trying to do, so that they still met the requirements — but they could get it working.
Is this a problem? Do the students need to learn better debugging skills? Let’s go back to alternative endpoints again. Not everyone needs to have a strong mental model of the working program.
Tamara wasn’t prescriptive in her dissertation. She didn’t make judgements of good or bad. Rather, she described the world as she found it, and raised the reasonable possibility that what she saw is working just fine.
Tamara’s dissertation is important. The alternative endpoints paper suggested that we should think about different audiences learning to program for different purposes than software development. Tamara showed us what that is looking like.
Three stories about how CS is overwhelming, and ideas for how we can do better
When we looked at how PCAS students thought about our classes (for our SIGCSE 2025 experience report), I was surprised at students’ use of the word “overwhelming” when talking about CS classes. I was pleased that they positively contrasted our courses with CS classes that they had taken previously, but I didn’t realize how much baggage the students brought with them — how negatively they perceived computer science. Some students told us how emphatically they did not want a job in the technology industry and didn’t want to take CS classes. When Tamara Nelson-Fromm interviewed the PCAS students from our first semester, she told me that every student she interviewed had tried to learn programming (via formal or informal means) and failed. That’s why they were trying the PCAS courses. Most weren’t looking for a sense of belonging in CS — they had their identities as artists or scientists or managers.
My guess is that students made this choice away from CS pretty early on. Some studies support the proposition that students make career decisions by late elementary school. We know that less than 10% of US high school students take a CS class each year (State of CS Ed Report). And while undergraduate CS classes and majors have grown, the majority of students at any University are not choosing CS. And as I described in an earlier post in this series, the kind of computing that students use outside of CS classes is different from what’s inside of CS classes.
I shouldn’t have been surprised about what students were telling Tamara. I had read about the #techLash. There is a lot of literature about how much CS overwhelms students. There’s also literature on how we can do better. Here are three of my favorite papers in this space.
“‘I like computers. I hate coding’: a portrait of two teens’ experiences” by Paulina Haduong communicates the punchline in the title. This is a rich, qualitative study of students who love to use computers, but who hated their experiences with Hour of Code, with Scratch, and with formal and informal education around programming. In the end, though, this is a positive paper. As Paulina writes, “These learners’ experiences illuminate the ways in which identity, community and competence can play a role in supporting learner motivation in CS education experiences.”
“‘I Always Feel Dumb in Those Classes’: A Narrative Analysis of Women’s Computing Confidence” by Amanda Ross and Sara Hooshangi is another paper that tells the punchline in the title. Amanda is completing an Engineering Education PhD at Virginia Tech, and has accepted a job at Rose-Hulman (congratulations both to her and Rose-Hulman!). For this paper, she interviewed women who succeeded in introductory CS and had high self-efficacy, but still dropped out of computer science.
Results show that while participants were highly successful in their course (reporting a high mark in the class) and had relatively high self-efficacy when discussing specific programming problems, they lacked computing self-concept in whether or not they were good at programming in general.
These first two papers show women who are interested in programming and who are good at it, but struggle to succeed in CS education. Why? Maybe it’s because of how we frame the field of computer science.
Being a software developer is a hard job — you translate requirements to code, and you aim for the code to be robust and secure. Most people who program (scientists, artists, end-user programmers, critical computing scholars, etc.) are programming for themselves, to achieve a goal of their own or to express themselves. It’s only the minority of programmers, the professional software developers, who code primarily for others. So, I understand why classes to prepare future software developers are about the hard task of precisely converting specifications to well-tested program code. But that’s maybe 10% of people who program.
The field of computer science has developed a narrow frame. We could have a broader one, one that includes the way that other disciplines use programming. I argued in an earlier post that we can broaden participation in computing by making computing education broader than just what CS and the Tech industry want.
I have been telling people about this talk by Felienne Hermans from SPLASH 2024, “A Case for Feminism in Programming Language Design.” I highly recommend her paper with co-author Ari Schlesinger (which you can find here), but if you are interested in how computing became so male and so uncomfortable for female students, you must watch this talk. It’s a compelling and thought-provoking story, which I found both emotional and insightful. Watch through the Q&A if you want to get some additional evidence that Felienne is right about how she describes the field and how computer scientists push against a broader framing.
I appreciated Felienne’s point that computer science has confused “hard” with “interesting” or “valuable.” We overly value things that are hard to do, which leads us to undervalue things that are interesting, valuable, or useful but are not necessarily hard to do (e.g., studying how people build in Excel is interesting and valuable, even if it’s not as “hard” as studying programmers building million LOC systems). I have heard this sentiment voiced lots of times. “The study was really not that much. I don’t see why it’s interesting.” “The system wasn’t hard to do. Anyone could have built it. It’s not really a contribution.” “Anyone could have thought of that.” An academic contribution should be judged by what we learn, not by how hard it was to do or invent. That focus on being hard is part of what drives students away from computer science.
Felienne and Ari’s paper helps to explain the tension between Computer Science departments and Computing Education Researchers. CER work doesn’t look like CS “hard” work. My students don’t typically build a big piece of code that gets used by thousands. Some of my students tested educational psychology theories in a computer science context (see blog post about Brianna Morrison’s work as an example). Some of these experiments are replication studies. There’s an obvious hypothesis — that what was seen by educational psychologists in other fields would likely be true in computer science, too. Whether the replication worked or not, the findings are novel contributions to CER because they tell us something that we didn’t know before.
Making computer science classes more welcoming and inviting isn’t about changing the nature of computer science. Paulina, Amanda, and Felienne are talking about and with people who love working with computing. The goal is not to reduce rigor. The goal is to remove unnecessary constraints. We can allow students to express themselves in computer science classes. We don’t have to make students feel dumb in computer science classes. We need to be open to broader definitions of what counts and is important in CS. We need a larger frame for the field of computer science and the goals of computing education.
I started this blog post series in February, describing how we designed the PCAS courses for arts and humanities students. The next post described how computing education was different than CS education. I offered two posts on computing education in the arts and in the sciences. My previous post was a recommendation that CSTA (and primary and secondary school overall) focus more on computing education for everyone and less on CS education. I’m ending the series here with a post on how to make computing education work for all students, whether aimed at technology for their career or not. I’m making an argument for computing education for all and even more specifically programming for all. The way we get there is by looking at how the whole world uses computing and programming, not just what the computer scientists want.
School teachers don’t need to recruit students into CS: An alternative model for K-12 computing education
The Computer Science Teachers Association (CSTA) is in the process of updating their influential standards. It’s a long process, and it started last summer with a visioning document called “Reimagining CS Pathways.” Part of their new reimagining includes dispositions which are meant to cross all content and skill areas.

I am arguing here that “Sense of Belonging in CS” should not be in that set. The role of computing education in K-12 should be to introduce students to computing for whatever they are going to do in life. As I’ve been pointing out in the last few blog posts, the uses of programming in science, arts, and humanities don’t look like CS classes. Computing education is different than CS education, as I described in the blog post that started this series. K-12 computing education should not be about convincing students that they belong in CS, but should be about giving them the confidence that they can use computing in whatever career they choose.
I made a critique of this disposition on social media, and the responses suggested that I look more carefully at the document. I was told that there is strong research supporting the need for “sense of belonging in CS.”
Here’s the text of the document introducing this disposition:
A sense of belonging, or the “personal involvement (in a social system) to the extent that the student feels that they are an indispensable and integral part of the system” (Anant, 1967, p. 391), is one of the more widely researched dispositions in CS education. Its importance is linked to its relationship to a student’s sense of their own ability in (Veilleux et al., 2013) and interest in persisting in their studies (Hansen et al., 2023). Sense of belonging is an important facet of ensuring equity in CS, since this sense often differs by student demographic group (Krause-Levy et al., 2021).
That’s a really strong definition of “sense of belonging.” I don’t think that even the most welcoming undergraduate CS classes and programs aim to make students feel that they are “an indispensable and integral part” of computer science. I looked up all three of these references (and linked each of them above). Here is the first sentence of each of those papers:
- Veilleux et al., 2013: “Retaining students in computer science majors has been a persistent topic among computer science educators for almost two decades.”
- Hansen et al., 2023: “The purpose of this longitudinal investigation was to examine the effectiveness of a comprehensive, integrated curricular and co-curricular program designed to build community, provide academic and social support, and promote engagement in academically purposeful activities resulting in more equitable environments for historically underrepresented, low-income science, technology, engineering, and mathematics (STEM) information technology (IT) students.”
- Krause-Levy et al., 2021: “Students’ sense of belonging has been found to be connected to student retention in higher education.”
None of these are about K-12 education. All of them are about CS, IT or STEM majors in higher education. In these papers, the goal of “sense of belonging in CS” is undergraduate retention in the higher-education major, not K-12 students’ persistence at learning computing that is useful for them. The goal of K-12 education is to prepare students to be CS, IT, or STEM majors — or arts, humanities, business, or anything else majors, or to be successful citizens in a technological society, even if they don’t go to college. I don’t see any literature cited in the document that tells us how important a “sense of belonging in CS” is to K-12 students and their success in learning about computing.
An Alternative: Everyday Computing
We need a model for K-12 computing education that recognizes the value of alternative endpoints, as Tissenbaum, Weintrop, Holbert, and Clegg have described it (BJET link, UIUC repository link). K-12 CS education should not be a jobs program for technology companies.
Here’s an alternative model. The University of Chicago has a mathematics curriculum called “Everyday Mathematics.” Here’s how they describe it:
Everyday Mathematics is a research-based and field-tested curriculum that focuses on developing children’s understandings and skills in ways that produce life-long mathematical power.
The Everyday Mathematics curriculum emphasizes:
Use of concrete, real-life examples that are meaningful and memorable as an introduction to key mathematical concepts.
…
Each grade of the Everyday Mathematics curriculum is carefully designed to build and expand a student’s mathematical proficiency and understanding. Our goal: to build powerful mathematical thinkers.
I didn’t study Everyday Mathematics when I was a kid, but the description resonates with how I remember my own math classes. I saw math problems relating to cooking, engineering, craft, orienteering, map making, and lots of other domains. The mathematics education was contextualized to support students in seeing the connections and relevance of mathematics to their lives, to what they thought was important. My mathematics teachers were preparing us to be mathematical thinkers, not necessarily mathematicians. I enjoy mathematics, and use it often, but I don’t think of myself as belonging in math.
We need Everyday Computing. Our goal in K-12 education should be to build powerful computational thinkers. We need to relate computing education to students’ everyday lives and to the computing they’re likely to see in their futures. At the secondary and post-secondary level, we should help students to think about the computing that they’re already using, like R and Python, with vector operations and a focus on data over algorithms. We need school teachers to show students that computing is for them, not that they belong in computing.
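To make “vector operations and a focus on data over algorithms” concrete, here is a minimal Python sketch in the style scientists and humanists actually use. The temperature readings and variable names are my own hypothetical illustration, not from any particular curriculum:

```python
import numpy as np

# Hypothetical field measurements -- the kind of small dataset a
# student might collect themselves.
temps_c = np.array([21.5, 22.1, 19.8, 20.4, 20.0])

# Vectorized style: one expression operates on the whole dataset.
# No loops, no index bookkeeping -- the focus stays on the data
# and the question, not on the algorithm.
temps_f = temps_c * 9 / 5 + 32       # convert every reading at once
anomaly = temps_c - temps_c.mean()   # deviation from the mean reading

print(temps_f.round(2))
print(anomaly.round(2))
```

The same computation in a loop-and-accumulator style would be several lines of algorithmic detail; the vectorized form reads as a statement about the data, which is closer to how working scientists think.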
Recently, the CS for California organization rewrote their mission and vision statements. I like them as a model for what CS education should be nationally.
Our New Mission: Advance equitable computer science education for all California students by fostering inclusive engagement and community partnerships, strengthening support for educators, and informing policy through data.
Our Vision for the Future: California students are equipped with foundational computing competencies to become innovative thinkers and creators of a just and inclusive future.
It’s not about creating computer scientists or technology workers, though those are possible endpoints. It’s about supporting students in whatever they want to become, and using computing to get there.
How scientists learn computing and use LLMs to program: Computing education for scientists and for democracy
Gabi Marcu is a professor in the University of Michigan School of Information who studies technologies to promote health (see her website). She’s also an improv performer, and a friend. She asked me to participate in her project to combine research and improv called “Extra Credit.” She asks researchers to explain their research in 10 minutes, then a group of improv artists riff on the research for another 10 minutes.
The first speaker in the session I participated in was Joyojeet Pal, who studies social media and politics. He was hilarious — I heard one of the improv performers quip, “Wait — it’s our job to be funny!”
I talked about what everyone needs to know about computing to support democracy, with a focus on our recent course on AI. Barb recorded it and allowed me to share it.
I learned the most from hearing and meeting Elle O’Brien (see her website). Elle is a computational neuroscientist who decided to go meta. She now studies how scientists learn and use computational methods.
She had a paper in Harvard Data Science Review last year on how scientists learn computational methods, “In the Academy, Data Science Is Lonely: Barriers to Adopting Data Science Methods for Scientific Research.” In her study, it didn’t go well:
These scientists quickly identified that they lacked the expertise to confidently implement and interpret new methods. For most, independent study was unsuccessful, owing to limited time, missing foundational skills, and difficulty navigating the marketplace of educational data science resources.
I was surprised how much the scientists in her study needed more curation. There’s no lack of ways of learning data science — videos, tutorials, MOOCs, books, bootcamps, and on and on. But Elle was talking to working scientists. They were busy professionals. They struggled to find the right learning materials for their level of knowledge that matched what their field used.
Elle and I have both noticed how many different computational cultures there are across the sciences and liberal arts. These scientists use R, and these others use Python — even in the same department. They talk together about their science, but not really about code. Computational artists I’ve met at Michigan only use Processing or Unity. I’ve learned that Economists at Michigan mostly use Stata, a tool that I’d never heard of before my informants (two Economics faculty and a PhD student) told me about it. While programming is common across the sciences, actually taking CS classes is rare among scientists that we’ve worked with. Most of the programming science faculty we met are self-taught, or learned through apprenticeship from the labs and groups they came up through.
Elle observed that computational scientists she works with are increasingly multi-lingual. They might use Python for some of their tasks (data processing, data cleaning, modeling, and/or simulation), then use R for statistics and visualizations. They are making choices for programming languages based on the libraries and communities that use those tools, not on the characteristics of the languages themselves. I’ve worked with some scientists who also work in multiple language ecosystems, but within the constraint that they’re trying to optimize their time. They’re not trying to transfer their knowledge of programming from Python to R — they’re just trying to get their work done. “Recipes” of how to do things in R are just fine for them.
Elle tries to convince some of her scientists to consider using version control systems, but they don’t see much benefit. Few scientists that either of us work with are inventing new abstractions. They write code (often, no more than a screenful) to get a job done, then throw the code away. They care about the data and the results, not the code. If you don’t invent new abstractions and you don’t reuse code, what does Github buy you?
Elle has a new paper appearing in CHI 2025 that is fascinating and relevant: “How Scientists Use Large Language Models to Program.” She finds that “scientists often use code generating models as an information retrieval tool for navigating unfamiliar programming languages and libraries.” Again, they are busy professionals who are trying to get their job done, not trying to learn a programming language.
I was impressed with how much effort the scientists that she studied put into checking what the LLMs produced. One scientist ran code in a familiar system to compare against the results of the LLM-generated code. They all wisely distrusted the LLM code, more than I typically see from computer scientists (and especially computer science students), who may not check LLM-generated code at all.
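The checking practice described here can be sketched in a few lines of Python. This is my illustration, not code from the paper: the function name `llm_std` and the data are made up, and I’m assuming the hypothetical LLM-generated code computes a population standard deviation.

```python
import numpy as np

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# Suppose an LLM generated this standard-deviation function
# (hypothetical; it computes the population standard deviation)
def llm_std(values):
    n = len(values)
    mean = sum(values) / n
    return (sum((v - mean) ** 2 for v in values) / n) ** 0.5

# The scientists' check: run the same computation in a system
# they already trust, and compare the results
trusted = np.std(np.array(data))  # np.std defaults to population std
assert abs(llm_std(data) - trusted) < 1e-9
```

The key move is the last line: rather than trusting either implementation, the scientist compares the two and only proceeds when the results agree.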
And yet, the LLMs still inserted bugs that the scientists missed. LLMs are absolutely nefarious in how and where they generate bugs. Elle raises the possibility that LLMs are having a negative influence on the scientific enterprise.
Elle is engaging in computing education research, though I don’t think that she thinks of herself that way. She’s not likely to submit anything to ICER or the SIGCSE Symposium anytime soon, but computing education researchers need to know about work like hers. She’s studying scientists through the lens of being a scientist who uses computing, not a computer scientist. She knows more about what scientists need from programming and how they learn programming than most computer scientists or computing education researchers I know.
My Favorite ICER 2024 Paper: How Media Artists Teach Computing
I’m hesitant to state a preference for my favorite paper at the International Computing Education Research (ICER) Conference in 2024. There were so many cool papers (including some by my students!). But it’s an easy choice if I use the heuristic, “Which paper have I still been thinking and talking about the most after the conference?”
My favorite paper of ICER 2024 was Alice Chung and Philip Guo’s paper “Perpetual Teaching Across Temporary Places: Conditions, Motivations, and Practices of Media Artists Teaching Computing Workshops.” It’s a study of real media artists who teach computing in workshops. The first sentence of the paper is “Why and how do new media artists teach computing?” I love this question, and the answers are fascinating.
One of their observations is that media artists teach as part of their practice. They’re always learning new tools and practices, and also always sharing them. Let’s contrast this with software engineering. How many professional software developers also teach software development? How many consider it integral to their practice? Or swap the question — how many CS1 instructors are also professional software engineers?
Our study finds that artists strategically understand and respond to these conditions, developing what we call perpetual teaching – reframing the internalized duty or responsibility of perpetual training into pedagogical frameworks
So why? Why would media artists spend their time teaching? It’s about trying to be critical about what they’re doing.
We found that artist-educators are motivated by creating spaces to unlearn ineffective conventions and incubate new cultures rather than by technical knowledge transfer alone. Furthermore, they intended to design their workshop materials (e.g., prompts, activities, reading lists) to prepare participants to create critical interpretations of computing outside of mainstream tech career pipelines.
This is such an interesting goal and a contrast with computer science education. Artist-educators want to make new things and explicitly contrast with traditional technology paths. They want their students to be media artists who are critical of what’s happening in the rest of computing. Explicitly, media artist-educators are focused on alternative endpoints in computing education.
The paper goes into much more depth with examples and quotes from the artist-educators about their goals and motivations. I highly recommend reading the whole paper. It’s well-written and grounded in education literature.
I have had more conversations about this paper than about any previous ICER paper of which I am not a co-author. In most of the conversations, a computing education researcher was critiquing the paper, and I was defending it. The biggest critique I heard is that the paper does not speak to CS educators’ issues and offers them no solutions to their problems.
I mostly agree, but that’s exactly why I’m so excited about this paper.
The International Computing Education Research (ICER) conference should be about more than computer science education. Of course, it’s important to study CS1 classes, CS majors, and how to produce great software developers. We need good CS education, and we need research on what’s going on in CS education and how to make it more successful — which includes studies of teachers. But there will be far more people programming than will ever take a CS1. Studying how people learn computing beyond CS and how to make their learning successful is important for our modern society. That’s computing education, and ICER needs more papers like this one that explore the much larger world beyond traditional CS education.
But in the best possible world, this paper does speak to CS educators, too. Alice and Philip write:
New media artists view teaching as a means to promote greater diversity in computing cultures, emphasizing education’s role in broadening participation and challenging traditional narratives.
Wouldn’t we wish that to be true of all CS educators, too?
CS doesn’t have a monopoly on computing education: Programming is for everyone
I participated in the first SIGCSE Virtual Conference last December. I was on a panel, “Assessments for Non-CS Major Computing Classes” (see the ACM DL paper here). The panelists were excellent. I was excited to meet Parmit Chilana, who came up with the idea of the conversational programmer in her 2015 VL/HCC paper. Her talk was particularly relevant to me because she emphasized that she studies business students, not computer science students — her research is about how non-CS students interact with computing and programming. Jinyoung (Jina) Hur was our organizer. She ran the panel, and left it to her advisor, Katie Cunningham, to present their fascinating work contrasting conversational programmers and end-user programmers in the CS classroom, which appeared at ICER 2024 (see paper here). Katie also shared some of her studies of conversational programmers, starting from her dissertation work. Dan Garcia presented his work (with Armando Fox) on mastery learning, which gives even non-CS majors the chance to get top grades in introductory CS classes (see the nice piece that Berkeley Engineering wrote about their effort).
My talk was about what we’re assessing non-CS majors on. My claim is this: Computing education for non-CS majors is different from what we teach CS majors. It is important to figure out why non-CS majors are taking courses designed for CS majors (maybe they want to be conversational programmers or end-user programmers?) and to make sure that they can succeed (including getting good grades) when they are in those classes. However, it’s even more important to figure out the learning needs of non-CS majors around computing and how to meet those — and then, how to assess the learning in meeting those needs. Education for CS majors is different from what non-CS majors need.
Here are a few examples of what I mean:
- When I asked Social Justice scholars what they wanted their students to know about computing, the top learning objective was for their students to understand that websites can be built from databases (see blog post about that story and our recent SIGCSE 2025 paper). Most CS majors probably don’t learn this.
- My colleague Gus Evrard led the effort to build our Python Programming for Sciences classes. He got four different departments (who were already teaching Python) to collaborate to define this new course. The course is about SciPy, cleaning data, NumPy, and building data visualizations with libraries like Matplotlib. Most data science programs cover these topics, but most computer science programs don’t.
- I’m pretty sure that the most popular programming language (in terms of number of people using it) on most campuses is R. All of Statistics is taught in R. It’s very common in Psychology, Anthropology, and Sociology. The natural sciences (chemistry, biology, physics) are increasingly using R for statistics and visualizations, even if they use Python for data management, modeling, and simulation. I haven’t found a computer science program yet that teaches R, or that teaches computer science through R (that is, explaining to students the computer science that is most relevant to them).
- End-user programmers most often use systems where they do not write loops. Instead, they use vector-based operations — doing something to a whole dataset at once. (Kind of like the higher-order functions that Kathi Fisler used to beat the Rainfall Problem.) Many scientists use R and Numpy on Python. Many Engineers use MATLAB. Yet, we teach FOR and WHILE loops in every CS1, and rarely (ever?) teach vector operations first. The K-12 US state CS standards I’ve read explicitly include loops — teachers have to teach loops to meet the standard. End-user programmers likely outnumber traditional software developers (see some estimates). So why are we first teaching the stuff that fewer people use (hard-coded loops), requiring students to learn the harder forms?
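To make the contrast concrete, here is a minimal sketch in Python with NumPy. The rainfall readings and the -999 error code are made up for illustration:

```python
import numpy as np

# Hypothetical rainfall readings; -999 marks a bad sensor value
rainfall = np.array([0.0, 2.5, -999.0, 1.0, 3.5])

# Loop style, as taught in most CS1 courses
total = 0.0
count = 0
for reading in rainfall:
    if reading >= 0:
        total += reading
        count += 1
loop_average = total / count

# Vectorized style, closer to how R, NumPy, and MATLAB users work:
# select the valid readings all at once, then average them
vector_average = rainfall[rainfall >= 0].mean()

print(loop_average, vector_average)  # both are 1.75
```

The vectorized version has no explicit loop or conditional at all: the whole dataset is filtered and averaged in one expression, which is exactly the form that end-user programmers encounter first in their tools.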
Here are the points that I want to make over the next few blog posts. Many, many people are programming today. A minority of them are professional software developers. Learning to program is a form of computing education, but computer science is not typically teaching the things that non-CS majors need to program, so computing education is moving away from Computer Science (field, departments, teachers). Computer science no longer has a monopoly on computing education.
Here’s how I’m using these terms. Computer science education is teaching students about computer science. For the most part, CS education has become focused on developing professional software developers and other workers for the technology industry. Computer science (as a field or a department) has a lot of definitions, some of which I present when I give talks (below, and in this blog post). (Notice that the K-12 definition still includes “impact on society” but ACM/IEEE dropped that out in the 2021 Computing Curriculum volume.)

My favorite is the original one from Perlis, Newell, & Simon (1967): “The study of computers and the phenomena surrounding them.” But most computer scientists balk at how broad that one is. So, let’s call that “computing,” and let’s call preparing students to work with computing (explicitly including programming) in whatever field they work in “computing education.”
Computer science departments should offer computer science education. We obviously need lots of people who know computer science, including many professional software developers. But most people who program will not be computer science majors (e.g., see this 2017 Oracle study). The need for computing education must also be met.
(Side note: It is an interesting question: If students’ computing education needs are not being met, whose job is it to figure out a solution? Here at Michigan, individual departments were making classes to teach students the programming needed in their discipline, but now we’re combining them in PCAS. I’d wish that computer scientists would work to meet those needs, but computer science today is mostly about developing future technology workers. I am grateful that U-M’s College of Literature, Science, and the Arts started PCAS.)
The breakup of the CS monopoly is a particularly good thing for computing education researchers. There is SO much to do! So much of our research in the SIGCSE community is about CS1 for future CS majors. But computing education research doesn’t have to be about CS majors, and doesn’t have to be about CS1. There is so much more to study and explore when we think about how artists, scientists, business people, designers, architects, humanities scholars, and everyone else learn about and use programming.
Dr. Bahare Naimipour defended her dissertation
I’m on sabbatical this semester, so I finally have time to catch up on some long overdue blog posts.

Bahare at her defense with her committee: From left, Barbara Ericson, Shanna Daly, James Holly Jr., Bahare, me, and Tammy Shreiner.
Dr. Bahare Naimipour successfully defended her Engineering Education Research dissertation in August 2024: Supporting Social Studies Data Literacy Education: Design of Technology Tools and Insights from Expert Teachers and Teacher Change Journeys.
I’ve posted about Bahare’s work over the years. She had a poster in 2019 about our first participatory design sessions aimed at understanding what social studies teachers wanted in data visualization tools (see post here). She has been working on the NSF grant that Tammy Shreiner and I received in 2020 to study how social studies teachers adopted data literacy (announcement of that grant here). Bahare had a paper at FIE 2020 (presented virtually, as that was during the pandemic) on how social studies teachers interacted with programming-based data visualization tools (post here). She compared programming and non-programming tools at SITE 2021 (post here). The tool that we created, DV4L, was the first of what we later called teaspoon languages — here is the post where we talked about a couple of teaspoon languages for social studies education.
Bahare’s dissertation is made of three related studies. The abstract from her dissertation is below. Here’s my quickie summary of the three studies, framed for a computing education audience.
First, Bahare describes the long process of developing DV4L — across multiple participatory design sessions, both in-person in Tammy’s pre-service classes before the pandemic, and on-line with in-service teachers during the pandemic. She articulates the features of DV4L which are specific to social studies teachers and describes how they were developed in response to teacher needs. This chapter has been accepted as a paper in J-PEER.
Second, Bahare followed three teachers for two years as they (slowly) developed data literacy plans for their classrooms that used technology. This is such a rich story. Bahare frames it in terms of Guskey’s Model of Teacher Change. Guskey said that teachers don’t change because of professional development alone. They have to have some interest in change, or they wouldn’t be taking the professional learning opportunities seriously. They actually change when they try something in the classroom and the students’ response convinces the teacher that a new approach might work. Bahare watched that happen, but found that it was even more iterative than Guskey describes. Her teachers took multiple professional development sessions before they would even try something. She saw teachers try something…and get it wrong, and with some encouragement from Bahare, try again. This study really gives you a sense of what it’s going to take to achieve CS for All across the curriculum.
Finally, Bahare interviewed exemplary social studies teachers (selected by some pretty tough criteria) and asked them how they implemented data literacy in their classrooms. Bahare saw patterns across what the teachers were doing, and those data literacy design patterns are going to feed into future professional learning opportunities. The amazing thing for this audience is that almost none of them used any computational tools. They liked our tools when Bahare demonstrated them, and maybe some might adopt them — but I doubt it. They are excellent teachers recognized for their skill, and they got there without computation. Why would they change now? Maybe if we showed them how much more they could do with computational tools. Maybe if we showed them how easy it could be. Those are possibilities for future studies.
All told, Bahare has written a remarkable dissertation. It’s about data literacy in social studies education, but more, it’s about the challenges that face us as we bring the power of computing beyond the STEM classroom.
Abstract
This dissertation aims to contribute to the K-12 engineering education literature in a social studies context. Data literacy (DL) is the ability to understand and interpret what data means by drawing conclusions from patterns, trends, and correlations in data visualizations (DVs). DL is part of K-12 U.S. social studies standards making it relevant for engineering education researchers since it intersects both engineering and social studies. All K-12 students take social studies classes, yet most people are not data literate. Research suggests that social studies teachers have insufficient resources for teaching DL, so not all social studies teachers teach it. The goal of this dissertation is to shed light on the topic of K-12 DL in social studies by exploring three research questions:
- When designing engineering tools for non-STEM social studies teachers, what design considerations should be met?
- How do K-12 social studies teachers choose to explore data literacy in their pedagogy after participating in a data literacy professional learning opportunity (PLO)?
- How do expert social studies teachers use and explore DVs in their pedagogy, describe their data literacy pedagogical strategies, and explore/use technology tools to support their data literacy pedagogy?
To answer my first research question (Study 1), a participatory design (PD) approach was used to learn what social studies teachers (both pre-service and in-service) want in their classrooms by testing the usability of real tools with participants. Through three design phases, pre and in-service teacher groups informed the design and development of learning tools for social studies DL. Using a Social Construction of Technology lens, I describe the scaffolding embedded in the resulting tool DV4L by considering: 1) teachers’ perceptions of usefulness and usability in the DL tools they explored, and 2) how PD sessions with pre- and in-service teacher groups evolved over time beginning with their interactions with existing tools and leading to our current DV4L prototype tools.
I addressed my second research question through a longitudinal study (Study 2) that delved into how three K-12 social studies teachers explored DL during and after a PLO. Narrative methods were used to describe how three social studies teachers changed their DL practices. The journeys began with teachers as they explored a DL focused PLO, incorporated DL in their lesson plan(s), and include their reflections after implementing the lesson(s) in their classrooms. I used Guskey’s Model for Teacher Change as my analytical lens to understand each teacher’s DL journey.
My experiences in Study 1 and Study 2 made me wonder how expert teachers were meeting their DL learning goals. I used Shulman’s Pedagogical Content Knowledge framework to design Study 3 and address my third research question. I looked at how expert teachers explored DVs and described their DL pedagogical strategies and technology uses through a think aloud and semi-structured interview. Findings describe how five expert teachers made meaning of data and DVs through the practices and strategies they used or described using in their pedagogy.
This dissertation informs the design of curriculum, PLOs, and technology tools to support social studies teachers reach their DL learning goals. It has already informed the design of two socially constructed DL tools for K-12 social studies. Such tools provide teachers pedagogical power in their graphing activities in ways that support their DL learning goals while also promoting engineering skills and thinking.
PCAS Expansion, Growth, Research, and SIGCSE 2024 Presentations
The ACM SIGCSE Technical Symposium is March 20-23 in Portland (see website here). I rarely blog these days, but the SIGCSE TS is a reminder to update y’all with what’s going on in the College of Literature, Science, & the Arts (LSA) Program in Computing for the Arts and Sciences (PCAS). PCAS is my main activity these days. Here’s the link to the PCAS website, which Tyrone Stewart and Kelly Campbell have done a great job creating and maintaining. (Check out our Instagram posts on the front page!)
PCAS Expansion
I’ve blogged about our first two courses, COMPFOR (COMPuting FOR) 111 “Computing’s Impact on Justice: From Text to the Web” and COMPFOR 121 “Computing for Creative Expression.” Now, we’re up to eight courses (see all the courses described here). As I mentioned at the start of PCAS, we think about computing in LSA in three themes: Computing for Discovery, Expression, and Justice. Several of these courses are collaborations with other departments, like our Discovery classes with Physics, Biophysics, Ecology and Evolutionary Biology, and Linguistics.

This semester, I’m teaching two brand new courses. That means that I’m creating them just ahead of the students. I did this in Fall 2022 for our first two courses (see links to the course pages here with a description of our participatory design process), and I hope to never do this again. It’s quite a sprint to always be generating material, all semester long, for about a hundred students.
One course is like the Media Computation course I developed at Georgia Tech, but in Python 3: COMPFOR 221: Digital Media with Python. The course title has changed. When we first offered it, we called it “Python Programming for Digital Media,” and at the end of registration, we had only five students enrolled! We sent out some surveys and found that we’d mis-named it. Students read “Python Programming” and skipped the rest. The class filled once we changed our messaging to emphasize Digital Media first.

When we taught Media Computation at Georgia Tech, we used Jython and our purpose-built IDE, JES. Today, there’s jes4py that provides the JES media API in Python 3. I had no idea how hard it was to install libraries in Python 3 today! I’m grateful to Ben Shapiro at U-W who helped me figure out a bunch of fixes for different installation problems (see our multi-page installation guide).
The second is more ambitious. It’s a course on Generative AI, with a particular focus on how it differs from human intelligence. We call it Alien Anatomy: How ChatGPT Works. It’s a special-topics course this semester, but in the future, it’ll be a 200-level (second-year undergraduate) course with no prerequisites, open to all LSA students, so we’re relying on teaspoon languages and Snap! with a little Python. I’m team-teaching with Steve Abney, a computational linguist. Steve actually understands LLMs, and I knew very little. He’s been a patient teacher and a great partner on this. I’ve had to learn a lot, and we’re relying heavily on the great Generative AI Snap! projects that Jens Mönig has been creating, SciSnap from Eckart Modrow, and Ken Kahn’s blocks that provide an API to TensorFlow.

As of January, we are approved to offer two minors: Computing for Expression and Computing for Scientific Discovery. We have about a half dozen students enrolled so far in the minors, which is pretty good for three months in.

PCAS Growth
When I offered the first two courses in Fall 2022, we had 11 students in Expression and 14 in Justice. Now, we’re up to 308 students enrolled. That’s probably our biggest challenge — managing growth and figuring out how to sustain it.

Research in PCAS
We’re starting to publish some of what we’re learning from PCAS. Last November, Gus Evrard and I published a paper at the Koli Calling International Conference on Computing Education Research about the process that we followed co-chairing the LSA Computing Education Task Force to figure out what LSA needed in computing education. That paper, Identifying the Computing Education Needs of Liberal Arts and Sciences Students, won a Best Discussion Paper Award. Here’s Gus and me at the conference banquet when we got our award.

Tamara Nelson-Fromm just presented a paper at the 2024 PLATEAU Workshop on evidence we have suggesting transfer of learning from teaspoon languages into our custom Snap! blocks. I’ll wait until those papers are released to tell you more about that.
SIGCSE 2024 Presentations
We’re pretty busy at SIGCSE 2024, and almost all of our presentations are connected to PCAS.
Thursday morning, I’m on a panel led by Kate Lehman on “Re-Making CS Departments for Generation CS,” 10:45 – 12:00 in Oregon Ballroom 203. This is going to be a hardball panel. Yes, we’ll talk about radical change, but be warned that Aman and I are on the far end of the spectrum. Aman is going to talk about burning down the current CS departments to start over. I’m going to talk about giving up on traditional CS departments ever addressing the needs of Generation CS (because they’re too busy doing something else) and about needing more new programs like PCAS. I’m looking forward to hearing from all the panelists — it’ll be a fun session.

Thursday just after lunch, Neil Brown and I are presenting our paper, Confidence vs Insight: Big and Rich Data in Computing Education Research, 13:45 – 14:10 in Meeting Room D135. It’s an unusual computing education research paper because we’re making an argument, not offering an empirical study. We’re both annoyed at SIGCSE reviewers who ask for contextual information (Who were these students? What programming assignments were they working on? What was their school like?) from big (millions of points) data, and then complain about small sample sizes from rich data with interviews, personal connections, and contextual information. In the paper, we make an argument about what are reasonable questions to ask about each kind of data. In the presentation, the gloves come off, and we show actual reviews. (There are also costumes.)
We don’t really get into why SIGCSE reviewers evaluate papers with criteria that don’t match the data, but I have a hypothesis. SIGCSE reviewers are almost all CS teachers, and they read a paper asking, “Does this impact how I teach? Does it tell me what I need to do in my class? Does it convince me to change?” Those questions are too short-sighted. We need papers that answer those questions to help us with our current problems, but we also need to have knowledge for the next set of problems (like when we start teaching entirely new groups of students). The right question for evaluating a computing education research paper is, “Does this tell the computing education research community (not you the reviewer, personally, based on your experience) something we didn’t know that’s worth knowing, maybe in the future?”

At the NSF Project Showcase Thursday 15:45 – 17:00 at Meeting Rooms E143-144, Tamara Nelson-Fromm is going to show where we are on our Discrete Mathematics project. She’ll demonstrate and share links to our ebooks for solving counting problems with Python and with one of our teaspoon languages, Counting Sheets.

In the “second flock” of Birds of a Feather sessions Thursday 18:30 – 19:20 in Meeting Room D136, we’re going to be a part of Zach Dodds’s group on “Computing as a University Graduation Requirement.” There’s a real movement towards building out computing courses for everyone, not just CS majors, as we’re doing in PCAS. Zach is pushing further, for a general education requirement. I’m excited for the session to hear what everyone is doing.

On Saturday afternoon, 15:30 – 18:30 in Meeting Room B113, I’ll offer a three-hour workshop on how we teach in our PCAS courses for arts and humanities students, with teaspoon languages, custom Snap! blocks, and ebooks. Brian Miller has been teaching these courses this year, and he’s kindly letting me share the materials he’s been developing — he’s made some great improvements over what I did. This workshop was inspired by a comment from Joshua Paley in response to our initial posts about how we’re teaching, where he asked if I’d do a SIGCSE workshop on how we’re teaching PCAS. Will do it on Saturday!

Participatory Design to Support University to High School Curricular Transition/Translation in FIE 2022
Here’s my second blog post on papers we presented during the first year of PCAS. Emma Dodoo is an Engineering Education Research PhD student working with me and co-advised with Lisa Lattuca. When she first started working with me, she wanted a project that supported STEM learning in high school. We happened upon this fascinating project which eventually led to an FIE 2022 paper (see link here).
The University of Michigan Marsal Family School of Education has a collaboration with the School at Marygrove in the Detroit Public Schools – Community District. The School at Marygrove requires all high school students to take a course in Engineering every year. The school is new, so when we came into the story, they were just starting to build an 11th grade Engineering curriculum. Where does an innovative K-12 school find curriculum for not-often-taught subjects like Engineering? It seemed natural to look to the partner university.
The University of Michigan has recently established a Robotics Department with an innovative undergraduate curriculum. The leadership in the U-M School of Education and the School at Marygrove decided to use some projects from the undergraduate curriculum for the 11th grade Engineering curriculum. Emma and I came in to run participatory design sessions to help the high school adopt the university curriculum. We focused on one project in particular, where students would input data from a LIDAR sensor on a robot, then visualize the results. What we wanted to know was: What are the issues that come up when using a university curriculum to inform an innovative high school curriculum?
We started out with a set of interviews. We talked to undergraduates who had been in the Robotics curriculum and asked them: What was hard about the robotics projects? What were things that you wished you knew before you started? They gave us a list of issues, like the realization that equations on a plane could be used to define regions of a picture, that colors could be mapped to numbers and equations, and that pixels could be queried for their colors.
We talked to high school educators about what they wanted students to learn from the project. They were pretty worried about the mathematics required for the project, especially after the students had spent all of 10th grade on-line during the pandemic.
Emma took these objectives and concerns, and generated a set of possible activities to be used in the class. She used Desmos, Geogebra, and our new Pixel Equations teaspoon language. (I mentioned back in this blog post that we were using Pixel Equations in participatory design sessions — that’s when this study was happening.)
Pixel Equations was developed explicitly to address the concerns that the undergraduates raised and that the math educators cared about. Users specify the pixels they want to manipulate (leftmost column) by providing an equation on a plane or an equation based on the RGB channels in the color in the pixel. They specify the desired color changes in terms of the red, green, and blue channels (three columns on the right). The syntax for the boolean expressions and the equations for calculating colors is the same as what students would see in Java, C, or JavaScript. But there are no explicit loops, conditionals, or data — it’s a teaspoon language.
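To make the semantics concrete, here is a rough sketch in ordinary Python (not the Teaspoon language itself) of the computation a Pixel Equations program implies. The selection rule and color change below are hypothetical examples for illustration, not taken from the actual tool:

```python
# Sketch of what a Pixel Equations program does under the hood.
# image is a 2D list of (r, g, b) tuples; the student only writes the
# selector (the boolean equation) and the color expressions -- the
# loop over pixels stays implicit in the teaspoon language.

def apply_pixel_equation(image, selector, new_color):
    """selector(x, y, r, g, b) -> bool picks the pixels to change.
    new_color(r, g, b) -> (r, g, b) computes the replacement color."""
    result = []
    for y, row in enumerate(image):
        new_row = []
        for x, (r, g, b) in enumerate(row):
            if selector(x, y, r, g, b):
                new_row.append(new_color(r, g, b))
            else:
                new_row.append((r, g, b))
        result.append(new_row)
    return result

# Hypothetical rule: in the region of the plane where y > x, remove red.
image = [[(255, 128, 0) for _ in range(3)] for _ in range(3)]
changed = apply_pixel_equation(
    image,
    selector=lambda x, y, r, g, b: y > x,
    new_color=lambda r, g, b: (0, g, b),
)
```

The point of the teaspoon design is that students write only the boolean selector and the per-channel color expressions; the iteration over pixels never appears in their code.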

Emma ran participatory design sessions with stakeholders, including teachers from the School at Marygrove. Her goal was to identify the features that the stakeholders would find valuable, and in particular, to identify the concerns of the high school educators that may not be addressed in the university curriculum. She identified four sets of issues that were important to the stakeholders when transferring curriculum from the university to the high school:
- Prior Knowledge: The students knew Desmos. Using a tool they used before would help a lot when dealing with the new robotics concepts.
- Priming: The computing must be introduced so that there is time for the students to become familiar with it.
- Motivational Play: High school students need more opportunities to see the fun in an activity than undergraduates do.
- Self-efficacy: It’s important for students to feel that they can succeed at the activities.
That’s where the paper ends
All of this design and development happened during the pandemic. We didn’t hear much about what was going on in the high school, and we couldn’t visit. When we talked to our contacts at the School of Education, we found that they didn’t have much news either. It wasn’t until much later that we saw this news item. The 11th grade Engineering class actually didn’t do any of the mathematics activities that we’d helped with. Instead, the school got a grant for a bunch of robots, and the classes focused on directly programming the robots instead. It’s disappointing that they didn’t use any of the things that we worked on, but as I’ve been mentioning in this blog, we find that adoption is really hard. Other factors (like grants and the wow factor of programming a robot) can change priorities.
The paper is interesting for investigating stakeholder issues when transferring activities from university to high schools. Those are useful issues to know about, but even if you address all the issues, you still might not get adoption.
Getting feedback on Teaspoon Languages from CS educators and researchers at the Raspberry Pi Foundation seminar series
In May, I had the wonderful opportunity to speak at the Raspberry Pi Foundation Seminar series. I’ve attended some of these seminars before. I highly recommend them (see past seminars here). It’s a terrific format. The speaker presents for up to a half hour, then everyone gets put into a breakout room for small group discussions. The participants and speaker come back for 30-35 minutes of intensive Q&A — at least, it feels “intensive” from the speaker’s perspective. The questions you get have been vetted through the breakout room process. They’re insightful, and sometimes critical, but always in a constructive way. I was excited about this opportunity because I wanted to make it a hands-on session where the CS teachers and researchers who attended might actually use some Teaspoon Languages and give me feedback on them. I rarely get the chance to work with CS teachers, so I was especially excited.
Sue Sentance wrote up a very nice blog post describing my talk (thank you!) — see here. The video of the talk and discussion is available. You can watch the whole thing, or you can read the blog post and then skip ahead to where the conversation takes place (around 26:00 in the video). If you have been wondering, “Why isn’t Mark just using Logo, Scratch, Snap, or NetLogo? We already have great tools! Why invent new languages that are clearly less powerful than what we already have?”, then you should jump to 34:38 and see Ken Kahn (inventor of ToonTalk) push me on this point.
The whole experience was terrific for me, and I hope that it’s valuable for the viewer and attendees as well. The questions and comments indicated understanding and appreciation for what I’m trying to do, and the concerns and criticisms are valuable input for me and my team. Thanks to Sue, Diana Kirby, the Raspberry Pi Foundation, and all the attendees!
Updates: Workshop on Contextualized Approaches to Introduction to Computing, from the Center for Inclusive Computing at Northeastern University
From Nov 2020 to Nov 2021, I was a Technical Consultant for the Center for Inclusive Computing at Northeastern University, directed by Carla Brodley. (Website here.) CIC works directly with CS departments to create significant improvements in female participation in computer science programs. I’m no longer in the TC role, but I’m still working with CIC and Carla. I’ll be participating in a workshop that they’re running on Monday March 21. I’ll be talking about Media Computation in Python, and will probably show some of the things we’re working on for the new classes here at Michigan.
https://www.khoury.northeastern.edu/event/contextual-approaches-to-introduction-to-computing/
Contextual Approaches to Introduction to Computing
Monday 3/21/22, 3pm EST / 12pm PST
Moderator: Carla Brodley; Speakers: Valerie Barr, Mark Guzdial, Ben Hescott, Ran Libeskind-Hadas, Jakita Thomas
Brought to you by the Center for Inclusive Computing at Northeastern University
In this 1.5 hour virtual workshop, faculty from five different universities in the U.S. will present their approach to creating and offering an introductory computer science class (CS0 or CS1) for students with no prior exposure to computing. The key differentiator of these approaches is that the introduction is contextualized in one area outside of computing throughout the semester. Using the context of areas such as cooking, business, biology, media arts, and digital humanities, these courses appeal to students across the university and have realized spectacular results for student retention in CS0/CS1, persistence to taking additional CS courses, and declaring a major or minor in computing. The importance of attracting students to computing after they enter university is critical to moving the needle on increasing the demographic diversity of students who graduate in computing. Interdisciplinary introductory computing classes provide a pathway to students discovering and enjoying computing after they start university. They also help students with no prior coding experience gain familiarity with computing before taking additional courses required for the CS major. The workshop will begin with a short presentation by each faculty member on their approach to contextualized CS0/CS1 and will touch upon the university politics involved in its creation, the curriculum, and the outcomes. We will then split into smaller breakout sessions five times to enable participants to meet with each of the five presenters for questions and more in-depth conversations.
Updates: Developing the University of Michigan LSA Program in Computing for the Arts and Science
This blog is pretty old. I started it in June 2009 — almost 13 years ago. The pace of posting has varied from every day (today, I can’t understand how I ever did that!) to once every couple of months (most recently). There are things happening around here that are worth sharing and might be valuable to some readers, but I’m not finding much time to write. So, the posts the rest of this week will be quick updates with links for more information.
During most of the pandemic, I co-chaired (with Gus Evrard, a Physics professor and computational cosmologist) the Computing Education Task Force (website) for the University of Michigan’s College of Literature, Science, and the Arts (LSA). LSA is huge — about 20K students. (I blogged about this effort in April of last year.) Our job was to figure out what LSA was doing in computing education, and what else was needed. Back in November, I talked here about the three themes that we identified as computing education in LSA:
- Computing for Discovery: Think computational science, or data science + modeling and simulation.
- Computing for Expression: Think chatbots to Pixar to social media to Media Computation.
- Computing for Justice: Think critical computing and everything that C.P. Snow and Peter Naur warned us about regarding computing back in the 1960s.
Our report was released last month. You can see the release statement here, and the full report here. It’s a big report, covering dozens of interviews, a hundred survey responses, and a huge effort searching over syllabi and course descriptions to find where computing is in LSA. We made recommendations about creating a new program, new courses, new majors and minors, and coordinating computing education across LSA.
Now, we’re in the next phase — acting on the recommendations. LSA bought me out of my teaching for this semester, and it’s my full-time job to define a computing education program for LSA and to create the first courses in the program. We’re calling it the Program for Computing in the Arts and Science (PCAS). I’m designing courses for the Computing for Expression and Computing for Justice themes, in an active dialogue (drawing on the participatory design methods I learned from Betsy DiSalvo) with advisors from across LSA. (There are courses in LSA that can serve as introductions to the Computing for Discovery theme, and Gus is leading the effort to coordinate them.) The plan is to put up the program this summer, and I’ll start teaching the new courses in the Fall.
Computer Science was always supposed to be taught to everyone, and it wasn’t about getting a job: A historical perspective
I gave four keynote talks in the last two months, at SIGITE, the MODELS 2021 Educators’ Symposium, VL/HCC, and CSERC. I’m honored to have been invited to them, but I do suspect that four keynotes in six weeks suggests some “personal issues” with planning and saying “No.” Some of these were recorded, but I don’t believe that any of them are publicly available.
The keynotes had a similar structure and themes. (A lot easier than four completely different keynotes!) My activities in computing education these days are organized around two main projects:
- Defining computing education for undergraduates in the University of Michigan’s College of Literature, Science, and Arts (see earlier blog post referencing this effort);
- Participatory design of Teaspoon languages (mentioned most recently in this blog post).
My goal was to put both of these efforts in a historical context. My argument is that computer science was originally invented to be taught to everyone, but not for economic advantage. I see the LSA effort and our Teaspoon languages connected to the original goals for computer science. The talks were similar to my SIGCSE 2019 keynote (blog post about that talk here, and video version here), but put some of the early history in a different perspective. I’m not going to go into the LSA Computing Education effort or Teaspoon languages here. I’m writing this up because I hope that it’s a perspective on the early history that might be useful to others.
I start out with C.P. Snow.

My PhD advisor, Elliot Soloway, would have all of his students read this book, “The Two Cultures.” Snow was a scientist who bemoaned the split between science and the humanities in Western culture. Snow mostly blamed the humanities. That wasn’t Elliot’s point in having us read it. Elliot wanted us to think about “Who could use what we have to teach, but might not even enter our classroom?”

This is George Forsythe. Donald Knuth claims that George Forsythe first published the term “computer science” in a paper in the Journal of Engineering Education in 1961. Forsythe argued (in a 1968 article) that the most valuable parts of a scientific or technical education were facility with natural language, mathematics, and computer science.

In 1961, the MIT Sloan School held a symposium on “Computers and the World of the Future.” It was an amazing event. Attendees included Gene Amdahl, John McCarthy, Allen Newell, and Grace Hopper. Martin Greenberger’s book in 1962 included transcripts of all the lectures and all the discussants’ comments.

C.P. Snow’s chapter (with Norbert Wiener of Cybernetics as discussant) predicted a world where software would rule our lives, but the people who wrote the software would be outside the democratic process. He wrote, “A handful of people, having no relation to the will of society, having no communication with the rest of society, will be taking decisions in secret which are going to affect our lives in the deepest sense.” He argued that everyone needed to learn about computer science, in order to have democratic control of these processes.
In 1967, Turing laureate Peter Naur made a similar argument (quoting from Michael Caspersen’s paper): “Once informatics has become well established in general education, the mystery surrounding computers in many people’s perceptions will vanish. This must be regarded as perhaps the most important reason for promoting the understanding of informatics. This is a necessary condition for humankind’s supremacy over computers and for ensuring that their use do not become a matter for a small group of experts, but become a usual democratic matter, and thus through the democratic system will lie where it should, with all of us.” The Danish computing curriculum explicitly includes informing students about the risks of technology in society.

Alan Perlis (first ACM Turing Award laureate) made a different argument in his chapter. He suggested that everyone at University should learn to program because it changes how we understand everything else. He argued that you can’t think about integral calculus the same after you learn about computational iteration. He described efforts at Carnegie Tech to build economics models and learn through simulating them. He was foreshadowing modern computational science, and in particular, computational social science.
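As a small illustration of Perlis’s point (my example, not his): the definite integral of x² on [0, 1], which is 1/3 analytically, can be re-understood as computational iteration via a Riemann sum:

```python
# Approximating a definite integral with midpoint rectangles --
# the integral re-expressed as a loop, in Perlis's spirit.

def riemann_sum(f, a, b, n):
    """Approximate the integral of f on [a, b] with n midpoint rectangles."""
    width = (b - a) / n
    total = 0.0
    for i in range(n):
        midpoint = a + (i + 0.5) * width
        total += f(midpoint) * width
    return total

approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 1000)
# approx is close to the analytic answer, 1/3
```

Once a student has written this loop, "the integral" is no longer only a symbol-manipulation rule; it is also a process that accumulates area, step by step.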
Perlis’s discussants included J.C.R. Licklider, grandfather of the Internet, and Peter Elias. Michael Mateas has written a fascinating analysis of their discussion (see paper here) which he uses to contextualize his work on teaching computation as an expressive medium.

In 1967, Perlis, with Herb Simon and Allen Newell, published a definition of computer science in the journal Science. They said that CS was “the study of computers and all the phenomena surrounding them.” I love that definition, but it’s too broad for many computer scientists. I think most people would accept it as a definition of “computing” as a field of study.
Then we fast-forward to 2016, when then-President Obama announced the goal of “CS for All.” He proposed:
Computer science (CS) is a “new basic” skill necessary for economic opportunity and social mobility.
I completely buy the necessity part and the basic skill part, and it’s true that CS can provide economic opportunity and social mobility. But that’s not what Perlis, Simon, Newell, Snow, and Forsythe were arguing for. They were proposing “CS for All” decades before Silicon Valley. There is value in learning computer science that is older and more broadly applicable than the economic benefits.

The first name that many think of when talking about teaching computing to everyone is Seymour Papert. Seymour believed, like Alan Perlis, “that children can learn to program and learning to program can affect the way that they learn everything else.”
The picture in the lower right of this slide is important. On the right is Gary Stager, who kindly shared this picture with me. On the left is Wally Feurzeig, who implemented the programming language Logo with Danny Bobrow when Seymour was a consultant to their group at BBN. In the center is Cynthia Solomon, who collaborated with Seymour on the invention of the Turtle (originally a robot, seen at the top) and the development of Logo curriculum.
Cynthia was the lead author of a recent paper describing the history of Logo (see link here), which included the example of early Logo use on the upper right of this slide, which generates random sentences. Logo is named for the Greek word logos for “word.” The first examples of Logo were about manipulating natural language. Logo has always been used as an expressive medium (music, graphics, storytelling, and animation), as well as for learning mathematics (see the great book Turtle Geometry).
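To give a flavor of that early sentence-generation use, here is a sketch of the same idea in Python; the vocabulary and grammar are invented for illustration, not taken from the original Logo examples:

```python
# Early Logo programs manipulated words and sentences. This sketch
# builds a random sentence by picking one word from each part of speech.

import random

nouns = ["turtle", "robot", "child"]
verbs = ["draws", "spins", "sings"]
adverbs = ["quickly", "happily", "quietly"]

def random_sentence():
    """Return a randomly generated four-word sentence."""
    return (f"the {random.choice(nouns)} "
            f"{random.choice(verbs)} {random.choice(adverbs)}")
```

Each call produces a different sentence, e.g. "the turtle sings quietly" — the same playful manipulation of language that the first Logo examples offered.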

This is the context in which I think about the work with the LSA Computing Education Task Force. Our question was: At an R1 University with a Computer Science & Engineering undergraduate degree and an undergraduate BS in Information (with tracks in information analysis and user experience (UX) design), what else might undergraduates need? What are the purposes for computing that are broader and older than the economic advantages of professional software development? We ended up defining three themes of what LSA faculty do with computing and what they want their students to know:
- Computing for Discovery – LSA computational scientists create models and simulate them (not just analyze data that already exists), just as Alan Perlis suggested in 1961.
- Computing for Expression – Computing has created new ways for humans to express themselves, which is important to study and to use to explore, invent, and create new forms of expression, as the Logo community did starting in the 1960’s.
- Computing for Justice – LSA scholars investigate how computing systems can encode and exacerbate inequities, which requires some understanding of computing, just as C.P. Snow talked about in 1961.
We develop our Teaspoon languages to meet the needs of teachers in teaching non-CS and even non-STEM classes. We argue that there are computing education learning objectives that we address with Teaspoon languages, even if they don’t include common language features like for, while, and if statements. A common argument against our work in Teaspoon languages is that we’re undertaking a Sisyphean task. Computing is what it is, programming languages are what they are, and education is not going to be a driving force for changing anything in computing.
And yet, that’s exactly how the desktop user interface was invented.

Alan Kay (another Turing laureate in this story), Adele Goldberg, and Dan Ingalls led the development of Smalltalk at Xerox PARC in the 1970s. The goal for Smalltalk was to realize Alan’s vision of a Dynabook, using the computer as a tool for learning. The WIMP (overlapping Windows, Icons, Menus, and mouse Pointer) interface was invented in order to achieve computing education goals. The user interface that you are using right now was invented for the purposes of education.
The Smalltalk work tells us that we don’t have to accept computing as it is. Computing education today focuses mostly on preparing students to be professional software developers, using the tools of professional software development. That’s important and useful, but it often eclipses other, broader goals for learning computing. The earliest goals for computing education were different from those in most of today’s computing education. We should question our goals, our tools, and our assumptions. Computing for everyone is likely going to look different from the computing we have today, which has been defined for a narrow set of goals and for far fewer people than “all.”