I’ve been blogging since 2005, but not all old posts have been imported here.
We recently launched tag2upload, aka cloud dgit or dgit-as-a-service. This was something of a culmination of work I’ve been doing since 2016 towards modernising Debian workflows, so I thought I’d write a short personal retrospective.
When I started contributing to Debian in 2015, I was not impressed with how packages were represented in Git by most package maintainers, and wanted a pure Git workflow. I read a couple of Joey Hess’s blog posts on the matter, “a rope ladder to the dgit treehouse” and “upstream git repositories”, and made a bug report against dgit hoping to tie some things together.
The results of that early work were the git-deborig(1) program and the dgit-maint-merge(7) tutorial manpage. Starting with Joey’s workflow pointers, I developed a complete, pure Git workflow that I thought would be suitable for all package maintainers in Debian. It was certainly well-suited for my own packages. It took me a while to learn that there are packages for which this workflow is too simple. We now also have the dgit-maint-debrebase(7) workflow which uses git-debrebase, something which wasn’t invented until several years later. Where dgit-maint-merge(7) won’t do, you can use dgit-maint-debrebase(7), and still be doing pretty much pure Git. Here’s a full, recent guide to modernisation.
The next most significant contribution of my own was the push-source
subcommand for dgit. dgit push required a preexisting .changes file
produced from the working tree. I wanted to make dgit push-source prepare
that .changes file for you, but also not use the working tree, instead
consulting HEAD. The idea was that you were doing a git push – which
doesn’t care about the working tree – direct to the Debian archive, or as
close as we could get. I implemented that at DebConf18 in Taiwan, I think,
with Ian, and we also did a talk on git-debrebase. We ended up having to
change it to look at the working tree in addition to HEAD to make it work as
well as possible, but I think that the idea of a command which was like doing
a Git push direct to the archive was perhaps foundational for us later wanting
to develop tag2upload. Indeed, while tag2upload’s client-side tool
git-debpush does look at the working tree, it doesn’t do so in a way that is
essential to its operation. tag2upload is dgit push-source-as-a-service.
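In the simplest case, the upload really is just a single command run from your packaging branch – something like this (a sketch; see dgit(1) for options you may additionally need, such as for a first upload):

    % dgit push-source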
And finally we come to tag2upload, a system Ian and I designed in 2019 during a two-person sprint at his place in Cambridge, while I was visiting the UK from Arizona. With tag2upload, appropriately authorised Debian package maintainers can upload to Debian with only pure Git operations – namely, making and pushing a signed Git tag to Debian’s GitLab instance. Although we had a solid prototype in 2019, we only finally launched it last month, February 2026. This was mostly due to political delays, but also because we have put in a lot of hours making it better in various ways.
Looking back, one thing that seems notable to me is that the core elements of the pure Git workflows haven’t changed much at all. Working out all the details of dgit-maint-merge(7), designing and writing git-debrebase (Ian’s work), and then working out all the details of dgit-maint-debrebase(7), are the important parts, to me. The rest is mostly just large amounts of compatibility code. git-debrebase and dgit-maint-debrebase(7) are very novel but dgit-maint-merge(7) is mostly just an extrapolation of Joey’s thoughts from 13 years ago. And yet, adoption of these workflows remains low.
People prefer to use what they are used to using, even if the workflows have significant inconveniences. That’s completely understandable; I’m really interested in good workflows, but most other contributors care less about it. But you would expect enough newcomers to have arrived in 13 years that the new workflows would have a higher uptake. That is, packages maintained by contributors that got involved after these workflows became available would be maintained using newer workflows, at least. But the inertia seems to be too strong even for that. Instead, new contributors used to working purely out of Git are told they need to learn Debian’s strange ways of representing things, tarballs and all. It doesn’t have to be that way. We hope that tag2upload will make the pure Git workflows seem more appealing to people.
I miss the US more and more, and have recently been trying to perfect Southern Biscuits using British ingredients. It took me eight or nine tries before I was consistently getting good results. Here is my recipe.
Ingredients
- 190g plain flour
- 60g strong white bread flour
- 4 tsp baking powder
- ¼ tsp bicarbonate of soda
- 1 tsp cream of tartar (optional)
- 1 tsp salt
- 100g unsalted butter
- 180ml buttermilk, chilled
- If your buttermilk is thicker than the consistency of ordinary milk, you’ll need around 200ml.
- extra buttermilk for brushing
Method
- Slice and then chill the butter in the freezer for at least fifteen minutes.
- Preheat oven to 220°C with the fan turned off.
- Twice sieve together the flours, leaveners and salt. Some salt may not go through the sieve; just tip it back into the bowl.
- Cut cold butter slices into the flour with a pastry blender until the mixture resembles coarse crumbs: some small lumps of fat remaining is desirable. In particular, the fine crumbs you are looking for when making British scones are not wanted here. Rubbing in with fingertips just won’t do; biscuits demand keeping things cold even more than shortcrust pastry does.
- Make a well in the centre, pour in the buttermilk, and stir with a metal spoon until the dough comes together and pulls away from the sides of the bowl. Avoid overmixing, but I’ve found that so long as the ingredients are cold, you don’t have to be too gentle at this stage and can make sure all the crumbs are mixed in.
- Flour your hands, turn dough onto a floured work surface, and pat together into a rectangle. Some suggest dusting the top of the dough with flour, too, here.
- Fold the dough in half, then gather any crumbs and pat it back into the same shape. Turn ninety degrees and do the same again, until you have completed a total of eight folds, two in each cardinal direction. The dough should now be a little springy.
- Roll to about ½ inch thick.
- Cut out biscuits. If using a round cutter, do not twist it, as that seals the edges of the biscuits and so spoils the layering.
- Transfer to a baking sheet, placing the biscuits close together (this helps them rise). Flour your thumb and use it to press an indent into the top of each biscuit (this helps them rise straight), then brush with buttermilk.
- Bake until flaky and golden brown: about fifteen minutes.
Gravy
It turns out that the “pepper gravy” that one commonly has with biscuits is just a white/béchamel sauce made with lots of black pepper. I haven’t got a recipe I really like for this yet. Better is a “sausage gravy”; again this has a white sauce as its base, I believe. I have a vegetarian recipe for this to try at some point.
Variations
- These biscuits do come out fluffy but not so flaky. For that you can try using lard instead of butter, if you’re not vegetarian (vegetable shortening is hard to find here).
- If you don’t have a pastry blender and don’t want to buy one you can try not slicing the butter and instead coarsely grating it into the flour out of the freezer.
- An alternative to folding is cutting and piling the layers.
- You can try rolling out to 1–1½ inches thick.
- Instead of cutting out biscuits you can just slice the whole piece of dough into equal pieces. An advantage of this is that you don’t have to re-roll the offcuts, and re-rolling also spoils the layering.
- Instead of brushing with buttermilk, you can take them out after they’ve started to rise but before they’ve browned, brush them with melted butter and put them back in.
Notes
- I’ve had more success with Dale Farm’s buttermilk than Sainsbury’s own. The former is much runnier.
- Southern culture calls for biscuits to be made the size of cats’ heads.
- Bleached flour is apparently usual in the South, but is illegal(!) here. Apparently bleaching can have some effect on the development of the gluten which would affect the texture.
- British plain flour is made from soft wheat and has a lower percentage of protein/gluten, while American all-purpose flour is often(?) made from harder wheat and has more protein. In this recipe I mix plain and strong white flour, in a ratio of 3:1, to emulate American all-purpose flour.
  I am not sure why this works best. In the South they have soft wheats too, and lower protein percentages. The famous White Lily flour is 9%. (Apparently you can mix US cake flour and US all-purpose flour in a ratio of 1:1 to achieve that; in the UK, Shipton Mill sell a “soft cake and pastry flour” which has been recommended to me as similar.)
  This would suggest that British plain flour ought to be closer to Southern flour than the standard flour available in most of the US. But my experience has been that the biscuits taste better with the plain and strong white 3:1 mix. Possibly Southerners would disprefer them. I got some feedback that good biscuits are about texture and moistness and not flavour.
- Baking powder in the US is usually double-acting but ours is always single-acting, so we need double quantities of that.
I finally figured out how to have an application launcher with my usual Emacs completion keybindings:
This is with Icomplete. If you use another completion framework it will look different. Crucially, it’s what you are already used to using inside Emacs, with the same completion style (flex vs. orderless vs. …), bindings etc..
Here is my Sway binding:
bindsym p exec i3-dmenu-desktop \
--dmenu="dmenu_emacsclient 'Application: '", \
mode "default"
(for me this is inside a mode { } block)
The dmenu_emacsclient script is
here.
It relies on the function spw/sway-completing-read from my
init.el.
As usual, this code is available for your reuse under the terms of the GNU GPL. Please see the license and copyright information in the linked files.
You also probably want a for_window directive in your Sway config to enable floating the window, and perhaps to resize it. Enjoy having your Emacs completion bindings for application launching, too!
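For the for_window part, something along these lines should do (a sketch only: the app_id criterion here is a placeholder, and depends on what name the frame created by dmenu_emacsclient ends up with – swaymsg -t get_tree will tell you):

    for_window [app_id="emacs-launcher"] floating enable, resize set 800 400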
I spent eight years doing teaching and research in Philosophy at the University of Arizona, in Tucson, Arizona, from 2015 to 2023. I now have a love for America and its people, even though I am not sure I could ever live there again. Americans would say that Tucson is an outlier, an odd post-frontier town which is not reflective of the rest of the nation’s cities. And I only really visited New York, the Bay Area, and two towns in Mississippi, so I mostly take them at their word. But I could see something in common between these places that’s distinct from anywhere else I’ve lived. I will not seek to capture that here, but instead focus on how life in Tucson was, and some things I learned.
When I first arrived I was very unsure about whether it would be a good idea to stay. I was ambivalent about reentering academia, and uneasy with the contractual terms under which I would be able to study there without paying any money for it. Once I did decide to stay for at least one semester, I tried to get myself set up with a daily routine that would be suitable for making progress with my classes, while also allowing me time to pursue my other interests. So I went to check out the library, that being where I’d done all my work as an undergraduate. I was appalled to find that there wasn’t a culture of silence. Supposedly the upper floors were designated as quiet, but the only way I could feel confident in not being interrupted was to find one of the small study desks sequestered in far corners, with those moveable shelves of books they have in university libraries between me and everyone else.
This initial problem with finding quiet and concentration somewhat epitomises a lot of my academic experiences in Tucson. I felt that the academic culture in the US was a noisy one: talking loudly to each other was valued a lot more highly than it had been in the UK, and real deep reading and thinking was something that people did on their own, at home, and didn’t talk about much. You talked about all the writing you had been doing, and indeed about what people you’d read had said, but with the latter it was as though the actual reading had happened outside of time, and the things happening within time were on-campus activities, and the hours of writing. You might say, well, it was grad school, of course the focus is more on producing one’s own work. But we did read a lot, in fact, and it’s not as though undergraduate Philosophy at Oxford didn’t involve regularly spending a lot of time writing, even if tute essays are something strange and staccato when compared to what we tried to write in grad school. And this is not to say that I didn’t learn and develop a great deal from many of those loud conversations, both in and out of seminar, but I think a productive campus needs more quiet, too.
We had two kinds of classes, lecture-style with both undergrad and graduate students, though in smaller groups than undergrad, and seminars with almost exclusively graduate students. Many people would take as many seminars as they were allowed to, and we all continued to join seminars once we’d completed coursework. But a few of us, including me, joined as many lectures as we could, even after completing coursework. I just love listening to masters of their domains of study. This was distinctly uncool – you’ve got to practice producing in order to become a philosopher yourself, would go the thought. But it’s not as if I didn’t produce too. And you can’t be disdainful of continuing to pump good philosophy into your head. Perhaps my attraction to the lecture classes was because it was somewhat closer to the deep reading with which I was familiar, that proved elusive on my American campus. You have to do the hard work to make philosophical progress, but you can’t engage with philosophy only by doing what feels like hard graft if you want to succeed, I think. You have to engage with it in other ways too, like just by listening.
A quality of the Americans I knew well which struck me early was their generosity: with their time, in their friendliness, and just materially. I mean to include here peers who were my friends, as well as people who were part of my life for extended periods, but with whom I didn’t have enough in common for friendship. When I first arrived in Tucson I lived in a house in the Sam Hughes neighbourhood, owned by the parents of one of my two roommates, Nick. He was from Phoenix, and was taking a second undergraduate degree after deciding that he didn’t really want to follow in his father’s footsteps and become a doctor, but wanted to be a programmer. Nick and I would drive to the supermarket together every Saturday in his big Ford truck, and we developed a habit of listening to the Eagles’ Take It Easy on the ride back. I never signed a lease for living in that place. At one point I was short of American money after spending a lot on a summer trip, and I asked whether I could pay my rent erratically for a while, as my stipend came in over the following academic year, rather than transferring savings from the UK. It was no problem to do this. One Spring Break and one Thanksgiving, I joined Nick in driving up to Paradise Valley, Phoenix, to stay with his family. His mother had sat in the state House of Representatives as a Republican, and had two very yappy chihuahuas, traumatised as they had been by a previous owner. At one point they had to stay with us down in Tucson for a few days. One of them refused to walk on the tile floor, and we had to create a bridge of doormats between the carpeted room in which it was sleeping and the front door.
Nick introduced me to the American love for pulp cinema, which we don’t really have in the UK. Once Nick graduated and I developed closer friendships in my department, I watched a lot more such films with philosophers.
After living with Nick I lived alone, for nine months, in a small terraced bungalow, for barely any rent. The people around me were mostly economically deprived retirees, and some young people working jobs like driving some kind of tractor around the extended grounds of the airport, alone, far away from the planes. At one point a different corporation took over management of the properties, and they tried to make us pay an additional fee for the laundry room that had until then been included. They did this by installing a lock, and telling us we had to come down to the office to pay the new fee and receive a key. My neighbour Wilma and I took the bus down to the office and objected, and eventually got keys for free. Now that I think about it, I don’t know whether other existing tenants ended up paying for it. From this I improved my understanding of how the economically deprived, even in the West, can get casually abused by businesses.
Wilma would sit behind her screen door in the evening, without the lights on, and a disembodied greeting would float out to me, among the crying cicadas, as I biked up to my own place. I had a nine month lease and I left that place right after because I was fed up with the insects infesting the place. But at the same time, living there was when I figured out how to be happy with my life in Tucson, and I maintained that happiness from then until the pandemic, when everything got hard for most everyone. Wilma was generous like Nick.
Before I said goodbye to Nick and moved in next door to Wilma, I tried to live a life involving the kind of variety that my life in Korea had had, before I went to Tucson. I was continually frustrated in this, because it was too distant from the lives that the people around me led for me to be able to figure out how to do it there, and more mundanely, because of how car-centric Tucson is. When I moved into my place on my own I somehow decided that I would try focusing entirely on my university work, and I also expanded that work a bit by registering for a seminar in Japanese literature up at the East Asian Studies department. My future PhD thesis supervisor Julia joined me for that seminar and one more the next semester, and I was able to draw upon some novels we read for my thesis.
I didn’t have Internet access at my little place, and we had finally got some designated-silent shared offices for grad students, in addition to the noisy ones where people held office hours, and talked loudly about philosophy. Suddenly my life got a lot more focused and quieter. I would get up and scramble an egg with some cheese and black pepper, and have it in a pitta bread-like thing which I sliced, froze, and defrosted in the toaster. I’d head to campus, early, and write. I’d do my classes and reading. Then I’d go swim in the big outside pool the university had, in the dark. I’d do one or two lengths at a time and then hold onto the edge and just think hard. I especially did this after my literature classes. They ran until 6pm, I think, and then I’d go to the pool, and do my lengths interspersed with thinking hard about the literature we’d discussed. Then after a long time out I’d go home late, and listen to pre-downloaded tabletop roleplaying podcasts. I slept the best I ever have, in the quiet among the noises of insects – it really was quieter despite all that noise – on this wonderful Japanese floor bed I’d found on Amazon. What I discovered during that time was the power of a simple life, I think. Or perhaps it was more about not trying to live a more complex life than the place you live allows. Or perhaps it wasn’t anything more than about the benefits of giving up fighting against a prevalent culture of workaholism – but at least, it was giving in to that situation in a way which strongly benefitted me. Going with the flow, or something.
I tried to build upon my new focus with the next phase of time in Tucson. I moved into the university’s grad student dorms, living right next to campus, in the middle of a commercial district for students that felt like one had left Tucson and gone somewhere more contemporary. This was a change I appreciated a lot, having, as I said, grown tired with all the bugs. At this time I got to know my now-fiancée Ke. I had finished with class credits but sat in on so many classes and reading groups, while still continuing to write a lot, that my work life didn’t change too much. While most people would start teaching their own classes at this point, I asked if I could continue to be assigned teaching assistant roles instead; I started teaching on my own only during the pandemic. My social life, aside from time with Ke and her roommate, mostly involved cycling East for forty minutes or so, to a house in which three fellow philosophers lived. I loved those evening rides there and nighttime rides back. Tucson is a dark city for the astronomy, and it’s also flat and bike-friendly, so for most of that journey I was on a route where various things had been set up to discourage cars from staying on the same roads as cyclists. The friends I had who lived in that house, Brandon, Tyler and Nathan, and later Nathan’s partner Meg and Tyler’s partner Amanda, were now the humblingly generous Americans in my life. We got two tabletop roleplaying groups going, with me and Nathan running a game each, and playing in each other’s. Later we were a pandemic pod, watching through Terrace House: Opening New Doors together.
I also significantly ramped up my involvement in Debian at around this time. Each Saturday morning I would visit a local coffee roasters, Caffe Lucè, have an excellent bagel and a couple of cups of coffee with half-and-half, and work on my packages.
I’ve described how I built for myself something of a sense of belonging studying Philosophy in Tucson. But ultimately, it did not compare in this regard to the place where I was most content, which was Balliol, my Oxford college. The Arizona grad students would go out for beer at a nice place called Time Market on some Friday nights, and while it was often a very good time, I would walk home with this heavy feeling of disappointment. I can now identify this as the lack of a sense of camaraderie and belonging which I thought was essential to a productive academic environment. I can now also see that I had an intellectual kinship with Julia, Nathan, Tyler, Ke and others which was just as valuable, but it was still something I had only with individuals, lacking a sense of being part of something not only bigger but also concrete, actually in the world. The pressures of professional academia in the US didn’t seem to leave us enough space to have what I remember us having had at Balliol. Not that the Balliol I inhabited still exists – it was dependent as much on the place as the people I was there with.
The advent of the pandemic, and the remainder of my time in Tucson after the pandemic, eroded this life I’d figured out. Part of that was that our department was eroding too – a lot of people moved away to be with their partners or families when lockdowns began, and faculty retired (and in one case tragically died), and so we lost a critical mass of intellectually energetic individuals. This hit me hard, and I did not have the emotional resources remaining, post-pandemic, to try to kick-start things again, as previous versions of myself might have tried to do. I find, though, that most of my memories of life and Philosophy in Tucson are of the good times, and I find it easy, now at least, to write a post like this one.
When I think back to all the classes I took, discussions I had and essays I wrote and revised, I can see significant intellectual development. At the same time, it was as though my development in other senses was put on hold for those eight years, in a way that it had not been at Oxford and in Korea. (I even find myself wanting to say that my whole life was put on hold, but that would be hyperbolic even if it felt that way sometimes, for as I have said, I developed many important friendships.) Postgraduate Philosophy was just too consuming. I don’t know if it could have been any other way, but I knew all along that it had to stop at some point; I knew that I couldn’t put all the other respects in which I wanted to grow on hold forever. Somehow, Oxford got this balance right: it managed to be just as satisfyingly intense and thrilling, without being quite all-consuming. Of course, I probably have rose-tinted glasses. It does seem, though, that European hard work manages to be more balanced, at least for what I seek to achieve, than American hard work.
During my final year, a current postdoc at Oxford happened to visit Tucson to speak at a political philosophy conference. Our quiet (to her), old-fashioned, relatively informal academic life out in the desert as grad students seemed to have a lot of advantages over hers in Oxford, even though she had completed her doctorate and obtained an academic job, while we were still students. Until I met her, I had taken for granted, I think, all the ways that academic life in Tucson was quite like Balliol undergrad had been – she told me how her colleagues are all on Twitter, but none of us were, really. When I first arrived in Tucson I found it distressing how much more of an ivory tower it seemed, with Oxford being such a politically engaged place. In the end I am very glad I did a humanities PhD where I did, and am deeply grateful to America.
Ian suggested I share the highly involved build process for my doctoral dissertation, which I submitted for examination earlier this year. Beyond compiling a PDF from Markdown and LaTeX sources, there are just two simple-seeming goals: produce a PDF that passes PDF/A validation, for long-term archival, and replace the second page with a scanned copy of the page after it was signed by the examiners. Achieving these two things reproducibly turned out to require a lot of complexity.
First we build dissertation1.tex out of a number of LaTeX and Markdown files, and a Pandoc metadata.yml, using Pandoc in a Debian sid chroot. I had to do the latter because I needed a more recent Pandoc than was available in Debian stable at the time, and didn’t dare upgrade anything else. Indeed, after switching to the newer Pandoc, I carefully diff’d dissertation1.tex to ensure nothing other than what I needed had changed.
dissertation1.tex: preamble.tex \
citeproc-preamble.tex \
committee.tex \
acknowledgements.tex \
dedication.tex \
contents.tex \
abbreviations.tex \
abstract.tex \
metadata.yaml \
template.latex \
philos.csl \
philos.bib \
ch1.md ch1_appA.md ch2.md ch3.md ch3_appB.md ch4.md ch5.md
schroot -c melete-sid -- pandoc -s -N -C -H preamble.tex \
--template=template.latex -B committee.tex \
-B acknowledgements.tex -B dedication.tex \
-B contents.tex -B abbreviations.tex -B abstract.tex \
ch1.md ch1_appA.md ch2.md ch3.md ch3_appB.md ch4.md ch5.md \
citeproc-preamble.tex metadata.yaml -o $@
With hindsight, I think that I should have eschewed Pandoc in favour of plain LaTeX for a project as large as this was. Pandoc is good for journal submissions, where one is responsible for the content but not really the presentation. However, one typesets one’s own dissertation, without anyone else’s help. I decided to commit dissertation1.tex to git, because Pandoc’s LaTeX generation is not too stable.
We then compile a first PDF. My Makefile comments say that pdfx.sty requires this particular xelatex invocation. pdfx.sty is supposed to make the PDF satisfy the PDF/A-2B long term archival standard … but dissertation1.pdf doesn’t actually pass PDF/A validation. We instead rely on GhostScript to produce a valid PDF/A-2B, at the final step. But we have to include pdfx.sty at this stage to ensure that the hyperlinks in the PDF are PDF/A-compatible – without pdfx.sty, GhostScript rejects hyperref’s links.
dissertation1.pdf: \
dissertation1.tex dissertation1.xmpdata committee_watermark.png
xelatex -shell-escape -output-driver="xdvipdfmx -z 0" $<
xelatex -shell-escape -output-driver="xdvipdfmx -z 0" $<
xelatex -shell-escape -output-driver="xdvipdfmx -z 0" $<
As I said, the second page of the PDF needs to be replaced with a scanned version of the page after it was signed by the examiners. The usual tool to stitch PDFs together is pdftk. But pdftk loses the PDF’s metadata. For the true, static metadata like the title, author and keywords, it would be no problem to add them back. But the metadata that’s lost includes the PDF’s table of contents, which PDF readers display in a sidebar, with clickable links to chapters, and the sections within those. This information is not static because each time any of the source Markdown and LaTeX files change, there is the potential for the table of contents to change. So we have to extract all the metadata from dissertation1.pdf and save it to one side, before we stitch in the scanned page. We also have to hack the metadata to ensure that the second page will have the correct orientation.
SED = /^PageMediaNumber: 2$$/ { n; s/0/90/; n; s/612 792/792 612/ }
KEYWORDS = virtue ethics, virtue, happiness, eudaimonism, good lives, final ends
dissertation1_meta.txt: dissertation1.pdf
printf "InfoBegin\nInfoKey: Keywords\nInfoValue: %s\n%s\n" \
"${KEYWORDS}" "$$(pdftk $^ dump_data)" \
| sed "${SED}" >$@
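When adjusting the SED expression it’s worth eyeballing the result before stitching anything – something like this (not part of the Makefile):

    make dissertation1_meta.txt
    grep -A3 'PageMediaNumber: 2' dissertation1_meta.txt

The second page’s entries should show the rotation set to 90 and the page dimensions swapped.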
Now we can stitch in the signed page, and then put the metadata back. You can’t do this in one invocation of pdftk, so far as I could see.
dissertation1_stitched_updated.pdf: \
dissertation1_stitched.pdf dissertation1_meta.txt
pdftk dissertation1_stitched.pdf \
update_info dissertation1_meta.txt output $@
dissertation1_stitched.pdf: dissertation1.pdf
pdftk A=$^ \
B=$$HOME/annex/philos/Dissertation/committee_signed.pdf \
cat A1 B1 A3-end output $@
Finally, we use GhostScript to reprocess the PDF into two valid PDF/A-2Bs, one optimised for the web. This requires supplying a colour profile, a PDFA_def.ps postscript file, a whole sequence of GhostScript options, and some raw postscript on the command line, which gives the PDF reader some display hints.
GS_OPTS1 = -sDEVICE=pdfwrite -dBATCH -dNOPAUSE -dNOSAFER \
-sColorConversionStrategy=UseDeviceIndependentColor \
-dEmbedAllFonts=true -dPrinted=false -dPDFA=2 \
-dPDFACompatibilityPolicy=1 -dDetectDuplicateImages \
-dPDFSETTINGS=/printer -sOutputFile=$@
GS_OPTS2 = PDFA_def.ps dissertation1_stitched_updated.pdf \
-c "[ /PageMode /UseOutlines \
/Page 1 /View [/XYZ null null 1] \
/PageLayout /SinglePage /DOCVIEW pdfmark"
all: Whitton_dissert_web.pdf Whitton_dissert_gradcol.pdf
Whitton_dissert_gradcol.pdf: \
PDFA_def.ps dissertation1_stitched_updated.pdf srgb.icc
gs ${GS_OPTS1} ${GS_OPTS2}
Whitton_dissert_web.pdf: \
PDFA_def.ps dissertation1_stitched_updated.pdf srgb.icc
gs ${GS_OPTS1} -dFastWebView=true ${GS_OPTS2}
And here’s PDFA_def.ps, based on a sample in the GhostScript docs:
% Define an ICC profile :
/ICCProfile (srgb.icc)
def
[/_objdef {icc_PDFA} /type /stream /OBJ pdfmark
[{icc_PDFA}
<<
/N 3
>> /PUT pdfmark
[{icc_PDFA} ICCProfile (r) file /PUT pdfmark
% Define the output intent dictionary :
[/_objdef {OutputIntent_PDFA} /type /dict /OBJ pdfmark
[{OutputIntent_PDFA} <<
/Type /OutputIntent % Must be so (the standard requires).
/S /GTS_PDFA1 % Must be so (the standard requires).
/DestOutputProfile {icc_PDFA} % Must be so (see above).
/OutputConditionIdentifier (sRGB)
>> /PUT pdfmark
[{Catalog} <</OutputIntents [ {OutputIntent_PDFA} ]>> /PUT pdfmark
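The final files can then be put through an external PDF/A validator as a sanity check. For example, with veraPDF installed, something like this (a sketch, not part of the Makefile above):

    verapdf --flavour 2b Whitton_dissert_gradcol.pdf

If GhostScript has done its job, the report should show the file conforming to PDF/A-2B.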
Phew!
I’ve just released Consfigurator 1.3.0, with some readtable enhancements. So now instead of writing
(firewalld:has-policy "athenet-allow-fwd"
#>EOF><?xml version="1.0" encoding="utf-8"?>
<policy priority="-40" target="ACCEPT">
<ingress-zone name="trusted"/>
<egress-zone name="internal"/>
</policy>
EOF)
you can write
(firewalld:has-policy "athenet-allow-fwd" #>>~EOF>>
<?xml version="1.0" encoding="utf-8"?>
<policy priority="-40" target="ACCEPT">
<ingress-zone name="trusted"/>
<egress-zone name="internal"/>
</policy>
EOF)
which is a lot more readable when it appears in a list of other properties. In addition, instead of writing
(multiple-value-bind (match groups)
(re:scan-to-strings "^uid=(\\d+)" (connection-connattr connection 'id))
(and match (parse-integer (elt groups 0))))
you can write just (#1~/^uid=(\d+)/p (connection-connattr connection 'id)).
On top of the Perl-inspired syntax, I’ve invented the new trailing option p
to attempt to parse matches as numbers.
Another respect in which Consfigurator’s readtable has become much more useful
in this release is that I’ve finally taught Emacs about these reader macros,
such that unmatched literal parentheses within regexps or heredocs don’t cause
Emacs (and especially Paredit) to think that the code couldn’t be valid Lisp.
Although I was able mostly to reuse propertising algorithms from the built-in
perl-mode, I did have to learn a lot more about how parse-partial-sexp
really works, which was pretty cool.
The emacsclient(1) program is used to connect to Emacs running as a daemon. emacsclient(1) can go in your EDITOR/VISUAL environment variables so that you can edit things like Git commit messages and sudoers files in your existing Emacs session, rather than starting up a new instance of Emacs. It’s not only that this is usually faster, but also that it means you have all your session state available – for example, you can yank text from other files you were editing into the file you’re now editing.
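For example, a minimal sketch for a shell startup file (many people add options such as -t here; adjust to taste):

    export EDITOR=emacsclient
    export VISUAL=emacsclient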
Another, somewhat different use of emacsclient(1) is to open new Emacs frames
for arbitrary work, not just editing a single, given file. This can be in a
terminal or under a graphical display manager. I use emacsclient(1) for this
purpose about as often as I invoke it via EDITOR/VISUAL. I use emacsclient
-nc to open new graphical frames and emacsclient -t to open new text-mode
frames, the latter when SSHing into my work machine from home, or similar. In
each case, all my buffers, command history etc. are available. It’s a real
productivity boost.
Some people use systemd socket activation to start up the Emacs daemon. That
way, they only need ever invoke emacsclient, without any special options,
and the daemon will be started if not already running. In my case, instead,
emacsclient on PATH is a wrapper
script that checks
whether a daemon is running and starts one if necessary. The main reason I
have this script is that I regularly use both the installed version of Emacs
and in-tree builds of Emacs out of emacs.git, and the script knows how to
choose what to launch and what to try to connect to. In particular, it
ensures that the in-tree emacsclient(1) is not used to try to connect to the
installed Emacs, which might fail due to protocol changes. And it won’t use
the in-tree Emacs executable if I’m currently recompiling Emacs.
I’ve recently enhanced my wrapper script to make it possible to have the primary Emacs daemon always running under gdb. That way, if there’s a seemingly-random crash, I might be able to learn something about what happened. The tricky thing is that I want gdb to be running inside an instance of Emacs too, because Emacs has a nice interface to gdb. Further, gdb’s Emacs instance – hereafter “gdbmacs” – needs to be the installed, optimised build of Emacs, not the in-tree build, such that it’s less likely to suffer the same crash. And the whole thing must be transparent: I shouldn’t have to do anything special to launch the primary session under gdb. That is, if right after booting up my machine I execute
% emacsclient foo.txt
then gdbmacs should start, it should then start the primary session under gdb, and finally the real emacsclient(1) should connect to the primary session and request editing foo.txt. I’ve got that all working now, and there are some nice additional features. If the primary session hits a breakpoint, for example, then emacsclient requests will be redirected to gdbmacs, so that I can still edit files etc. without losing the information in the gdb session. I’ve given gdbmacs a different background colour, so that if I request a new graphical frame and it pops up with that colour, I know that the main session is wedged and I might like to investigate.
First attempt: remote attaching
My first attempt, which was running for several weeks, had a different
architecture. Instead of having gdbmacs start up the primary session, the
primary session would start up gdbmacs, send over its own PID, and ask gdbmacs
to use gdb’s functionality for attaching to existing processes. In
after-init-hook I had code to check whether we are an Emacs that has just
started up out of my clone of emacs.git, and if so, we invoke
% emacsclient --socket-name=gdbmacs --spw/installed \
--eval '(spw/gdbmacs-attach <the pid>)'
The --spw/installed option asks the wrapper script to start up gdbmacs using
the Emacs binary on PATH, not the one in emacs.git/. (We can’t use the
server-eval-at function because we need the wrapper script to start up
gdbmacs if it’s not already running.)
Over in gdbmacs, the spw/gdbmacs-attach function then did something like
this:
(let ((default-directory (expand-file-name "~/src/emacs/")))
(gdb (format "gdb -i=mi --pid=%d src/emacs" pid))
(gdb-wait-for-pending (lambda () (gud-basic-call "continue"))))
Having gdbmacs attach to the existing process is more robust than having
gdbmacs start up Emacs under gdb. If anything goes wrong with attaching, or
with gdbmacs more generally, you’ve still got the primary session running
normally; it just won’t be under a debugger. More significantly, the wrapper
script doesn’t need to know anything about the relationship between the two
daemons. It just needs to be able to start up both in-tree and installed
daemons, using the --spw/installed option to determine which. The
complexity is all in Lisp, not shell script (the wrapper is a shell script
because it needs to start up fast).
The disadvantage of this scheme is that the primary session’s stdout and
stderr are not directly accessible to gdbmacs. There is a function
redirect-debugging-output to deal with this situation, and I experimented
with having the primary session call this and send the new output filename to
gdbmacs, but it’s much less smooth than having gdbmacs start up the primary
session itself.
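For the record, the call in question is roughly just the following, with a path of your choosing (a sketch; see the function’s docstring for details):

    ;; In the primary session: redirect this Emacs's debugging output
    ;; (stderr) to a file whose name can then be sent over to gdbmacs.
    (redirect-debugging-output "/tmp/emacs-primary-debug-output")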
I think most people would probably prefer this scheme. It’s definitely cleaner to have the two daemons start up independently, and then have one attach to the other. But I decided that I was willing to complexify my wrapper script in order to have the primary session’s stdout and stderr attached to gdbmacs in the normal way.
Second attempt: daemons starting daemons
In this version, the relevant logic is shifted out of Lisp into the wrapper
script. When we execute emacsclient foo.txt, the script first determines
whether the primary session is already running, using something like this:
[ -e /run/user/1000/emacs/server \
-a -n "$(ss -Hplx src /run/user/1000/emacs/server)" ]
The ss(8) tool is used to determine if anything is listening on the socket.
The script also uses flock(1) to have other instances of the wrapper script
wait, in case they are going to cause the daemon to exit, or something. If
the daemon is running, then we can just exec emacs.git/lib-src/emacsclient
to handle the request. If not, we first have to start up gdbmacs:
installed_emacsclient=$(PATH=$(echo "$PATH" \
| sed -e "s#/directory/containing/wrapper/script##") \
command -v emacsclient)
"$installed_emacsclient" -a '' -sgdbmacs --eval '(spw/gdbmacs-attach)'
spw/gdbmacs-attach now does something like this:
(let ((default-directory (expand-file-name "~/src/emacs/")))
(gdb "gdb -i=mi --args src/emacs --fg-daemon")
(gdb-wait-for-pending
(lambda ()
(gud-basic-call "set cwd ~")
(gdb-wait-for-pending
(lambda ()
(gud-basic-call "run"))))))
"$installed_emacsclient" exits as soon as spw/gdbmacs-attach returns,
which is before the primary session has started listening on the socket, so
the wrapper script uses inotifywait(1) to wait until /run/user/1000/emacs/server
appears. Then it is finally able to exec ~/src/emacs/lib-src/emacsclient to
handle the request.
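Conceptually the wait is just a loop like the following (a sketch, not the script’s actual code; the timeout is a safety net in case the daemon never comes up):

    socket=/run/user/1000/emacs/server
    while ! [ -e "$socket" ]; do
        # block until something is created in the socket's directory,
        # or give up after five seconds and re-test
        inotifywait -qq -t 5 -e create "$(dirname "$socket")" || true
    done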
A particular kind of complexity
The wrapper script must be highly reliable. I use my primary Emacs session
for everything, on the same laptop that I do my academic work. The main way I
get at it is via a window manager shortcut that executes emacsclient -nc to
request a new frame, such that if there is a problem, I won’t see any error
output until I open an xterm and tail ~/.swayerr or ~/.xsession-errors. And
as starting gdbmacs and only then starting up less optimised, debug in-tree
builds of Emacs is not fast, I would have to wait at least ten seconds without
any Emacs frame popping up before I could suppose that something was wrong.
This is where the first scheme, where the complexity is all in Lisp, really seems attractive. My emacsclient(1) wrapper script has several other facilities and convenience features, some of which are general and some of which are only for my personal usage patterns, and the code for all those is now interleaved with the special cases for gdbmacs and the primary session that I’ve described in this post. There’s a lot that could go wrong, and it’s all in shell, and its output isn’t readily visible to the user. I’ve done a lot of testing, and I’m pretty confident in the script in its current form, but if I need to change or add features, I’ll have to do a lot of testing again before I can deploy to my usual laptop.
Single-threaded, readily interactively-debuggable Emacs Lisp really shines for
this sort of “do exactly what I mean, as often as possible” code, and you find
a lot of it in Emacs itself, third party packages, and peoples’ init.el
files. You can add all sorts of special cases to your interactive commands to
make Emacs do just what is most useful, and have confidence that you can
manage the resulting complexity. In this case, though, I’ve got piles of just
this sort of complexity out in an opaque shell script. The ultimate goal,
though, is debugging Emacs, such that one can run yet more DJWIM Emacs Lisp,
which perhaps justifies it.
I’ve come up with a new reprepro wrapper for adding rebuilds of existing Debian packages to a local repository: reprepro-rebuilder. It should make it quicker to update local rebuilds of existing packages, patched or unpatched, working wholly out of git. Here’s how it works:
1. Start with a git branch corresponding to the existing Debian package you want to rebuild. Probably you want dgit clone foo.

2. Say reprepro-rebuilder unstable, and the script will switch you to a branch PREFIX/unstable, where PREFIX is a short name for your reprepro repository, and update debian/changelog for a local rebuild. If the branch already exists, it will be updated with a merge.

3. You can now do any local patching you might require. Then, say reprepro-rebuilder --release. (The command from step (2) will offer to release immediately, for the case that no additional patching is required.)

4. At this point, your reprepro will contain a source package corresponding to your local rebuild. You can say reprepro-rebuilder --wanna-build to build any missing binaries for all suites, for localhost’s Debian architecture. (Again, the command from step (3) will offer to do this immediately after adding the source package.)
Additionally, if you’re rebuilding for unstable, reprepro-rebuilder will offer to rebuild for backports, too, and there are a few more convenience features, such as offering to build binaries for testing between steps (2) and (3). You can leave the script waiting to release while you do the testing.
I think that the main value of this script is keeping track of the distinct
steps of a relatively fiddly, potentially slow-running workflow for you,
including offering to perform your likely next step immediately. This means
that you can be doing something else while the rebuilds are trundling along:
you just start reprepro-rebuilder unstable in a shell, and unless additional
patching is required between steps (2) and (3), you just have to answer script
prompts as they show up and everything gets done.
If you need to merge from upstream fairly regularly, and then produce binary
packages for both unstable and backports, that’s quite a lot of manual steps
that reprepro-rebuilder takes care of for you. But the script’s command line
interface is flexible enough for the cases where more intervention is
required, too. For example, for my Emacs snapshot builds, I have another
script to replace steps (1) and (2), which merges from a specific branch that
I know has been manually tested, and generates a special version number. Then
I say reprepro-rebuilder --release and the script takes care of preparing
packages for unstable and bullseye-backports, and I can have my snapshots on
all of my machines without a lot of work.
The ThinkPad x220 that I had been using as an ssh terminal at home finally developed one too many hardware problems a few weeks ago, and so I ordered a Raspberry Pi 4b to replace it. Debian builds minimal SD card images for these machines already, but I wanted to use the usual ext4-on-LVM-on-LUKS setup for GNU/Linux workstations. So I used Consfigurator to build a custom image.
There are two key advantages to using Consfigurator to do something like this:

- As shown below, it doesn’t take a lot of code to define the host, it’s easily customisable without writing shell scripts, and it’s all declarative. (It’s quite a bit less code than Debian’s image-building scripts, though I haven’t carefully compared, and they are doing some additional setup beyond what’s shown below.)

- You can do nested block devices, as required for ext4-on-LVM-on-LUKS, without writing an intensely complex shell script to expand the root filesystem to fill the whole SD card on first boot. This is because Consfigurator can just as easily partition and install an actual SD card as it can write out a disk image, using the same host definition.
Consfigurator already had all the capabilities to do this, but as part of this project I did have to come up with the high-level wrapping API, which didn’t exist yet. My first SD card write wouldn’t boot because I had to learn more about kernel command lines; the second wouldn’t boot because of a minor bug in Consfigurator regarding /etc/crypttab; and the third build is the one I’m using, except that the first boot runs into a bug in cryptsetup-initramfs. So as far as Consfigurator is concerned I would like to claim that it worked on my second attempt, and had I not been using LUKS it would have worked on the first :)
The code
(defhost erebus.silentflame.com ()
"Low powered home workstation in Tucson."
(os:debian-stable "bullseye" :arm64)
(timezone:configured "America/Phoenix")
(user:has-account "spwhitton")
(user:has-enabled-password "spwhitton")
(disk:has-volumes
(physical-disk
(partitioned-volume
((partition
:partition-typecode #x0700 :partition-bootable t :volume-size 512
(fat32-filesystem :mount-point #P"/boot/firmware/"))
(partition
:volume-size :remaining
(luks-container
:volume-label "erebus_crypt"
:cryptsetup-options '("--cipher" "xchacha20,aes-adiantum-plain64")
(lvm-physical-volume :volume-group "vg_erebus"))))))
(lvm-logical-volume
:volume-group "vg_erebus"
:volume-label "lv_erebus_root" :volume-size :remaining
(ext4-filesystem :volume-label "erebus_root" :mount-point #P"/"
:mount-options '("noatime" "commit=120"))))
(apt:installed "linux-image-arm64" "initramfs-tools"
"raspi-firmware" "firmware-brcm80211"
"cryptsetup" "cryptsetup-initramfs" "lvm2")
(etc-default:contains "raspi-firmware"
"ROOTPART" "/dev/mapper/vg_erebus-lv_erebus_root"
"CONSOLES" "ttyS1,115200 tty0"))
and then you just insert the SD card and, at the REPL on your laptop,
CONSFIG> (hostdeploy-these laptop.example.com
(disk:first-disk-installed-for nil erebus.silentflame.com #P"/dev/mmcblk0"))
There is more general information in the OS installation tutorial in the Consfigurator user’s manual.
Other niceties
Configuration management that’s just as easily applicable to OS installation as it is to the more usual configuration of hosts over SSH drastically improves the ratio of cost-to-benefit for including small customisations one is used to.
For example, my standard Debian system configuration properties (omitted from the code above) meant that when I was dropped into an initramfs shell during my attempts to make an image that could boot itself, I found myself availed of my custom Space Cadet-inspired keyboard layout, without really having thought at any point “let’s do something to ensure I can have my usual layout while I’m figuring this out.” It was just included along with everything else.
As compared with the ThinkPad x220, it’s nice how the Raspberry Pi 4b is silent and doesn’t have any LEDs lit by default once it’s booted. A quirk of my room is that one plug socket is controlled by a switch right next to the switch for the ceiling light, so I’ve plugged my monitor into that outlet. Then when I’ve finished using the new machine I can flick that switch and the desk becomes completely silent and dark, without actually having to suspend the machine to RAM, thereby stopping cron jobs, preventing remote access from the office to fetch uncommitted files, etc..
I’d like to share some pointers for using Gnus together with notmuch, rather than notmuch’s own Emacs interface, notmuch.el. I set about this because I recently realised that I had been poorly reimplementing lots of Gnus features in my init.el, primarily around killing threads and catching up groups, supported by a number of complex shell scripts. I’ve now switched over, and I’ve been able to somewhat simplify what’s in my init.el, and drastically simplify my notmuch configuration outside of Emacs. I’m always more comfortable with less Unix and more Lisp when it’s feasible.
- The basic settings are gnus-search-default-engines and gnus-search-notmuch-remove-prefix, explained in (info "(gnus) Searching"), and an entry for your maildir in gnus-secondary-select-methods, explained in (info "(gnus) Maildir"). Then you will have G G and G g in the group buffer to make and save notmuch searches.

- I think it’s important to have something equivalent to notmuch-saved-searches configured programmatically in your init.el, rather than interactively adding each saved search to the group buffer. This is because, as notmuch users know, these saved searches are more like permanent, virtual inboxes than searches. You can learn how to do this by looking at how gnus-group-make-search-group calls gnus-group-make-group. I have some code running in gnus-started-hook which does something like this for each saved search:

      (if (gnus-group-entry group)
          (gnus-group-set-parameter group 'nnselect-specs ...)
        (gnus-group-make-group ...))

  The idea is that if you update your saved search in your init.el, rerunning this code will update the entries in the group buffer. An alternative would be to just kill every nnselect search in the group buffer each time, and then recreate them. In addition to reading gnus-group-make-search-group, you can look in ~/.newsrc.eld to see the sort of nnselect-specs group parameters you’ll need your code to produce.

  I have some very complicated generation of my saved searches from some variables, but that’s something I had when I was using notmuch.el, too, so perhaps I’ll describe some of the ideas in there in another post.

- You’ll likely want to globally bind a function which starts up Gnus if it’s not already running and then executes an arbitrary notmuch search (see the sketch after this list). For that you’ll want (unless (gnus-alive-p) (gnus)), and not (unless (gnus-alive-p) (gnus-no-server)). This is because you need Gnus to initialise nnmaildir before doing any notmuch searches: Gnus passes --output=files to notmuch and constructs a summary buffer of results by selecting mail that it already knows about with those filenames.

- When you’re programmatically generating the list of groups, you might also want to programmatically generate a topics topology. This is how you do that:

      (with-current-buffer gnus-group-buffer
        (gnus-topic-mode 0)
        (setq gnus-topic-alist nil
              gnus-topic-topology nil)
        ;; Now push to those two variables.  You can also use
        ;; `gnus-topic-move-matching' to move nnmaildir groups into, e.g.,
        ;; "misc".
        (gnus-topic-mode 1)
        (gnus-group-list-groups))

  If you do this in gnus-started-hook, the values for those variables Gnus saves into ~/.newsrc.eld are completely irrelevant and do not need backing up/syncing.

- When you want to use M-g to scan for new mail in a saved search, you’ll need to have Gnus also rescan your nnmaildir inbox, else it won’t know about the filenames returned by notmuch and the messages won’t appear. This is similar to the gnus vs. gnus-no-server issue above. I’m using :before advice to gnus-request-group-scan to scan my nnmaildir inbox each time any nnselect group is to be scanned.

- If you are used to linking to mail from Org-mode buffers, the existing support for creating links works fine, and the standard gnus: links already contain the Message-ID. But you’ll probably want opening the link to perform a notmuch search for id:foo rather than trying to use Gnus’s own jump-to-Message-ID code. You can do this using :around or :override advice for org-gnus-follow-link: look at gnus-group-read-ephemeral-search-group to do the search, and then call gnus-summary-goto-article.
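Here, for concreteness, is a minimal sketch of the sort of globally-bound command described above; the command name and keybinding are arbitrary:

    ;; Ensure Gnus is up, so that nnmaildir is initialised, then prompt
    ;; for an ephemeral notmuch search, as G G would in the group buffer.
    (defun my/gnus-notmuch-search ()
      (interactive)
      (unless (gnus-alive-p)
        (gnus))
      (call-interactively #'gnus-group-read-ephemeral-search-group))

    (global-set-key (kbd "C-c s") #'my/gnus-notmuch-search)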
I don’t think that the above is especially hacky, and don’t expect changes to Gnus to break any of it. Implementing the above for your own notmuch setup should get you something close enough to notmuch.el that you can take advantage of Gnus’ unique features without giving up too much of notmuch’s special features. However, it’s quite a bit of work, and you need to be good at Emacs Lisp. I’d suggest reading lots of the Gnus manual and determining for sure that you’ll benefit from what it can do before considering switching away from notmuch.el.
Reading through the Gnus manual, it’s been amazing to observe the extent to which I’d been trying to recreate Gnus in my init.el, quite oblivious that everything was already implemented for me so close to hand. Moreover, I used Gnus ten years ago when I was new to Emacs, so I should have known! I think that back then I didn’t really understand the idea that Gnus for mail is about reading mail like news, and so I didn’t use any of the features, back then, that more recently I’ve been unknowingly reimplementing.