nb: links will be updated in this blog to point at the paper and supporting material, as/when such become apparent
Well, it finally landed, and I can’t say that I am terribly surprised:

Ian Levy and Crispin Robinson have produced a 70-page document which blends: describing the processes of identifying and investigating online child abuse; the impact that end-to-end encryption may have on those processes; how that impact may be minimised through implementing ~~backdoors~~ client-side scanning and “ghost” conversation interception; how malfeasants might go about circumventing those ~~backdoors~~ lawful intercept and access mechanisms; and how that circumvention might be prevented… all without discussing whether their proposed approach is liberal, proportionate, cost-effective in comparison to alternatives, and apparently — aside from one allusion to a third-party child-protection survey — proper in a democratic society.
My summary for another publication was as follows:
The authors argue that the Government is losing “existing mitigations” – actually, systemic weaknesses – as tools to fight crime, as if platforms developing E2EE messengers weren’t actually trying to expunge such weaknesses, which add both privacy risk and management cost to a service’s bottom line.
…
They write extensively about “scale”, painting technology as presenting a “needle in a haystack” challenge for them in protecting a few thousand people… yet entirely ignore the “scale” risks of their proposals endangering the privacy of billions of people worldwide.
Finally, it’s weird that they frame abuse as a “societal problem” yet demand only technological solutions for it. A cost-benefit analysis of their approach to managing the cited “550,000 [to] 850,000” potential offenders is absent. Perhaps it would be more effective to use their funding to adopt harm-reduction approaches, hiring more social workers to implement them?
There is a considerable amount of special pleading, strawmanning, and No-True-Scotsman fallacy — e.g. (see the highlighted text in the screencap) the claims that big platforms like Facebook and Google are not offering “real” end-to-end encryption because they profit from metadata, or that end-to-end encryption cannot be considered scientifically and separately from the application by which, and the purposes for which, it is used.
This is all strawman argument: it’s very easy for Ian and Crispin to argue within a defined subset of what they think constitutes “end-to-end encryption”, but much harder for them when they have to argue with other people in the real world. Hence, equally, quotes like “We do not subscribe to the attempted assessment of safety systems as academic cryptosystems, or the binary presentation of the potential availability of ‘safe solutions’ to this set of problems” — again, it is generally very easy to jump over hurdles which you yourself have made, but somewhat more challenging to jump over others’.
The charge on page 41 that (e.g.) Facebook Messenger End-to-End Encryption is somehow not “real” is one which I dealt with at length in my recent Primer on End to End Encryption. The authors deny users the freedom to define their own threat model, where they may choose to accept a “free” service with advertising, or else choose a charity-supported platform without advertising, even though both equally provide end-to-end privacy guarantees for content. Denying users this agency is illiberal, prescriptive, and (literally) unjustified in the text.
Ian and Crispin propose (at least?) two arbitrary and subjective boundaries around end-to-end encryption. Considering Facebook Messenger as an example, they would say that Messenger E2E Encryption is not “real” firstly because Facebook has a business and might (e.g.) advertise to the user, and secondly because there is some sort of social or legal “contract” between the user and Facebook. It’s unclear whether these two issues are meant to be Boolean-AND’ed or Boolean-OR’ed together, but no matter: taken together with “…in the context of the service into which it is integrated” and the reference to the “unhelpful tendency to consider ‘end-to-end encrypted services’ as academic cryptosystems”, it’s pretty clear that the overall goal of the text is to restrict the anointment of “real” end-to-end encrypted systems to… Ian and Crispin.
If “existence of a contract” is sufficient to disqualify a system as being end-to-end encrypted, then iMessage’s use as a fabric for sharing payment card details amongst all of a user’s devices is at precarious risk.
But the thing which shocks me most is the word “sociotechnical”, and what I extrapolate from the use of that word. I want to thank Ian and Crispin for bringing some actual numbers into the public discussion, numbers which are marginally more useful than the vastly over-inflated “numbers of reports” which NCMEC publishes, and which Ian and Crispin themselves criticise at considerable length: (extracts)
- “the number of reports received by NCMEC which amounted to 29.4 million in 2021”
- “In the same year the NCA received 102,842 reports from NCMEC” [after filtering]
- “Of the 102,842 reports, 20,038 were referred to local police forces and started (or contributed to) investigations”
- “In the same year, over 6,500 individuals were arrested or made voluntary attendances due to offences related to child abuse and over 8,700 children were safeguarded”
- “We would like to be able to show the causal link between individual CyberTips and convictions. However, this is not currently possible”
- [further down] “there are estimated to be between 550,000 and 850,000 people in the UK with varying degrees of sexual interest in children who pose a concomitant level of risk to children”
So that’s roughly 103,000 reports translating into 6,500 interviews or arrests and 8,700 safeguardings, with an unclear set of outcomes: certainly some smaller number of prosecutions and convictions, but it is not stated how many offenders were repeat offenders, how many were not prosecuted at all, nor how long or impactful the safeguardings were.
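The funnel these figures describe can be sketched with a few lines of arithmetic. This is my own back-of-envelope calculation, not the paper’s, and the stage-to-stage percentages are indicative only: as the authors themselves concede, no causal link between individual reports and outcomes can be shown.

```python
# Back-of-envelope funnel arithmetic from the 2021 figures quoted above.
# These stages are not strictly causal; the paper itself says a causal
# link between individual CyberTips and convictions cannot be shown.
funnel = [
    ("NCMEC reports (global)", 29_400_000),
    ("NCMEC reports passed to the NCA (after filtering)", 102_842),
    ("Referred to local police forces", 20_038),
    ("Arrests or voluntary attendances", 6_500),
]

# Print each stage alongside its share of the previous stage.
for (label, n), (_, prev) in zip(funnel[1:], funnel):
    print(f"{label}: {n:,} ({n / prev:.1%} of previous stage)")
```

Roughly 0.3% of global NCMEC reports survive filtering to reach the NCA, about a fifth of those are referred onward, and about a third of referrals end in an arrest or voluntary attendance.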
But all of these numbers pale in comparison to the impact upon the privacy and security of nearly three billion people, worldwide. It’s easy to say: “if we can save the life of one child then surely any minor inconvenience is worth the cost…” — except that’s not true, otherwise we would be keeping the UK in a permanent state of “lockdown”: lockdown reduced the rate of road fatalities by 26% after nearly a decade of their being stuck at around 1,900 deaths per annum.
The paper’s Executive Summary includes:
Child sexual abuse is a societal problem that was not created by the internet and combating it requires an all-of-society response. However, online activity uniquely allows offenders to scale their activities, but also enables entirely new online-only harms, the effects of which are just as catastrophic for the victims.
…
…and there’s that word, “scale”, again; but what is not recognised is twofold:
- That a small number of offenders — a few thousand, as above? — can only “scale” their efforts by a relatively small amount; whereas when three billion basically decent and honest people “scale” their efforts — efforts some of which benefit from robust security — truly astronomical numbers will result
- Combating a societal problem like child sexual abuse might require an all-of-society response, but not to the exclusion of society’s other interests such as democracy, liberality, and privacy; and to use the fight against child sexual abuse as an excuse to build a “sociotechnical” panopticon is none of those things.