<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://devbydemi.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://devbydemi.com/" rel="alternate" type="text/html" /><updated>2026-03-15T00:10:26+00:00</updated><id>https://devbydemi.com/feed.xml</id><title type="html">Demi’s personal blog</title><subtitle>The personal blog of Demi Marie Obenour</subtitle><entry><title type="html">Securely Connecting To Local Devices: Cryptographically Generated Domain Names</title><link href="https://devbydemi.com/cryptographically-generated-domains.html" rel="alternate" type="text/html" title="Securely Connecting To Local Devices: Cryptographically Generated Domain Names" /><published>2025-05-02T20:42:02+00:00</published><updated>2025-05-02T20:42:02+00:00</updated><id>https://devbydemi.com/cryptographically-generated-domain-names</id><content type="html" xml:base="https://devbydemi.com/cryptographically-generated-domains.html"><![CDATA[<p>Have you ever had a device, such as a printer, with a web interface?
Have you ever tried to connect to that web interface via HTTPS?  If you
have, you’ll almost certainly have gotten an “untrusted certificate” warning,
pointing out that you are vulnerable to a monster-in-the-middle (MITM) attack.
Unless you are willing to set up and trust your own certificate authority
and upload a custom certificate, it’s almost impossible to avoid this.  I
strongly suspect almost all people don’t know how to fix the problem or
aren’t willing to put in the work to mitigate the risk.  If you are
interested in a solution to this problem, read on!</p>

<h2 id="whats-the-problem">What’s The Problem?</h2>

<p>The problem is that the only way to get a certificate that a web browser
will trust is to prove possession of a domain name.  However, the device
does not own any globally-valid domain names!  It owns a <code class="language-plaintext highlighter-rouge">.local</code> domain
name, but <code class="language-plaintext highlighter-rouge">.local</code> domains are not globally valid, so a publicly-trusted
certification authority cannot issue certificates for them.  One can
work around this by using a private certification authority, but this is
infeasible for non-technical users.  The result is that most local
communication is either unencrypted or uses self-signed unpinned
certificates, neither of which is secure.  Furthermore, this provides
an incentive for device manufacturers to use cloud-based solutions,
which support HTTPS without any hurdles.</p>

<h2 id="https-for-every-device">HTTPS For Every Device</h2>

<p>This problem can be solved by embedding the fingerprint of a public key
in the domain name, producing a <em>cryptographically-generated domain
name</em>, or CGDN.  Only the owner of the corresponding private key can
issue a certificate for such a domain or sign DNSSEC RRSIGs for it.
Such domain names are no longer human-memorable, but phones nowadays
generally have a camera, which allows for the CGDN to be included in a
QR code.  QR codes are public, but critically the optical signal cannot
be practically tampered with except by physical alteration of the hardware.</p>

<p>The domain name is generated by hashing a public key.  The corresponding
private key can be used for two purposes:</p>

<ol>
  <li>Signing X.509 certificates and CRLs.</li>
  <li>Producing RRSIGs for DNSSEC records, acting as a Key Signing Key (KSK).</li>
</ol>

<p>It can, but usually should not, be used for these purposes too:</p>

<ol>
  <li>Acting as an SSHv2 host key.</li>
  <li>Acting directly as a TLS1.3 secret key.</li>
  <li>Making SSH signatures according to the specification published by OpenSSH.</li>
</ol>

<p>It is cryptographically sound to use the same key for all five purposes
because all five require that the data to be signed have a distinct prefix.
The prefix is:</p>

<ol>
  <li>“SSH-2.0-“ (the start of the client identification string) for SSHv2.</li>
  <li>“SSHSIG” for OpenSSH signatures.</li>
  <li>0x20 (the ASCII space character, repeated 64 times) for TLSv1.3.</li>
  <li>0x00 0x30 (48, the type of a DNSKEY record, encoded as a 16-bit big-endian integer)
for an RRSIG of one or more DNSKEY records.</li>
  <li>0x30 (the tag byte of an ASN.1 SEQUENCE) for X.509.</li>
</ol>
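<p>As an illustrative check (not part of any specification), the sketch below
verifies that none of these prefixes is a prefix of another, which is the
property that prevents a signature made in one context from being replayed
in another:</p>

```python
# The five signing-context prefixes from the list above.  The property we
# need is pairwise prefix-freedom: no prefix may be a prefix of another.
PREFIXES = {
    "SSHv2": b"SSH-2.0-",          # start of the client identification string
    "SSHSIG": b"SSHSIG",           # OpenSSH signature magic
    "TLSv1.3": b"\x20" * 64,       # 64 ASCII spaces (RFC 8446 CertificateVerify)
    "DNSKEY RRSIG": b"\x00\x30",   # Type Covered = 48 (DNSKEY), big-endian
    "X.509": b"\x30",              # ASN.1 SEQUENCE tag byte
}

def domain_separated(prefixes: dict) -> bool:
    """Return True iff no prefix is a prefix of a different one."""
    items = list(prefixes.items())
    for name_a, a in items:
        for name_b, b in items:
            if name_a != name_b and b.startswith(a):
                return False
    return True

print(domain_separated(PREFIXES))  # True for the five contexts above
```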

<p>The private key MUST NOT be used for any purpose not listed here unless
an extension to the specification explicitly permits such use.  Forbidden
purposes include, but are not limited to:</p>

<ul>
  <li>Signing handshakes for TLS1.2 or below, or for any version of SSL.  TLSv1.2
allows the client to request that data with an arbitrary prefix be signed,
and older TLS versions are obsolete and insecure anyway.</li>
  <li>Signing RRSIGs over DNS records of types other than DNSKEY.  These use
a different Type Covered field and so might result in prefix collisions.</li>
  <li>RSA decryption, due to the ROBOT attack.</li>
  <li>OpenPGP signing, which allows the attacker to provide the entire prefix
of the data to be signed.</li>
</ul>

<p>An implementation that uses the key for any purpose not listed in this specification
MUST be assumed to have a security vulnerability unless there is a sound cryptographic
reason to believe that no exploit will ever be possible, even in the presence of future
extensions to this protocol.  If it is necessary to sign additional data,
the SSH signature format with a specific namespace can be used.</p>

<h2 id="the-format">The Format</h2>

<p>A cryptographically generated domain can use one of the following forms:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">&lt;hash of public key&gt;.local</code></li>
  <li><code class="language-plaintext highlighter-rouge">SOMETHING.&lt;hash of public key&gt;.local</code></li>
</ul>

<p><code class="language-plaintext highlighter-rouge">.local</code> is reserved for multicast DNS, which is suitable for home networks.
The device itself would advertise the CGDN.  CGDNs should also be supported
on the public Internet, with one of the following formats for the domain name:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">&lt;hash of public key&gt;.keyhash.arpa</code></li>
  <li><code class="language-plaintext highlighter-rouge">SOMETHING.&lt;hash of public key&gt;.keyhash.arpa</code></li>
</ul>

<p><code class="language-plaintext highlighter-rouge">.keyhash.arpa</code> is a placeholder; the real domain name chosen might be different.
For CGDNs to be reachable over the public Internet, the owner of the corresponding
public key must be able to set DNS records for them.  I have
not yet figured out what the mechanism for this should be.</p>

<h2 id="computing-the-public-key-hash">Computing The Public Key Hash</h2>

<p>Computing the public key hash is trickier than it sounds, because there are multiple
distinct formats in which the public key will be received.  Therefore, a canonical
format must be chosen.  This version of this document chooses the encoding used by
SSH as canonical and requires all other formats to be converted to it.</p>

<p>SSH uses a simple and extensible encoding scheme, whereas X.509 requires
complicated ASN.1 DER encoding.  DNSSEC also uses a simple encoding scheme, but
DNSSEC is severely impacted by large signature and public-key sizes,
so its future in a post-quantum world is less clear.</p>

<p>The hash includes only the algorithm field and the public key.  It is encoded
into a domain name the same way that Tor v3 onion service domain names are encoded,
except that a 32-byte SHA-256 hash takes the place of the 32-byte ed25519
public key.  Security of this scheme depends
on second-preimage resistance (<em>not</em> collision resistance!) of the hash function,
so SHA256 will be secure for a very, <em>very</em> long time.</p>
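<p>To make the construction concrete, here is a sketch under stated
assumptions: the key is hashed in its SSH wire encoding (algorithm name plus
key material), and the hash is framed exactly like a v3 onion address.  The
checksum context string <code class="language-plaintext highlighter-rouge">.cgdn checksum</code> is a placeholder of my own, not
part of any specification:</p>

```python
import base64
import hashlib

def ssh_string(data: bytes) -> bytes:
    """RFC 4251 string: 4-byte big-endian length, then the bytes."""
    return len(data).to_bytes(4, "big") + data

def cgdn_label(algorithm: bytes, public_key: bytes) -> str:
    """Hash the SSH-encoded key and wrap it like a Tor v3 onion address:
    base32(hash || checksum || version).  Context string is hypothetical."""
    blob = ssh_string(algorithm) + ssh_string(public_key)
    digest = hashlib.sha256(blob).digest()                 # 32 bytes
    version = b"\x03"
    checksum = hashlib.sha3_256(
        b".cgdn checksum" + digest + version).digest()[:2]
    return base64.b32encode(digest + checksum + version).decode().lower()

# 32 + 2 + 1 = 35 bytes -> exactly 56 base32 characters, like an onion address.
label = cgdn_label(b"ssh-ed25519", bytes(32))
print(f"{label}.local")
```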

<h2 id="prior-art">Prior Art</h2>

<p>Tor onion service domains are already cryptographically generated.  However,
onion services are only available on the Tor network, which is a dealbreaker
for general use.  Furthermore, v3 onion services <a href="https://spec.torproject.org/rend-spec/encoding-onion-addresses.html">always use ed25519</a>,
which is not post-quantum secure.</p>

<p>Reserving special domain names for CGDNs also started with the Tor network’s
use of <code class="language-plaintext highlighter-rouge">.onion</code>, which has been officially reserved <a href="https://www.rfc-editor.org/rfc/rfc7686.html">by RFC7686</a>.</p>

<p><a href="https://blog.pastly.net/papers/secdev19-satdomains.pdf">Self-authenticating traditional domain names</a> have been proposed before,
but these proposals are tied to traditional DNS names rather than only using
a hashed public key for the domain name.  In contrast, this proposal focuses
on use-cases that do not require any human-memorable domain name to be present,
which significantly simplifies the design.</p>

<h2 id="open-questions">Open Questions</h2>

<ul>
  <li>
    <p>How should registration work?  The intent is that anyone will be able to register
a CGDN by proving possession of the corresponding secret key.  Is DNS TSIG the best
way to implement this?</p>
  </li>
  <li>
    <p>What is the best way to get this adopted?  Writing up an idea is easy, but getting
it implemented by major browsers is much harder.  Is this something that should
be hard-coded in their TLS libraries?</p>
  </li>
  <li>
    <p>Who should run the public registration service for non-<code class="language-plaintext highlighter-rouge">.local</code> CGDNs?
At the very least, one or more such services need to exist so that domain
names can be resolved to IP addresses.</p>
  </li>
  <li>
    <p>Should there be a public relay service that avoids devices needing to
accept incoming connections themselves?  This would make life easier for
those behind NAT, but someone needs to run the relay service, and there
are privacy and abuse concerns.</p>
  </li>
  <li>
    <p>Is there experience from Tor onion services that is relevant?
Anonymity is explicitly <em>not</em> a goal of CGDNs, mostly because Tor
already serves that role, but also for performance reasons.</p>
  </li>
  <li>
    <p>Due to their limited computing power, embedded devices are very vulnerable
to denial of service attacks.  Is there a good solution to this?  Since
CGDNs implicitly use certificate pinning, any DDoS-prevention solution that
does not require access to plaintext traffic can be used without risking
confidentiality or integrity, except against traffic analysis attacks.</p>
  </li>
</ul>

<h2 id="credits">Credits</h2>

<p>Thanks to Matthew Finkel of Apple for helpful feedback.</p>]]></content><author><name></name></author><category term="IoT" /><category term="Security" /><summary type="html"><![CDATA[Have you ever had a device, such as a printer, with a web interface? Have you ever tried to connect to that web interface via HTTPS? If you have, you’ll almost certainly have gotten an “untrusted certificate” warning, pointing out that you are vulnerable to a monster-in-the-middle (MITM) attack. Unless you are willing to set up and trust your own certificate authority and upload a custom certificate, it’s almost impossible to avoid this. I strongly suspect almost all people don’t know how to fix the problem or aren’t willing to put in the work to mitigate the risk. If you are interested in a solution to this problem, read on!]]></summary></entry><entry><title type="html">Easily and Securely Setting Up Future Wireless Devices</title><link href="https://devbydemi.com/simple-secure-provisioning.html" rel="alternate" type="text/html" title="Easily and Securely Setting Up Future Wireless Devices" /><published>2025-05-02T20:42:02+00:00</published><updated>2025-05-02T20:42:02+00:00</updated><id>https://devbydemi.com/secure-provisioning</id><content type="html" xml:base="https://devbydemi.com/simple-secure-provisioning.html"><![CDATA[<p>Have you ever paired your smartphone or computer with a Bluetooth
device, added a ZigBee device to your home ZigBee network, or used
a mobile app to set up an IoT device?  Have you ever wondered if your
connection is secure or if there is an attacker intercepting all the
traffic?  Are you scared that you might be pairing with a malicious
device that will inject keystrokes, rather than with the device you
actually intended to pair with?  Unless one has a Faraday cage (and
sometimes even if one does), it’s almost impossible to know the
answers to these for certain.  If you are interested in a solution,
read on!</p>

<h2 id="whats-the-problem">What’s The Problem?</h2>

<p>Bluetooth’s pairing process guarantees that you securely connected to
<em>some</em> device.  However, it does not guarantee that you securely
connected to the <em>correct</em> device!  If one pairs with a malicious or
compromised device, the device might be able to act as a keyboard and
inject keystrokes.  Other devices have even worse provisioning
protocols, some of which require vendor-provided apps (eww!).  If you
are lucky, you have to hope that nobody is performing an active
monster-in-the-middle (MITM) attack on the pairing process.  If you
aren’t, you have to hope that nobody is sniffing on your wireless
communication, either during setup (for the somewhat-bad devices) or at
all (for the really bad ones).</p>

<h2 id="what-went-wrong">What Went Wrong?</h2>

<p>The reason that secure setup is so hard, and so rarely implemented, is
that many devices have incredibly poor I/O capabilities.  You can’t
reasonably enter a passcode on a device with no keyboard, and a device
without a display can’t display a code.</p>

<h2 id="how-should-things-work">How Should Things Work?</h2>

<p>The device comes with a QR code.  You scan it with your phone, and
it tells you what kind of device it is.  You press the “Connect”
button, and your phone tells you to push a button on the device.
Afterwards, your phone and the device are connected.  You can then
control the device via standardized Bluetooth protocols or via a web
interface.</p>

<h2 id="fixing-this">Fixing This</h2>

<p>This proposal is to fix the problem once and (hopefully) for all, by
using a QR code on the device containing the device’s public key.  It
requires only that the device have two buttons and (unlike SmartStart)
has no timing dependencies at all.  This means that you can take a break
and come back, and the device will always be in the exact state you left
it as far as the protocol is concerned.  Furthermore, it is secure
against an attacker who can perform monster-in-the-middle attacks on all
traffic except for scanning a QR code.  Since visual MITM is easy to
exclude with the human eye, that’s enough.</p>

<p>How does this work?  The key idea is for the controlling device, or
<em>controller</em> (such as a laptop or smartphone), to establish a secure
connection to the device being controlled.  Since the controlled device
accepts only one controller at a time, a controller knows that there are
no other controllers connected.  Therefore, the controller can tell the
user to press a button labeled “Confirm Connection,” which tells the
device that it can trust the host and accept commands from it.</p>

<h2 id="the-formal-state-machine">The Formal State Machine</h2>

<ol>
  <li>The device starts in the ready-to-provision state.</li>
  <li>The user scans a QR code on the device with the host’s camera.
On error, go to step 1.</li>
  <li>The host makes a secure connection with the device.
This fails if a different host’s key is stored in non-volatile
memory; in that case, the user must reset the device to return to step 1.</li>
  <li>The device stores the host’s long-term key in non-volatile memory.</li>
  <li>The host displays that it is securely connected and directs the user
to press the “add controller” button on the device.</li>
  <li>The user presses the “add controller” button on the device to confirm.</li>
  <li>The device marks the host’s long-term key as trusted.</li>
</ol>

<h3 id="device-side">Device Side</h3>

<p>The device either has zero or one long-term host keys stored in
its non-volatile memory.  The host key may either be <em>trusted</em>
(able to send commands to the device) or <em>untrusted</em> (not able to
send commands).  A device with no host key stored will store the host
key of the first host that connects, but MUST NOT mark it as trusted
until the user presses a button on the device.  Otherwise, <em>any</em> host
could take control of the device.  If the device has a stored host key,
it MUST NOT accept connections from any host with a different key.
Otherwise, another host could connect to the device before the user
presses the “Confirm Connection” button.  However, it MUST accept
connections from a host with the same key, ensuring that an interrupted
connection can always be resumed.  A device MUST NOT accept commands
from a host with a key that is not marked as trusted, as <em>any</em> host
can connect to a device if no other host has connected already.</p>
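<p>These rules can be condensed into a short sketch.  Everything here
(names, return values) is illustrative; it is the behavior, not an API,
that is being specified:</p>

```python
class Device:
    """Device-side key handling: at most one stored host key,
    trusted only after the user presses the confirm button."""

    def __init__(self):
        self.host_key = None   # zero or one long-term host key
        self.trusted = False

    def connect(self, host_key: bytes) -> str:
        if self.host_key is None:
            self.host_key = host_key   # store, but do NOT trust yet
            return "connected"
        if host_key == self.host_key:
            return "connected"         # an interrupted setup can resume
        return "rejected"              # different host: user must reset

    def press_add_controller(self) -> None:
        # Physical button press: the only way a key becomes trusted.
        if self.host_key is not None:
            self.trusted = True

    def accepts_commands(self, host_key: bytes) -> bool:
        return self.trusted and host_key == self.host_key

    def reset(self) -> None:
        # Physical reset: clears the key (and, in a real device, any
        # sensitive data the old host could access).
        self.host_key = None
        self.trusted = False
```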

<p>Devices MUST allow the user to clear a host key.  This SHOULD take the
form of a button on the device.  This MUST NOT require that the host
be present or available, as the host might not exist anymore, and
MUST require physical or otherwise privileged access to the device.
The only exception is if allowing one to reuse the device with
a different host without authorization would create a specific,
exploitable security vulnerability.  For instance, an alarm control
panel might not consider anyone with physical access to be trusted.</p>

<p>If the key provided by the host is different from the one the device
has stored, the device MUST indicate this in its reply to the host.
This allows the host to display a useful error message, such as
“Device is already paired to a different host.  Please press button
ABC to reset the device and connect anyway.”  For security reasons,
resetting the device MUST erase any sensitive information that is
accessible to the host.</p>

<h3 id="host-side">Host Side</h3>

<p>The host side is simpler.  The host obtains the device key and some
other information (such as the connection type) by scanning a QR code
on the device itself.  Hosts MUST prompt the user with the type of
connection being used and the type of device that will be connected
to before they continue with a connection attempt, unless the host
will only be checking if the device has a stored host key.</p>
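<p>As a rough sketch of that first step, the code below parses a
hypothetical QR payload and extracts what the host must show the user.
The <code class="language-plaintext highlighter-rouge">PAIR1</code> layout is invented for this example; no payload
format is specified here:</p>

```python
import base64

def parse_qr_payload(payload: str) -> dict:
    """Parse a hypothetical 'PAIR1;conn=...;dev=...;key=<base32>' payload."""
    magic, *fields = payload.split(";")
    if magic != "PAIR1":
        raise ValueError("unrecognized QR payload")
    info = dict(field.split("=", 1) for field in fields)
    info["key"] = base64.b32decode(info["key"])   # device public key
    return info

info = parse_qr_payload("PAIR1;conn=bluetooth-le;dev=thermostat;key=" + "A" * 56)
# The host MUST display both of these before attempting to connect:
print(f"Connect to a {info['dev']} over {info['conn']}?")
```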

<h2 id="open-questions">Open Questions</h2>

<ul>
  <li>
    <p>What should the format of the public key be?  What should the format
of the protocol messages be?  This is just a high-level description,
and doesn’t provide any of the details needed for a concrete
implementation.</p>
  </li>
  <li>
    <p>Should devices be required to expose their status via LEDs, or is it
sufficient for them to expose their status to any host that asks them?</p>
  </li>
  <li>
    <p>Should there be mandatory rotation of host and device keys?</p>
  </li>
  <li>
    <p>What algorithms should be supported?</p>
  </li>
  <li>
    <p>Should the long-term device key be required to be in a secure hardware
module, such as a secure element?  This is logical as it is a
long-term key, but might create obstacles to adoption.</p>
  </li>
</ul>]]></content><author><name></name></author><category term="IoT" /><category term="Security" /><summary type="html"><![CDATA[Have you ever paired your smartphone or computer with a Bluetooth device, added a ZigBee device to your home ZigBee network, or used a mobile app to set up an IoT device? Have you ever wondered if your connection is secure or if there is an attacker intercepting all the traffic? Are you scared that you might be pairing with a malicious device that will inject keystrokes, rather than with the device you actually intended to pair with? Unless one has a Faraday cage (and sometimes even if one does), it’s almost impossible to know the answers to these for certain. If you are interested in a solution, read on!]]></summary></entry></feed>