A bunch of people are opining on the attached tweet; the critical opinions can generally be characterised along these lines:
the tech industry cannot police itself / look at the history of social media / facebook was bad, this will be worse / the internet is bad / imagine if airlines ran the aviation regulators / regulation is fine so long as you get independent experts to help it along / only lunatics should run asylums / it’s like oil-tobacco-asbestos all over again / white male privilege incarnate / even they don’t know how it works / jurassic park with terminators / …
So: back in the 1990s I watched governments hand down regulations on cryptography, only to be infuriated when the proposed regulations flopped, or when open-source communities blew straight past those regulations.
AI technologies are going to be open-source, ubiquitous and home-deployed. What will happen when open-source AI projects do the same to this fresh hell of regulation?
To be frank: I feel that a lot of this is not about AI, but about "#BigAI" and corporations. I have written at length elsewhere about the foolishness of regulating the shape of code; it would be wise for folk now to stop attempting that, and instead to regulate intents and outcomes, because "regulations on AI" are not going to lead anywhere other than cack-handed repression of software projects.