Thoughts on Regulating AI
The other day, AI came up in a conversation and someone asked if I was in favor of regulating it. To be honest, I was hesitant to say yes. Was it my proclivity towards anarchism telling me that regulation is bad? Well, as a dialectical libertarian, I’m not opposed to regulation on principle, so that can’t be it. It was hard to formulate an exact reason in the moment, but after further reflection I think I can articulate why I had that initial gut reaction.
Most of the talk among regulators about regulating AI has centered on intellectual property rights. They argue that it is somehow a violation of IP for LLMs to be trained on copyrighted material. This is a bullshit argument. If the AI were a person, training on copyrighted material would clearly be covered under “fair use.” As an artist, I can freely draw inspiration from other people’s art. As a writer, I can read all kinds of copyrighted material and then write my own book that borrows ideas from what I’ve read. That’s just fair use. Drawing inspiration from, or learning from, copyrighted material is not a violation of IP rights. If AI were prohibited from fair use of copyrighted material, it would set a precedent for going after human artists and writers who draw inspiration from copyrighted material. That seems to me like a move in a dystopian direction.
On the other hand, there is the murkier area of AI porn that uses the likenesses of real people. That was brought up in the conversation, and I hadn’t really thought about it since it’s not something I personally use AI for. It does pose an ethical problem, but I’m still ambivalent about using it as a justification for regulation, because it seems nearly impossible to regulate well. AI tools are widely available, and people can set up their own AI on their own computers and modify it for their own purposes. Short of a huge invasion of privacy and a very dystopian despotism, it would be impossible to prevent people from making such AI porn. You could instead regulate the dissemination of such porn, but this also seems very difficult. You could make it illegal for porn sites to host such content. However, I think it is unlikely that sites hosting content will be able to adequately distinguish real content from AI-generated fakes. AI porn that features only fictional characters would also need to be distinguished from AI fakes of real people. Internet pornography has been notoriously problematic and difficult to regulate as it is. I’m skeptical that regulators can find any decent way to regulate AI porn that doesn’t end up infringing on people’s rights in the process.
Furthermore, I don’t think regulators are really concerned with the problem of AI porn that uses the likenesses of real people. Their chief concern with AI is its use of copyrighted and trademarked material, and an individual’s likeness is not protected under IP law. I think the problem with AI porn ties more into the broader problems of internet pornography, and the only serious proposals that legislators are considering are things like banning pornography or making it difficult or unsafe to access.
Aside from the problem of AI porn, I generally like AI. I have 3D printers, and I use AI assistance to create things. The AI can generate rough models that I can then sculpt on top of in Blender, letting me create things I would never have the time or capacity to make without that help. If a 3D print keeps failing, I can take a picture of the settings and of the print and show it to the AI, and it can tell me what is wrong and what I need to do to fix it. I am also a writer, and I use AI to help me with my writing. I used to discuss my ideas on social media in order to hash things out as I was trying to write something, which meant it could take months for an idea to really come to fruition; that is no longer the case with AI assistance. Now I write things, show them to the AI, and ask for criticism. When I have a vague inkling of an idea, I can talk to the AI about it. Most importantly, the AI has more information than the people I would normally discuss things with online. Sometimes my line of thinking draws on several obscure philosophies, which makes it hard to get good critical feedback from real people, because they are almost always unfamiliar with some (or all) of the ideas I am drawing on. The AI, on the other hand, is not.
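To make the failed-print troubleshooting concrete, here is a minimal sketch of that workflow, assuming an OpenAI-style vision API. The model name, file name, and printer settings are illustrative, not recommendations.

```python
# A minimal sketch of "show the AI a photo of the failed print."
# Assumes the openai Python SDK and an API key in the environment.
# Model name, file name, and settings text are illustrative.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("failed_print.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "This PLA print keeps lifting off the bed at the corners. "
                     "Nozzle 210C, bed 60C, 0.2 mm layers, no brim. "
                     "What is going wrong and what should I change?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```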
Overall, my concern with regulating AI is basically that regulation is likely to come in the form of a complete ban and/or dystopian intrusions into individual privacy. For instance, if regulators were to rule that AI has no right to “fair use” of copyrighted material, that would effectively amount to a ban: if AI can’t look at copyrighted things, it is ruined and won’t be able to do most of the useful things it currently does.
Where I do have concerns about AI, and feel like something can be done, is with regard to automation. AI can effectively replace a lot of jobs. If you need a program to do something, you can have AI write a script for you, which greatly reduces the amount of programming skill required for a lot of jobs. Furthermore, virtually any data entry job can be done by AI now. I work in finance and accounting and used to do several different kinds of audits. The work required a lot of skill and knowledge: I had to learn the special language or codes used by an accounting system in order to decipher information, and I had to be able to do math in my head. It is now possible for AI to do that job, and none of the skills I developed in order to do it are necessary anymore. In fact, I’m fairly confident that the last five jobs I have had can now be easily automated with the assistance of AI. There’s absolutely no reason to pay a person to do them. And if a real person is still needed after automation, it will be one or two people rather than a hundred. When Andrew Yang warned that AI was coming for the jobs, he was fucking right!
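As a hypothetical illustration of the kind of automation I mean, here is the sort of short script an AI assistant could write on request to replace a manual reconciliation check. The file and column names are made up for the example.

```python
# A hypothetical sketch of an AI-written audit helper: compare account
# totals from two CSV exports and flag discrepancies. File names and
# column names are invented for illustration.
import csv

def load_totals(path, key_col="account", amount_col="amount"):
    """Sum amounts per account from a CSV export."""
    totals = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            account = row[key_col].strip()
            totals[account] = totals.get(account, 0.0) + float(row[amount_col])
    return totals

ledger = load_totals("general_ledger.csv")
subledger = load_totals("subledger_export.csv")

# Flag any account where the two systems disagree by more than a cent.
for account in sorted(set(ledger) | set(subledger)):
    diff = ledger.get(account, 0.0) - subledger.get(account, 0.0)
    if abs(diff) > 0.01:
        print(f"{account}: difference of {diff:+.2f} between ledger and subledger")
```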
If we are going to survive without finding ourselves in a totally dystopian landscape in the near future, we need to start implementing social dividends or basic income policies. We should look into imposing a Harberger tax or C.O.S.T. (Common-Ownership Self-assessed Tax) on IP and using the revenue to fund a social dividend: holders would self-assess the value of their IP, pay an annual tax on that declared value, and have to sell at that price to any willing buyer. We should look into Negative Income Tax and Universal Basic Income schemes. We should look into reducing the number of hours in the workweek and increasing hourly pay so that incomes do not decline. These are the things that I worry about far more than the regulation of AI. If we fail to figure out how to regulate AI, we’ll probably survive. If we fail to implement some sort of dividend or basic income policy, we’re fucked as a species.