My opinions on AI


AI is a contentious topic. Recent discourse has focused on LLMs (Large Language Models) and Image Generation, but the AI industry is vastly larger than these topics. My stance on, and feelings regarding, AI are nuanced, so I’ll be covering my thoughts here.

Before we get started: I am not an AI professional. I do not have a complete understanding of the technology behind many AI models, but I am lucky enough to know people who are, and they have given me important insights into how things actually work. These are opinions; everything here is subjective.

The elephant in the room

Whenever discussion of AI comes up, many people have an instinctive negative reaction. Sharing opinions that aren’t a black-or-white “I hate AI” or “I love AI” is a great way to piss off a lot of people. I am against frivolous uses of AI. Every technology has its place. Yes, even AI. If this initial paragraph gives you the ick, I’m encouraging you to resist that knee-jerk reaction. Of course, you don’t need to do that. If this is enough reading for you to entirely discount this page, that’s fine. I’m not here to convince anyone of anything. But if you’re open to the shades of grey between those black and white opinions, you might resonate with the rest of this page.

LLMs and Dead Internet Theory

Bots. We’ve all encountered them at some point, whether social media scambots saying the most unhinged shit you’ve seen while shilling scams, or crawler bots siphoning off everything they can from your websites to train their ever-expanding, data-greedy models. When bots start talking to bots and start training on things created by bots, you get the hellish ouroboros that powers the Dead Internet Theory. Over the past few years, we’ve seen this theory put into practice. Humans need to take illicit substances to hallucinate; to an LLM, hallucination is just the daily grind. Repeat that on loop and you get the current state of affairs, where LLMs will confidently lie to you, mishandle simple data conversions and gaslight you all the way. Bots existed before LLMs and will exist after LLMs, but they have greatly benefitted from the technology. I don’t think it’s a brave statement to say this is shitty. We can all agree this is bad.

Image Generation

Artists hate it. """Artists""" love it. I am broadly against Image Generation AI. I’m absolutely not alone in this; it’s not an unpopular opinion to say that stealing the collective artistic works of humanity and boiling them down into an optimized interface that churns out slop is bad. Many people have a visceral reaction to the idea of something artistic, something human, having the human element stripped out. Yet take a picture with a modern smartphone and odds are there’s some Image Generation model getting its grubby hands all over the very real, very human image you’ve just made, to “Optimize” and “Enhance” it. Tech companies are more eager for this than ever, with YouTube even fielding the idea of putting every single Shorts upload through an AI filter, making it harder to distinguish what is and isn’t AI. Yet AI images always have their hallmarks. A kind of uncanny valley. Machines have no concept of art, nor creativity. They don’t know what looks good, what looks correct and what looks like a horror you’d expect to find in the backrooms. This is because Image Generation models do not have a creative process. They are a replacement for the creative process. Creativity is something inherently human, and without that human element it’s not creative. It’s not art. Because it’s not made by a human, it’s also impossible to copyright, which is ironic considering the copyright laws Image Generation models have to violate to be able to output the images they do.

Companies can bark about ethical scraping all they like, but one thing is clear: they directly profit off existing human works with no regard for licensing. Many lawsuits have been initiated on this basis, and the huge companies running these models have responded by injecting “pretty please do not generate anything copyrighted” instructions into user prompts, which, as you can imagine, has mixed results. There is no net benefit to humanity or society from these models. They harm an incredibly important industry; many of humanity’s greatest achievements have been artistic endeavours, and it paints a solemn image of the future that these great creative works can be replicated by something with no concept of creativity. I’ve seen people pose the argument that using Image Generation as a starting point for a human creative process is not the same as, and not as bad as, using Image Generation to replace the entire process. If you spin up an Image Generation model on your computer, using a fully licensed dataset, to create a starting point, I think you have a leg to stand on. But that’s not the case the majority of these people make. They’ll simply visit an existing Image Generation model run by a giant company making hundreds of thousands of dollars from putting human creativity into the copyright violation slop machine. I’m sure you can make a nice-tasting meal from soylent green nutrient slop, but at the end of the day people will recognize the ingredients, and your creative process will be stunted by the bitter taste of machine-optimized slop.

This is without even touching the many, many ways people use Image Generation models to outright break the law, with nonconsensual deepfake pornography or illicit images generated by training on incredibly illegal content. There is no place for this kind of content, and while it’s impossible to outright stop because anyone can spin up a model at home, large companies absolutely need to be held accountable for the outputs of their models. Safe harbour protections apply to user-generated content. Image Generation outputs are not user-generated; they are generated by the company at the request of a user, using the company’s models and the company’s datasets. It is your responsibility as a provider to make sure the slop machine isn’t willfully breaking the law. That is the bare minimum.

Medical Analysis

There are many conditions that require medical professionals to examine extensive medical scans and documentation. These conditions are often time-sensitive, and humans can only work so fast; even Dr. House-tier professionals have their limits. AI models absolutely should not replace medical professionals; there are very real stakes involved, and there needs to be a trained human making the important calls. But what if you could feed an MRI to a Computer Vision model? What if that model could point out anomalous areas in a brain scan? That takes this incredibly arduous and time-sensitive process and gives medical professionals hints at what potential issues could be present in a patient. This isn’t AI taking a job; this is AI assisting professionals in saving lives. There are no LLMs here. No Image Generation. No frivolity. In an industry where workloads are already an incredible burden and time is the most important resource, this technology has the potential to save lives. And this isn’t just theory; this technology has been trialled and has genuine potential. If the difference between life and death is putting some data into a machine that exists purely to analyze that data, maybe that machine isn’t inherently bad. I’d imagine most people would agree with this, but not all, so welcome to the first shade of grey.

Translation, language and accessibility

Google Translate uses AI. This is nothing new. Duolingo uses AI. That’s kinda new. Screen narrators can use AI for OCR. That’s kinda new. Computer Vision models can describe images to blind people. That’s kinda new. Somewhere in this quagmire of AI you will find a usage of this technology that you are, at worst, ambivalent towards. And you will absolutely find some implementation of this technology that you’ve used, whether you’re aware of it or not. I’d imagine most people have used Google Translate at some point in their lives. Are you now an evil AI glazer? Probably not. If someone blind uses AI to make their day-to-day life easier and you aren’t blind, do you have a leg to stand on in judging their use of AI? If you’re abroad and urgently need to communicate with someone in another language, and you turn to Google Translate for both speech-to-text and translation, should you be made a pariah for using AI? There are times when the convenience and utility of this technology are more important than ethical opinions about that technology. It’s your decision what those times are, but I think it’s impossible to deny that those times do exist.

Vibe Coding and Agentic Assistants

First off, no, do not trust Claude to one-shot your full-stack app. When someone reports a security vulnerability in your application, you will be absolutely fucking clueless how to fix it, because you did not create the code that you are entirely liable for and, believe me, you are liable. You cannot blame Claude for a data breach caused by code you allowed it to ship. That is your responsibility. And if you’re dragged into a lawsuit, you will be up shit creek without a paddle. If you do not understand how to code something, the answer is not to tell the hallucination station to make it for you; even if you read the code it outputs, you have no idea whether it’s an optimal solution, follows best practice or is error-prone. You should strive to fully understand something before you hand off the task to an Agentic Assistant. That way, if there are mistakes, you can identify and address them. There are LLMs designed for this kind of work that are trained on licensed datasets and that, of course, you can run locally. For data sovereignty, ethics and financial reasons, that is how you should be approaching this use case. I’ve used LLMs to turn JSON data into descriptive types (a rough sketch of what I mean is below). That’s something I could absolutely do myself, something I understand completely, and something that an Agentic Assistant can often do just as well as a human. Most importantly, the stakes are low enough that if the agent fucks up, it’s not going to cause catastrophic issues. Welcome to another shade of grey. P.S. if you are primarily a vibe coder I do not respect you.
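
To be concrete about the JSON-to-types example: the data shape and the type name below are made up purely for illustration, not taken from any real project, but this is the level of task I’m happy to hand off and can verify at a glance.

    // Hypothetical input: a JSON payload along these lines.
    // { "id": 42, "name": "Aurora", "tags": ["synth", "ambient"], "releasedAt": "2021-03-01" }

    // The kind of descriptive TypeScript type an assistant can derive from it,
    // and that I can check against the data in a few seconds:
    interface Album {
      id: number;
      name: string;
      tags: string[];
      releasedAt: string; // ISO 8601 date string
    }

If the assistant gets a field wrong, the compiler or a quick read catches it; nothing ships that I don’t understand.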

Computer Vision taking the wheel

“Full self driving soon!” yells Elon Musk, for the 100th time since selling vapourware to his customers. You hop in your Waymo and doomscroll when someone puts a traffic cone on the hood of the car. Frustrated, you complain to the corporate overlords and some human somewhere takes over your car. I’m sure cars can be driven by Computer Vision alone, but there’s an infinitum of trolley problems this brings up. And a computer should not be answering trolley problems. But what about assisted driving? If your car can see a stop sign and warn you to stop for it, is that inherently bad? Does it encourage lazy or distracted driving? What if it saves a life by preventing a traffic accident? I do not think this technology is inherently bad, but I also don’t think you should hand the wheel to a machine, recline your seat and completely ignore the road ahead of you. This may be a more controversial opinion, but it’s one I hold. I see this restricted implementation of Computer Vision no differently than parking assist beeping at you while you double-park your SUV in the Walmart parking lot. It won’t make bad drivers good drivers, and it won’t make good drivers bad drivers, but it might help a driver in bad visual conditions drive a little safer.

Big tech makes you hate AI

Tech bubbles. There’s an infinitum of them, from the dot-com boom to crypto grifts. AI is the latest in a long line of technologies that companies have shoved down your throat for little to no reason while touting how revolutionary their ChatGPT wrapper is. From LLM therapists that give you incredibly harmful information to Sora 2 creating a massive misinformation machine, every step of the way the people behind these products are making stacks of money and shouting about how amazing it is. Ignoring the fact that people get addicted to talking to LLMs, ignoring the parasocial relationships, the misinformation. Because at the end of the day, every user is money in their pocket and they’ll race to the bottom line every single time. If you hate AI, it’s probably because you hate companies like OpenAI, products like the “Friend” or the Rabbit R1. These companies have turned an entire industry into a handful of buzzwords and you’re sick of it. You can hate these companies and products without hating AI. You can have both of these opinions.

Convenience vs Ethics

If you find $100 on the floor, you’re probably going to pocket it. If you find a wallet with $100 in it on the floor, maybe you take the money, maybe you return it. If you find an empty wallet on the floor, you probably return it. This is the simplest example of convenience vs ethics. The AI debate ultimately comes down to this exact dichotomy. Everyone has a point at which their personal convenience stacks up higher than their morals. Most of the online population will interact with AI at some point. You can hate it, you can avoid it, but you WILL interact with it at some point. It’s up to you how to feel about that, and at which point you can rationalize that you’re not doing anything bad by interacting with it. I have a lot of opinions; you’ve probably read them if you’ve made it this far. Maybe you agree with me, maybe you don’t. But you do have a line, and you should know exactly where it sits and be able to justify it. This technology isn’t going anywhere, and the more you can back up your stance on it, the more nuanced you’re able to be, the more you’ll be able to contribute to the conversation.

Powering the machine gods

The datacenters! The water! Giant datacenters existed before AI and will exist after AI. The future of humanity is digital, and if you’re opposed to AI because of the environmental impact, you should also be opposed to all large social media applications and to internet infrastructure like Cloudflare or AWS. I won’t deny that when companies can pour infinite resources into their tech, they have no incentive to optimize. But there are much more flagrant misuses of our natural resources that are normalized because they’re not a talking point. There are products with massive environmental impacts that you probably use every single day. The coatings on your smartphone screen might be manufactured in Baotou, Inner Mongolia, where industrial byproducts are dumped into a toxic, radioactive dam. But you value the convenience of these products. They are staples of your day-to-day life. You can absolutely make a strawman argument out of this and use it to deflect from the power consumption of AI. There are some beautiful whataboutisms in here that can trip you up, but if you want to refute these points you should keep in mind: these products serve a purpose that has a net benefit. If a power-hungry datacenter is training AI models that, from your conclusions in reading everything before this, are not inherently bad, does their environmental impact make them inherently bad? What if they’re trained by you at home? What if the datacenter powering the training exclusively uses green energy? What if the solar panels used for that green energy have toxic manufacturing byproducts? It’s hard to draw a line. But you should introspect on your morals and your boundaries, draw your line if you can, and justify it. I think the rapid growth of the digital realm is only going to get more power hungry, no matter the technologies involved. I think that technology will ultimately progress and optimize when we hit a ceiling. I think companies will always do whatever makes them the most money, and natural resources will always come secondary to that. And I think this issue is completely independent of discussions of AI ethics.