The A.I. Post

AI gets talked about a lot these days. Like everyone else, I have Opinions™. I don’t want this to turn into a blog about AI, so I’m going to do my best to roll my thoughts up here.

What are we talking about?

The term AI is thrown around a lot, but is really an umbrella term for a lot of different technologies. This means it loses a lot of its semantic value, but that’s not stopped media and C-suite executives and marketers and others from just vomiting it out everywhere they can. Here’s a quick breakdown of a few of the major categories of AI that folks are talking about:

  • Generative AI: The most common type of generative AI right now uses an approach called a Large Language Model, or LLM. The big companies you hear about like OpenAI (ChatGPT) and Anthropic (Claude) and Microsoft (Copilot) are based on LLMs. The (very high level) gist of how this works is that it takes a large body of existing data, slices it up, and then uses complicated math to figure out what sort of combination of words and content would sound like an answer to the question it’s asked. Read that last part again, it’s important. While different systems try to weight data to be more reliable, or try to add a fact-checking pass, the process itself is still effectively just putting together something that sounds right. This is why some folks call them Bullshit Machines. Because the algorithm ultimately doesn’t care whether it’s right as long as it sounds right, you get what people call “hallucinations” – it’s not a bug, it’s a feature (or at least a function of the system working how it does).
  • Machine Learning: Machine learning has actually been around for years, but didn’t sound sexy enough, so it got rolled into the AI umbrella. It’s centered around the idea of developing algorithms and systems that can be taught a pattern or heuristic, and then apply that heuristic to other data. When you hear stories about researchers using AI to discover a new protein sequence or identify a lost ancient city in the Sahara or similar, this is usually the type of AI they’re talking about. While it’s certainly possible that a machine learning system could be using an LLM as part of its process, it’s generally not really the same process at all.
  • Assistive AI: This is also sometimes referred to as “Artificial Narrow Intelligence” or “Weak AI”. Sometimes this is a custom algorithm, sometimes it’s piggybacking on an LLM with customized data in the model to make it more knowledgeable about specific topics. The point here is that it’s got specialized knowledge and expertise in a particular field or topic, which allows it to perform certain actions. Before the current AI craze, this is the sort of stuff you’d get marketspeak about, like how “our advanced algorithm allows you to do X”. Think recommendation systems on YouTube, but systems like self-driving cars also fall into this category. Heck, even autocorrect falls into this category.
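To make the Generative AI point above concrete, here’s a deliberately tiny sketch of the “predict what sounds right” idea. This is a toy word-pair (bigram) model, not how real LLMs work – they use neural networks trained over tokens, not frequency tables – but it shows the same dynamic: every step of the output is statistically plausible given the training text, and at no point does anything check whether the result is true.

```python
import random
from collections import defaultdict

# Toy "language model": learn which word tends to follow which,
# then generate text that merely *sounds* plausible.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words were seen following each word (a bigram table).
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, length):
    """Pick each next word at random from words seen after the current one.
    Nothing here cares whether the output is *correct* -- only that each
    individual step is plausible given the training data."""
    words = [start]
    for _ in range(length - 1):
        options = followers.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 6))  # e.g. "the dog sat on the mat"
```

Scale that idea up by a few hundred billion parameters and you have the shape of the problem: fluency and accuracy are two different axes, and this process only optimizes the first one.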

There are technically other divisions and categorizations (natural language processing, “Strong AI” or “Artificial General Intelligence”, et cetera), and also even within the three I’ve brought up there ends up being overlap, but frankly if you can at least understand these three pillars, you’ll understand 90% of what is getting talked about when people bring up AI.

Is it a fad?

TLDR: Yep. But also, y’know, not.

Basically, the current craze where it’s on the tip of everyone’s tongue, every press release is about how XYZ company is integrating some “exciting” new AI functionality, and even existing features are being “enhanced” (aka rebranded) as AI-enabled… yeah, that’s a fad. It’s venture capital over-leveraging itself on a big bet that AI will be huge and basically take over the world. It’s CEOs and CTOs egging each other on and buying into the shiny new thing, and then imposing that adoption on their employees. The abrupt, ill-justified, poorly rationalized rush into this space absolutely reeks of a bubble – one that when (not if) it bursts is going to cause a world of pain for a lot of people.

But that said, the stuff underneath the breathless announcements and fabricated excitement is actually pretty real. The technologies and techniques that have been developed aren’t going to suddenly evaporate even if the companies currently championing them disappear. It’s not just the C-suite and investors pushing it – a lot of consumers and workers also want at least the promise of what AI could offer: less drudge work, more doing the stuff you actually care about. If you go back to one of the original concepts of Keynesian economics, the notion was that as technology improves productivity, we could work less to achieve the same goals. (Never mind the fact that rather than letting people earn a living in 15-hour work weeks, we opted to instead increase expectations of productivity – that’s a topic for another day. The point is that the working world loves the dangled carrot of being able to work less, even if societal expectations dictate we’ll actually just fill that time with more to do.)

So, yeah. The current craze is a fad. It’s overhyped and overvalued. The people selling it overpromise, and the people buying it have inflated expectations of it being this magical panacea. But as a technology and concept, that’s going to stick around in some fashion. Think about the classic bubble example, the Tulip mania of the 1630s – the bubble burst, but we still bought and buy tulips. There’s been enough work and enough interest among individuals and the open source community that it’ll carry on in some form even if there’s a total collapse.

The Good

It’s pretty clear to me that both the AI evangelists and the AI haters get more than a little hyperbolic in their stance on the topic. It’s not some pristine thing that is going to usher in a utopia, but it’s also not the end of the world writ large. There’s some good and there’s some bad.

So, the good (as I see it):

  • When it works, it can be fantastic. Especially in the spaces of machine learning, natural language processing, and assistive AI, it has already demonstrated real value in helping improve things for real people. We’re seeing significant leaps in identifying patterns, summarization and categorization, first-pass transcription and translation, and other similar areas.
    • Just to address something real quick: it’s worth noting that I call out transcription and translation, but that does not mean I think it’s a replacement for good transcriptionists and translators. Translation is a skill, and one I think people take for granted. It’s not just a matter of finding the right word in a different language, it’s also about understanding the intent and the cultural nuances (you have no idea how many colloquialisms we use in our writing, until you actually go to prep a document for translation). Likewise, if you’ve ever seen a good transcriptionist do live captioning at a conference, and compared it to, say, a YouTube automatic caption, you’ll realize just how much better the human is. That said, there are a lot of places that would benefit from at least a “good enough” translation, or video calls that would benefit from a transcript, where the choice was never “human or AI”, it was “none or AI”. That means more places where stuff is accessible that wouldn’t be otherwise. That’s a good thing.
  • As an assistant, it really can speed some things up. While there’s a lot of work that may feel like drudge work but is actually essential for really learning how something works, there’s a lot of other bits that just aren’t. Like, in programming, there’s often some structural scaffolding that needs to happen at the start of a project. There’s certainly some value in understanding that structure, but the value definitely diminishes the 15th time you start a new project and have to set things up. Even before AI, lots of frameworks were already trying to solve for this with helper scripts to set up the basics for you, creating templates and similar. That’s a great example of an area where continuing to do it by hand doesn’t actually bring value, and letting an AI help lets you get to the meat of what you’re trying to work on faster.
  • The makeshift reviewer: obviously a real reviewer or editor with expertise is a better option than an AI reviewer. But sometimes that’s just not an option – maybe you’re a solo developer, or a writer needing a second set of eyes. You’re not asking for it to rewrite what you created, just asking “does this make sense for my stated goal and intended audience.” If you don’t have another human to help with that, an AI pass can be genuinely helpful.
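The scaffolding point above is easy to picture in code. Here’s a minimal sketch of the kind of helper script frameworks ship to stamp out a new project’s boilerplate – the file names and layout here are hypothetical, not any particular framework’s conventions:

```python
from pathlib import Path

def scaffold(name: str, root: Path = Path(".")) -> Path:
    """Create the boring, repeatable skeleton of a new project
    (hypothetical layout) so a human can skip straight to the
    interesting work."""
    project = root / name
    (project / "src").mkdir(parents=True, exist_ok=True)
    (project / "tests").mkdir(exist_ok=True)
    (project / "README.md").write_text(f"# {name}\n")
    (project / "src" / "__init__.py").write_text("")
    (project / "tests" / "test_smoke.py").write_text(
        "def test_smoke():\n    assert True\n"
    )
    return project
```

Whether the setup comes from a ten-line script like this, a framework’s `create-*` tool, or an AI assistant, the point is the same: the fifteenth time you do it by hand teaches you nothing new.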

The Bad

There’s also a lot that’s bad about the current approach to AI:

  • We’re getting dumber, literally. Relying on AI too heavily shows a correlation with reduced cognitive capacity. There certainly could be additional factors at work, but the research seems to be showing a connection.
  • Relatedly: it often leads to a lot of extra bullshit attached to what should be simple tasks. While AI can be good at summarization, I can’t tell you how many times I’ve seen a plan or task that has 10 times the number of steps or details it actually needs, and the answer to why has generally been “oh, I had AI help me create that”. This loops back to GenAI being a bullshit machine – it’s not looking at what that plan or task actually needs, it’s asking what a plan or task would potentially look like.
    • This is a bigger problem than it may look like on the surface. We’ve all had overly wordy, noisy documents in our lives, well before AI got involved (heck, some of that is why AI thinks those documents “should” look like that). But it applies to more than just planning documents and task lists. The same noise also creeps into other tasks like programming, where AI output can end up overly verbose, and when a human goes to read it, they end up with cognitive fatigue from wading through it. This means the work that ostensibly “should” have human oversight ends up just getting a rubber stamp approval without a more involved review (or has an AI do the review, so you’ve got an AI reviewing the AI-written code, and humans are effectively taken out of the equation entirely).
  • Investor and executive pressure to use AI was never about improving workers’ lives and productivity, and we’re starting to see the outcome of that with multiple large companies laying off significant portions of their workforce because (they claim) AI has made them unnecessary.
    • There’s so many reasons this is bad. It may not be AI’s fault (it’s just a tool), but it’s absolutely being used as a lever to further consolidate wealth and power into the hands of just a few, while fucking over the rest of us. And to be clear, this isn’t a matter of folks just needing to re-train or shift industries because the market has changed – it is a literal contraction of the workforce, but one that is economically obfuscated by other factors (it makes investors happy, so the stock market goes up, which is pointed at as the metric for economic health, despite so many real humans being out of work and the number of available jobs literally shrinking).
  • Government policy and rules aren’t keeping up. Some of that is that governmental policy always lags behind technology, because the impact has to be seen before a need for regulation is realized. It’s also because a lot of the regulatory bodies that should be providing oversight are currently compromised, especially in the US. This is how you get large companies like OpenAI and Anthropic and others slurping up every bit of content they can to feed their models, despite it being copyrighted work that the creators are neither compensated for, nor asked permission to use. There have been some lawsuits about this with mixed success, but the ramifications continue to be felt, and it has had a chilling effect on the creator economy. (Organizations like the RIAA and MPAA have historically been all too happy to sue individuals for piracy, but when these corporations do it at a massive scale, and then use those pirated materials to make and sell knockoff variations, they’ve been largely quiet about it.)
    • Part of the pitch for generative AI in particular is that it could “democratize creation”, let folks who might be lacking in a particular skillset still make something. But in reality, a lot of this content ends up being slop, and floods a market that frankly was already suffering from a deluge of mediocre work. (If you’re curious about what I’m counting as slop, Hank Green has a pretty solid explainer, I think.)
  • Piggybacking on the lack of regulation and guardrails, it has been a cybersecurity nightmare, on multiple fronts. First, it’s just insanely easy to create a phishing site now that is pixel-perfect, missing a lot of the normal tell-tales that might tip someone off that something isn’t legit. Second, you can now use AI-driven video modification to impersonate others (“deepfakes”), making social engineering (already one of the most effective methods to compromise a system) massively more sophisticated. Third, there’s basically no oversight into what an AI is doing ostensibly on your behalf, which means if you give an AI agent control or access to your personal or financial data, a hacker could compromise the agent (frequently pretty easily) to hand that data over. This isn’t just hypothetical, either, there have already been numerous real world examples of these sorts of compromises.
  • It’s cooking the planet. Generative AI, especially at the scale and usage we’re currently seeing, requires a lot of computational power, which in turn draws a lot of electricity to run, and a lot of electricity and water to keep the servers cool. We already have water shortages in a lot of regions, yet these companies are waving large sums of cash to get new data centers built to feed their computational needs. Sometimes they’re open about it, other times they quietly pay off officials and get it done through back channels. In both cases, the end result is less water for humans, agriculture, and the environment in general.
    • There’s also been some real economic impact from this data center push, as well – if you haven’t priced computer components recently, you may be shocked at how much the cost of RAM, hard drives, CPUs, and GPUs has risen. This is because the manufacturers of these components have contractually allocated multiple years of production to these firms looking to build data centers. (You may think, “fine, I can use my existing computer til things stabilize” – don’t forget just how much of your everyday life either also uses these components, or is seeing shortages while manufacturers shift to focus on their other contracts. Why did car prices go up? Why did camera prices go up? Some of it is inflation, some of it is stupid illegal tariffs, but some of it is also this component shortage.)

So how do I feel about it?

I feel like it’s a mixed bag. I think the current craze is terrible, but that the technology in principle can be useful. I think it’s opening up a Pandora’s box of issues, largely because we as a society aren’t mature enough to not abuse it. I dislike generative AI in general and feel it leads to us getting stupider and is too frequently used without ethical or moral consideration. I think assistive AI and machine learning are useful, but are a tool for humans, not a replacement – and should be treated as such. I hope every single company that lays off humans because they think AI will do the work well enough gets a resounding comeuppance. As far as the technology itself goes, I think it can be a useful tool, but the ways we’ve been approaching it have some deep fundamental flaws that make society actively worse.

A New Yorker cover from 2023, showing a robot "helping" a human with a task but actually making garbage.

2 thoughts on “The A.I. Post”

  1. Thanks so much for writing this. Even though there are a million-and-one posts on this subject out there in the world, I’m always comforted when I find another one that aligns with my own feelings about AI. And of course it’s nice to have multiple sources to point to if someone asks me for my take on AI! Even better, you don’t simply agree with me, you have a subtly different take that challenges my own stance.

    One thing that you didn’t bring up (directly) that personally bothers me a lot? Forced AI integrations, whether it’s a manager creating top-down “AI-first” initiatives at work or companies like Microsoft forcing AI into every nook and cranny of their products. While in general I agree with the idea that AI can be useful in the right contexts, my repeated (always negative) experiences with Forced AI have really soured me on even some decent use cases. At some point the bullshit text, even in small doses, builds up into full-blown cognitive fatigue, like mercury buildup in the bloodstream from eating large numbers of lightly-contaminated fish. Thanks to these experiences, I’ve adjusted my position and I’m solidly anti-AI these days… but posts like this are a good reminder that I shouldn’t let one bad apple spoil the bunch, proverbially.

    Anyway, thanks for a great article. Keep ’em coming via RSS! Viva la distributed computing hand-crafted artisanal blog revolution!

    1. You’re totally right – I actually wanted to talk more about the top-down AI mandates, but didn’t manage to squeeze it in. It’s deeply problematic on multiple fronts, not the least of which being the artificial demand for it – it’s clear that there is investor and executive pressure to adopt it to prove out the significant financial bet they’ve placed on the technology. This creates a lot of really messed up incentives within organizations, and rather than letting it grow at an organic and (comparatively) sustainable rate, there’s this pressure to shoehorn it into every aspect of your work, including in places it absolutely should not be. It also causes this “gold rush” mentality across the board, where everyone is scrambling to build more data centers, buy more proverbial shovels, and everyone and their brother is trying to come up with a somewhat novel use for the AI platforms (which frequently end up just being a tweaked or reskinned wrapper around an existing tool). So, you have folks flooding the market space with hustles, you’ve got hoarders trying to scoop up all the prime (data center) real estate, and you’ve got folks selling shovels (the actual compute, like Nvidia, or the models, like Anthropic). But it’s all basically a game of hot potato, everyone seeing what money they can scoop up before the music stops.

      My hope is that the forced AI push will just lead to people getting sick of it faster – I’d rather get on to the part of picking up the rubble and seeing what parts of all the nonsense were actually worthwhile.

      Thanks for the comments and kind words!
