Mandy Musings

What we (don’t) talk about when we talk about AI

[Image: Wasps swarming on a nest. Photo by Anna Evans on Unsplash.]

Note: This is hopefully the first essay in a series about generative AI.

Allow me to start with a somewhat obvious assertion: you, the unknown-to-me person reading this essay, already have some beliefs about AI1.

And quite probably—especially if you’re reading this essay—you’ve had some discussions with folks who don’t share those beliefs.

Think about some of the conversations you had with folks during earlier hype cycles in tech. How do those compare with your discussions about AI?

[Image: A pain assessment scale from 0 (“No Pain”) to 10 (“Worst Pain”), labeled with line-drawn meme faces that escalate from annoying orange and trollface on the left to rage faces on the right.]

It’s worse, right? Like, it’s not just me?

On the surface, a lot of the AI discourse looks like the same sort of thing that happens with every tech hype cycle:

  • Tools that don’t live up to their initial promises2.

  • Breathless claims from early adopters that anyone not getting on board will be left behind3.

  • Counterclaims that anyone who does get on board will become unable to do “real work”4.

  • Deep-pocketed investors spreading FOMO and engaging in shady business practices.

  • Exploitation of workers.

But this time around, there’s a factor we’ve never seen before: our tools have never acted so much like people.

  • Their most ardent advocates are constantly telling us that soon they effectively will be people.

  • They use our language.

  • We chat with them using interfaces similar to those we use with friends and colleagues.

The last point is particularly salient: so much of our experience of one another has shifted towards being intermediated by computers over the last few decades. We have friends that we only know via social media. We have coworkers that we’ve never interacted with outside of Slack. We’re now used to these interaction models, and we have no trouble believing that there’s a real person on the other end of the connection. Within this context, the notion that a non-person might slip into our midst undetected is deeply unsettling.

Welcome to AI’s uncanny valley.

[Figure: Masahiro Mori’s uncanny valley model, plotting hypothesized emotional response against a robot’s human likeness. The uncanny valley is the region of negative emotional response to robots that seem “almost” human; movement amplifies the effect.]

Although we know that they aren’t people5, this technology is so far outside our collective experience and memory that it hijacks something deep inside our brains, and we can’t quite keep ourselves from ascribing person-like characteristics to it. What happens next follows the same pattern that we’ve seen play out with CGI:

  • Some folks are taken in by the illusion.

  • Many folks are intrigued but unbothered by it.

  • Some folks are repelled by it.

There are myriad factors that affect who ends up in which group: level of familiarity with how the “trick” works, one’s own theory of mind, exposure to pop cultural depictions of AI, and general credulity all play a role. But these are all largely irrelevant to my point: we all see the same phenomenon, and we have vastly different responses to it.

However, unlike CGI, which we typically experience as isolated spectacles in TV and movies, we can easily interact with AI tools acting like people as often as we like. And this is where things start to get uncomfortable: because while we don’t have significant experience with tools that act like people, humanity has plenty of experience with treating actual people like tools.

And the way that we engage with AI sometimes feels eerily similar to these historical patterns. Some of us react with revulsion: it’s taking our jobs! It’s stolen our cultural heritage! It’s using too many of our resources! Others are quick to treat it like a favored pet or to delegate to it many of the tasks that are beneath our dignity or attention: research this! Summarize that! Write some code that I can’t be bothered to think about! Listen to me and praise me!

Now, I know what you’re going to say: that’s a totally unfair characterization!

You’re absolutely right: it’s unfair, it’s unnuanced, and it unjustly attributes actual harms being carried out to the AI, rather than to its creators. After all, they are just tools.

Aren’t they?

While we have a rich history of speculative fiction about sentient robots and computers, it has entirely failed to prepare us for this moment: in the tales we grew up on, robots are universally treated by their narrators as people—and typically marginalized people, at that. We’ve been trained to see them in this light and to either empathize with them or else fear them as an utterly alien form of intelligence.

On the other hand, we’ve been offered no scripts to help us deal with tools that have been designed to present the illusion of personhood without the reality of it.

And this is one of the big things we don’t talk about: AI has no agency, which means that we don’t even have the option of fully treating it like a person6, but the remaining patterns that we’ve seen for engaging with it look like colonization. That feels icky.

We can’t even choose to be entirely unengaged: if the major sources of corporate funding dried up tomorrow, there are still enough advanced, open models in the wild that this empathy-hijacking technology is never fully going to disappear.

So we choose whichever icky pattern we can stomach most easily, and then we often find ourselves in opposition to folks who have chosen different icky patterns. It’s understandable that the resulting discussions get heated, because our own position doesn’t feel all that invulnerable.

All the while, we try to forget how AI feels like a person, and yet…

Those deep, primitive parts of our brains that perceive chatbots as people don’t want to let go of their hallucinations so easily. Ultimately, we end up with a large gap between what we know about AI and what we feel about it.

This is a big deal!

Because when we treat a tool like a tool, but there’s also a part of us that feels like it’s a person, we risk moral injury.

This isn’t to say that we shouldn’t be feeling these things: our bodies and brains use feelings to convey important information to us, and it’s important to listen attentively. Deliberately suppressing these feelings risks numbing ourselves to the need to express empathy in other areas of our life. We must find some way of reconciling and healing this gap between our knowledge and our feelings about AI.

I certainly don’t have all of the answers here, although I suspect there’s particularly low-hanging fruit in a number of areas, like…

  • Shifting the UX away from general-purpose chatbots. (making it feel less like a person)

  • Fine-tuning models to reduce sycophancy and perhaps introduce a degree of epistemic humility. (making it less addictive; reducing the consequences of factual errors)

  • Using AI to augment our own work, rather than trying to have AI perform the work for us. (keeping our judgment sharp)

But wait: surely this isn’t the only reason we’re not making more progress in discussions about AI, is it?

Of course not.

In fact, nearly every time I have some variant of the “why is it hard to talk about AI?” conversation, I encounter someone who wants to stop me because I neglected to bring up their top issue7.

Because as thorny as this particular issue is, and as uncomfortable as it may make us to look at it, it’s just one of many deeply intertwined issues that each have their own ways of short-circuiting our thought processes. And to make it all even more intractable, each of us tends to rank the importance of these issues differently.

No wonder we’re having difficulty making progress.

If we’re going to remain in community while working through these issues, we’re going to have to be patient with one another.

We need to understand that some folks will need to see certain issues addressed before others, and sometimes, that will make dialog difficult—or maybe impossible—for folks at different stages of their analysis.

And most importantly, we need to leave room for folks to make different choices—and to change them—with respect to adoption or non-adoption of AI tools.

So where do we go from here?

For my part, I intend to start writing my way through as many of these issues as I can because it’s one of my most effective tools for figuring out what I think about the topic. Along the way, I’m hoping to connect with other folks who are engaging with curiosity and care for one another.

Perhaps you’d like to come along for the ride?


Wow, you read to the end!

I guess this is the part where I’m supposed to sell you something, but I don’t have anything to sell. 🤷🏼‍♀️

Maybe you’d like to know when I publish new posts on this blog? I can help with that!

Want emails?

Just new posts—never any spam. (like I said, I don’t have anything to sell you!)

Prefer syndication?

I’ve got RSS, Atom, and even the new kid on the block, JSON Feed. Take your pick!

Feeling social?

I’m most active on Bluesky and Mastodon. I’ll post about new articles from those accounts.