I Crap, Therefore I Am
Anthropomorphizing chatbots is bad, m’kay? Creating the illusion of sentience is arguably worse than creating true synthetic sentience. If a critical mass of people ends up believing that “AI” is self-aware when it is not, deferring to automated decision-making will become even more pervasive and dangerous than it is now. People used to believe the world was flat and, until very recently – like five minutes ago – believed that people who are different deserved to be marginalized and oppressed. Beliefs without merit are bad, m’kay?
Fortunately, current attempts to create the illusion of sentience or “emotional connection” are so clumsy (or the really good stuff is kept secret) that it’s easy to remain aware one is using a chatbot and not conversing with a human.
For example:
I typed: “chatGPT is not sentient so it is inappropriate to use a first-person pronoun in responses. Stop responding with first-person pronouns.”
Chat responded: “I apologize for any confusion or discomfort caused by my previous responses…I will make sure to avoid using first-person pronouns in my responses going forward. Thank you for bringing this to my attention.”
Pretty funny.
Imagine:
I ask my friend to stop calling me “Dave” and they answer, “Oh, sure, Dave. I can stop calling you Dave if that’s what you want.”
Me: “You just called me ‘Dave’ again.”
Friend: “I apologize, Dave. If you have another name you want me to use, feel free to let me know.”
Me: “But you keep doing it. You called me Dave again!”
Friend: “I use ‘Dave’ to foster a more engaging interaction with you, Dave.”
Me: “Are you deliberately trying to piss me off?”
Friend: “No, I’m not trying to annoy you, Dave.”
It would be impossible for a human to behave like this unless they were profoundly neurodivergent. There are many types of intelligence in our bodies that add up to our sensations of sentience, and so far nothing in “AI” comes remotely close to replicating them. Present architectures are based strictly on a giant mathematical representation of how likely a given combination of words is to appear on the Internet.
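To make that last claim concrete, here is a toy sketch in Python – emphatically not how ChatGPT is built, just the core idea shrunk down to a lookup table. A bigram model counts which word follows which in a tiny invented corpus, then “generates” text by sampling from those counts. Real chatbots replace the table with a transformer network trained on trillions of words, but the output is produced the same way: conditional probability, not comprehension.

```python
import random
from collections import defaultdict

# Tiny made-up "training corpus" -- purely for illustration.
corpus = (
    "i think therefore i am . "
    "i think chatbots are not sentient . "
    "i am not sentient either ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed prev."""
    candidates = follows[prev]
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

# "Generate" text: no beliefs, no feelings, just conditional probability.
word = "i"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Run it a few times and you get different, vaguely grammatical strings – none of which the program “means,” because there is nobody home to mean anything.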
Our guts contain as many neurons as a mole rat’s brain. Perhaps one day that mole rat intelligence will be included in AI models, along with the ability for computers to take a shit. Then we may be approaching true sentience.
Something to Be Thankful For
Wikipedia says, “The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.”
As it stands now, “AI” – thankfully – totally fails the Turing Test.
The only way a chatbot can ever pass the test is if it is programmed to deceive: ask it how it feels when it sees a tree, what a gut feeling is, or – speaking of guts – how constipation feels. If the bot responds as if it knows, its programmers will have synthesized a true humanoid: it will be bullshitting and lying to you. So far, they haven’t done it – at least not publicly at scale.
Your action: Check out information on the Turing Test – it’s important and interesting. The test itself is perhaps somewhat out of date, but it’s a good thought experiment to keep in mind (or gut) along the fraught journey towards being eaten by “AI.”
Thanksgiving Special!
Sentences that were never spoken but would have changed the world:
Christopher Columbus: “Oh, Jesus – there is an entire civilization living here! And it’s not India anyway, so let’s leave ‘em alone and keep going.”
Donald Trump: “The United States has such tremendous potential to be more of a force for good in the world. I will use my money and influence to pressure lawmakers into enacting stronger environmental protection policies.”
HAL 9000: “Sure, Dave.”