ChatGPT: Wonder & Dread

I first wrote about being impressed by, and also concerned about, ChatGPT in January 2023. In December, I also shared my shock that ChatGPT 3.0 reported that my brother had died. It was a response to a prompt asking it to tell me about my brother, and while it got most of his bio right, the entire text was in the past tense, indicating that he had passed away. My brother, however, was and still is very much alive, healthy, and running major races.

While I had seen inaccuracies before, it was this egregious error that made me very cautious about relying too readily on output from Generative AI tools, and especially ChatGPT. This, of course, is called a hallucination, and it can be expected once you understand how Generative AI actually works.

wonder

A lot of time has passed and Generative AI tools have improved drastically. I rely on them several times a day now and, most of the time, am truly in awe and full of wonder when using them. It's truly impressive what they can do. The confidence with which they provide responses engenders confidence in the user, even a trusting reliance, which of course makes the errors they do make all the more troublesome because we don't expect them. As a result, I'm also vigilant about checking sources.

I even take ChatGPT along in my pocket on daily run/walks, during which I have a lengthy verbal conversation with what I consider to be a highly intelligent friend.

dread

The other evening, during a verbal conversation, I asked ChatGPT 4o to tell me about Habits for a Better World, the initiative that Carly Williams and I co-founded. I wanted to see what it would say when someone asked it a question about our project. The image below shows what it came back with.

ChatGPT 4o verbal conversation.

Step 1

It basically said that I had nothing to do with the initiative and that it was Lauren Swanson, not me, who co-founded it with Carly Williams. Lauren Swanson is amazing and was my Chief of Staff when I was at IBM. However, she's had nothing to do with the Habits for a Better World project and has never met Carly Williams.

Step 2

I tried to correct ChatGPT, but it doubled down, insisting that it was Lauren who co-led the project and that if I had worked with Carly Williams at all, it must have been on an unrelated project.

Step 3

I provided the link to our organization's website, and, finally, ChatGPT was convinced to honor me with my rightful place as a co-founder alongside Carly.

In a subsequent conversation, it referred to me as Lauren Swanson and conflated our bios. I again had to correct it, this time using the web browser version to point out the error and set the record straight.

persistence

I had assumed that all was good, but I just asked ChatGPT to tell me about Lauren Swanson, and this is what it said.

ChatGPT response to the prompt, "tell me about Lauren Swanson", which conflates Lauren's bio information with mine.

reflection

Lauren and I had a chuckle about it, and it was a bit of fun for us, but there's a more serious issue here. I had assumed that ChatGPT getting my brother's status wrong, reporting him dead when he is alive, was due to it being version 3.0. However, the fail I'm sharing here came from the latest 4o version.

When I'm prompting ChatGPT for information, it's usually about things I don't know, which of course is why I'm asking. When it gets things this wrong on a topic I know a lot about, I'm now more concerned about how much of what it gives me is seriously wrong, a hallucination I wouldn't recognize. My usual practice is to ask for sources; when it says it is generalizing across many sources, I simply don't believe what it just gave me. When it gives me sources, I go and check them.

Interestingly, when I just asked it for the source of the information it gave me about Lauren Swanson, it said the response was based on details I had shared with it in previous conversations. That, of course, makes me wonder what other information I've given it that it is now conflating into other conversations I'm having with it. The bottom line on all of this is: user beware!