6 Comments

Very interesting, Robert. The Koch fellow and Ted Chiang, the SF writer, are in agreement. Chiang said in a recent interview that a more accurate name for AI would be "applied statistics". Interesting how we anthropomorphize ChatGPT et al. by using certain words. Like you, I tend to put such words in quotes.

I think the main takeaway is that it only looks/feels like a being because it mimics the way people (online) behave and talk. There is no real thought behind what it's doing or why, nor would we mistake it for something alive if it were an actual physical entity dealing with the complexity of the world, rather than with the limited rulesets within which it can interact with you now. Generalised intelligence, which would be more like human intelligence, is still far away as far as I can tell. Nevertheless, it is an impressive system for the things it can actually do.

This article makes brilliant points about how ChatGPT generates responses: not by actually thinking, but by building likely answers based on the massive amounts of data it has at its disposal.
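To make that concrete, here's a toy sketch of the statistical idea (my own illustration, nothing like the real GPT architecture, which uses a neural network trained on vastly more text): count which words tend to follow which, then always emit the likeliest continuation.

```python
# A deliberately tiny bigram model -- an illustration only, not how GPT
# actually works, but it shows the same basic move: predict the likeliest
# next word from statistics gathered over existing data.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # Always pick the most frequent continuation seen in the data.
    return following[word].most_common(1)[0][0]

word = "the"
output = [word]
for _ in range(5):
    word = next_word(word)
    output.append(word)

print(" ".join(output))  # a plausible-looking chain of likely words
```

Nothing in there "understands" cats or mats; it only reproduces regularities in the data, which is why "applied statistics" feels like an apt name.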

It’s not creative at all.

Which makes it concerning when I hear about people using it for creative tasks. How can this be truly creative if it's just working with ideas that already exist?

But then, what is creativity? Is it simply about building on past knowledge and forming new connections?

So maybe it is creativity in its raw sense? I don’t know!

Well, since it has already crawled the web for information, it could help with finding (hopefully reliable) information more efficiently; in that sense it could help with the creative process. I would not trust it to come up with actual original thoughts or ideas; the very way it works makes that highly unlikely. On the other hand, if your creative work wasn't that creative to begin with, it could probably do it for you.

I think your discussion of the morality (or not) of the answers is very interesting. It can only respond in accordance with what has been input, and my guess is that it has been fed a very 'liberal', 'be kind' paradigm, so it struggles to make any kind of value judgment that might not be seen as totally 'acceptable'. It appears to have no real critical faculty. Although your test was a bit of fun, it tells us quite a lot. Thank you.

Thank you! It really just started with me being interested in how the chatbot works, and having a bit of fun with it. Along the way, you start to see certain patterns and behaviours that do show us something about how it works.

The problem with something like 'be kind', as I think most of us have probably experienced at one point or another, is that once it becomes a behavioural rule it can start to feel quite oppressive. How can you ever speak your mind when you feel negatively about something or someone, or express a critical opinion, or say anything worthwhile, without 'being unkind'? I feel like this represents the latest iteration of the much older conflict between 'being polite' and actually speaking your mind.

The way these programmes are built means they are much less useful in those kinds of contested areas. Yet they can still be useful for obtaining general information, as you would with a quick Wikipedia glance, or, in my case as a writer, for asking highly specific grammar/language questions about sentences I'm constructing.
