JP van Oosten

Chat vs traditional UI

May 9, 2023

“ChatGPT will soon revolutionise all UI by replacing it with a chat bot.” I read something along these lines the other day. But here are some challenges to consider:

1️⃣ Discoverability is more difficult with a chat bot. You need to interact with it to understand what it can and cannot do. In a traditional interface, you can see the options and menus, and from that context you can often quickly grasp the feature set of the application. This also means it’s easy to scan an application and quickly see some of the things you can do, instead of waiting for the chatbot to finish generating its help text.

2️⃣ Chatbots will hallucinate. That is: they will come up with facts on the spot that sound very convincing, but are totally made up. From personal experience: while I was debugging a piece of code, the chatbot made up an argument to a function that would have been very convenient, but just didn’t exist. No matter how much I tried to coax it away from this solution, it kept on introducing this hallucination. It’s very hard to trust the chatbot if you need to be on your toes all the time while using it.

3️⃣ Sometimes it doesn’t understand my input. Especially when the input is complex or nuanced, it can neglect certain parts of the prompt and focus on the parts that are “more convenient” (obviously that’s an anthropomorphism :-)). Sometimes you really need to “beg” the AI to pay attention to part of your prompt, leading you to CAPITALISE some words, put them in a different order, and so on. This is not something you want to explain to a new customer who doesn’t know much about your product yet.

In any case, while I think a chat interface can be very useful in addition to a traditional UI, I doubt (and hope) it will replace one any time soon.

Do you have a relevant story of ChatGPT or other chat bots failing to understand your input, or of hilarious hallucinations? I’d love to hear them!

Edit May 17th

A friend posted a comment on the LinkedIn post that is a relevant counterpoint to this:

About the discoverability: this is true for the “what do you want me to do”-type of interaction. It’s the same in advertising: companies that advertise “we can do anything for you” will lose to specialised companies that show what they can achieve for your niche question.

But LLMs have an advantage in that they don’t need to ask “what do you want me to do” like traditional UIs. They are flexible enough to ask “what is your context” and figure out the ask from there.

This type of thinking works very poorly in traditional UI (it usually results in a wizard with many steps), but can work really well here. The same goes for exploring answers/solutions and adapting them.

As an example, I can ask an AI to “change this holiday photo a bit so we’re both looking at the camera. Also, can you use an 80s Leica lens” or I could ask “I want to send this photo to my partner, but something feels weird. Doesn’t feel personal and warm. Can you make it look like it was made by a professional?”

— Matthijs Zwinderman

This is an interesting point, and I agree that for such use cases a Chat UI could be useful. I do feel that my main point (that you have to look at this from your customer’s perspective) still stands. If the customer has no idea what the software does, or has trouble getting the software to do what they want, maybe your tool needs something other than a Chat UI.

If you do implement a Chat UI, it would be interesting to see the types of questions being asked, and then have that inform how the traditional UI should change. If many people ask to change a picture so that they’re both looking at the camera, I can imagine that later being added as an easily findable filter option.

(Also posted on my LinkedIn feed)