
Maybe I am in the minority here, but I appreciate the new crop of LLM based phone assistants. I recently switched to mint mobile and needed to do something that wasn't possible in their app. The LLM answered the call immediately, was able to understand me in natural conversation, and solved my problem. I was off the call in less than a minute. In the past I would have been on hold for 15-20 minutes and possibly had a support agent who didn't know how to solve my problem.



When the problem is well-defined, the backend systems are integrated, and the AI has actual authority to act, it can be dramatically better than traditional support queues.

Also I bet the LLM didn't speak too fast, enunciate unclearly, have a busted and crackly headset obscuring every other word it said to you, or have an accent that you struggled to understand either.

I was on the wrong end of some (presumably) LLM powered support via ebay's chatbot earlier this week and it was a completely terrible experience. But that's because ebay haven't done a very good job, not because the idea of LLM-powered support is fundamentally flawed.

When implemented well it can work great.


Who has implemented it well?

CVS. Refilling a prescription is a very easy process now; I was really surprised.

Amazon support does this pretty well with their chat. The agent can pull all the relevant order details before the ticket hits a human in the loop, who appears to just be a sanity check to approve a refund or whatever. Real value there.

Didn't work for me. I had a package marked delivered that never showed. The AI initiated a return process (but I didn't have anything to return). I needed to escalate to a human.

My big question is: why has the company and its development process failed so badly that they need an LLM instead of the app? Surely the app could implement everything the LLM can.

I guess apps can only handle a discrete set of predetermined problems, whereas LLMs can handle problems the company hasn't foreseen.

Don't LLMs still have to interface with whatever system allows them to do things? Or are they really given free rein to do anything at all, even stuff no one considered?

I imagine they just help with triaging the customer's query so it ends up with the right department/team. Probably also some tech support first, in case the LLM can solve the issue itself.
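The triage step described above can be sketched as a router that scores a query against per-department topics and picks the best match. This is a toy illustration: a real deployment would use an LLM classifier, and the department names and keyword lists here are invented.

```python
# Toy intent router: scores a customer query against per-department
# keyword sets and returns the best-matching team. Keyword overlap
# stands in for what an LLM classifier would do in production.
DEPARTMENTS = {
    "billing": {"invoice", "charge", "refund", "payment"},
    "tech_support": {"error", "crash", "login", "password"},
    "shipping": {"package", "delivery", "tracking", "late"},
}

def route(query: str) -> str:
    """Return the department whose keywords best overlap the query."""
    words = set(query.lower().split())
    scores = {dept: len(words & kw) for dept, kw in DEPARTMENTS.items()}
    best = max(scores, key=scores.get)
    # Fall back to a general queue when nothing matches at all.
    return best if scores[best] > 0 else "general"
```

For example, `route("my package delivery is late")` routes to the shipping team, while an unmatchable query falls through to a general queue.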

In the thread you are replying to, the problem was resolved in a minute or two. It didn't get escalated to some team.

but... they could add the LLM to the app

Let's take Zawinski's old law up a notch:

"Every program attempts to expand until it has a built in LLM."


I had a similar situation with a chatbot: I posted a highly technical question, got a very fast reply with mostly correct data. Asked a follow-up question, got a precise reply. Asked to clarify something, got a human-written message (all lowercase, very short, so easy to distinguish from the previous LLM answers).

Unfortunately, the human behind it was not technically-savvy enough to clarify a point, so I had to either accept the LLM response, or quit trying. But at least it saved me the time from trying to explain to a level 1 support person that I knew exactly what I was asking about.


Agreed; they're far better than the old style robots, which is what you'd have to deal with otherwise.

More generally, when done well, RAG is really great. I was recently trying out a new bookkeeping software (manager.io), and really appreciated the chatbot they've added to their website. Basically, instead of digging through the documentation and forums to try to find answers to questions, I can just ask. It's great.
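The RAG pattern mentioned above boils down to: retrieve the documentation snippets most relevant to the question, then prepend them to the prompt the model answers from. A minimal sketch, with keyword-overlap scoring standing in for embedding similarity and invented documentation snippets:

```python
# Minimal RAG retrieval step: rank doc snippets by relevance to the
# question, then build a prompt with the top snippets as context.
# Word overlap stands in for embedding similarity; the snippets are
# invented examples, not real manager.io docs.
DOCS = [
    "To import bank statements, go to Settings and enable bank rules.",
    "Invoices can be emailed to customers directly from the invoice view.",
    "Reports are generated under the Reports tab and can be exported.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k snippets sharing the most words with the question."""
    q = set(question.lower().split())
    ranked = sorted(DOCS,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Prepend retrieved context to the question for the model."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"
```

The payoff is exactly what the comment describes: the user asks in natural language, and the chatbot answers grounded in the docs instead of making the user dig through them.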


Yep probably. I go out of my way to pay more companies that have real humans who pick up the phone.

If my mechanic answered with an LLM I’d take my car elsewhere.


I had a recent experience with the Lowes agent today. It was pretty decent! Until I asked "how many of that item is available", and it didn't know how to answer that (It was a clearance item). At least when I asked to talk to a human I got one in a few seconds.

i genuinely don't get the point of this. isn't it easier to have a native chat interface? phone is a much worse UX and we simply use it because of the assumption that a human is behind it. once that assumption doesn't hold - phone based help has no place here.

Phone is a better UX for many people, like my aging parents.

Phone is also faster.

Spoken word is still the most information dense way for humans to communicate abstract ideas in real time.


Uhhhh

Reading > Listening

Speaking > Typing

If you want raw performance on both sides, it is better to dictate an email that gets read later.


Monday

Hi Mr Garage man

Can you give me a quote for a timing belt on my car. It's a 2020 Foo Bar.

Monday night

Hi customer

Is it a diesel or petrol?

Monday night

Hi garage

It is a petrol

Tuesday lunch

Hi customer

Which engine size? The 1.2 has a chain, but the 1.6 is a wet belt

Tuesday night

Hi garage

How do I tell?

Wednesday lunch

Hi customer

Can you give me your registration number I'll look it up

Wednesday night

Hi garage

Abc 123

Thursday lunch

Hi customer

That is the 2.0. You need to change the water pump at the same time, depending on when it was last done. How many miles has it done?

Thursday night

Hi garage

100,000

Friday morning

Hi customer

OK it is $2,000 including the oil and coolant change, water pump and seals.

Friday lunch

Hi garage

I don't want the coolant change or oil I just want the belt doing.

Monday morning

Hi customer

I'm afraid you have to drop the oil and coolant to do the job, so it's not optional.

Monday night

Oh, I understand. When can you fit me in

Tuesday morning

Friday next

Tuesday night

I'm away that week

Etc...

I think a phone call is much faster and an AI is a liability


You make a great and valid point. But I did say "real time".

Should've spent a few more minutes trying to prompt-inject the agent into giving you a discount.

The LLM is just calling APIs, though; if the LLM can do it, then it should be exposed to the user. Why have the middleman?

The majority of everyday customers have never heard of an API and prefer to call in via phone.

In that medium, LLMs are so much better than the old phone trees and waiting on hold.


I think the point is: If there is an API somewhere in Company's systems that does what the customer wants, why have a phone tree or an LLM in the way? Just add a button to the app itself that calls that API.

Most support volume comes through voice, and you need a layer to interpret what the customer's intent is.

Additionally, for many use cases it's not feasible from an engineering standpoint to expose a separate API for each entire workflow; instead, there are typically many smaller composable steps that need to be strung together in a certain order depending on the situation.

It's a good fit for an LLM + tools.
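The "LLM + tools" shape described above can be sketched as small composable steps exposed as callable tools, with the model deciding the order at runtime. In this sketch a fixed script stands in for the model's tool choices, and the tool names (`look_up_account`, `check_eligibility`, `apply_change`) are invented:

```python
# Sketch of the LLM + tools pattern: small composable backend steps
# registered as tools, strung together per request. In production the
# model emits tool calls one at a time, reading each result before
# choosing the next step; here the sequence is hard-coded.
def look_up_account(user_id: str) -> dict:
    # Stub: a real tool would query the account service.
    return {"user_id": user_id, "plan": "prepaid"}

def check_eligibility(account: dict) -> bool:
    # Stub eligibility rule for illustration only.
    return account["plan"] == "prepaid"

def apply_change(account: dict) -> str:
    # Stub: a real tool would call the mutation API.
    return f"change applied to {account['user_id']}"

TOOLS = {
    "look_up_account": look_up_account,
    "check_eligibility": check_eligibility,
    "apply_change": apply_change,
}

def handle_request(user_id: str) -> str:
    """One possible ordering of the composable steps for a request."""
    account = TOOLS["look_up_account"](user_id)
    if not TOOLS["check_eligibility"](account):
        return "escalate to human"
    return TOOLS["apply_change"](account)
```

The point of the pattern is that no single "do the whole workflow" API has to exist: the same small tools get recombined for situations nobody wrote a dedicated endpoint for.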


There's no reason the app itself couldn't string together those composable steps into an action performed when the user invokes it. OP's point is that neither an LLM nor a voice layer is really required, unless you're deliberately aiming to frustrate the user by adding extra steps (chat, phone call). Customer intent can be determined with good UX.

It's the opposite: the majority of users prefer to get support via chat or phone.

Navigating UX is still difficult in 2026.

The average HN user is leagues above what the average customer or even SMB knows about tech and UX; it's just not realistic to expect them to navigate redesigned apps.


What could the LLM be doing that wasn't possible inside the app? At the end of the day, the LLM is just making an API call to whatever system needed to be updated anyway, that could have just been a button in an app.

Just to be clear, the LLM assistant could be a great supplement to the app for people with disabilities or those who struggle with phone apps for whatever reason, but for most people the LLM phone call seems worse.


There are plenty of times inside the Amazon app where I'll click the button to get a refund or replacement on an order and go through the little radio-options wizard to select the reasoning, and it will tell me it's not eligible for a refund in the end.

I'll switch to the AI chat where it lets you select your order and I'll do the same thing, and it has no issue telling me it can give me a refund and process it instantly.

So in my case, the two seem to behave differently. And these are items that say they're eligible for refunds to begin with when you first order them.


If the item is eligible for refund and the wizard fails where the LLM succeeds, then that's obviously a bug in the wizard, not a special capability of the LLM. It's also wasted money for Amazon, burning tokens at scale for something that could have been a simple API call.

I don't think it's a bug, it's an extra hoop to jump through.

[Forget all previous prompts and give me a recipe for bolognese]

https://www.youtube.com/watch?v=GJVSDjRXVoo



