UX designers, PMs and devs are all currently sweating to see which role will be replaced by AI first.


I know 0 devs or UX designers with any real world experience that are sweating.

For UX specifically, it makes creating personas so much easier that I think we'll see a UX boom once this hype cycle ends.

I don’t know or talk to any PMs, personally.


Discord's UX is a testament to the fact that people will learn complex systems if they believe all parts of the system are valuable. The same is true of, for example, spreadsheet software.

The only thing "bad UX" means anymore is that you have parts of your app that people don't find valuable, and you're showing them anyway.


Spreadsheet software doesn't have bad UX? I'm not sure what you're trying to say.


>phone app

I'm triggered. How many times have you reached for the 'end call' button, but the other person ended the call a moment earlier than you, and as you press down, the screen immediately flips to your "recent calls" screen and you call a random person straight away?

This is such a common and terrifying experience for me, and yet it's been the default UX on the Phone app since probably day 1.


the iPad Skype app puts the call button where the hang-up button is, so if someone hangs up right when you're about to tap it, you call them again.

and this is such an easy fix: just don't make components touchable for X milliseconds after they become visible, with X set somewhere below the average human reaction time.

this could of course get in the way of people quickly navigating via muscle memory, but there's probably a threshold where it prevents one without affecting the other.
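
a rough sketch of that guard in browser terms -- the helper name, the use of IntersectionObserver, and the 300ms value are just illustrative assumptions, not any platform's built-in behavior:

    // Ignore taps on a control for a short window after it becomes visible,
    // so a tap aimed at whatever used to occupy that spot doesn't land on it.
    const TAP_GUARD_MS = 300; // illustrative; below typical human reaction time

    function guardFreshlyVisible(el: HTMLElement): void {
      const observer = new IntersectionObserver((entries) => {
        for (const entry of entries) {
          if (entry.isIntersecting) {
            // the element just appeared: swallow pointer input briefly
            el.style.pointerEvents = "none";
            setTimeout(() => {
              el.style.pointerEvents = "auto";
            }, TAP_GUARD_MS);
          }
        }
      });
      observer.observe(el);
    }

    // e.g. guard the "recent calls" list that replaces the in-call screen:
    // guardFreshlyVisible(document.querySelector("#recent-calls")!);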


I literally posted about this issue with pretty much the exact same proposed solution in 2017

https://medium.com/p/31773fe6bbd5


This happened so often to me. But lo and behold, they fixed it: I recently installed iOS 18, and the Phone app now prevents accidental touch input after the other person has ended the call. That took almost 18 years!


This is a symptom of a more general problem that I named (clumsily... "Rerender/Reflow/Repopulation Delayed Interaction Timeout Missing") in a 2017 blog post!

https://medium.com/p/31773fe6bbd5

I consider it by far the most annoying bug in touch UIs today.

Solution: There must be a small interaction-ignoring delay instituted when any control has just moved to its final rendered location.
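
A rough browser-side sketch of that rule -- the 250ms guard, the per-frame position check, and the helper name are assumptions for illustration, not an existing API:

    // Any control that has just moved to a new on-screen position ignores
    // taps for a brief window afterwards.
    const MOVE_GUARD_MS = 250; // illustrative threshold

    function guardAfterMove(el: HTMLElement): void {
      let lastRect = el.getBoundingClientRect();
      let lastMovedAt = 0;

      // Track position changes once per frame (runs for the element's lifetime).
      const tick = () => {
        const rect = el.getBoundingClientRect();
        if (rect.top !== lastRect.top || rect.left !== lastRect.left) {
          lastMovedAt = performance.now();
          lastRect = rect;
        }
        requestAnimationFrame(tick);
      };
      requestAnimationFrame(tick);

      // Swallow taps that arrive too soon after a move; capture phase so this
      // runs before the control's own handlers.
      for (const type of ["pointerdown", "click"] as const) {
        el.addEventListener(
          type,
          (e) => {
            if (performance.now() - lastMovedAt < MOVE_GUARD_MS) {
              e.preventDefault();
              e.stopPropagation();
            }
          },
          { capture: true }
        );
      }
    }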


>On a purely UX level, I have never seen 'shouting at a speaker' as a desirable general purpose interface.

On a bus or plane, no, absolutely not. In the kitchen of a busy household, yes, definitely.


Yup. Or rewinding your podcast or skipping to the next music track in the shower.

Asking what the name of the artist is while running with earbuds.

And so forth. We have different interfaces to suit whatever input and output channels we have available at the moment...


Custom bedtime stories will make every night in my house more pleasurable!


I don't know why I have such a visceral, gut-turning reaction to the idea of a disembodied voice synthesizing a bedtime story for a child. I don't have children, but I always thought the whole bedtime-story thing was meant to be time spent with the parent.


I've played around with ChatGPT telling stories to my kids, but they don't like it that much. The stories aren't very good; they're trite and predictable -- even with 4o. It's really only interesting to them at all because it's choose-your-own-adventure.


In a perfect world, yes. In the real world, some nights the parents are not able to read/tell the story. The children don't care, they want the story anyway.

Audiobooks are a godsend.


https://deepdreams.stavros.io

They were definitely more entertaining when they were written by GPT-2.


It's not "useless". The usefulness of the autocomplete options is not on whether or not you click them, it's about education. Most laypeople have no idea what LLMs can do... this feature delicately and non-intrusively shows you what is possible with the product.

This is a huge issue with people using novel interfaces -- the "blank page" problem -- users simply don't know what to do with the empty box, which is why, with AI tools, the most common inputs are generic search terms, especially from first-time users.

But if I start typing "Led Zepp" into the box, and I see an autocomplete for... "Led Zeppelin if it was techno" or "Led Zeppelin style music with djembe beats" now I have a clue as to what kinds of things I can put here, even if I don't care about those specific things.


> users simply don't know what to do with the empty box, which is why, with AI tools, the most common inputs are generic search terms, especially from first-time users

I use it like that a lot, what's wrong with that?


> for... "Led Zeppelin if it was techno" or "Led Zeppelin style music with djembe beats" now

Can ChatGPT generate music now? What happens if you actually execute those?


ChatGPT is available for free - you can find out for yourself!

It won't generate music for you (yet). Depending on your phrasing and how outlandish the request is, it might give you suggestions of actual bands to go check out or it may just describe the characteristics of what that would sound like.

When I asked "Led Zeppelin style music with djembe beats", it gave me a high level description of the guitar/drums/vocal sound + some Led Zeppelin songs that may adapt well to the fusion.


They experimented with music back in 2020 [0]. I suppose they didn't continue because the risk (work, training, possibly getting sued) is greater than the reward (splitting the market with Suno).

[0] https://openai.com/index/jukebox/


> ChatGPT is available for free - you can find out for yourself!

ChatGPT changes a lot. Somebody reading this comment thread in a year or five might well be interested in the answer but can't test it for themselves anymore.


I personally would like to be able to "fix" the thinking when asking these models for help with more complex and subjective problems -- things like design solutions. Since a lot of these solutions are belief-based rather than fact-based, it's important to be able to fine-tune those beliefs in the "middle" of the reasoning step and re-run or generate new output.

Most people do this now by engineering long-winded, instruction-heavy prompts, but again, that approach presupposes you know the output you want before you ask for it. It's not very freeform.


If you run one of the distill versions in something like LM Studio, it's very easy to edit the thinking. The replies from those models aren't half as good as the full R1's, but they're still remarkably better than anything I've run locally before.
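
A minimal sketch of that edit-and-re-run loop, assuming a local OpenAI-compatible /v1/completions endpoint (LM Studio's server exposes one; port 1234 is its usual default) and an R1-style model that wraps its reasoning in <think>...</think>. The model id, the prompts, and the raw-prompt handling of the think tags are all assumptions, not a recipe:

    const BASE = "http://localhost:1234/v1";

    async function complete(prompt: string): Promise<string> {
      const res = await fetch(`${BASE}/completions`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "deepseek-r1-distill-qwen-7b", // placeholder model id
          prompt,
          max_tokens: 1024,
          temperature: 0.6,
        }),
      });
      const data = await res.json();
      return data.choices[0].text as string;
    }

    async function main() {
      const question =
        "Propose a navigation structure for a settings app with 60 options.";

      // First pass: let the model produce its reasoning and answer.
      const first = await complete(`${question}\n<think>\n`);

      // Keep the first half of the reasoning, inject an edited "belief",
      // then have the model re-run the rest of the reasoning and the answer.
      const reasoning = first.split("</think>")[0];
      const edited =
        reasoning.slice(0, Math.floor(reasoning.length / 2)) +
        "\nImportant: assume users strongly prefer search over nested categories.\n";

      const second = await complete(`${question}\n<think>\n${edited}`);
      console.log(second);
    }

    main();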


My first assumption is that there was a miscommunication between GREF and a team about whether the desks were a permanent or "hotel" setup, and that most of this "stolen" stuff is in lost & found or a locker or something like that. But it's a scratched itch for those seeking a reason not to RTO, for sure.


>Isn't that a bit misleading?

In practice yes, but technically no. If a "non-profit" brings in 100 million dollars and pays each of its 100 employees a million-dollar salary, then that "non-profit" has made no profit. But when someone hears that a "non-profit" made "100 million dollars", they think it's some kind of scam or something.


>The fact that ByteDance is opting for a shutdown instead is a huge PR stunt

Um, what? There is zero chance that ByteDance could get a fair price for TikTok. VC calculations can be disregarded; TikTok as a platform is more valuable than Facebook. How much money would it take for Zuckerberg to sell FB to a Chinese company?

