
This is one of the coolest things I've seen, and yet I can't understand... why? Aren't you going to need to fine-tune it on yourself? Otherwise you're going to adopt the gesticulation of the people it was trained on. Maybe for videogames? Or NPCs in VR environments? But doesn't that become robotic, so once we've normalized to it we're back in the uncanny valley? The network __has__ to be doing significant amounts of memorization, unless the microphone can conceivably pick up a signal that actually corresponds to the 3D spatial movements (possible, but that doesn't seem to be what this is). Maybe that's what they're working towards, and this is an iteration on the way there?

It's technologically impressive, but I'm failing to see the use. Can someone enlighten me? I'm sure there's something I'm missing.
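
(For concreteness, here is a minimal sketch of the kind of speech-to-gesture model being discussed: per-frame audio features in, per-frame joint rotations out. Everything here is an illustrative assumption, not the actual system: the mel-spectrogram features, the GRU encoder, the 24-joint axis-angle output, and all shapes and names are hypothetical.)

    import torch
    import torch.nn as nn

    class SpeechToGesture(nn.Module):
        """Toy audio-to-gesture regressor: mel frames in, joint rotations out.

        Hypothetical architecture for illustration only; the real system's
        design is not specified in the thread.
        """
        def __init__(self, n_mels=80, hidden=256, n_joints=24):
            super().__init__()
            # Temporal encoder over per-frame audio features.
            self.encoder = nn.GRU(n_mels, hidden, num_layers=2, batch_first=True)
            # Per-frame decoder to 3 rotation values per joint.
            self.decoder = nn.Linear(hidden, n_joints * 3)

        def forward(self, mel):          # mel: (batch, frames, n_mels)
            h, _ = self.encoder(mel)     # (batch, frames, hidden)
            return self.decoder(h)       # (batch, frames, n_joints * 3)

    # A speaker with idiosyncratic gestures is out-of-distribution for a model
    # trained on other people's motion, which is the memorization concern above.
    model = SpeechToGesture()
    mel = torch.randn(1, 100, 80)        # ~1 s of 10 ms mel frames (assumed)
    poses = model(mel)                   # (1, 100, 72) rotations per frame

Whatever the real architecture is, the input/output contract is roughly this, which is why the question of whether audio alone carries enough gesture signal, or whether the model just memorizes training speakers' mannerisms, is the crux.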



Google “Horizon Worlds”


I am quite aware of the metaverse



