Unity’s perception tools to generate and analyze synthetic data at scale (unity3d.com)
3 points by jonbaer on June 10, 2020 | 1 comment


So here's a question: why aren't ML pipelines recursive? In the case of 3D models I can imagine the following pipeline. Take some 2D images and pass them through an ML pipeline to generate 3D objects (the article mentions photogrammetry, which I think is a good way to bootstrap), refine the 3D objects and re-arrange them in novel ways, generate 2D images from those 3D objects, feed the results back into the 2D-to-3D pipeline, and rinse and repeat until the ML model is generating interesting, non-random results.
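
Roughly, as pseudocode (every name here is a hypothetical stand-in, not a real library call):

    # Hypothetical recursive pipeline: 2D -> 3D -> 2D -> retrain.
    model = bootstrap_model(seed_images)            # initial 2D-to-3D model
    while not interesting(model):
        objects = [model.reconstruct(img) for img in seed_images]
        objects = human_refine(objects)             # the human-in-the-loop step
        scenes = rearrange(objects)                 # novel compositions
        renders = [render_2d(s, random_camera()) for s in scenes]
        model = retrain(model, renders)             # feed the renders back in
        seed_images += renders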

I know this sounds like GANs, but every GAN paper I've seen requires human input only during the initial bootstrapping phase. After that, the training loop is completely devoid of human intervention, apart from maybe tuning the hyperparameters to avoid divergence.
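
For contrast, here's a toy GAN loop (my own PyTorch sketch, not from any particular paper); once it starts, nothing in it ever asks a human for anything:

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
    D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    real = torch.randn(64, 2) * 0.5 + 2.0  # stand-in "real" data

    for step in range(1000):
        fake = G(torch.randn(64, 16))
        # Discriminator: push real -> 1, detached fake -> 0.
        d_loss = (bce(D(real), torch.ones(64, 1))
                  + bce(D(fake.detach()), torch.zeros(64, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator: push D(fake) -> 1.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

All the human tuning (learning rates, batch size, architecture) happens up front; after that the loop is closed.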

This article hints at such a pipeline, but I don't think they go all the way and put all the pieces together:

> We created a library of 3D assets of selected grocery products using Digital Content Creation (DCC) tools, scanned labels, and photogrammetry. Additionally, we created background and occluding assets using real world imagery mapped onto simple primitives such as cubes, spheres, and cylinders. All of the grocery products used custom shaders created in the Unity Editor using Shadergraph, in the Universal Rendering Pipeline.
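
Squint and the data-generation half of that is a domain-randomization loop, something like this (hypothetical Python pseudocode; Unity's Perception package is actually C#, and none of these names are its real API):

    import random

    # Hypothetical generator mirroring the quoted setup.
    for i in range(num_frames):
        scene = new_scene()
        scene.add(random.choice(background_primitives))  # cubes, spheres, cylinders
        for asset in random.sample(grocery_assets, random.randint(1, 8)):
            scene.place(asset, pose=random_pose())
        image, labels = scene.render_with_labels()       # 2D image + ground truth
        save_example(image, labels, i)

What's missing for the recursion I'm describing is the step where those labeled renders go back in to improve the 2D-to-3D model itself.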



