MichaelRazum's comments | Hacker News

There seem to be two likely outcomes. First, the value of education drops, since studying becomes much easier. Second, we will have a few young genius-level people who were able to learn very quickly with the help of AI.

Although, is it really "understanding", or just being able to write down the formulas...?


Being able to use a formula is the first, and necessary, step for understanding.

Then it is being able to work at different levels of abstraction and being able to find analogies. But at that point, in my understanding, "understanding" is a never-ending well.


How about elliptic curve cryptography then? I just think coming up with a formula is not really understanding. Actually, most often the “real” formula is the end step of understanding, reached through derivation. ML does it upside down in this regard.


In some ways it is true. Like understanding how a car works purely from the laws of physics.


What would you want in his position?


You can’t just plug and play it. As soon as you introduce async, you need the runtime event loop and so on. Basically, the whole architecture needs to be redesigned.


asyncio has been designed to be as "plug and play" as it gets. I'd discourage it, but one could create async loops wherever one would need them, one separate thread per loop, and adapt the code base in a more granular fashion. Blocking through the GIL will persist, though.
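Something like this, roughly (a toy sketch of the "one separate thread per loop" idea; the start_background_loop helper and fetch_something coroutine are made-up names for illustration, not a recommendation):

    import asyncio
    import threading

    def start_background_loop() -> asyncio.AbstractEventLoop:
        # Create a fresh event loop and drive it forever in a dedicated
        # daemon thread, so the legacy synchronous code can stay untouched.
        loop = asyncio.new_event_loop()
        threading.Thread(target=loop.run_forever, daemon=True).start()
        return loop

    async def fetch_something():
        await asyncio.sleep(0.1)  # stand-in for real async I/O
        return "done"

    # From the existing synchronous code, submit coroutines to the
    # background loop and block only at this call site.
    loop = start_background_loop()
    future = asyncio.run_coroutine_threadsafe(fetch_something(), loop)
    print(future.result(timeout=5))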

For any new app that is mostly IO-constrained, I'd still encourage the use of asyncio from the beginning.


Sure, agreed: for bidirectional websocket communication it is the way to go. It's just that you have to really think it through when using it. Like using asyncio.sleep instead of time.sleep, for example, and there are more little things like that which can easily hurt its performance and advantages.
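For example (toy sketch, nothing specific to any real code base):

    import asyncio

    async def handler(name: str) -> None:
        # time.sleep(1) here would block the single event loop thread and
        # stall every other task; asyncio.sleep yields back to the loop.
        await asyncio.sleep(1)
        print(f"{name} done")

    async def main() -> None:
        # Both handlers finish after about 1 second total because
        # asyncio.sleep is non-blocking; with time.sleep they would run
        # in sequence and take about 2 seconds.
        await asyncio.gather(handler("a"), handler("b"))

    asyncio.run(main())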


I remember back when the “Pythonic” philosophy was to make the language accessible.

It’s clear that Dr. Frankenstein has been at large and managed to get his hands on Python’s corpse.


I don’t think that’s fair. Yeah, there is a lot to learn and keep track of. At the same time, it’s an inherently complex problem. From one POV, an async Python program looks a lot like a cooperative multitasking operating system, but with functions instead of processes. It was a lot harder to write well-behaved programs on classic Mac OS than it was on a Commodore 64, but that Mac app was doing an awful lot more than the C64 program was. You couldn’t write them the same way and expect good results, but instead had to go about it a totally different way. It didn’t mean the Mac way was bad, just that it had a lot more inherent complexity.


It's this - asyncio is a nightmare to add to and get working in a code base not specifically designed for it, and most people aren't going to bother with that. asyncio is not good enough at anything it does to make it worth it to me to design my entire program around it.


Counterargument: so far, bigger models have proven to be better in each domain of AI. Also (although it is hard to compare), the human brain seems to be at least an order of magnitude larger in the number of synapses.


This looks like a nice application for getting inductive bias into the model. But I think right now there is no solution for getting fine-grained motor skills besides teleoperation and behavior cloning. And even then it is far from perfect...


How about Ilya?


Reworded from [1]: Earlier this year, Meta tried to acquire Safe Superintelligence. Sutskever rebuffed Meta’s efforts, as well as the company’s attempt to hire him.

[1] https://www.cnbc.com/2025/06/19/meta-tried-to-buy-safe-super...


What about him?


AlexNet, AlphaGo, ChatGPT. I would argue he did strike gold a few times.


I don't follow him very closely. Was he important for these projects?


Yes


Right, what about him? Didn't he start his own company and raise a billion dollars a while ago? I haven't heard about them since then.


Didn't he say their goal is AGI and that they will not produce any products until then?

I admire that, in this era where CEOs tend to HYPE!! To increase funding (looking at a particular AI company...)


> Didn't he say their goal is AGI and that they will not produce any products until then?

Did he specify what AGI is? xD

> I admire that, in this era where CEOs tend to HYPE!! To increase funding (looking at a particular AI company...)

I think he was probably hyping too, it's just that he appealed to a different audience. IIRC they had a really plain website, which, I think, they thought "hackers" would like.


The claim about sample efficiency sounds a bit strange, since they did not include the state-of-the-art sample-efficient algorithms, like Dreamer or TD-MPC. Also, PPO is known to not be sample efficient, just compute efficient.


Isn't most of the time spent at the airport anyway? Like, people are supposed to be there 2-3 hours ahead. If you could get to the airport 10 minutes before the flight, that would save so much more time.


Grok4 was trained on 100k or 200k GPUs (as far as I understand)

Grok5 might need 1MM or 2MM.

So the question is: what about Meta's / Zuck's plans? How many GPUs will Manhattan get? Looks like, to get the next unlock, you need crazy amounts of compute.


Meta had the equivalent of about 600K H100 cards a year ago, but they were geographically distributed and used mostly for inference.

These giant data centres will allow these companies to put about a million in one location and possibly into a single giant training cluster.

