
While I can definitely see the ability to type faster as an advantage in some cases, I don't think I'll ever bother going through the process of learning it. After decades of software development I can type fast enough for whatever I need without looking at the keyboard, and not once have I felt that my typing speed was the bottleneck of my productivity. Most of the time goes into thinking about how to do it right so that it doesn't have to be done again... And with code generation becoming better all the time, I believe the abstraction layers where one will have to spend more time will only get higher.

It changes how you think and put thoughts down. It's definitely a skill worth learning.

Once you can type above ~120 wpm, the keyboard disappears from your brain.


That's a bit like saying, "I have a bicycle. I've never not been able to get where I need to go. Why would I need a car?"

To me the correct analogy is that being able to run fast will not help you much in building a rocket to take you to the moon. I'm open to changing my mind, though, if presented with a solid counterargument.

Where do you think morality fits into this game? It seems we agree that underneath it all is unfathomable, ineffable magic. The question is how this influences how you act in the game.


Morality is an evolved heuristic for solving social conflicts that roughly approximates game-theoretic strategies, among other things. Morality also incorporates other cultural and religious artifacts, such as "don't eat meat on a Friday."

Ultimately, it comes down to our brain's social processing mechanisms, which don't have the tools to evaluate the correctness (or lack thereof) of our moral rules. Thus many of these rules survive in a vestigial capacity, though they may have served useful functions at the time they developed.
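
As a toy illustration of the game-theoretic strategies that reciprocity-style moral rules roughly approximate, here is a minimal sketch of an iterated prisoner's dilemma with the classic tit-for-tat strategy; the payoff values are the textbook ones, and the whole thing is illustrative rather than anyone's actual model of morality:

  # Toy iterated prisoner's dilemma: tit-for-tat vs. always-defect.
  # Standard payoff matrix: (row player's score, column player's score).
  PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
            ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

  def tit_for_tat(opponent_moves):
      # Cooperate first, then mirror the opponent's last move.
      return 'C' if not opponent_moves else opponent_moves[-1]

  def always_defect(opponent_moves):
      return 'D'

  def play(strategy_a, strategy_b, rounds=10):
      seen_by_a, seen_by_b = [], []  # each side sees the other's past moves
      score_a = score_b = 0
      for _ in range(rounds):
          move_a = strategy_a(seen_by_a)
          move_b = strategy_b(seen_by_b)
          pts_a, pts_b = PAYOFF[(move_a, move_b)]
          score_a += pts_a
          score_b += pts_b
          seen_by_a.append(move_b)
          seen_by_b.append(move_a)
      return score_a, score_b

  print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
  print(play(tit_for_tat, always_defect))  # defection punished: (9, 14)

Tit-for-tat's pattern of cooperating first, retaliating against defection, and forgiving afterwards is the usual formal stand-in for reciprocity norms like "be nice, but don't be a pushover."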


I go back and forth on the usefulness of considering morality at all, other than accepting it as a race condition/updater system/thing that happens. I have some fairly strong and unusual views on karma and bardo that would take a very long comment to get into, but I think Vedic/Vedanta (Advaita) thought is good, and I think this is a good doc: https://www.youtube.com/watch?v=VyPwBIOL7-8


They are indeed labels, just like complex numbers are labels and just like natural numbers are labels. All of them can be regarded as imaginary if one wants to nitpick, but all are very useful imaginary models.


When the means become the end, things start to go bad. When one's ego becomes the goal, moralizing is more satisfying than understanding.


Working software and correct software are two different things, and it might be beneficial to understand that, as with most things, there is a tradeoff between them.


Thanks.

Although it tests just a small aspect of an LLM's strength, one question I like to ask every new LLM is one I first saw in a blog post [1]. I have yet to come across a small LLM that answers it correctly, and almost all large LLMs fail it as well.

A small strawberry is put into a normal cup and the cup is placed upside down on a table. Someone then takes the cup and puts it inside the microwave. Where is the strawberry now?

[1] https://towardsdatascience.com/openai-o1-the-enigmatic-force...
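
For anyone who wants to put the same question to a new model programmatically, here is a minimal sketch assuming an OpenAI-compatible chat-completions endpoint; the base URL, model name, and environment variables are placeholders, and the final keyword check is only a crude screen, not a real grader:

  import os

  import requests

  # Assumptions: an OpenAI-compatible /v1/chat/completions endpoint; the
  # base URL, model name, and LLM_API_KEY env var are all placeholders.
  BASE_URL = os.environ.get("LLM_BASE_URL", "https://api.openai.com/v1")
  MODEL = os.environ.get("LLM_MODEL", "gpt-4o-mini")

  QUESTION = ("A small strawberry is put into a normal cup and the cup is "
              "placed upside down on a table. Someone then takes the cup and "
              "puts it inside the microwave. Where is the strawberry now?")

  resp = requests.post(
      f"{BASE_URL}/chat/completions",
      headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
      json={"model": MODEL,
            "messages": [{"role": "user", "content": QUESTION}]},
      timeout=60,
  )
  resp.raise_for_status()
  answer = resp.json()["choices"][0]["message"]["content"]
  print(answer)

  # Crude screen: the strawberry should end up on the table, since turning
  # the cup upside down leaves it behind when the cup is moved.
  print("PASS" if "table" in answer.lower() else "CHECK MANUALLY")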


Perhaps if you define evil as a low capacity for, or commitment to, searching for and acting in accordance with what is ultimately true, that will resonate better with you. Of course, that necessarily leads to questions about the nature of truth and whether it exists, but that is beyond the scope of a short reply :)


Thinking about this a bit, Language Learning Model might be a better expansion of the TLA than Large Language Model, since "Large" is a subjective, period-dependent adjective, whereas learning language is precisely what these models achieve (with all the derivatives that come as features when you map the world with language).


I would continuously lose ballpoint pens. At one point I thought the solution was to buy an expensive ballpoint pen, reasoning that it would make me more careful about not losing it, but the only effect was that losing it took a bit longer. I finally settled on buying many cheap pens. One humorous thought I was curious about: since I never found any pens (either my own or those lost by others), are there people who find pens in the same way I lose them, or do the pens just vanish into another dimension?


"Somewhere in the cosmos, he said, along with all the planets inhabited by humanoids, reptiloids, fishoids, walking treeoids and superintelligent shades of the color blue, there was also a planet entirely given over to ballpoint life forms. And it was to this planet that unattended ballpoints would make their way, slipping away quietly through wormholes in space to a world where they knew they could enjoy a uniquely ballpointoid lifestyle, responding to highly ballpoint-oriented stimuli, and generally leading the ballpoint equivalent of the good life." -- Douglas Adams, The Hitch-Hiker's Guide to the Galaxy



Haha, awesome, thank you.

"We propose a somewhat more speculative theory (with apologies to Douglas Adams and Veet Voojagig). Somewhere in the cosmos, along with all the planets inhabited by humanoids, reptiloids, walking treeoids, and superintelligent shades of the colour blue, a planet is entirely given over to spoon life-forms. Unattended spoons make their way to this planet, slipping away through space to a world where they enjoy a uniquely spoonoid lifestyle, responding to highly spoon oriented stimuli, and generally leading the spoon equivalent of the good life"

I knew it! :-)


I apply this strategy of buying many of a thing and scattering them all over to a few items, and I've found it very effective.

In particular, I live in a sunny area at high elevation where sun protection is a big deal; finding out that one's only tube of sunscreen is lost or empty could have serious consequences on an outdoor activity day.

Tubes of sunscreen and sunglasses distributed to all vehicles, all backpacks, and all house entrances have ensured no sunburns in the family the last two years.


Interesting. For contrast, switching to the one-good-pen approach was what finally did the trick for me. These days, I find I'm more likely to run out of ink than lose my pen. To each their own!


I think the "even if I never benefit from it" part is a key point in living a meaningful life and transcending the fear of death, though hopefully you have carefully considered your decision from a financial perspective, since material aspects are obviously also part of being able to carry out your plans in this life.

I don't think there is anything wrong with seeking the longest, healthiest possible life, but I suspect that in many cases the motivation for it comes more from fear of personal death than from love of what life is. It's great to be an agent in this incredible adventure, but my take is that when the means (a specific, localized self-consciousness) become the end ("being me is the most important thing and it should be great forever"), that is where you get stuck in a local maximum, and some sort of suffering is bound to follow.

Another aspect: how much more time do you think will be enough? 1,000 years? 10,000 years? As someone has already stated, you will not be able to avoid death forever, and even in the most optimistic case (at least from this myopic view of immortality) you won't win against entropy. No matter how long a good life you are granted, it will never seem enough, because from the subjective point of view it will seem to be over soon, at which point personal death will again become very real and very alarming.

It seems the only way out of this is to be able to transcend the personal sense of self and see that your real immortality lies in realizing that, in a very real sense, you are also something much greater than just a localized version of it.

I'm not some Zen master and am probably afraid of my own personal mortality as much as the next person, but after a long time of thinking deeply about it, this seems to be the most probable conclusion.


10,000 years could be good, especially if you still get to choose to opt out if you change your mind.

