Hacker News | jshmrsn's comments

This is true again for the most advanced fighter aircraft, except the active hand is now a computer.


Rockets too

Can't do that in Kerbal Space Program (at least not without mods), but it works fine in meatspace


If the machine can decide how to train itself (adjust weights) when faced with a type of problem it hasn’t seen before, then I don’t think that would go against the spirit of general intelligence. I think that’s basically what humans do when they decide to get better at something, they figure out how to practice that task until they get better at it.


In-context learning is a very different problem from regular prediction. It's quite simple to fit a stationary solution to noisy data; that's just a matter of tuning some parameters with fairly even gradients. In-context learning implies you're essentially learning a mesa-optimizer for the class of problems you're facing, which for transformers essentially means fitting something not far from a differentiable Turing machine with no inductive biases.


I am familiar with ‘it’ as a default closure parameter from Kotlin. From a quick search, that in turn seems to be inspired by Groovy.
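For anyone unfamiliar: in Kotlin, `it` is the implicit name for the single parameter of a lambda, so the explicit and implicit forms below are interchangeable (a minimal sketch, nothing here beyond standard library calls):

```kotlin
// When a lambda takes exactly one parameter, you can name it
// explicitly or refer to it implicitly as 'it'.
val explicit = listOf(1, 2, 3).map { n -> n * 2 }
val implicit = listOf(1, 2, 3).map { it * 2 }

fun main() {
    println(explicit) // [2, 4, 6]
    println(implicit) // [2, 4, 6]
}
```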


This goes at least as far back as anaphoric macros: https://en.m.wikipedia.org/wiki/Anaphoric_macro.


Some of the text that the LLM is trained on is fictional, and some of the text it's trained on is factual. Telling it not to make things up can steer it toward generating text that's more like the factual text. I'm not saying it does work, but that's one way it might.


Did that model also factor in risk of damage, liability, and normal wear and tear that a tenant brings over a vacant unit?


If the “sound” is an internal perception, then noise cancelling headphones would not help at all. They might make it worse by quieting any background sounds that could otherwise help cover up the internally produced sensations.


My tinnitus gets worse afterwards if I'm subjected to noise (as in an airplane). Noise-cancelling headphones are a must for me at this point if I'm to experience prolonged increased sound levels.


It depends on your tinnitus itself. Mine gets crowded out by a loud environment; I don't tend to hear it then. I only hear my tinnitus when there's no sound. So for me, noise-cancelling headphones do give some temporary symptom relief.

Wearing a Bose QC 35 is so important for me when I go to sleep, because the ANC also blocks out ambient sound and masks my tinnitus to some extent. It's a bit of a skill to sleep with them (you can get audio feedback from the ANC mics), but I've mastered it and it has improved my sleep a lot.


But active noise cancellation removes (perceived) sound. Wouldn't that make it worse, then?


The way I experience it is as follows.

Normally:

- Tinnitus: 100%

With ANC:

- Tinnitus: 50%

- Bose ANC: 50%

I like the ANC sound more than my tinnitus.

I haven't noticed my tinnitus becoming worse or better.


Oh, by “the ANC sound”, do you mean the white noise floor of the ANC?

In that case, have you tried something like the Bose Sleepbuds? Same idea, much more comfortable to sleep with.


Bose recently retired their 2nd attempt. The team behind those has a new one coming out in January (https://ozlosleep.com). I'm pretty interested, though it's hard to tell how long the pre-sale discount lasts.


It doesn't seem to have ANC, so it doesn't cancel noise; it only masks. And if masking were the same as ANC, they'd need to put that in their marketing.

Also: yep, that's what I mean by it, the white noise sound.


An idea I often hear in talks about LLMs is that training on larger and more varied data (assuming constant quality) leads to the emergence of greater generalization and reasoning (if I may use this word) across task categories. While the general quality of a model has a somewhat predictable correlation with the amount of training, the point in training at which specific generalization and reasoning capabilities emerge is much less predictable.


I can only speak from my own internal experience, but don't your unspoken thoughts take form and exist as language in your mind? If you imagine taking the increasingly common pattern of "think through the problem before giving your answer", but hiding the pre-answer text from the user, then that seems pretty analogous to how humans think before communicating.


> don’t your unspoken thoughts take form and exist as language in your mind?

Not really. More often than not my thoughts take form as sense impressions that aren't readily translatable into language. A momentary discomfort making me want to shift posture - i.e., something in the domain of skin-feel / proprioception / fatigue / etc, with a 'response' in the domain of muscle commands and expectation of other impressions like the aforementioned.

The space of thoughts people can think is wider than what language can express, for lack of a better way to phrase it. There are thoughts that are not <any-written-language-of-choice>, and my gut feeling is that the vast majority are of this form.

I suppose you could call all that an internal language, but I feel as though that is stretching the definition quite a bit.

> it seems like that would pretty analogous to how humans think before communicating

Maybe some, but it feels reductive.

My best effort at explaining my thought process behind the above line: trying to make sense of what you wrote, I got a 'flash impression' of a ??? shaped surface 'representing / being' the 'ways I remember thinking before speaking' and a mess of implicit connotation that escapes me when I try to write it out, but was sufficient to immediately produce a summary response.

Why does it seem like a surface? Idk. Why that particular visual metaphor and not something else? Idk. It came into my awareness fully formed. Closer to looking at something and recognizing it than any active process.

That whole cycle of recognition as sense impression -> response seems to me to differ in character to the kind of hidden chain of thought you're describing.


Mine do, but not so much in words. I feel as though my brain has high processing power but a short context length. When I thought to respond to this comment, I got an inclination that something could be added to what I see as an incomplete idea, the idea being that humans must form a whole answer in their mind before responding. In my brain it is difficult to keep complex chains juggling around. I know because whenever I code without some level of planning, it ends up taking 3x longer than it should have.

As a shortcut, my brain "feels" that something is correct or incorrect, and then I logically parse out why I think so. I can only keep so many layers in my head, so if nothing feels wrong in the first 3 or 4 layers of thought, I usually don't feel the need to discredit the idea. If someone tells me a statement that sounds correct on the surface, I am more likely to take it as correct. However, upon digging deeper it may be provably incorrect.


It depends, for me. In the framework of the book Thinking, Fast and Slow, the fast version for me is closer to an LLM, in that I'll start a sentence without consciously knowing where I'm going with it. Sometimes I'll trip over and/or realise I'm saying something incorrect (disclaimer: ADHD may be a factor).

The thinking-slow version would indeed be thought through before I communicate it.


My unspoken thought-objects are wordless concepts, sounds, and images, with words only loosely hanging off those thought-objects. It takes additional effort to serialize thought-objects to sequences of words, and this is a lossy process - which would not be the case if I were thinking essentially in language.


You have no clue how GPT-4 functions, so I don't know why you're assuming it's "thinking in language".


I am comfortable asserting that an LLM like GPT-4 is only capable of thinking in language; there is no distinction for an LLM between what it can conceive of and what it can express.


It certainly "thinks" in vector spaces, at least. It's also multimodal, so I'm not sure how that plays in.


There’s a website designed for language learning from watching YouTube captions with inline translations and dictionary lookup. It also has support for searching videos by subtitle content. But it has a limited index and isn’t free for all features. I thought its source was available but I can’t find it now… https://languageplayer.io/


I get the point that the US already put people on the moon… but how can you possibly make the leap that there can be no scientific value to additional unmanned laboratories and instruments landing on the moon? Especially since this represents increasing the number of countries who can contribute to this scientific endeavor? If the US elects a president who is not interested in lunar science or has economic problems, then the whole world must wait for the US to decide to resume lunar missions?

An overview of the scientific instruments onboard:

“Lander payloads: Chandra’s Surface Thermophysical Experiment (ChaSTE) to measure the thermal conductivity and temperature; Instrument for Lunar Seismic Activity (ILSA) for measuring the seismicity around the landing site; Langmuir Probe (LP) to estimate the plasma density and its variations. A passive Laser Retroreflector Array from NASA is accommodated for lunar laser ranging studies.

Rover payloads: Alpha Particle X-ray Spectrometer (APXS) and Laser Induced Breakdown Spectroscope (LIBS) for deriving the elemental composition in the vicinity of landing site.

Chandrayaan-3 consists of an indigenous Lander module (LM), Propulsion module (PM) and a Rover with an objective of developing and demonstrating new technologies required for Inter planetary missions. The Lander will have the capability to soft land at a specified lunar site and deploy the Rover which will carry out in-situ chemical analysis of the lunar surface during the course of its mobility. The Lander and the Rover have scientific payloads to carry out experiments on the lunar surface. The main function of PM is to carry the LM from launch vehicle injection till final lunar 100 km circular polar orbit and separate the LM from PM. Apart from this, the Propulsion Module also has one scientific payload as a value addition which will be operated post separation of Lander Module.”

https://www.isro.gov.in/Chandrayaan3_Details.html


> I get the point that the US already put people on the moon

I didn't mention the US and I'm not from the US (I'm French). Humanity landed on the moon. Over 50 years ago.

If a country today built a 1969 computer I wouldn't marvel at the achievement.

And yes, sure, there are probably many instruments on board. But you can tell from the video -- and all the excitement here as well -- that this is mainly politically motivated.

> If the US elects a president who is not interested in lunar science or has economic problems, then the whole world must wait for the US to decide to resume lunar missions?

Or maybe do something else with our limited time and resources than trying again to analyze the lunar surface and pretending it will be useful? While planting friggin' flags all over the place?


You can use the "isn't there anything better to do" argument against literally anything.

Why not this though


Okay, this is true, but the cost/benefit ratio is a way to evaluate "things to do". Landing on the moon is an immense effort that doesn't bring much.

I wouldn't care one way or the other but what gets me is we're sold this as a scientific pursuit, while it's obvious it's just nationalistic bombast.


South pole is unexplored and no one has ever landed there, manned or unmanned. Exploring the unexplored isn't science?


The moon is NOT unexplored. That's my point actually. Should we explore every inch of it?


Of course we should? I'm surprised you think otherwise. It's like arguing that we shouldn't have explored the Americas because the Earth was not unexplored.

The Lunar poles have lots of scientific value, particularly for long term habitation, as you can have both permanently shadowed craters with water ice in them and permanently lit areas providing a reliable source of power.

