I said "pretty much". The homelessness crisis in, say, the US, affects a relatively small portion of the population; from a quick look at the figures, it's less than 1% of the population.
I'm no expert on any of this, but as far as I understand it, homelessness is usually 1. transitory, and 2. tied to other serious issues like mental health problems, drug abuse etc. It's usually not a "lack of resources".
Homelessness goes up and down with the economy and housing availability. Sure, people with mental health issues are the first to lose their housing ... and being homeless makes all the mental health issues worse.
Like, come on. And beyond the homeless, you have people one paycheck away from being homeless. And people unable to pay for drugs they need - like insulin.
Yes, if all you know about the West is what you see in movies, then everyone is rich.
In the sense that many anti-AI articles are shared and upvoted, and many anti-AI or AI-skeptical comments are upvoted. Of course, a lot of pro-AI and AI-enthusiastic content is also shared and commented on.
HN has many, many users, and they're not all monolithic, which explains 90% of the questions along these lines that people raise.
I really don't understand how people can write entire articles which can be disproven with an hour of work. This is like writing a long polemic on how AIs will never be able to play Chess because of... reasons... four years after Deep Blue beat Kasparov.
> If you forced me to put a number on how much more productive having copilot makes me I think I would say < 5%, so I'm struggling to see how anyone can just assert that "the rational argument right now" is that I can be 200% more productive.
If you're thinking about Copilot, you're simply not talking about the same thing that most people who claim a 200% speedup are talking about. They're talking either about chat-oriented workflows, where you ask Claude or a similar model to wholesale generate code, often in an IDE like Cursor, or possibly about coding agents like Claude Code, which can be even more productive.
You might still be right! They might still be wrong! But your talking about Copilot makes it seem like you're nowhere near the cutting edge use of AI, so you don't have a well-formed opinion about it.
(Personally, I'm not 200% productive with Coding Agents, for various reasons, but given the number of people I admire who are, I believe this is something that will change, and soon.)
> But your talking about Copilot makes it seem like you're nowhere near the cutting edge use of AI, so you don't have a well-formed opinion about it
You can use Claude, Gemini, etc through Copilot and you can use the agent mode. Maybe you do or maybe you don’t have a well formed opinion of the parent’s workflow.
> nationalism, ideology based on the premise that the individual’s loyalty and devotion to the nation-state surpass other individual or group interests.
if your highest priority is to the concept and ego of your country, rather than how that country serves its population, then your core values align with the interest of the political establishment and machine rather than with the interests of people. a political establishment's growth and maintenance is damaged by a population's ability to impact it, which means it's damaged by democracy, which means it's damaged by freedom.
Small question - have you ever used Anki, and/or considered using it instead of this? I am a long-time user of Anki but also started using Obsidian over the last few years, wondering if you ever considered an Obsidian-to-Anki solution or something (don't know if one even exists).
I used Anki for years, not for Geoguessr, but I've been a fan of spaced repetition for a long time.
It worked well and has a great community, but I found that the process for creating cards sat outside my main note-taking flow, and as I became more and more invested in Obsidian I eventually investigated how to switch. Since switching, I haven't needed Anki, although there have been a few times I wished I could use their pre-made decks.
I know there are integrations that go both ways. I built a custom tool to take Anki decks and modify them to work with my Obsidian Spaced Repetition plugin. I don't have a need to go the other way at the moment but I've seen other tools that do that.
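A conversion tool like the one described above can be sketched in a few lines. This is a hypothetical example, not the author's actual tool: it assumes Anki's plain-text export format (tab-separated front/back fields) and the Obsidian Spaced Repetition plugin's single-line `Front::Back` card syntax; the function name is made up for illustration.

```python
import csv
import io

def anki_txt_to_obsidian(tsv_text: str) -> str:
    """Convert an Anki plain-text export (tab-separated front/back
    fields, one note per line) into single-line cards for the
    Obsidian Spaced Repetition plugin, which uses `Front::Back`."""
    cards = []
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    for row in reader:
        if len(row) < 2:
            continue  # skip malformed or single-field rows
        front, back = row[0].strip(), row[1].strip()
        cards.append(f"{front}::{back}")
    return "\n".join(cards)
```

Paste the output into any note in a folder the plugin is configured to scan, and the cards are picked up on the next review.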
> Perhaps the only reason Cursor is so good is because editing code is so similar to the basic function of an LLM without anything wrapped around it.
I think this is an illusion. Firstly, code generation is a big field - it includes code completion, generating entire functions, and even agentic coding and the newer vibe-coding tools, which are mixes of all of these. Which of these is "the natural way LLMs work"?
Secondly, a ton of work goes into making LLMs good for programming. Lots of RLHF on it, lots of work on extracting code structure / RAG on codebases, many tools.
So, I think there are a few reasons that LLMs seem to work better on code:
1. A lot of work has been done on it, for many reasons, mostly the monetary potential and the fact that the people who build these systems are programmers.
2. We here tend to have a lot more familiarity with these tools (and this goes to your request above which I'll get to).
3. There are indeed many ways in which LLMs are a good fit for programming. This is a valid point, though I think it's dwarfed by the above.
Having said all that, to your request, I think there are a few products and/or areas that we can point to that are transformative:
1. Deep Research. I don't use it a lot personally (yet) - I have far more familiarity with the software tools, because I'm also a software developer. But I've heard from many people now that these are exceptional. And they are not just "thin wrappers on chat", IMO.
2. Anything to do with image/video creation and editing. It's arguable how much these count as part of the LLM revolution - the models that do these are often similar-ish in nature but geared towards images/videos. Still, the interaction with them often goes through natural language, so I definitely think these count. These are a huge category all on their own.
3. Again, not sure if these "count" in your estimate, but AlphaFold is, as I understand it, quite revolutionary. I don't know much about the model or the biology, so I'm trusting others that it's actually interesting. It's built on some of the same underlying architecture as LLMs, so I do think it counts, but again, maybe you want to only look at language-generating things specifically.
1. Deep Research (if you are talking about the OpenAI product) is part of the base AI product. So that means that everything building on top of that is still a wrapper. In other words, nobody besides the people making base AI technology is adding any value. An analogy for how pathetic the AI market is: imagine if during the SaaS revolution everyone had skipped buying applications and used AWS PaaS products like RDS directly, with results very similar to buying SaaS software. OpenAI/Gemini/Claude/etc are basically as good as full-blown applications that leverage their technology, and there's very limited need to buy wrappers that go around them.
2. Image/video creation is cool but what value is it delivering so far? Saving me a couple of bucks that I would be spending on Fiverr for a rough and dirty logo that isn’t suitable for professional use? Graphic designers are already some of the lowest paid employees at your company so “almost replacing them but not really” isn’t a very exciting business case to me. I would also argue that image generation isn’t even as valuable as the preceding technology, image recognition. The biggest positive impact I’ve seen involves GPU performance for video games (DLSS/FSR upscaling and frame generation).
3. Medical applications are the most exciting application of AI and ML. This example demonstrates what I mean with my argument: the normal steady pace of AI innovation has been "disrupted" by LLMs that have added unjustified hype and investment to the space. Nobody was so unreasonably hyped up about AI until it was packaged as something you can chat with, since finance-bro investors can understand that, but medical applications of neural networks have been developing since long before ChatGPT hit the scene. The current market is just a fever dream of crappy LLM wrappers getting outsized attention.
I had a similar situation - I once wanted to grab some web pages and parse them in Python, and was going to use Python's built-in libraries for that, and use BeautifulSoup to parse them. But then I realized I'd have to read enormous code bases I didn't write, which felt like it would take forever.
(Obviously, this post is tongue-in-cheek, but I'm making a real point - almost all code we use is code we didn't write. I don't think that's what differentiates Vibe coding code.)
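(For scale, the non-joking version of that task really is tiny. A minimal sketch using only the standard library's `html.parser` - in practice you'd let BeautifulSoup's `find_all("a")` do this, and the fetch step via `urllib.request` is omitted here:)

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags using only the stdlib parser."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Parse a hardcoded page instead of fetching one over the network.
page = '<html><body><a href="https://example.com">x</a><a href="/docs">y</a></body></html>'
parser = LinkCollector()
parser.feed(page)
print(parser.links)  # ['https://example.com', '/docs']
```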