DrSiemer's comments

Why would you ask a 1B model anything? Those are only useful for rephrasing output at best.

It has gotten to the point where I now almost exclusively associate the use of emojis outside of social chat with LLM output. They are on the same level as all the "useful comments" added to my code: trash that the prompter was too lazy to remove.


Would this also work for adding effects to existing audio? A simple reverb and pitch bend on a recorded vocal would make me a lot more excited than experimental synth effects.


I think articles like this don't garner a lot of attention, because why would anybody even bother to engage? If you don't see this unstoppable steam train for what it is, well, enjoy screaming into the void.

We can all agree the current generation of AI lacks humanity, but we are not going to stop using it because of that. It would be nice if we could get the energy cost down, though.


So where is this unstoppable train headed? To me and a lot of others, it looks like a future where the two available professions are "manager of chatbots" and "doing some boring physical task that's not (yet) profitable to automate."


Idk. We outsource thinking to bots and become their senses, even if it is only about interacting with other people. There are things about humanity and ourselves we don't know much about, and I doubt machines do either, because all their "knowledge" comes curated from humans. I guess we become only the knowledge creators for a while, or guide knowledge creation with machines, have machines put guard rails around that knowledge creation so we don't chase dead ends as often, and take care of ourselves better, health- and relationship-wise.

We can now dream bigger and address intractable problems, like recycling trash 100%, because we have little agents with intelligence that do our thinking for us. Maybe one day we can edit DNA to have our own little bees with programmed intelligence flying around, pollinating flowers in the winter, making the earth truly human-serving (and serving any animals we like; don't hate me, that's humanity as it acts). Maybe then we reach for the stars and go on doing more stuff. It's the beginning of infinity, dammit ;)


So basically the current system? I say that mostly, but not entirely, jokingly. : )


> We can all agree the current generation of AI lacks humanity

And testability. Is the AI validated?


As somebody who has listened to hundreds of audiobooks, I can tell you authors are generally not the best choice to voice their own work. They may know every intent, but they are writers, not actors.

The most skilled readers will make you want to read books _just because they narrated them_. They add a unique quality to the story that you do not get from reading it yourself or from watching a video adaptation.

Currently I'm in The Age of Madness, read by Steven Pacey. He's fantastic. The late Roy Dotrice is worth a mention as well, for voicing Game of Thrones and claiming the Guinness world record for most distinct voices (224) in one series.

It will be awesome if we can create readings automatically, but it will be a while before TTS can compete with the best readers out there.


I'd suggest that even if the TTS sounded good, I'd still prefer a human, because:

1. It’s a job that seems worthwhile to support, especially as it’s “practice” that only adds to a lifetime of work and improves their central skill set

2. A voice actor will bring their own flair, just like any actor does to their job

3. They (should) prepare for the book, understanding what it’s about in its entirety, and bring that context to the reading


A valid point of criticism, but I do wonder if this only applies to those who can (still) easily spot LLM-assisted output.

Many people lack the time or writing skills to produce something elegant by themselves, so for them it's like fake breasts: an upgrade for the larger, less discerning part of the target audience, as long as they don't look too closely.


> A valid point of criticism, but I do wonder if this only applies to those who can (still) easily spot LLM-assisted output.

It is still somewhat easy to spot LLM output.

The number of humans who aren't a committee of MBAs and lawyers crafting a memo by consensus, yet still write like this: "Absolutely! A well-arranged conference can be a delightful experience. Here are some ideas to inspire you:", can be counted on zero fingers.

There is always the risk that what you are reading was indeed written by a committee of MBAs and lawyers crafting a memo by consensus.

My coworkers and I have taken to screenshotting the vapid LLM autoresponses in the tools we use, circling the most appropriate and/or depressingly funny option in red, and sending an image as a message.


Same here, plus I can finally find related songs in languages I cannot read!

At least a fifth of my favorite songs look like cryptic gibberish to me and would be nearly impossible to find as a download... I'm firmly vendor-locked for that reason alone.

I do make regular exports, just in case my Spotify account ever disappears for whatever reason. That data is too valuable to risk losing.


What do you export from Spotify?


It's a Python script that exports all the playlists, including liked songs.
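For anyone who wants to do the same, here is a minimal sketch of such an export, assuming the spotipy library and a Spotify app registered at developer.spotify.com. The scopes, names and JSON output format are illustrative, not the actual script:

    import json
    import spotipy
    from spotipy.oauth2 import SpotifyOAuth

    # Assumes SPOTIPY_CLIENT_ID, SPOTIPY_CLIENT_SECRET and
    # SPOTIPY_REDIRECT_URI are set in the environment.
    sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
        scope="playlist-read-private user-library-read"))

    def walk(page):
        # Follow Spotify's pagination to the last page.
        while page:
            yield from page["items"]
            page = sp.next(page)

    export = {"liked": [], "playlists": {}}

    for item in walk(sp.current_user_saved_tracks(limit=50)):
        track = item["track"]
        export["liked"].append(
            f'{track["artists"][0]["name"]} - {track["name"]}')

    for pl in walk(sp.current_user_playlists()):
        export["playlists"][pl["name"]] = [
            f'{i["track"]["artists"][0]["name"]} - {i["track"]["name"]}'
            for i in walk(sp.playlist_items(pl["id"])) if i["track"]]

    with open("spotify_export.json", "w", encoding="utf-8") as f:
        json.dump(export, f, ensure_ascii=False, indent=2)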


So many articles and comments claim AI will destroy critical thinking in our youths. Is there any evidence that this conviction, which so many people share, is even remotely true?

To me it just seems like the same old knee-jerk Luddite response people have had to any powerful new technology that challenges the status quo since the dawn of time. The calculator did not erase math wizards, the television did not replace books, and so on. It just made us better, faster, more productive.

Sometimes there is an adjustment period (we still haven't figured out how to deal with short dopamine hits from certain types of entertainment and social media), but things will balance themselves out eventually.

Some people may go full-on Wall-E, but I for one will never stop tinkering, and many of my friends won't either.

The things I could have done if I had had an LLM as a kid... I think I've learned more in the past two years than ever before.


> The calculator did not erase math wizards

The major difference is that in order to use a calculator, you need to know and understand the math you're doing. It's a tool you can work with. I always had a calculator for my math exams, and I still had bad grades :)

You don't have to know how to program to ask ChatGPT to build yet another app for you. It's a substitute for your brain. My university students get good grades on their take-home exams, but can't spot an off-by-one error in a three-line Golang for loop during an in-person exam.
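For readers outside the field, a minimal sketch of that class of bug (rendered in Python here for illustration; the exam example was a Golang loop):

    def sum_scores(scores):
        total = 0
        # Off-by-one: range() should stop at len(scores); the extra
        # iteration reads one element past the end and raises IndexError.
        for i in range(len(scores) + 1):
            total += scores[i]
        return total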


This is incorrect. You very much need to know how to program to make an AI build an app for you. Language models are not capable of creating anything new without significant guidance and at least some understanding of the code, unless you're asking them to create projects that tutorials have been written about. AI in its current form is also just "a tool you can work with".

Like with the calculator, why would you need to be able to calculate things on paper if you can just have a machine do it for you? The same goes for more advanced AI: what's the point of being able to do things without it?

Not to offend, but in my opinion that's nothing more than a romantic view of what humans "should be capable of". 10 years from now we can all laugh at the idea of people defending doing stuff without AI assistance.


Of course, an AI will not "magically" code an app the way 10 developers would in a year; I don't think we disagree on this.

However, it allows you to do things you don't understand. I'm again taking examples from what I see at my university (n=1): almost all students deliver complex programming projects involving multi-threading, but can't answer a basic quiz about the same language in person. And by a basic question I mean "select among the propositions listed below the correct keyword used to declare a variable in Golang". I'm not kidding: at least one-third of the class answers that wrong.

So yeah, maybe we as a society agree that those people will not be software engineers but prompt engineers. They'll send instructions to an agent that displays text in a strange and cryptic language, and maybe when they press "Run" the lights will be green. But as a professional, why should I hire them once they've earned their diploma? They are far from ready for the professional world, can't debug systems without using LLMs (and maybe those LLMs can't help them because the company context matters too much), and most importantly they are far less capable than freshly graduated engineers from a few years back.

> 10 years from now we can all laugh at the idea of people defending doing stuff without AI assistance.

I hope so, but unfortunately I'm quite pessimistic. Expertise and the ability to focus are dying, and we are relying more and more on artificial "intelligence" and its biases. But time will tell.


Isn't it irrelevant that students don't have the answer to a basic quiz, though? In a real-life situation, they can just _ask an LLM_ if they need to know something.

I don't believe having this option will make people a lot less functional. Sure, some may slip through the cracks by faking it, but we'll soon develop different metrics to judge somebody's true capabilities. Actually, we'll probably create AI for that as well.

As a professional, you hire people who get things done. If that means hiring skilled LLM users who do not fully understand what they produce, but whose output consistently works about as often as classic dev output does, and who deliver it in a fraction of the time... you would be crazy _not_ to hire them.

It's true that inexperienced developers will probably generate massive tech debt during the period when AI is good enough to produce code but not good enough to fish out hidden bugs. It will soon surpass humans at that skill too, though, and can then quickly clean up all the spaghetti.

Over the last two years my knowledge of how to perform and automate repetitive, predictable tasks has gradually worn away, replaced by a higher-level understanding of software architecture, which I use to guide language models to a desired outcome. For those who want to learn, LLMs excel at explaining code. For this, and plenty of other subjects, they are the greatest learning tool we have ever had! All it takes is a curious mind.

We are in a transitional period and simply need to figure out how to deal with this new technology, warts and all. It's not like there is an alternative scenario; it's not going to go away...


I would expect people today to be quite a lot worse at mental arithmetic than we used to be before calculators. And worse at memorizing things than before writing.

We have tools to help us with that, and maybe it isn't a big loss? And they also bring new arenas and abilities.

And maybe in the future we will be worse at critical thinking (https://news.ycombinator.com/item?id=43484224), and maybe that isn't a big loss either? It is hard to imagine what new abilities and arenas will emerge. I do think critical thinking would be a worse loss than memory or mental arithmetic, though. Then again, we are probably a lot less good at it than we think we are, generally.


> The calculator did not erase math wizards

But it did. Quick, what's 67 * 49? A math wiz would furrow their brow for a second and be able to spit out an answer, while the rest of us have to pull out a calculator. When you're doing business in person and have to move numbers around, having to stop and use a calculator slows you down. If you don't have a role where that's useful, then it's not a needed skill and you don't notice it's missing, like riding a horse; but that doesn't mean the skill itself wouldn't be useful to have.
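(For the record: 67 * 49 = 67 * 50 - 67 = 3350 - 67 = 3283; that shortcut is roughly what the brow-furrowing wizard is doing.)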


> AI will destroy critical thinking in our youths

I don't think that's the argument the article was making. It was, to my understanding, a more nuanced question about whether we want to destroy or severely disturb systems at equilibrium by letting AI systems infiltrate our society.

> Sometimes there is an adjustment period (we still haven't figured out how to deal with short dopamine hits from certain types of entertainment and social media), but things will balance themselves out eventually.

One can zoom out a little. The issue didn't start with social media, nor with AI. "Star Wars: A New Hope" is, to my understanding, an incredibly good film. It came out in 1977 and it's a great story made to be appreciated by the masses; in trying to achieve that goal, it really wasn't intellectually challenging. We have continued downhill from there, and now we are at 16-second stingers on TikTok and YouTube. So, the way I see it, things are not balancing out. Worse, people in the USA elected D.J. Trump because somehow they couldn't understand how this real-world Emperor Palpatine was the bad guy.


I don't think you got the point of the article? It is saying that we as wise humans know (sometimes) when to stop optimizing for a goal, due to the negative side effects. AIs (and, as some other people have pointed out, corporations) do not naturally have this line in their head, and we must draw such lines carefully and with purpose for these superhuman beings.


When I was in the market for a new phone, it became a bit of a hobby of mine to ask Samsung dealers worldwide about the privacy of my data when using their cloud AI. It's incredible how uninformed literally every sales representative was on this topic.


And it's not like the exploding batteries, where they had an incentive to do something about it.


Is there any historical data to support the height of the towers in the first image? It looks like at least some of that is artistic license.


Looking at this picture [1] of Bologna's skyline from the sixties, it seems it could be pretty realistic. The skyline has changed drastically, and the many taller buildings today make the remaining towers seem shorter. Also, I think the strangeness of the picture is due to the number of towers, but afaik there were around 100 towers in the city in the 13th century.

[1] https://it.m.wikipedia.org/wiki/Torri_di_Bologna#/media/File...


There are documents from the time, and even paintings from later times, supporting these projections. You can see a partial list of the towers that have since disappeared here: https://www.torridibologna.it/torri-scomparse/ . There were probably more that we just don't have documents for.

Dante famously described Bologna as selva turrita, a forest of towers. It really was that crowded.


I don't know how tall the towers in the first image are meant to be, but consider that several towers reached beyond 300 feet (about 90 meters), and documents indicate that a few went beyond 330 feet in the Middle Ages.


[edit]

This existing tower appears to approximate the height of those in the image:

https://static.bolognawelcome.com/immagini/bb/ec/bd/1b/20220...


Someone did look into exactly that. There is no supporting documentation regarding the actual heights, and it does look like they were exaggerated for effect. So they did have towers, and they were clearly taller than the rest of the buildings... Might they have been this tall? Maybe. Or maybe not.


What are you referring to? Who looked into it? Are you citing something?


I think they might be referring to this video? https://youtu.be/ikg3-GQLg3g They traveled to the city and spoke to an actual historian on the matter. The "more accurate" model appears in the last 20 seconds of the video, if you are just curious about what our best guess at the city's actual appearance is.

