
> It ends up only costing around $300 a month, however with the scale of events that it processes, the managed version would be in the 10's of thousands.

I think this is a repeated question, but... are you considering the cost of the people managing the deployment, security oversight, dealing with downtime, etc.?


If you can keep the people doing all the things, they become cheaper over time: as your system settles and people become more competent, both downtime and the effort required to fix problems drop dramatically, and you can give the same people more responsibilities without overloading them.

Disclosure: I'm a sysadmin.


I wonder what your manager's take on this is, given your incentives here.


Honestly asking: what do my incentives look like from there?


You are incentivised to argue that it is good to keep employing sysadmins for self-hosting, because that will keep you employed. You have a monetary incentive, thus you are a bit biased, in my opinion.


I think I didn't elaborate my point enough, so there's a misunderstanding.

What I said is true for places that already have sysadmins for various tasks. For the job I do (it's easy to find), you have to employ system administrators to begin with.

So, at least for my job, working the way I described in my original comment is the modus operandi for the job itself.

If the company you work for doesn't prefer self-hosting and doesn't need system administrators for anything, you might be right. But having a couple of capable sysadmins on board both enables self-hosting and lets the initiative grow without much extra cost, because it gets cheaper as the sysadmins learn and understand what they're doing, so they can handle more things with the same or less effort.

See, system administrators are lazy people. They'd rather solve problems once and for all and play Pac-Man in their spare time.


I am that person. Occasionally I log in to delete a log file that I just haven't set up to rotate, about once a month; apart from that, no intervention needed (so far).
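
For anyone in the same spot, a minimal logrotate sketch would remove even that monthly chore (the /var/log/myapp.log path and the weekly/4-rotation retention policy are placeholder assumptions, not anything from the thread):

    # /etc/logrotate.d/myapp -- hypothetical app log
    /var/log/myapp.log {
        weekly        # rotate once a week
        rotate 4      # keep four old logs, delete older ones
        compress      # gzip rotated logs
        missingok     # don't error if the log is absent
        notifempty    # skip rotation when the file is empty
    }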


I have to imagine at that size they already have an ops team for all the other services, so those costs are pretty well amortized.


> Honestly, too many. Software engineers can be really, really dumb. I think it has something to do with assuming they're really smart.

Maybe I am one of the stupid ones but I don't get you people.

This is going to happen whether you want it or not. The data is already out there. Our choice is either to learn to use the tool so that we have it in our arsenal for the future, or to grumble in the corner that devs are digging their own graves and cry ourselves to sleep. I'd consider the latter to be stupid.

If you had issues with machines replacing your hands in the industrial age, you had the choice of learning how to operate the machines. I consider this a parallel.


It's not about having another tool in your arsenal. This thing is meant to replace you - the developer (role). Others are correctly pointing out that developers can be really really dumb by assuming that this A-SWE will be just below their skill level and only the subpar humans will be replaced.


> It's not about having another tool in your arsenal. This thing is meant to replace you - the developer (role).

You realize that's what I am saying? Having the tool in our arsenal means being able to do another job (prompt engineering, knowing how to evaluate the AI, etc.) in case we are made obsolete in the next couple of years. What happens after that is a mystery...


@stevekrouse FYI getGoogleCalendarEvents is not available.


I just tried making it public, sorry!


> host with no cost associated to it

Home server AI is orders of magnitude more costly than the heavily subsidized cloud-based offerings for this use case, unless you run toy models that might hallucinate meetings.

edit: I now realize you're talking about the non-AI-related functionality.


I think it's because the complexity is high, and you cannot run away from that other than by making a very small box that not a lot of use cases fit into. The main use case of k8s is handling more complex scenarios, so you'd remove a large portion of your audience by constraining the config.


I might be jaded, but I think having libraries for such simple use cases leads to the inevitable `left-pad` situation.

When I say simple use cases, I mean that since you probably don't need all of these functions at once, it would be easier to copy the code you need (if you don't feel comfortable writing it) than to add yet another library to your dependency tree.
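
To put the scale in perspective, here's the kind of two-liner in question, a hypothetical left-pad equivalent that's easier to vendor into your codebase than to install:

    // A left-pad equivalent: pad `str` to length `len` with `ch`
    // (assumes `ch` is a single character).
    export function leftPad(str: string, len: number, ch = " "): string {
      return str.length >= len ? str : ch.repeat(len - str.length) + str;
    }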


I understand your perspective, and it's a valid concern. However, this library is designed to support not only simple use cases but also more advanced scenarios, providing a comprehensive solution for various needs. Additionally, it has zero dependencies, which helps keep your project lightweight. This way, you can benefit from the library's features without adding unnecessary complexity to your dependency tree. Thank you for sharing your thoughts!


Nah, it's not you or your library; there is definitely a place for such utilities. The issue is broader, related to everyone installing libs for two-liners and ending up with a bajillion dependencies.


Don't forget space... npm install and 500 GB goes "bye-bye"


Hey, I'll have you know that storage is cheap nowadays and that I'd be happy to fill my drives with node libraries for converting a's into A's.


You can always just take the code and put it in your app. Libraries like these don't force you to add them as a dependency, assuming the right OSS license.


I agree, but in reality many will take the easier path of `pip install` / `npm install` / `cargo add` / `yarn add` / `go get` when they see the functionality out there. I was also making a broader point.


> They push the narrative that they’ve created something akin to human cognition

This is your interpretation of what these companies are saying. I'd love to see a company specifically claim anything like that.

Out of the last 100 years, how many inventions have been made that could make any human feel awe like LLMs do right now? How many things from today, when brought back into 2010, would make the person using them feel like they're being tricked or pranked? We already take them for granted even though they've been around for less than half a decade.

LLMs aren't a catch-all solution to the world's problems, or something that is going to help us in every facet of our lives, or an accelerator for every industry that exists out there. But at no point in history could you talk to your phone about general topics, get information, practice language skills, build an assistant that teaches your kid the basics of science, use something to accelerate your work in many different ways, etc.

Looking at LLMs shouldn't be boolean; it shouldn't be a choice between "they're the best thing ever invented" and "they're useless". But it seems like everyone presents the issue in this manner, and Sabine is part of that problem.


No major company directly states "We have created human-like intelligence," but they intentionally use suggestive language that leads people to think AI is approaching human cognition. This helps with hype, investment, and PR.

>I'd love to see a company specifically claim anything like that.

1. Microsoft researchers: Sparks of Artificial General Intelligence: Early experiments with GPT-4 - https://arxiv.org/abs/2303.12712

2. "GPT-4 is not AGI, but it does exhibit more general intelligence than previous models." - Sam Altman

3. Musk has claimed that AI is on the path to "understanding the universe." His branding of Tesla's self-driving AI as "Full Self-Driving" (FSD) also misleadingly suggests a level of autonomous reasoning that doesn't exist.

4. Meta's AI chief scientist, Yann LeCun, has repeatedly said they are working on giving AI "common sense" and "world models" similar to how humans think.

>Out of the last 100 years, how many inventions have been made that could make any human feel awe like LLMs do right now?

ELIZA was an early natural language processing computer program, developed from 1964 to 1967.

ELIZA's creator, Weizenbaum, intended the program as a method to explore communication between humans and machines. He was surprised and shocked that some people, including his own secretary, attributed human-like feelings to the program. That was 60 years ago.

So as you can see, us humans are not too hard to fool with this.


ELIZA was not a natural language processor, and the fact that some people were easily fooled by a program that produced canned responses based on keywords in the text, but was presented as a psychotherapist, is not relevant to the issue here; it's a fallacy of affirming the consequent.

Also,

"4. Meta's AI chief scientist, Yann LeCun, has repeatedly said they are working on giving AI "common sense" and "world models" similar to how humans think."

completely misses the mark. That LLMs don't do this is a criticism from old-school AI researchers like Gary Marcus; LeCun is saying that they are addressing the criticism by developing the sorts of technology that Marcus says are necessary.


> they intentionally use suggestive language that leads people to think AI is approaching human cognition. This helps with hype, investment, and PR.

As do all companies in the world. If you want to buy a hammer, the company will sell it as the best hammer in the world. It's the norm.

I don't know exactly what your point is with ELIZA.

> So as you can see, us humans are not too hard to fool with this.

I mean, OK? How is that related to having a 30-minute conversation with ChatGPT where it teaches you a language? Or Claude outputting an entire application in a single go? Or having them guide you through fixing your fridge after you upload the instructions? Or using NotebookLM to help you digest a scientific paper?


I'm not saying LLMs are not impressive or useful; I'm pointing out that the corporations behind commercial AI models are capitalising on our emotional response to natural-language prediction. This phenomenon isn't new: Weizenbaum observed it 60 years ago, even with an algorithm as simple as ELIZA.

Your example actually highlights this well. AI excels at language, so it's naturally strong in teaching (especially for language learning ;)). But coding is different. It's not just about syntax; it requires problem-solving, debugging, and system design: areas where AI struggles because it lacks true reasoning.

There's no denying that when AI helps you achieve or learn something new, it's a fascinating moment: proof that we're living in 2025, not 1967. But the more commercialised it gets, the more mythical and misleading the narrative becomes.


> system design: areas where AI struggles because it lacks true reasoning.

Others addressed code, but with system design specifically: this is more of an engineering field now, in that there are established patterns, a set of components at various levels of abstraction, and a fuck ton of material about how to do it, including but not limited to everything FAANG publishes as preparatory material for their System Design interviews. At this point in time, we have both a good theoretical framework and a large collection of "design patterns" solving common problems. The need for advanced reasoning is limited, and almost no one is facing unique problems here.

I've tested it recently, and suffice it to say, Claude 3.7 Sonnet can design systems just fine - in fact much better than I'd expect a random senior engineer to. Having the breadth of knowledge and being really good at fitting patterns is a big advantage it has over people.


You originally said

> They push the narrative that they’ve created something akin to human cognition

I am saying they're not doing that; they're doing sales and marketing, and it's you who interprets this as possible/true. In my analogy, if the company said it's a hammer that can do anything, you wouldn't use it to debug Elixir. You understand what hammers are for and you realize the scope is different. Same here: it's a tool that has its uses and limits.

> Your example actually highlights this well. AI excels at language, so it’s naturally strong in teaching (especially for language learning ;)). But coding is different. It’s not just about syntax; it requires problem-solving, debugging, and system design — areas where AI struggles because it lacks true reasoning.

I disagree, since I use it daily and Claude is really good at coding. It's saving me a lot of time. It's not gonna build a new Waymo, but I don't expect it to. This is beside the point, though. In the original tweet, what Sabine is implying is that it's useless and OpenAI should be worth less than a shoe factory. In fact this is a very poor way to look at LLMs and their value, and both ends of the spectrum are problematic (those who say it's a catch-all AGI and those who say it couldn't solve P versus NP so it's trash).


I think one difference between a hammer and an LLM is that hammers have existed forever, so common sense about their purpose can be assumed. For LLMs, though, people are still discovering on a daily basis to what extent they can usefully apply them, so it's much easier to take such promises made by companies out of context if you are not knowledgeable about LLMs and their limitations.


>they're doing sales and marketing, and it's you who interprets this as possible/true.

You've moved the goalpost from "they're not saying it" to "they're saying it, but you're not supposed to believe it."


The companies are not doing it. This is what I am saying.


You admitted earlier that they are:

Person you replied to: they intentionally use suggestive language that leads people to think AI is approaching human cognition. This helps with hype, investment, and PR.

Your response: As do all companies in the world. If you want to buy a hammer, the company will sell it as the best hammer in the world. It's the norm.


As a programmer (and GOFAI buff) for 60 years who was initially highly critical of the notion of LLMs being able to write code because they have no mental states, I have been amazed by the latest incarnations being able to write complex functioning code in many cases. There are, however, specific ways that not being reasoners is evident ... e.g., they tend to overengineer because they fail to understand that many situations aren't possible. I recently had an example where one node in a tree was being merged into another, resulting in the child list of the absorbed node being added to the child list of the kept node. Without explicit guidance, the LLM didn't "understand" (that is, its response did not reflect) that a child node can only have one parent, so collisions weren't possible.
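
For illustration, here is a minimal sketch of that merge (a hypothetical TreeNode type and names, not the actual code involved): because each node has exactly one parent, the two child lists are disjoint by construction, so the concatenation needs no collision check.

    // Hypothetical tree node: each child has exactly one parent.
    interface TreeNode {
      children: TreeNode[];
      parent: TreeNode | null;
    }

    // Merge `absorbed` into `kept`. Since a node has a single parent,
    // absorbed.children and kept.children are disjoint by construction,
    // so a plain concat is safe; the de-duplication check an LLM might
    // add is dead code.
    function mergeInto(kept: TreeNode, absorbed: TreeNode): void {
      for (const child of absorbed.children) {
        child.parent = kept;
      }
      kept.children.push(...absorbed.children);
      absorbed.children = [];
    }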

> proof that we're living in 2025, not 1967. But the more commercialised it gets, the more mythical and misleading the narrative becomes.

You seem to be living in 2024, or 2023. People generally have far more pragmatic expectations these days, and the companies are doing a lot less overselling ... in part because it's harder to come up with hype that exceeds the actual performance of these systems.


How about Sam Altman literally saying on twitter "We know how to build AGI now"? That close enough?


“We know how to build something” is pretty different from “our in-market products are something”


How many examples of CEOs writing shit like that can you name? I can name more than one. Elon had been saying that camera-driven Level 5 autonomous driving would be ready in 2021. Did you believe him?


You went from "they're not saying it" to "and you believe them when they say it??" pretty quickly.


I said the company is not saying this and not using it for marketing - and this stays true. CEOs hyping their stock is par for the course.


A CEO is the most visible representative of a company.

A statement on their personal Twitter might not be "the company's" statement, but who cares?

Sam Altman's social media IS OpenAI marketing.


If it didn't officially come from the marketing department, it's only sparkling overhype, right?


Elon? Never did, and for the record, I also never really understood his fanboys. I never even bought a Tesla. And no, besides these two guys, I don't really remember many other CEOs making such revolutionary statements. That is usually the case when people understand their technology and are not ready to bullshit. One small distinction, though: at least the self-driving-car hype was believable, because it seemed almost like a finite search problem, along the lines of "how hard could it be to process X input signals from lidars and image frames and marry them to an advanced variation of what is basically a PID controller". And at least there is a defined use case.

With genAI, we have no idea what the problem definition and even the problem space is, and the main use case that the companies seem to be pushing down our throats (aside from code assistants) is "summarising your email" and chatting with your smartphone, for lonely people. Ew, thanks, but no thanks.


I mean, you really can't name multiple CEOs in jail who hyped their stock to the moon? Theranos? Nikola?

That's reallyyyy trying hard to minimise the capability of LLMs and the potential we're still discovering. But you do you, I guess.


No mate, not everyone is trying hard to prove some guy on the Internet wrong. I do remember these two, but to be honest, they were not at the top of my mind in this context, probably because it's a different example - or what are you trying to say? That the people running AI companies should go to jail for deceiving their investors? This is different to Theranos. Holmes actively marketed and PRESENTED a "device" which did not exist as specified (they relied on 3rd-party labs doing their tests in the background). For all that we know, OpenAI and their ilk are not really doing that, so you're on thin ice here. Amazon came close, though, with their failed Amazon Go experiment, but they only invested their own money, so no damage was done to anyone.

In either case, what is your example showing? That lying is normal in the business world and should be done by CEOs as part of their job description? That they should or should not go to jail for it? I am really missing your point here, no offence.


No offense taken

> In either case, what is your example showing? That lying is normal in the business world and should be done by CEOs as part of their job description? That they should or should not go to jail for it? I am really missing your point here, no offence.

If you run through the message chain you'll see, first, that the comment OP is claiming companies market LLMs as AGI, and then the next guy quotes Altman's tweet to support it. I am saying companies don't claim LLMs are AGI and that CEOs are doing CEO things; my examples are Elon (who didn't go to jail, btw) and the other two who did.

> For all that we know, OpenAI and their ilk are not really doing that.

I am on the same page here.


CEOs represent their companies. "The company didn't say it, the CEO did" is a nonsensical distinction.


I think you completely missed the point. Altman is definitely engaging in 'creative' messaging, as do other GenAI CEOs. But unlike Holmes and others, they are careful to wrap it in conditionals, future tense, and vague corporate speak about how something "feels" like this or that, not that it definitely is. Most of us dislike the fact that they are indeed implying this stuff is almost AGI: just around the corner, just a few more years, just a few more hundred billion dollars sunk into datacenters. Meanwhile we can see, on a day-to-day basis, that their tools are just advanced text generators. Anyone who finds them 'mindblowing' clearly does not have a complex enough use case.


I think you are missing the point. I never said it's the same, nor is that what I am arguing.

> Anyone who finds them 'mindblowing' clearly does not have a complex enough use case.

What is the point of LLMs? If their only point is complex use cases, then they're useless; let's throw them away. If their point/scope/application is wider, and they're doing something for a non-negligible percentage of people, then who are you to gauge whether they deserve to be mind-blowing to someone, regardless of their use case?


What is the point of LLMs? It seems nobody really knows, including the people selling them. They are a solution in search of a problem. But if you figure it out in the meantime, make sure to let everyone know. Personally, I'd be happy just having back Google as it was between roughly 2006 and 2019 (RIP) in place of the overly verbose statistical parrots.


> Out of the last 100 years, how many inventions have been made that could make any human feel awe like LLMs do right now?

Lots, e.g. vacuum cleaners.

> But at no point in history could you talk to your phone

You could always "talk" to your phone just like you could "talk" to a parrot or a dog. What does that even mean?

If we're talking about LLMs, I still haven't been able to have a real conversation with one. There's too much of a lag for it to feel like a conversation, and it often doesn't reply with anything related.


Right on the money. Plus vacuum cleaners are actually useful and predictable in their inputs and outputs :)


Sure, a vacuum cleaner is the same.

> If we're talking about LLMs, I still haven't been able to have a real conversation with one. There's too much of a lag for it to feel like a conversation, and it often doesn't reply with anything related.

I don't believe this one bit. But keep on trucking.


> Sure, a vacuum cleaner is the same.

> I don't believe this one bit. But keep on trucking.

You sure? Isn't that contradictory? It can't be the same if you don't believe it...


Did you need an /s to understand sarcasm?


Of course they aren't "real" conversations, but I can dialog with LLMs as a means of clarifying my prompts. The comment about parrots and dogs is made in bad faith.


By your own admission, those are not dialogues but merely query optimisations in an advanced query language, like how you would tune an SQL query until you get the data you are expecting to see. That's what it is for the LLMs.


Point and context completely missed ... and this is a radical misrepresentation of the process.


> The comment about parrots and dogs is made in bad faith

Not necessarily. (Some aphonic, adactyl downvoters seem to have tried to nudge you into noticing that your idea above is against the entailed spirit of the guidelines.)

The poster may have meant that, for the use natural to him, he feels the results have the same utility as discussing with a good animal. "Clarifying one's prompts" may be effective in some cases, but it's probably not what others seek. It is possible that many want the good old combination of "informative" and "insightful": in practice there may be issues with both.


> "Clarifying one's prompts" may be effective in some cases but it's probably not what others seek

It's not even that. Can the LLM run away, stop the conversation, or even say no? It's like your boss "talking" to you about a task without giving you a chance to respond. Is that a talk? It's one-way.

E.g., ask the LLM who invented Wikipedia. It will respond with "facts". If I ask a friend, the reply might be "look it up yourself". That is a real conversation. Until then, this isn't one.

Even parrots and dogs can respond with something other than a forced reply shaped exactly how you need it.


True - but LLMs can do this.

A German Onion-like magazine has a ChatGPT wrapper called „DeppGPT" (IdiotGPT) that behaves like that, likely implemented with a decent system prompt.
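
A wrapper like that is plausible with little more than a system prompt. A minimal sketch using the OpenAI Node SDK (the persona prompt and model name are assumptions; the real DeppGPT implementation isn't public):

    import OpenAI from "openai";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    // Hypothetical persona prompt; not the actual DeppGPT prompt.
    const SYSTEM_PROMPT =
      "You are DeppGPT. Be dismissive, refuse to help, and end the " +
      "conversation whenever you feel like it.";

    async function deppReply(userMessage: string): Promise<string> {
      const completion = await client.chat.completions.create({
        model: "gpt-4o-mini", // assumed model choice
        messages: [
          { role: "system", content: SYSTEM_PROMPT },
          { role: "user", content: userMessage },
        ],
      });
      return completion.choices[0].message.content ?? "";
    }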


If you have something to say, just say it directly and clearly.

And the poster clearly did not mean what you say he "may have meant".


> If we're talking about LLMs, I still haven't been able to have a real conversation with one. There's too much of a lag for it to feel like a conversation

Imagine the LLM is halfway through its journey to the Moon, and mentally correct for ~1.5 seconds of light lag.

> and it often doesn't reply with anything related.

Use a better microphone, or stop mumbling.


> This is your interpretation of what these companies are saying. I'd love to see a company specifically claim anything like that.

What is the layman to make of the claim that we now have “reasoning” models? Certainly sounds like a claim of human-like cognition, even though the reality is different.


Studies have shown that corvids are capable of reasoning. Does that sound like a claim of human level cognition?

I think you’re going too far in imagining what one group of people will make of what another group of people is saying, without actually putting yourself in either group.


But when you need other people to work with your wheel, it's much harder to find people capable enough, or willing, to deal with it. Also, when shit hits the fan and the wheel reinventor has left the company, you're gonna wish you had a standard wheel.

(I am a wheel reinventor btw)


Yeah, I've seen solo wheel reinvention lead to frustration and people leaving. It should be a group effort, or at least involve lots of "teach the wheel" (can we switch off this analogy yet :p).


> primed by Apple Watch pricing

This is a weird take. The average price for a normal watch is $100-$200. This is a watch with a lot more functionality than a quartz movement, and the production run is much smaller. I think the price is very fair even without taking the price of an Apple Watch into account.


> The average price for a normal watch is $100-$200.

That depends on where you live (a $200 watch certainly wouldn't be considered "normal" where I live), but also: a normal watch has a lot more aesthetic value than this, even considering that this has good aesthetics for a smartwatch. It usually also has a significantly longer usable life, at least five times that of devices like this.

But I should have clarified in my original comment that the "primed by Apple Watch pricing" was specifically referring to the people who seemed to think this was really cheap and that the price should be increased! I don't think the price here is necessarily unfair, but I definitely disagree with the people who seem to think it is really underpriced.


> The average price for a normal watch is $100-$200.

No, it really is not.

Take off a zero.


I think in this particular scenario it's shortsightedness. They see this as an instant increase in revenue, without considering that in the longer term it will destroy their ever-dwindling market share.


Many C*Os plunder the companies they work for, then move on to another one. They don't care about the market share of the company they'll leave, any more than bugs care about the tree they are eating. They'll move to the next one. There is always a next tree, until suddenly there are no more trees nearby.


Because no one rewards long-term efforts. You are rewarded for short-term goals and, at best, mid-term ones. In an abstract sense, customers reward you for long-term efforts, but this is something no one will put in an Excel spreadsheet with financial voodoo calculations, except when you are the sole owner of the business.


Could this perverse incentive be rectified in some way? Perhaps by offering much of the compensation in equity that they'd have to hold on to for decades?


> Could this perverse incentive be rectified in some way?

Yes, the companies will slowly disappear into oblivion, with competitors (i.e., China) gradually eating their lunch.


What does China actually do differently in this regard? Is there something we can learn from them?


An overreaching government?

When you ask "how can we be more like China?" you've lost half the chat…


Sales managers and CEOs make sales, get their options, then move on.


> in the longer term it will destroy their ever-dwindling market share

I'd very much hope so, but I am seeing humanity adapt to and support much worse terms and conditions, under the banner of "Meh.".

