Hacker News | the_mitsuhiko's comments

Which however does not support DECDHL. So if you want to try what this post is about, Ghostty is not the right terminal. (It's great in general though)
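
For reference, DECDHL is the VT100 sequence for double-height lines: the same text is printed twice, once as the top half (ESC # 3) and once as the bottom half (ESC # 4). A minimal Go sketch to check support in your terminal (in a terminal without DECDHL, such as Ghostty per the comment above, the lines simply render at normal size):

    package main

    import "fmt"

    func main() {
        // DECDHL renders a line at double height: emit the same text twice,
        // once as the top half (ESC # 3) and once as the bottom half (ESC # 4).
        fmt.Print("\x1b#3Hello, DECDHL\n")
        fmt.Print("\x1b#4Hello, DECDHL\n")
        // DECSWL (ESC # 5) switches the line back to normal single-width size.
        fmt.Print("\x1b#5back to normal\n")
    }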

> What makes this fascinating is that Ronacher knows this ("Is that even valid when there's barely a human in the loop?") but published anyway.

There are very pragmatic reasons for that. People should be able to use this library, and in my jurisdiction I cannot place things in the public domain. So there are two outcomes: either the Apache 2 license is valid and you can use it, or the code was impossible to copyright in the first place and it is effectively in the public domain. Either way you are free to use it.

I'm not sure what else I can really do here.


> Occasionally? Tons of middle class people do it.

I would not be surprised if occasionally driving into Manhattan is cheaper now. Surely the excessive parking prices should be coming down.


It should be cheaper already if you place a non-zero value on your time.

Do people put a value on time when not doing value-added stuff? When they go for a walk, do they run instead? Do they only meet up with friends who can offer a return on the investment of their time? Do these people not shoot the shit? Are they busy beavers at all times, maximizing wealth?

These are all things that people find value in. Most people don't assign any value to sitting in traffic.

Shooting the shit could be precisely what they do instead of idling in traffic. Most people would prefer it.

I dunno, man, it's rumored they have this thing called cellular telephony technology allowing just such a thing while in traffic -- I could be wrong though, thems being wealthy and shit.

The rumors are true, but you seem to have missed my point. Some people might prefer to communicate in person. You might not be one of them.

Most normal people put a very low value on their time, because they don't have any practical way to monetize an extra hour. It's just "free" time.

The supply-demand curve might mean prices temporarily drop with demand, but that might put pressure on some parking to convert to other uses, which would then lower supply.

If your wager is that I will build an AI code quality measuring tool then you will lose it. I'm not advertising anything here, I'm just playing with things.

> code quality measuring tool

I didn't say that, just an AI tool in general.


> I'm really skeptical of using current LLMs for judging codebases like this.

I'm not necessarily convinced that the current generation of LLMs is all that amazing at this, but they are definitely very good at surfacing tooling inefficiencies and problematic APIs. That doesn't cover all the issues, but it can at least be useful for evaluating some classes of problems.


I started to use sub agents for that. It does not pollute the context as much.

All the allocators have the same issue. They largely work against a shared set of allocation APIs. Many of their users mostly engage via malloc and free.

So the flow is like this: a user has an allocation-looking issue and picks up $allocator. If they have an $allocator-type problem then they keep using it; otherwise they use something else.

There are tons of users of these allocators, but many rarely engage with the developers. Many wouldn’t even notice improvements or regressions on upgrades, because after the initial choice they stop looking.

I’m not sure how to fix that, but this is not healthy for such projects.


malloc is a bad API in general; if you want to go fast, you don't rely on a general-purpose allocator.

This is true, but the unfortunate thing about how C and C++ developed is that pretty much everything just assumes the existence of malloc/free. So if you’re using third-party libraries, it’s mostly out of your control. Linking in a new allocator is a very easy and pretty much free way to improve performance.

> and people increasingly adopt an LLM writing style.

If you are insinuating that this is written by an LLM: it is not.


No, I didn't try to claim that. I do seem to see the influence in many people's writing and verbosity, though. It could be as simple as a counter-reaction: if an LLM is allowed to be verbose, so are humans. It could also be that people who use LLMs a lot subconsciously adopt the style.

I infer that you are the author of the post. Take it as a compliment: I think you have written many good pre-LLM articles.


I'm really not an expert in Go, but the data I'm passing via context at the moment is the type of data commonly placed there by the libraries I use: database connections, config, rate limiters, cache backends, etc. It does not seem particularly bad to me, at least.

If you use context.Context for this you give up a lot of type safety and generally make your data passing opaque.

It's totally fine to put multiple values into a different data bag type that has explicit, typed fields. For example, the Echo framework has its own strongly typed and extensible Context interface for request-scoped data: https://pkg.go.dev/github.com/labstack/echo#Context
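
As a rough sketch of what such a typed data bag might look like (the field and type names here are illustrative, not taken from any particular codebase):

    package app

    import (
        "context"
        "database/sql"
    )

    // Config, RateLimiter and Cache are hypothetical stand-ins for whatever
    // types the application actually uses.
    type (
        Config      struct{}
        RateLimiter struct{}
        Cache       interface{ Get(key string) ([]byte, bool) }
    )

    // Deps is a plain data bag with explicit, typed fields for the
    // request-scoped values mentioned above, instead of context.Value.
    type Deps struct {
        DB          *sql.DB
        Config      *Config
        RateLimiter *RateLimiter
        Cache       Cache
    }

    func handleOrder(ctx context.Context, deps *Deps, orderID string) error {
        // deps.DB already has the right type; a missing dependency is a
        // compile-time error rather than a runtime surprise.
        var status string
        row := deps.DB.QueryRowContext(ctx,
            "SELECT status FROM orders WHERE id = $1", orderID)
        return row.Scan(&status)
    }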


> If you use context.Context for this you give up a lot of type safety and generally make your data passing opaque.

The data passing maybe, but I'm not sure how you lose type safety. The value comes out of the context with the right type just fine. The stuff I'm attaching to the context is effectively globals, except that this way you can get proper isolation in tests and elsewhere.

From my limited experience with echo, the context there is not at all the same thing.


Context.Value's signature is Value(any) any, so you have to use a type assertion or reflection to determine the value's type at runtime instead of getting a compile-time check.

But I have methods such as MustRateLimiterFromContext(ctx) which returns the right type :)

By crashing your program at runtime if any wrong type is added!
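
For context, a minimal sketch of the helper pattern under discussion, assuming an unexported context key; MustRateLimiterFromContext is the name mentioned above, while the limiter type itself is illustrative. The helper returns the right static type, but a missing or mistyped value only shows up as a runtime panic:

    package ratelimit

    import "context"

    // RateLimiter is a stand-in for whatever limiter type is actually used.
    type RateLimiter struct{}

    // ctxKey is unexported so other packages cannot collide with this key.
    type ctxKey struct{}

    // WithRateLimiter attaches the limiter to the context.
    func WithRateLimiter(ctx context.Context, rl *RateLimiter) context.Context {
        return context.WithValue(ctx, ctxKey{}, rl)
    }

    // MustRateLimiterFromContext returns the limiter with the right static
    // type, but the check itself happens at runtime: a missing or mistyped
    // value panics instead of failing to compile.
    func MustRateLimiterFromContext(ctx context.Context) *RateLimiter {
        rl, ok := ctx.Value(ctxKey{}).(*RateLimiter)
        if !ok {
            panic("rate limiter missing from context")
        }
        return rl
    }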

> I've created agents that work 24/7 on GH issues for me, in Rust, Python and PHP. I use Claude (api). The result overall is very good.

It's quite possible it's a case of holding it wrong, but I think the basic evaluation that led me to conclude that Go works particularly well isn't too bad. I just get results I feel good about quicker than with Rust or Python. FWIW I also had really good results with PHP, on the level of Go; it's just that overall the stack doesn't cater too well to my problem.

