
It still matters for software packages. Particularly python packages that have to do with programming with AI!

They are evolving quickly, with deprecation and updated documentation. Having to correct for this in system prompts is a pain.

It would be great if the models could update some portions of their knowledge more frequently than others.

The Tailwind example in the parent-sibling comment should absolutely be as up to date as possible, whereas the history of the US civil war can probably be updated less frequently.






> the history of the US civil war can probably be updated less frequently.

It's already missed out on two issues of Civil War History: https://muse.jhu.edu/journal/42

Contrary to the prevailing belief in tech circles, there's a lot in history/social science that we don't know and are still figuring out. It's not IEEE Transactions on Pattern Analysis and Machine Intelligence (four issues since March), but it's not nothing.


Let us dispel with the notion that I do not appreciate Civil War history. Ashokan Farewell is the only song I can play from memory on violin.

this unlocked memories in me that were long forgotten. Ashokan Farewell !!

I didn’t recognize it by name and thought, “I wonder if that’s the theme from PBS’s The Civil War…”; imagine my satisfaction after pulling it up ;)

I started reading the first article in one of those issues only to realize it was just a preview of something very paywalled. Why does Johns Hopkins need money so badly that it has to hold historical knowledge hostage? :(

Johns Hopkins is not the publisher of this journal and does not hold copyright for this journal. Why are you blaming them?

The website linked above is just a way to read journals online, hosted by Johns Hopkins. As it states, "Most of our users get access to content on Project MUSE through their library or institution. For individuals who are not affiliated with a library or institution, we provide options for you to purchase Project MUSE content and subscriptions for a selection of Project MUSE journals."


The journal appears to be published by an office with seven FTEs, presumably funded by the paywall and sales of their journals and books. Fully loaded costs for seven folks are on the order of $750k/year. https://www.kentstateuniversitypress.com/

Someone has to foot that bill. Open-access publishing implies the authors are paying the cost of publication and its popularity in STEM reflects an availability of money (especially grant funds) to cover those author page charges that is not mirrored in the social sciences and humanities.

Unrelatedly, given recent changes in federal funding, Johns Hopkins is probably feeling like it could use a little extra cash (losing $800 million in USAID funding, overhead rates potentially dropping to existential-crisis levels, etc...)


> Open-access publishing implies the authors are paying the cost of publication and its popularity in STEM reflects an availability of money

No, it implies the journal is not double-dipping by extorting both the author and the reader, while not actually performing any valuable task whatsoever for that money.


> while not actually performing any valuable task whatsoever for that money.

Like with complaints about landlords not producing any value, I think this is an overstatement? Rather, in both cases, the income they bring in is typically substantially larger than what they contribute, due to economic rent, but they do both typically produce some non-zero value.


They could pay them from the $13B endowment they have.

Johns Hopkins University has an endowment of $13B, but as I already noted above, this journal has no direct affiliation with Johns Hopkins whatsoever so the size of Johns Hopkins' endowment is completely irrelevant here. They just host a website which allows online reading of academic journals.

This particular journal is published by Kent State University, which has an endowment of less than $200 million.


Isn’t Johns Hopkins a university? I feel like holding knowledge hostage is their entire business model.

Pretty funny to see people posting about "holding knowledge hostage" on a thread about a new LLM version from a company which 100% intends to make that its business model.

I'd be ok with a $20 monthly sub for access to all the world's academic journals.

So, yet another permanent rent-seeking scheme? That's bad enough for Netflix, D+, YouTube Premium, Spotify, and god knows what else bleeding money out of you every month.

But science? That's something that IMHO should be paid for with tax money, so that it is accessible to everyone regardless of their ability to pay.


This is exactly the problem that pay-per-use LLM access is causing. It's gating the people who need the information the most and creating a divide between the "haves" and "have nots", with far greater potential for dividing us.

Sure for me, $20/mo is fine, in fact, I work on AI systems, so I can mostly just use my employer's keys for stuff. But what about the rest of the world where $20/mo is a huge amount of money? We are going to burn through the environment and the most disenfranchised amongst us will suffer the most for it.


The situation we had/have is arguably the result of the 'tax' money system. Governments lavishly fund bloated university administrations that approve equally lavish multi-million-dollar access deals with a select few publishers for students and staff, while the 'general public' basically has no access at all.

The publishers are the problem. Your solution asks the publisher to extort less money.

Aka not happening.


Given that I am still coding against Java 17, C# 7, C++17 and the like on most work projects, and more recent versions are still the exception, it is quite reasonable.

Few are on jobs where v-latest is always an option.


It’s not about the language. I get bit when they recommend old libraries or hallucinate non-existent ones.

Hallucination is indeed a problem.

As for the libraries: using more modern libraries usually also requires more recent language versions.


I've had good success with the Context7 model context protocol tool, which allows code agents, like GitHub Copilot, to look up the latest relevant version of library documentation including code snippets: https://context7.com/

I wonder how necessary that is. I've noticed that while Codex doesn't have any fancy tools like that (as it doesn't have internet access), it instead finds the source of whatever library you pulled in. In Rust, for example, it's aware of (or finds out) where the source was pulled down, and greps those files to figure out the API on the fly. Seems to work well enough, and it works for any library, private or not, updated a minute ago or not.
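The grep-the-vendored-source idea is simple enough to sketch. This is a minimal illustration, not Codex's actual implementation; `find_api` is a hypothetical helper, and the `~/.cargo/registry/src` path in the usage note is just where cargo typically unpacks dependency sources:

```python
from pathlib import Path

def find_api(symbol: str, source_root: str) -> list[str]:
    """Scan vendored library sources for lines mentioning a symbol,
    so an agent can recover the current API without a docs lookup."""
    hits = []
    for path in Path(source_root).rglob("*.rs"):
        for lineno, line in enumerate(
                path.read_text(errors="ignore").splitlines(), start=1):
            if symbol in line:
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits
```

An agent could then call something like `find_api("pub fn deserialize", "/home/me/.cargo/registry/src")` and read the matching signatures instead of relying on (possibly stale) training data.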

It matters even with recent cutoffs: these models have no idea when to use a package or not (if it's no longer maintained, etc)

You can fix this by first figuring out what packages to use or providing your package list, though.


> these models have no idea when to use a package or not (if it's no longer maintained, etc)

They have ideas about what you tell them to have ideas about. In this case, when to use a package or not differs a lot by person, organization, or even project, so it makes sense that they wouldn't be heavily biased one way or another.

Personally, I'd look at the architecture of the package code before I'd look at when the last change was or how often it's updated. Whether the last change was years ago or yesterday has little bearing (usually) on whether I decide to use it, so I wouldn't want my LLM assistant to weigh it differently.


The fact that things from March are already deprecated is insane.

Sounds like npm and the JS ecosystem in general.

Cursor has a nice ”docs” feature for this, which has saved me from battles with constant version-reverting actions from our dear LLM overlords.

> whereas the history of the US civil war can probably be updated less frequently.

Depends on which one you're talking about.


The context7 MCP helps with this but I agree.

How often are base-level libraries/frameworks changing in incompatible ways?

In the JavaScript world, very frequently. If latest is 2.8 and I’m coding against 2.1, I don’t want answers using 1.6. This happened enough that I now always specify versions in my prompt.
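That "specify versions in the prompt" step is easy to automate by reading exact versions out of the project manifest. A sketch, assuming an npm-style `package.json` with `dependencies`/`devDependencies` blocks (`version_preamble` is a made-up helper name):

```python
import json

def version_preamble(manifest_path: str) -> str:
    """Build a prompt preamble pinning exact dependency versions,
    so the model answers against the versions actually in use."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    deps = {**manifest.get("dependencies", {}),
            **manifest.get("devDependencies", {})}
    lines = [f"- {name}@{spec}" for name, spec in sorted(deps.items())]
    return "Answer using these exact library versions:\n" + "\n".join(lines)
```

Prepending that to each prompt keeps the model from reaching for the 1.6-era API when you're on 2.1.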

Geez

Normally I’d think of “geez” as a low-effort reply, but my reaction is exactly the same…

What on earth is the maintenance load like in that world these days? I wonder, do JavaScript people find LLMs helpful in migrating stuff to keep up?


The better solution would be the JavaScript people stop reinventing the world every few months.

The more popular a library is and the more often it's updated each year, the more it will suffer this fate. You always have to refine prompts with specific versions and specific ways of doing things; each will differ depending on your use case.

That depends on the language and domain.

MCP itself isn’t even a year old.


Does repo/package specific MCP solve for this at all?

Kind of, but not in the same way: the MCP option increases the discussion context, while the training option does not. Armchair expert here, so confirmation would be appreciated.

Same. I'm curious what it looks like to incrementally or micro-train against frequently changing data sources (repos, Wikipedia/news/current events, etc.), if that's possible at all.

Folks often use things like LoRAs for that.
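The LoRA idea in one line: keep the pretrained weight matrix W frozen and learn a low-rank update, using W + (alpha/r)·BA at inference, so only the small A and B matrices need retraining when data changes. A toy numpy sketch, illustrative only (real fine-tuning would use a library such as Hugging Face's peft; all the dimensions here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 4           # model dim, adapter rank, scaling factor

W = rng.normal(size=(d, d))     # frozen pretrained weight
A = rng.normal(size=(r, d))     # trainable down-projection
B = np.zeros((d, r))            # trainable up-projection, zero-initialized

def forward(x):
    # Base path plus low-rank adapter path; only A and B are trained,
    # so an update touches 2*d*r parameters instead of d*d.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d))
# With B zero-initialized, the adapter starts as an exact no-op:
assert np.allclose(forward(x), x @ W.T)
```

Because the adapter is tiny relative to the base model, it can be retrained (or swapped out) frequently, which is why it comes up for fast-moving data sources.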


