
No, only cataloguing. I asked them about this when I was there in January last year. They didn't do it then, and if I remember correctly it was because of licensing concerns and also not wanting to open boxes. I know the Royal Library in Stockholm digitally archives various media; not sure what Embracer would need to be allowed to do that.


I think you’re right, they would need some kind of copyright exemption in order to properly preserve the games by migrating them to new media regularly. I’m not sure it’s possible to get such an exemption for a private corporation under Swedish law?


Ah, found it. Only some government and municipal archives have such an exemption.

https://riksarkivet.se/utforska-och-bestall/vad-du-har-ratt-...


also not wanting to open boxes

What?!? How can one preserve games without opening boxes? Physical media don't last forever.

Unless they're interested in preserving the boxes themselves? (or other goodies inside)

Reads like they're looking for donations to enlarge a private collection. Or perhaps obtain some physical copies for stuff in their IP portfolio?


If it can detect the type of compiler, I would think it can pass the correct flags to it. If that is all it is.


From what I've read, PW's official stance is to keep supporting the existing protocols such as PA and JACK. PW comes with a PA-compatible server, replacement JACK client libraries and an ALSA plugin, so it supports clients of all three at the same time. Precisely so software does not need to be rewritten (I imagine).


This. I personally think ASAN and its friends are some of the best things, if not the best, to happen to C/C++ in the last decade. We use it at work, and I use it in all my personal projects. We've found "countless" bugs with ASAN together with LSAN.

It should be part of any project from the start; run with ASAN as much as possible when the performance hit can be afforded.


Just confirming, but you're talking about LeakSanitizer? https://clang.llvm.org/docs/LeakSanitizer.html. I'm not really familiar with these tools, and definitely not the acronyms.


Sorry, yes, I meant Leak Sanitizer.


No need to be sorry! I appreciate it!

It was actually in the link the parent posted. I just missed it at first.


Agreed! Likewise, if you're doing anything multithreaded, ThreadSanitizer (TSAN) is a must!


I use it together with wireplumber on three systems, all running bookworm: an old MacBook Air, a fairly new ThinkPad and a run-of-the-mill desktop PC. It works flawlessly on all of them. I have no custom configuration; everything is the default that comes with the packages.

For the laptops I also use BT headphones, and with the ThinkPad I use a headset that, with the latest wireplumber, just switches profiles whenever I join a meeting with Slack or Teams (both in Google Chrome).

For the desktop PC I had a use case I never got working properly, or at least easily, with PA: exposing all the HDMI outputs of the graphics card as separate sinks. With PW, I just switch over to the Pro profile and they all appear. Then I can route whatever stream/application I want to a specific HDMI output.

Additionally, I use USB MIDI devices and a software synth, qsynth, using JACK. Setting up both MIDI and audio routing via qjackctl is so convenient.

Oh, and USB audio devices also work without a problem. I can't say if that is different from PA, as I never used them much before.


We’ve been doing something similar at my day job for a couple of years now, at least. We tried a few different things, but this caused us the fewest problems.

We have a monorepo with a dozen different products, supporting four rolling release series at any time. Some code is shared between products, so having a commit that contains the release note is very convenient. It’ll automatically follow merges, both when merging bug fixes up through all the release branches and when merging in features.

When it’s time to build release notes, simply walk the new commits since the last release and extract each release note.

Note, I’m leaving out most details on exactly how we have this set up. It’s not that complicated, though.


I find Insights into Mathematics (https://youtube.com/c/njwildberger) interesting and thought-provoking. Even though many subjects are well beyond me, I appreciate the way they are dissected and explained. Note, though, that AFAIU the channel mainly deals with pure mathematics, which is not always relevant for applied mathematics.


Just a caveat: Wildberger subscribes to a strict, non-mainstream view that rejects the existence of infinite objects. His exposition is fantastic, and as you say the content is definitely thought provoking if you have enough mathematical sophistication to (at least superficially) understand what he's about, but I wouldn't recommend it to anybody who hasn't taken a traditional real analysis course.


Yes, should have mentioned that. It’s not always clear, even though he rarely fails to mention it :)

Thanks for clarifying.


Agree. Just the other year I discovered {*} (Tcl's argument-expansion syntax), and oh how that simplified some things.


Tcl is still fun :) We use it at work as an embedded language. It powers many features and is quite a large part of our product.

It has absolutely lost some mindshare, but I still think it's a great choice.


Same here. In the hardware we build (containing, amongst others, an ARM and an FPGA), I added Tcl scripting of the kernel driver interface for the FPGA, for testing, debugging, fun & profit ;) ...although that has no visual side.


This is similar to what I wrote at work, on and off, over the last couple of months, replacing a build system that used recursive Makefiles. Due to the way our product is composed, I ended up adding support for building static libraries, programs, RPMs and documentation, with full support for dependencies: if a source file for a library is changed, the library will be rebuilt, programs that link against it will be re-linked, if a program is part of an RPM the RPM will be rebuilt, and so on. Documentation will also be rebuilt if a source file embeds documentation.

Another great bonus is of course that with one GNU Make instance and proper modeling of dependencies, "make -j" works great, every time. I guess we have a couple of hundred source files, and "make -j" will happily start compiling them all. Read Peter Miller's paper about recursive Makefiles for why the above is preferred.

Makefiles are of course a bit limited compared to shell scripts, but you can do a lot with implicit rules, static pattern rules, second expansion, call and eval, etc.


I ended up using $(eval ...) a lot, and it turns out Make's support for it is... suboptimal.

As for the paper - I believe it's "Recursive Make Considered Harmful" - I read it, and it's great. It was one of the main motivators during the build-system rewrite.


Three or four levels down in $(eval ...) and $(call ...), I still have to stop and think about how many $ I should write.
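As a hedged illustration of why the counting matters (the PROGRAM_template name and targets here are made up): $(call) expands the template once and $(eval) expands the result again, so a $ that must survive until the rule actually runs has to be written as $$, and each extra level of nesting doubles it again:

```make
# Hypothetical per-program rule template, expanded by $(call) and then $(eval).
define PROGRAM_template
$(1): $(1).o
	$$(CC) -o $$@ $$^   # $$ so these expand at rule time, not at eval time
endef

# One more level of eval/call nesting would need $$$$ for a single final $.
$(foreach p,foo bar,$(eval $(call PROGRAM_template,$(p))))
```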

Yes, that's the paper; I was also heavily motivated by it. And when I read that the JDK also switched[1] to something similar, it just cemented my belief that it was the right way to go, for the same reasons as theirs.

[1] - http://openjdk.java.net/jeps/138

