
While there may be incompatibilities between the GPL and the App Store, because Apple insists that developers accept Apple's terms to run Xcode and apps on developer devices, the LGPL and other open source licenses are generally compatible with the App Store and Apple's licenses. You can ship programs that use open source, or are themselves open source, within closed ecosystems by providing source code to end users, for example via a website linked from the app's credits. The distinction matters because the LGPL permits more usage than the GPL, allowing the library to be used in non-open-source apps or apps licensed under different terms, so ffmpeg has been adopted by a variety of open and closed source apps whenever a shared codebase, particular codecs, or particular functionality is required. That said, Apple itself would prefer that you use its audio/video frameworks, for reasons of device performance optimization, binary size, licensing and ecosystem lock-in. As far as I know, ffmpeg adopts some of these Apple optimizations when the appropriate frameworks are detected and configured at compile time.
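For illustration, here's a minimal sketch (assuming an ffmpeg binary on your PATH) of checking whether a given build was compiled with VideoToolbox support, since the hardware encoder only shows up when the framework was enabled at configure time:

    import subprocess

    def has_videotoolbox(ffmpeg="ffmpeg"):
        # `ffmpeg -encoders` lists every encoder compiled into this build;
        # h264_videotoolbox only appears when ffmpeg was configured with
        # Apple's VideoToolbox framework enabled.
        out = subprocess.run([ffmpeg, "-hide_banner", "-encoders"],
                             capture_output=True, text=True, check=True).stdout
        return "h264_videotoolbox" in out

    print(has_videotoolbox())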


How do I relink a random app on the App Store with my own version of an LGPL library? This is what it comes down to.


It wouldn't surprise me if they picked which drugs to include based on which should be relatively price-flexible yet cost a lot. I've noticed that Ozempic/Wegovy prices have dropped in many markets recently, even price-controlled ones, especially compared to Mounjaro, as the latter is seen as more effective, is in short supply, and still has fewer generics available.

In fact, with new multi-dose versions being introduced in different regions, I'm starting to see Mounjaro prices reportedly double for some people. The real kicker is that for some brands/doses the price doesn't vary with how much of the drug you get - so people end up asking for a prescription for the highest dose off-label and then splitting the dose themselves.

For example, you can click the auto-injector pen through fewer clicks to measure out a smaller dose than the pen normally injects, then relatively safely store the rest in the fridge for longer than recommended, even without preservatives (some pens have them and some don't).

It's frustrating when pricing decisions are made assuming insurance benefits, yet insurance isn't always available, e.g. during unemployment. This thinking even applies in places that do regulate drug prices. But hey, you can always sign up for the manufacturer's discount program to get it cheaper, so, win-win, right?


As others have pointed out, the drugs on this list go into effect in 2027, which is after the EU semaglutide patents expire (2026), so that might be a pretty compelling reason for semaglutide pricing to be more flexible than tirzepatide pricing.

> The real kicker is that for some brands/doses the price doesn't vary with how much of the drug you get - so people end up asking for a prescription for the highest dose off-label and then splitting the dose themselves.

FWIW, I'm paying cash, buying it directly from Lilly, and they charge $400/mo for the 2.5mg dose and $550/mo for the 5mg dose. So, some price differentiation between dose sizes, but not linear.


Yeah. I've seen some splits between low and high doses, where the first two dose levels cost less than the rest - a cynical take is that they want to make it cheaper to get started, knowing they will get you hooked, possibly for life, or at least for the duration of their patent.

But yes, non-linear by design - a 15mg dose provides 6x the medication but cannot be sold for 6x the price, or people would stay on lower doses (or discontinue) rather than moving to a higher dose.

Meanwhile it still provides 6x the medication: one multi-use 4-week pen has enough for 12 weeks of doses at the 4-week titration dose if used off-label. Obviously this is only helpful at low doses.

Important note: I am not a doctor, and I don't recommend doing this - in fact, I have not done it myself and will probably not do it in the future. I have seen YouTube videos of medical professionals explaining how to dose-split weight loss drugs, though.

I would highly recommend dose-splitting the brand-name drug over picking some compounding pharmacy's version of the drug, or worse, buying it off the street. It's crazy, though: there are even counterfeit medications in the supply chain sometimes, for example: https://www.fda.gov/drugs/drug-safety-and-availability/fda-w...


Except it's not actually true. https://www.ssllabs.com/ssltest/clients.html highlights that many clients support standard SSL features without having to update to fix bugs. How much SSL you choose to allow, and with which configurations, is between you and your... I dunno, PCI-DSS auditor or something.

I'm not saying SSL isn't complicated - it absolutely is. And building on top of it for newer HTTP standards has its pros and cons. Arguably, though, a "simple" checkbox is all you would need to support multiple types of SSL with a CDN. Picking how much security you need is then left as an exercise for the reader.

... that said, is weak SSL better than "no SSL"? The lock icon appearing on older clients that aren't up to date is misleading, but then many older clients didn't mark non-SSL pages as insecure either, so there are tradeoffs either way. Enabling SSL by default doesn't necessarily have to exclude clients - as long as they can set the time correctly on the client, of course.

I've intentionally not mentioned expiring root CAs, as that's definitely a problem inherent to the design of SSL and requires system or browser patching to fix. Likewise, https://github.com/cabforum/servercert/pull/553 highlights that some browsers are very much encouraging frequent expiry and renewal of SSL certificates, but that's a system administration problem, not technically a client or server version problem.
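As a rough illustration of the renewal side of that, here's a small sketch (standard library only; the hostname is just an example) that fetches a server's leaf certificate and reports how many days remain before it expires:

    import socket, ssl, time

    def days_until_expiry(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                # getpeercert() returns the parsed certificate after a verified handshake
                cert = tls.getpeercert()
        not_after = ssl.cert_time_to_seconds(cert["notAfter"])
        return int((not_after - time.time()) // 86400)

    print(days_until_expiry("example.com"))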

As an end user who tries to stay up to date, I've just downloaded recent copies of Firefox on older devices to get an updated root certificate list.

My problem with older devices tends to be poor compatibility with IPv6 (an add-on in XP SP2/SP3, not enabled by default), and that web developers tend to use very modern CSS and web graphics that aren't supported on legacy clients. On top of that, you've got HTML5 form elements, what displays when responsive layouts aren't available (how big is the font?), etc.

Don't get me wrong, I love the idea of backwards compatibility, but it's a lot more work for website authors to test pages in older or obscure browsers and fix the issues they see. Likewise, with SSL you can test on a legacy system to see how it works, or run the Qualys SSL checker, for example. Browsers maintain backwards compatibility but only to a point (see ActiveX, Flash in some contexts, Java in many places, the <blink> tag, framesets, etc.)

So ultimately compatibility is a choice authors make based on how much time they put into testing for it. It is not a given, even if you use a subset of features. Try using Unicode on an early browser, for example - I still remember the Rails snowman trick to get IE to behave correctly.


> Except it's not actually true. https://www.ssllabs.com/ssltest/clients.html

Oh, if only TLS was that simple!

People fork TLS libraries, make changes that are supposed to be transparent (well, they should be), and suddenly they don't have compatibility anymore. Any table with the actually relevant data would be huge.


Well, there is this: https://clienttest.ssllabs.com:8443/ssltest/viewMyClient.htm... But you’d have to test your own clients.

One imagines though that with enough clients connecting to your site you’ll end up seeing every type of incompatible client eventually.

The point I was trying to make is that removing SSL doesn't make your site compatible, and the number of incompatible clients is small compared to the number of compatible ones. Arguably, compatibility alone is not a reason to avoid SSL. The list of incompatibilities doesn't stop at SSL, either - there's still DNS, IPv6 and so on.

SSL is usually compatible for most people - enough that it has basically become the de facto default for the web at large. Though there are still issues: CMOS batteries dying and leaving clients with a bad clock is the one that comes to mind first, certificate chain issues too. SSL is complex, no doubt, especially for a server-side configuration that must remain compatible with a wide range of clients. That's why tools like Qualys' exist in the first place!
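If you just want to see what your own client stack negotiates (the local equivalent of that test page), a quick sketch like this works with the standard library; the hostname is only an example:

    import socket, ssl

    def probe(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                # the protocol version and cipher this client and server agreed on
                return tls.version(), tls.cipher()

    print(probe("example.com"))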


Define the objective metric you would use to assess a candidate's work ethic, or a reputation credit score. Would LinkedIn issue it, as if it were a popularity contest?

And come to think of it, credit scores can be gamed. It's well known that when companies and territories get credit scores, they are largely a con game - based on the confidence the raters have in your future performance, not objective reality.

Likewise, credit scores can be juiced, and tools exist to help you improve and track them. But a bad credit score doesn't always mean fiscal mismanagement - it could be loans from a predatory lender, or a medical expense, or something completely outside the context the credit check is being used for. Credit scores tell you whether someone has lots of money first, and whether they are smart with their money second. People with financial means often have good credit scores but can be just as likely to default if their circumstances change - perhaps more likely, if the amounts of money at play are greater. People got those subprime mortgages with great credit scores, somehow.

So... yeah, credit scores for loans are a form of outsourcing responsibility. But the point is somewhat well taken. The equivalent of a credit score in hiring isn't asking banks, but doing reference checks and asking your network or a former manager about a hire.

Credit scores can easily be as discriminatory as criminal charges (at least those without due process) and other unfair systems. We just normalize it because it works for most people. We poke fun at other countries when they try to come up with, e.g., a social credit score, though.


Haven't looked into this too deeply, but there is a difference between delaying a response (requests get stuck in the tarpit) and providing a useless but valid response. This approach always provides a response, so it uses more resources than ignoring the request, but fewer resources than keeping the connection open. Once the response is sent, the connection can be closed, which isn't quite how a tarpit behaves. The Linux kernel only needs to track open connections in memory, so once a connection is closed it can be removed from the kernel, and the whole thing uses no more resources than a standard service listening on a port.

There is a small risk in that the service replies to requests on the port, though: as replies get more complicated in order to mimic real services, you run the risk of an attacker exploiting the system generating the replies. Put another way, this attempts to run a server that responds to incoming requests on every port, in a way that mimics what might actually run on each port. It technically opens an attack surface on every port, because an attacker can feed it requests, but the trade-off is that it runs in user mode and can be granted no permissions, or placed on a honeypot machine that is disconnected from anything useful and heavily tripwired for unusual activity. Hardcoding a response for each port to make it appear open is itself a very simple activity, so the attack surface introduced is minimal while the utility of port scanning is greatly reduced. The more you fake out the scanner by behaving realistically to inputs, though, the greater the attack surface you expose.
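Here's a minimal sketch of that "answer and hang up" behaviour - the ports and banners are made up, and a real deployment would cover far more ports:

    import socket, threading

    # hypothetical canned replies; just enough to make a scanner mark the port open
    BANNERS = {
        2222: b"SSH-2.0-OpenSSH_8.9\r\n",
        8080: b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n",
    }

    def serve(port, banner):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            try:
                conn.sendall(banner)  # reply immediately, never parse the input
            finally:
                conn.close()          # close right away so no kernel state lingers

    for port, banner in BANNERS.items():
        threading.Thread(target=serve, args=(port, banner), daemon=True).start()
    threading.Event().wait()  # keep the main thread alive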

And port scanning can trigger false positives in network security scans, which can then lead to having to explain why the servers are configured this way - that some ports which should always be closed due to vulnerabilities appear open but aren't processing requests, so they can be ignored, etc.


The original LaBrea tarpit avoids DoS'ing its own conntrack table somehow, too:

LaBrea.py: https://github.com/dhoelzer/ShowMeThePackets/blob/master/Sca...

La Brea Tar Pits and museum: https://en.wikipedia.org/wiki/La_Brea_Tar_Pits

The nerdctl readme (https://github.com/containerd/nerdctl) says:

> Supports rootless mode, without slirp overhead (bypass4netns)

How does that work, though? (And unfortunately podman replaced slirp4netns with pasta from passt.)

rootless-containers/bypass4netns: https://github.com/rootless-containers/bypass4netns/ :

> [Experimental] Accelerates slirp4netns using SECCOMP_IOCTL_NOTIF_ADDFD. As fast as `--net=host`

Which is good, because --net=host with rootless containers is inadvisable from a security standpoint, FWIU.

"bypass4netns: Accelerating TCP/IP Communications in Rootless Containers" (2023) https://arxiv.org/abs/2402.00365 :

> bypass4netns uses sockets allocated on the host. It switches sockets in containers to the host's sockets by intercepting syscalls and injecting the file descriptors using Seccomp. Our method with Seccomp can handle statically linked applications that previous works could not handle. Also, we propose high-performance rootless multi-node communication. We confirmed that rootless containers with bypass4netns achieve more than 30x faster throughput than rootless containers without it

RunCVM, Kata Containers, and gVisor all have a better host/guest boundary than rootful or rootless containers, which is probably better for honeypot research on a different subnet.

IIRC there are various utilities for monitoring and diffing VMs, for honeypot research.

There could be a list of expected syscalls. If the simulated workload can be exhaustively enumerated, the expected syscalls are known ahead of time and so anomaly detection should be easier.
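As a toy sketch of that idea, you could diff a syscall summary (say, from `strace -c -f -o summary.txt <workload>`) against an allowlist; the allowlist contents here are purely illustrative:

    # toy sketch: compare observed syscalls against an expected allowlist
    EXPECTED = {"read", "write", "openat", "close", "epoll_wait", "futex", "exit_group"}

    def unexpected_syscalls(strace_summary_path):
        observed = set()
        with open(strace_summary_path) as f:
            for line in f:
                cols = line.split()
                # strace -c puts the syscall name in the last column; skip the
                # header row ("syscall"), separator lines and the "total" row
                if cols and cols[-1].isidentifier() and cols[-1] not in {"syscall", "total"}:
                    observed.add(cols[-1])
        return observed - EXPECTED

    print(unexpected_syscalls("summary.txt"))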

"Oh, like Ghostbusters."


I tried something like that. It didn't work because the application added the socket to an epoll set before binding it, so before it could be replaced with a host socket. Replacing the file descriptor in the FD table doesn't replace it in epoll sets.


In this day and age that seems increasingly like a solved problem for most end users - when it does happen, isn't it usually a client-side issue or a very old method of generating the PDF?

Modern PDF supports font embedding of various kinds (legality is left as an exercise for the PDF author) and defines 14 standard font faces which can be specified for compatibility, though more often document authors probably assume a system font is available or embed one.

There are still problems with the format as it foremost focuses on document display rather than document structure or intent, and accessibility support in documents is often rare to non-existent outside of government use cases or maybe Word and the like.

A lot of usability improvements come from clients that make an attempt to parse the PDF to make the format appear smarter. macOS Preview can figure out where columns begin and end for natural text selection; Acrobat routinely generates an accessible version of a document after opening it, including some table detection. Honestly, creative interpretation of PDF documents is possibly one of the best use cases for AI that I've ever heard of.

While a lot about PDF has changed over the years, the basic standard was created to optimize for printing. It's as if we had started with GIF and added support for building interactive websites from GIFs. At its core, a PDF is just a representation of shapes on a page, and we added metadata that would hopefully identify glyphs, accessible alternative content, and smarter text/line selection, but it can fall apart if the PDF author is careless, malicious, or didn't expect certain content. It probably inherits all the weirdness of Unicode and then some, for example.
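To make the "shapes on a page" point concrete, here is a tiny hand-written content stream fragment (wrapped in Python only to keep the example self-contained; /F1 is assumed to map to one of the 14 standard fonts in the page's resource dictionary). Nothing in it says whether the text is a heading, a table cell, or body copy - it just paints glyphs at a coordinate:

    # an illustrative PDF content stream: text painted at a position, no structure
    content_stream = b"""
    BT                % begin a text object
      /F1 12 Tf       % select font /F1 at 12 points
      72 720 Td       % move the text cursor to (72, 720), measured from the bottom-left
      (Hello) Tj      % paint the glyphs for "Hello"
    ET                % end the text object
    """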


Signal can be a bit weaker on the watch up here in Canada but is otherwise adequate. The problem with Apple Watch cellular when not using an iPhone to forward data is (1) battery life on LTE is terrible compared to sending data over Bluetooth, using wifi, or turning on airplane mode, and (2) call forwarding from iPhone to Watch is, on some Canadian carriers (Telus), charged per minute due to a carrier bug, which you can call to get refunded but is still frustrating. Normally calls go to your iPhone and the voice is forwarded to the watch over Bluetooth, I believe. Basically the Apple Watch more often acts like an AirPod than a cell phone.

I end up carrying my iPhone with my Android phone to avoid this. I mount the iPhone to my bike/scooter when available using Quad Lock waterproof cases.


It's fair to say that with OAuth the provider (the authorization server) can choose whether or not to display a consent screen. For example, when consent has already been granted, the screen can be skipped. Likewise, Google Workspace and other enterprise services that use OAuth can configure in advance which apps are trusted and thus skip permission grants.

Not to say the concern about redirects isn't legitimate, but there are other ways of handling this. Even redirects aren't necessary if OAuth is implemented in a browser-less or embedded browser fashion, e.g. SFAuthenticationSession for one non-standard example. I haven't looked this up in a while, but I believe the OAuth protocol has been extended more and more to contexts beyond the browser - e.g. the code flow, new app-based flows, and even QR auth flows for TVs or sharing prompts.
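The TV flow in particular is the device authorization grant; here's a rough sketch using requests, where the endpoints and client_id are placeholders for whatever a given provider exposes:

    import time, requests

    DEVICE_ENDPOINT = "https://auth.example.com/oauth/device/code"  # placeholder
    TOKEN_ENDPOINT = "https://auth.example.com/oauth/token"         # placeholder
    CLIENT_ID = "my-tv-app"                                         # placeholder

    # Step 1: the device asks for a user code to display (or render as a QR code)
    dev = requests.post(DEVICE_ENDPOINT,
                        data={"client_id": CLIENT_ID, "scope": "openid"}).json()
    print("Visit", dev["verification_uri"], "and enter", dev["user_code"])

    # Step 2: poll the token endpoint until the user approves on their phone or laptop
    while True:
        time.sleep(dev.get("interval", 5))
        body = requests.post(TOKEN_ENDPOINT, data={
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "device_code": dev["device_code"],
            "client_id": CLIENT_ID,
        }).json()
        if "access_token" in body:
            print("logged in")
            break
        if body.get("error") not in ("authorization_pending", "slow_down"):
            raise RuntimeError(body)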

(Note I am not commenting on OpenAUTH, just OAuth in general. It's complex, yes, but not as bad as it might seem at first glance. It's just not implemented in a standard way across every provider. Something like PassKeys might one day replace it.)


> Even redirects aren't necessary if OAuth is implemented in a browser-less or embedded browser fashion, e.g. SFAuthenticationSession

Can you please expand on that or give me some hints on what to look at? I have never heard of this before and I work with OAuth2 a lot.

When I look for SFAuthenticationSession it seems to be specific to Safari and also deprecated.

I always share this article because people overimplement OAuth2 for everything, it’s not a hammer: https://www.ory.sh/oauth2-openid-connect-do-you-need-use-cas...


For browserless, I was referring to a 2019 article that I could have sworn was newer than that, on the need for OAuth 2.1; it also covers how they added OAuth for Native Apps (code flow) and basically a QR code version for TVs: https://aaronparecki.com/2019/12/12/21/its-time-for-oauth-2-...

As for SFAuthenticationSession, again my info might be outdated, but the basic idea is that there are often native APIs that can load OAuth requests in a way that doesn't require you to log in again. Honestly, most of those use cases have been deprecated by PassKeys at an operating system level. There's (almost) no need for a special browser with cookies for your favourite authenticated services if you have PassKeys to make logging in more painless.


Thanks for sharing!

I agree that passkeys would solve all that, but they have their own set of problems (mainly being bound to a device) and they are still very far from being universally adopted.

I’m looking forward to OAuth2.1 - at the moment it is still in draft stage, so it will take a couple more years until it’s done and providers start implementing.

My prediction is that passwords will be around for a long time, at least the next 10 years.


PassKeys are definitely the future; they aren't just device-specific, they can also be synced. https://www.corbado.com/blog/nist-passkeys talks about this, though I'll admit I haven't read anything on the subject yet. But I can say that most implementations of PassKeys seem to cloud sync, including 1Password, Apple, Google, Edge, etc.

I should also add that PassKeys that are tied to devices are like FIDO2 security keys: you should be able to add more than one to your account so that you can log in with a backup if your primary FIDO2 token is unavailable.

Likewise, SSO should ideally be implemented such that you can link more than one social network - and a standard email address or backup method - in addition to the primary method you might use to log in with. It has always bugged me that Auth0, by default, makes it much harder than it should be to link multiple methods of login to an account.


The biggest issue I've seen organisations facing with PassKeys is that neither iOS nor Android requires biometrics to unlock one - this seems like a massive drawback.

Most apps wanting extra authentication implement biometrics that fall back to an app-specific knowledge-based credential like a PIN or password. As far as I can tell, PassKeys on those devices fall back to the device PIN, which in the case of family PCs/iPads/tablets is known to the whole household.

I've seen several organisations give up on them for this reason.


The article by Ory's Aeneas Rekkas perfectly describes the problems with OAuth / OIDC. The only thing it's missing is a suggestion for an alternative protocol for first-party auth. It does suggest that it's preferable to use simpler systems like Ory Kratos, but OAuth / OIDC is a set of protocols, not an implementation. Is there an effort to specify a simple auth protocol for when third-party auth is not needed?


It can vary by implementation need. Can you send a time-limited secret as a login link to someone's email as a replacement for entering or managing passwords? Can you use PassKeys? Or a simple username and password? (Password storage and management is left as an exercise for the reader.)
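For the magic-link case, here's a minimal sketch using only the standard library - an HMAC-signed, time-limited token you could embed in the emailed link (key storage, single-use tracking and actually sending the email are all left out):

    import base64, hashlib, hmac, secrets, time

    SECRET = secrets.token_bytes(32)  # in practice: persist and rotate this server-side

    def make_login_token(email, ttl=900):
        payload = f"{email}|{int(time.time()) + ttl}".encode()
        sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
        return (base64.urlsafe_b64encode(payload) + b"." +
                base64.urlsafe_b64encode(sig)).decode()

    def verify_login_token(token):
        try:
            payload_b64, sig_b64 = token.encode().split(b".")
            payload = base64.urlsafe_b64decode(payload_b64)
            sig = base64.urlsafe_b64decode(sig_b64)
            email, expiry = payload.decode().rsplit("|", 1)
        except (ValueError, UnicodeDecodeError):
            return None
        expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(sig, expected) or time.time() > int(expiry):
            return None
        return email  # treat this address as verified/logged in

    token = make_login_token("user@example.com")
    print(verify_login_token(token))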

Part of the question is - why present a login? Do you need an identity? Do you need to authorize an action? How long should it last?

Generally, today, PassKeys are the "simple" authentication mechanism if you don't need a source of third-party identity or can validate an email address yourself. (Once you implement email validation, it's arguable that email validation alone is a perfectly simple and valid form of authentication; it just takes a bit more effort on the part of the user to log in, particularly if they can't easily access email on the device they are trying to log in on - though even then you can offer a short code they could enter instead.)

Frankly, the conclusion about "how to log in" that I draw today is that you will inevitably end up having to support multiple forms of login - in apps, in browsers, and by email. You will end up needing more than one approach as a convenience to the end user, depending on the device they are trying to sign in on and their context (how necessary is it that they sign in manually vs using a magic link, a secret, a QR code, or just clicking a link in their email?).

I should also note that I haven't discussed security standards here in much detail, probably because I'm trying to highlight that login is primarily a UX concern; security is intertwined but can also be considered an implementation detail. The most secure system is probably hard to access, so UX can sometimes be a tradeoff between security and ease of access to a system. It's up to your implementation how secure you need to be.

For some apps, you can use a web-based VPN or an authenticating proxy in front of your app and just trust the header that comes along. Or you could put your app behind Tailscale or another VPN that requires authentication and never log in users yourself. It all depends on the requirements the app has and the context of the user/device accessing it.


It's probably going to be vendor-specific or you will implement your own auth. At ZITADEL we decided to offer all the standards like OIDC and SAML, and offer a session API for more flexible auth scenarios. You will also be able to mix.


Hot take: that sounds more like a critique of modern AI assistants. My Google Assistant used to be predictable. Progress isn't always progress.


That's not where I thought this was going. I tried using DDG and Kagi and went back to Google. Google had more relevant, fresher results than DDG, and Kagi didn't have the same integration with Google Maps and often returned a smaller set of results for very niche queries. Google is still basically the internet - the entire internet - though in many ways they do still fall short. But for breadth of content indexing and information about local places, Google is still king.


> information about local places, Google is still king.

I try to use OpenStreetMap as much as I can (I have a deGoogled smartphone and OpenStreetMap works well enough) but it is true that Google Maps is better (at the cost of privacy of course).

But in terms of search... I can't remember a time when I tried Google because I couldn't find something with Kagi and then ended up finding it with Google. On the contrary, with Kagi lenses it's often a lot easier to get specific results.

