I’m in Turkey, not the UK or US, so I can only speak for here. It’d be like denouncing electricity and saying “join me.” Good luck with that.
It’s not just friends and family. Our office administration basically runs on a WhatsApp group. I just sent a location to a plumber over WhatsApp. They don’t know / use / want anything else.
At best, you might get some close people to use Signal or whatever, but you have to use WhatsApp to function.
Americans especially won’t understand that, because I believe iMessage and SMS are still the de facto standard there. Here, that would be WhatsApp.
I feel fine about AI being used to add features to established software. Let it loose on the Linux kernel for all I care. It still somehow feels icky to use it to build something from scratch.
Ironically, it wouldn't be very useful for Linux kernel development (it would be very hard to fit it into the model's context), while it is more suitable for new projects written from scratch.
That is, of course, without considering quality or anything else.
The other day I was toying with the MCP server (https://github.com/modelcontextprotocol/typescript-sdk). I default to Bun these days, and the HTTP-based server simply did not register in Claude or any other client. No error logs, nothing.
After fiddling with my code I simply tried node and it just worked.
We used a token bucket that allowed, say, 100 requests immediately, but the bucket would actually replenish at 10 tokens per minute or so. It makes sense to allow bursts. This was to let free tier users test things; unless they go crazy, they won't even notice the rate limiter.
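Roughly like this (a minimal sketch, not our actual implementation; the class name and numbers are just examples):

```ts
// Token bucket: allow a burst of `capacity` requests, refill at `refillPerMinute`.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(
    private capacity = 100,       // burst size
    private refillPerMinute = 10, // steady-state rate
  ) {
    this.tokens = capacity;
  }

  tryConsume(): boolean {
    const now = Date.now();
    const elapsedMinutes = (now - this.lastRefill) / 60_000;
    // Top up based on elapsed time, but never beyond the bucket's capacity.
    this.tokens = Math.min(this.capacity, this.tokens + elapsedMinutes * this.refillPerMinute);
    this.lastRefill = now;

    if (this.tokens < 1) return false; // rate limited
    this.tokens -= 1;
    return true;
  }
}

// Free-tier users get ~100 requests up front; sustained use is capped at ~10/minute.
const limiter = new TokenBucket();
if (!limiter.tryConsume()) {
  // respond with 429 Too Many Requests
}
```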
Sliding window might work well with large intervals. If you have something like a 24h window, a fixed window will abruptly cut things off for hours.
I mostly work with 1 minute windows, so it's fixed all the way.
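For short windows, a fixed-window counter is about as simple as it gets. A minimal in-memory sketch, assuming a 1-minute window and a made-up limit of 100 (the abrupt reset at the window boundary is exactly what hurts with a 24h window):

```ts
const WINDOW_MS = 60_000; // 1 minute window
const LIMIT = 100;
const counters = new Map<string, { windowStart: number; count: number }>();

function allow(key: string): boolean {
  const now = Date.now();
  const windowStart = Math.floor(now / WINDOW_MS) * WINDOW_MS;
  const entry = counters.get(key);

  if (!entry || entry.windowStart !== windowStart) {
    // New window: the count resets completely.
    counters.set(key, { windowStart, count: 1 });
    return true;
  }
  if (entry.count >= LIMIT) return false; // blocked until the window rolls over
  entry.count += 1;
  return true;
}
```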
Having a lint rule that disallows empty catch bodies has been a good practice for me. Sometimes you leave one empty deliberately, and even then I'm forced to at least leave a `// ignore` there.
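Assuming ESLint, the built-in `no-empty` rule does this: it flags empty blocks (including catch bodies) but skips blocks that contain a comment, which is what forces the `// ignore`:

```js
// eslint.config.js -- assuming ESLint flat config; adjust for your setup
export default [
  {
    rules: {
      // Disallow empty block statements, including empty catch bodies.
      // A block that only contains a comment (e.g. `// ignore`) is not
      // reported, so deliberately swallowing an error requires a note.
      "no-empty": ["error", { allowEmptyCatch: false }],
    },
  },
];
```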
Absolutely, linter rules that surface these issues are critical to fixing them. And when I can, I add them to the PR linter so it comments on PRs and prompts a discussion.
Whenever I read a blog post assuring me that something is not how it looks, it turns out to be exactly how it looks in the end.
BTW, I don't use Deno and haven't been following any news whatsoever, so this is simply a shitty statement from an outsider. It is interesting that I tried Deno a couple of times but kept using Node until Bun came around, at which point I basically switched to Bun. I can't say why exactly.
Bun has high Node compatibility, with lightning-fast testing and a good, fast built-in package manager. I'd use Bun for local dev even if I was deploying with Node.
Bun is still faster, and Bun's testing is insanely fast -- I had a test suite that would take 30 seconds with Jest and finished in 800ms with Bun. Plus Bun's networking performance is insane compared to Node, and you can handle a lot more concurrent clients on a light VPS (think 1 GB of RAM).
I think you're saying the integration of fast testing and building is what moves the needle for you? Because esbuild is limited to the build portion. I haven't explored fast testing alternatives to pair with esbuild.
Go is safer for networking-heavy / performance-critical services, but for anything that's IO- or upstream-limited, the performance of Bun and Go will be comparable. And there are more people who are proficient with Node, plus it lets people work across the stack with the same tooling.
I agree, also as an outsider. These sorts of "meta" discussions always smell of spin aimed at investors and are usually not good news for customers. Customers generally care about things like the product and long-term reliability and stability. These meta things always have the tone of Monty Python's "Bring out your dead!" segment.
Squid ink is black, and so are teflon pans; the latter feels clunky and superfluous by design, when the former type of fruit can smoothly provide vitamins to vertically integrate with our stack.
Haha yep. I'm Turkish and have been using US-layout keyboards my entire life. Therefore, I don't use the Turkish characters online. I use S for Ş, G for Ğ, and it just works; nobody has ever complained.
There's one word that causes issues, "to get bored":
sık - to bore
sik - to fuck
So if I write "sikildim" to say "I got bored", it actually becomes "I got fucked".
One way around it is to capitalize: SIKILDIM is "I got bored", but now you are yelling. Typing "sıkıldım" is a hassle on a US keyboard, though.
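The capitalization trick works because Turkish has its own case mapping (I ↔ ı, İ ↔ i), and software only gets it right when it knows the locale. A quick illustration in JavaScript (exact behavior depends on the engine's locale data):

```ts
// Default (locale-insensitive) lowercasing maps I -> i, losing the dotless ı:
"SIKILDIM".toLowerCase();            // "sikildim"  -- reads as "I got fucked"

// With the Turkish locale, I lowercases to ı and İ lowercases to i:
"SIKILDIM".toLocaleLowerCase("tr");  // "sıkıldım"  -- "I got bored"
"İSTANBUL".toLocaleLowerCase("tr");  // "istanbul"
```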
The problem was that Emine's cell phone was not localized properly for Turkish and did not have the letter <ı>; when it displayed Ramazan's message, it replaced the <ı>s with <i>s.
Does it make sense? Could the phone arbitrarily replace characters? Or is it more likely that the guy typed dotted i's?
There might be some truth to it, but it does not make much sense. Technically, ı would probably show up as □ rather than i if the phone had a hard time displaying it.
There is also the suffix not matching that change: sıkışınca vs sikişince. Due to vowel harmony, a becomes e in that suffix when you switch from ı to i. Even if the phone fucked up the ı, "sikişinca" would look weird.
I noticed that countries with Latin script use the Latin-1 encoding for SMS, because they never really needed Unicode. Then when software converts text to Latin-1 or ASCII, there's an option to find the best-match character in the ASCII repertoire; I think in that case ı would be converted to i.
It's not latin-1, and it's not ASCII either. It's 7-bit GSM 03.38 charset, with optional shift tables. You can use either that or UCS-2 (though these days phones use UTF-16 instead), and UCS-2/UTF-16 significantly limits the number of characters that fit into the message (160 vs. 70).
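If anyone wants to see the practical effect, here's a rough sketch. The alphabet below is a simplified subset of the GSM 03.38 basic table (no escape/extension characters, no national shift tables), so treat it as an illustration rather than a real implementation:

```ts
// Simplified GSM 03.38 basic alphabet (partial; a real implementation needs the
// full tables plus the extension and national shift tables).
const GSM_BASIC = new Set(
  "@£$¥èéùìòÇ\nØø\rÅåΔ_ΦΓΛΩΠΨΣΘΞÆæßÉ !\"#¤%&'()*+,-./0123456789:;<=>?" +
  "¡ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÑܧ¿abcdefghijklmnopqrstuvwxyzäöñüà"
);

function smsEncodingInfo(text: string) {
  const isGsm7 = [...text].every((ch) => GSM_BASIC.has(ch));
  return isGsm7
    ? { encoding: "GSM-7", perSegment: 160 } // 160 septets in a single segment
    : { encoding: "UCS-2", perSegment: 70 }; // 70 UTF-16 code units per segment
}

// Dotless ı is not in the basic alphabet, so it pushes the whole message to UCS-2
// (unless the Turkish national shift table is negotiated).
console.log(smsEncodingInfo("sıkıldım")); // { encoding: "UCS-2", perSegment: 70 }
console.log(smsEncodingInfo("sikildim")); // { encoding: "GSM-7", perSegment: 160 }
```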
The one everyone uses is better.
If you don't have a way to move the masses, it does not matter.