I actually noticed this two days ago with some Bluetooth headphones and my phone.
The volume control on my Android phone was acting just like this when my headphones were connected. When changing the volume on the phone, only a small section of the bottom quarter of the slider actually made a difference, but the volume controls on the headphones themselves behaved "normally".
Usually the phone volume is fine; it only screws up on Bluetooth devices (my speakers + my headphones). I have to use the volume control on the device itself to have any decent control.
This explains the weird behaviour, the phone volume changes are being sent linearly, but the headphone/speaker settings are correct and being set logarithmically.
i.e. somewhere a developer working on the bluetooth integration didn't understand the difference, screwed up and never tested it. That it's happening to both my Edifier speakers and my cheapo headphones probably means it's on the stock Android end (it's a pixel phone).
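That linear-vs-logarithmic mismatch is easy to see in code. Here's a minimal Python sketch (the -60 dB floor is an assumed value; human loudness perception is roughly logarithmic in amplitude):

```python
def slider_to_gain(position, floor_db=-60.0):
    """Map a 0..1 slider position to a linear amplitude gain using a
    logarithmic (dB) taper. floor_db (an assumed value) is the
    attenuation at the bottom of the travel; the top is 0 dB (unity)."""
    if position <= 0.0:
        return 0.0                      # mute at the very bottom
    db = floor_db * (1.0 - position)    # e.g. position 0.5 -> -30 dB
    return 10 ** (db / 20.0)            # convert dB to a linear gain

# A naive linear taper uses the position directly as the gain.
# Halfway up, linear gives gain 0.5 (about -6 dB), which is barely
# quieter; the log taper gives -30 dB, which sounds roughly half as
# loud. With a linear taper all the audible change is crammed into
# the bottom quarter of the slider -- exactly the symptom above.
print(slider_to_gain(0.5))
```

If the phone sends linear slider positions but the device interprets them as if they came from a log taper (or vice versa), you get exactly this "only the bottom quarter does anything" behaviour.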
I've had the same issues as you, and here are some things I've done or tried as a remedy.
Try going into Android's "Developer options" and enabling "Disable Absolute Volume". Some devices cannot handle the way Android maps the system's "master" volume to Bluetooth. With the option enabled you get a separate slider to adjust the Bluetooth volume, and the volume buttons will instead control only the "Media" volume.
An alternative, under the same Developer options, is to change the Bluetooth AVRCP version to at least v1.5 instead of disabling absolute control; AVRCP v1.5 is the version that introduced Absolute Volume control.
But it could also be that your Bluetooth devices simply don't support Absolute Volume, or lack AVRCP v1.5 compatibility. If none of this works, I suggest purchasing the "Precise Volume 2.0 + Equalizer" app. I use it because it gives you fine-grained control over the number of steps in the volume slider (I now have 100 steps, for example), and it lets you calibrate those steps to a specific device, so you can literally change how many steps there are from quiet to loud. It's worth every bit of the $10 it costs, and it has other nice quality-of-life features as well.
Yeah, as someone else has pointed out, it's C#-inspired. Here's a C# example:
public void AMethod() {
    // some code
    using var reader = new StreamReader(thing.GetStream());
    // some other code
    var x = reader.ReadToEnd();
    // some more code not using reader
} // reader is disposed here, at the end of the enclosing scope,
  // even if an exception is thrown after it was initialized
You can still do the wrap if you need more fine grained control, or do anything else in the finally.
You can even nest them like this:
using var conn = new SqlConnection(connString);
using var cmd = new SqlCommand(sql, conn);
conn.Open();
cmd.ExecuteNonQuery();
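For comparison, most languages have the same two forms; here's a Python sketch (self-contained via io.StringIO) of the manual try/finally versus the scoped version:

```python
import io

# Manual form: an explicit try/finally, the "wrap" with full control
# over exactly when and how cleanup happens.
stream = io.StringIO("hello")
try:
    data = stream.read()
finally:
    stream.close()      # runs even if read() raises

# Scoped form, the analogue of C#'s `using` declaration:
# the stream is closed automatically when the block exits.
with io.StringIO("hello") as stream2:
    data2 = stream2.read()
```

The scoped form trades some control for never being able to forget the cleanup, which is the whole point of the feature.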
Edit: I hadn't read the whole article; the JavaScript version is pretty good!
That's also labor and capital costs though, not just raw ingredient costs.
It's also a byproduct of minimum wage: the time spent making the burrito, prepping the kitchen, cleaning up, processing the order, doing the accounts, etc. all adds up, meaning a single meal has a minimum cost roughly proportional to the minimum wage.
Though don't get me wrong, minimum wage is an overall good imo.
However if we had universal basic income instead, and thus could scrap minimum wage, you might see the price of a burrito drop.
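Back-of-the-envelope, with entirely made-up numbers, that labor floor scales directly with the wage:

```python
# All figures are hypothetical, just to show the proportionality.
minutes_per_burrito = 6        # prep, cooking, service, cleanup, admin
wage_per_hour = 15.00          # assumed minimum wage

labor_floor = wage_per_hour * minutes_per_burrito / 60
print(f"Labor floor per burrito: ${labor_floor:.2f}")

# Double the wage and that floor doubles with it, regardless of what
# the ingredients cost.
```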
When you say fast refactoring, etc., can I ask if you've used Visual Studio (and, I assume, the Java equivalents)?
They have had fast refactoring for over a decade: extract method, boilerplate generation, changing function signatures everywhere, suggested refactorings based on language improvements, one-click refactoring from linting suggestions, etc.
Without AI.
I'm just confused as the stuff you've mentioned already existed and doesn't need an AI to do it.
So can it do something more or are you using AI tokens to do something most decent IDEs can already do without AI?
They're pretty good at following direction. For example you can say:
'Use React, TypeScript, Material UI, prefer functions over const, don't use unnecessary semicolons, 4 spaces for tabs, build me a UI that looks like this sketch'
I hear you - but I had already read through the chain of thought which identified the right region before search, and had already seen the capabilities in many other rounds. It was self-evident to me that the search wasn't an essential part of the model's capabilities by that point.
Which turned out to be true - I re-ran both of those rounds, without search this time, and the model's guesses were nearly identical. I updated the post with those details.
I feel like I did enough to prove that o3's geolocation abilities aren't smoke and mirrors, and I tried to be very transparent about it all too. Do you disagree? What more could I do to show this objectively?
I think the UX of ChatGPT works because it's familiar, not because it's good. It lowers friction for new users but doesn't scale well for more complex workflows. If you're building anything beyond Q&A or simple tasks, you run into limitations fast. There's still plenty of space for apps that treat the model as a backend and build real interaction layers on top, especially for use cases that aren't served by a chat metaphor.
I wouldn't call it familiar; it's a weird quasi-chat. They didn't even do the chat metaphor right: you can't type more while the AI is thinking, nor can you really interrupt it when it's off over-explaining something for the 20th time, short of just stopping it.
It's missing obvious settings, and it has a weird UX where every now and then mysterious popups appear, like 'memory updated', or it spews random text while it's "thinking", or it asks you to choose between two answers; but I'm working, so no thanks, I'll just pick one at random so I can continue.
People had copy-pasta templates they dropped into every chat with no way of saving them. Then they added a sort of ability to save that, but it worked in an inscrutable and confusing manner. Then they released new models that didn't support it, so you're back to copy-pasta. Blurgh.
It's a success despite the UI because they had a model streets ahead of everyone else.