Something I've noticed is that Gemini through gemini.google.com or through the mobile apps is vastly inferior to Gemini through aistudio.google.com. Much worse handling of long contexts, amongst other things. Very odd that a product that is free (AI Studio use is free) is much better than the product I am paying 20 quid a month for.
I find this to be especially true for the newer models like "gemini-2.5-pro-preview-03-25", so if you haven't tried AI Studio yet, I'd give that a go.
Just be aware you don't get to use all of the unified memory for the GPU. I believe you only get access to ~20.8GB of GPU memory on a 32GB Apple Silicon Mac, and perhaps something like ~48GB on a 64GB Mac. I think there are options to reconfigure that split, but they are fiddly and not without risk.
Great stuff - and training was so cheap too - it would have cost less than $200 on Runpod, and half that price on spot instances.
I guess it's time languages other than Python, especially niche ones, started collating their own language-specific datasets.
Personally, I daydream about an Elixir-specific LLM I could run locally, trained or fine-tuned to respond in an idiomatic fashion, and plug into a tool like Cursor.so.
Are there any examples from the internal dataset of ~80K instruction/answer pairs Phind used to tune this?
With the prevalence of Wayland, I don't think it's correct to say that most Linux users will never encounter problems with NVIDIA. Things have improved recently, but Intel and AMD are still way ahead when it comes to Wayland support.
I have not seen any evidence yet that Wayland is an improvement over the X Window System, whereas for NVIDIA there is a large number of software packages that are known to work and provide useful functionality.
AMD GPUs are very slowly becoming usable with important applications like Blender, but the problems you can run into when trying to accomplish something concrete with those applications on AMD GPUs are far more annoying than Wayland not working well, because you can always avoid Wayland without losing anything.
An approach I moved to from this was using a DB trigger to write jobs directly into Oban's oban_jobs table.
Oban jobs run almost instantly once queued, so there's no perceptible difference in speed, but you get all the resilience Oban provides: your web app can go down, such as during a deploy, and the jobs still catch up reliably.
It's also handy to be able to use the oban_jobs table for debugging purposes.
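In case it's useful, here's a rough sketch of the shape of it - not my actual code, and the events table, queue name and worker module are all made up. It assumes Postgres 11+ and Oban 2.x, and leans on the oban_jobs column defaults (state, attempts, timestamps) to fill in the rest:

```elixir
defmodule MyApp.Repo.Migrations.AddEventsObanTrigger do
  use Ecto.Migration

  def up do
    # Whenever a row lands in "events", write a job row straight into
    # Oban's oban_jobs table. Column defaults set the state to 'available'.
    execute("""
    CREATE OR REPLACE FUNCTION enqueue_process_event() RETURNS trigger AS $$
    BEGIN
      INSERT INTO oban_jobs (queue, worker, args)
      VALUES ('events', 'MyApp.Workers.ProcessEvent',
              jsonb_build_object('event_id', NEW.id));
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;
    """)

    execute("""
    CREATE TRIGGER events_enqueue_job
    AFTER INSERT ON events
    FOR EACH ROW EXECUTE FUNCTION enqueue_process_event();
    """)
  end

  def down do
    execute("DROP TRIGGER IF EXISTS events_enqueue_job ON events")
    execute("DROP FUNCTION IF EXISTS enqueue_process_event()")
  end
end

defmodule MyApp.Workers.ProcessEvent do
  # The queue here has to match the one the trigger writes to
  # and be listed in the Oban config.
  use Oban.Worker, queue: :events

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"event_id" => event_id}}) do
    # Placeholder for the real work; Oban handles retries on failure.
    MyApp.Events.process(event_id)
    :ok
  end
end
```

Because the rows go in as 'available', Oban picks them up on its own, with no app-side Oban.insert call needed.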
Very interesting project, thanks for linking. Was thinking about building something like Oban.Peer a while ago. We're not using Elixir. Might use Oban as an example :-)
I honestly believe SWOS 96/97 (Amiga version) is the greatest football game ever made. I've been playing it periodically ever since, whether through emulators (WinUAE) or bundles and mods like this that add netcode and updated players. I'm not that great though; I played Playaveli (the guy behind this site) online about 12 years ago and he easily beat me - 6-3 if I remember correctly.
With Slots and JS Commands, LiveView now has almost everything it needs to turn Bootstrap, Bulma or especially Tailwind UI into a really fantastic Phoenix/LiveView component library. It probably won't even require AlpineJS...
Although I think 'Declarative Assigns' from the roadmap (already in Surface UI) would make everything a bit nicer, it's not needed.
The Tailwind UI license will unfortunately prevent people's implementations of that from being shared, though.
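To give a flavour of what slots plus JS commands make possible, here's a minimal sketch (not taken from Bootstrap, Bulma or Tailwind UI - the module name, ids and classes are all made up): a modal as a plain function component, with a slot for the body and JS commands for show/hide, no Alpine involved.

```elixir
defmodule MyAppWeb.Components.Modal do
  use Phoenix.Component

  alias Phoenix.LiveView.JS

  # Hidden by default; the caller passes an id and the body via the default slot.
  def modal(assigns) do
    ~H"""
    <div id={@id} class="hidden fixed inset-0 bg-black/50">
      <div class="mx-auto mt-24 max-w-lg rounded bg-white p-6">
        <%= render_slot(@inner_block) %>
        <button phx-click={hide_modal(@id)}>Close</button>
      </div>
    </div>
    """
  end

  # JS commands are plain data, so they compose and run entirely client-side,
  # with no round trip to the server and no AlpineJS.
  def show_modal(id), do: JS.show(to: "##{id}")
  def hide_modal(id), do: JS.hide(to: "##{id}")
end
```

Used roughly like <.modal id="confirm">Are you sure?</.modal>, with a trigger such as <button phx-click={show_modal("confirm")}>Open</button>.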
Style-wise, it reminded me of reading the original PragProg Rails book back in the day.
It's mostly finished. I just saw it's 40% off this week with the code 2025PERSPECTIVES at https://pragprog.com/titles/ldash/ash-framework/