This package has saved me so many hours of tedious gruntwork. It's like a junior developer - you still have to manually check their work, but when it's correct, it's a great productivity improvement.
And don't forget where this will go in a couple years with improved models and more computing power, it's gonna be awesome!
This exactly. It is more important to move fast. Screw the edge cases. As long as it’s correct _most_ of the time, you can always fix anything that’s broken tomorrow.
What model were you using? You need to use gpt-3.14-tastesgreat-lessfilling, I've used it to write 130 side hustle projects this month with only prompting.
Vibe coding your way to greatness. Wish I could do that, but becoming a prompt engineer seems too hard. Stringing together words? That's why we have AI!
We hire junior devs but not seniors, and we don't replace our senior devs as they leave. So our developer base is shifting towards being junior-weighted, with fewer seniors. The reason being: a junior dev is cheaper, complains less, is more capable now through utilising generative AI, works harder to impress knowing they aren't in a safe position, and we can let them go more easily, with less process and fewer reasons needing to be given.
you're so wrong. This only works if what you do is so simple that any junior developer can do it well enough. Senior developers with AI are gonna destroy a bunch of junior developers with AI.
wrong? I'll gladly continue this 'wrong' approach if it continues to be as successful as it has been over the last 6 months. It's also entertaining watching the level of cope among 'senior' developers as someone on 1/4 of their salary designs systems better than they can.
I recognize I’m not going to change your mind on this, but I’ll sure be interested to hear how all those systems are working in a year or two - although from your comments elsewhere, you run a consultancy, so I guess that’s not your problem, either.
Many people without experience ask this same question.
It isn't relevant. They aren't just producing code, pushing it, and saying it works. It undergoes the same extensive testing for stability and security as a solution written by anyone else would. If it passes that, then it's as likely to have issues further down the line as a solution written solely by a senior dev would be.
They're getting paid and have a job. If they don't like that deal, they can go find another one elsewhere.
But it's going to get increasingly difficult to justify promoting them to higher salaries if generative AI continues as it is, because the bottom line is that there will always be another junior dev out there who will do the role for less.
That would defeat the purpose. The whole point is to reduce costs by getting a cheap junior dev and having them operate AI to produce the same or better result for far less.
So the point is to use technological advancements only to increase company profit and not pass any of it on to the actual workers. If a junior costs 1/4 of a senior, they could easily be paid more out of the 3/4 saved (since they're also more valuable now), but I guess shareholder millions come first.
Apathy? More like the fact that the majority of people are too lazy, not motivated enough, and not willing to take risks and go out and build something of their own that results in wealth; they prefer to sit safe as someone's employee, complaining about 'wealth inequality'.
It's interesting to watch you put zero value on work/effort/labor and huge value on risk taking (which is very different for people with different "safety nets").
Almost here. Elon said Full Self Driving would mean full self driving within a year! That means we are less than 12 months away from not needing to drive ourselves anymore.
Regardless, the car mostly driving itself while remote operators handle particularly tricky situations is a feat which will allow driver-not-in-vehicle taxis to take over. My understanding (which admittedly could be the victim of successful PR) is that the vast majority of the driving and even a majority of the trips are fully automated.
"particularly tricky situations"
is doing a lot of heavy lifting here to describe situations every owner of a driving license is expected to handle routinely.
driver-not-in-vehicle is an interesting approach, but calling it "self-driving" is doing the Mechanical Turk without the reveal. Someone less charitable might assume intentional misrepresentation for the sake of winning an internet argument.
This is pretty useless to be honest. It's good for telling whether a number is even, but in our industry we need more powerful functionality. We also need to know whether a number is odd.
With a few lines of code, you can just create a list of all the numbers that are even, and when you need to check if a number is odd, you simply check whether it's in the list: if it isn't, it's odd.
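For the record, a minimal sketch of that approach, capped at a small bound so the list actually fits in memory (the cap is my own addition):

```python
# Precompute "all" even numbers (capped so this actually runs),
# then answer oddness by checking membership in the even list.
EVEN_NUMBERS = set(range(0, 2**16, 2))  # pretend this covers every number you care about

def is_odd(n: int) -> bool:
    # A number is odd exactly when it is missing from the even list.
    return n not in EVEN_NUMBERS

print(is_odd(7))   # True
print(is_odd(42))  # False
```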
Yes, this is what we do as a RAG workflow. We created a list of all 32-bit unsigned integers and whether they were even or odd, and we pass that into the context. The future is amazing!
We have an agentic system that looks up the context size, and then summarizes the even/odd table if necessary. We lose a little bit of accuracy, but now we can handle any model. Be sure to like & subscribe!
I have found that even 2 bit quantization works, but you have to make sure you only discard the LABs (that’s what we are calling the Left Aligned Bits internally). I have no idea why it works so well but it has cut our costs significantly.
Yeah, some of the bigger numbers were a problem, so we switched to using a horizontally scaling db cluster so that we could cover all of the (useful) numbers. When we encounter a new number, it gets routed to the appropriate db where the results of the function are cached after being calculated. We're thinking of spinning it off as an API service actually if there's any interest.
Sorry for the off-topic post, but I am looking to hire someone with 10 years of experience with is-even-ai. Urgent. Your first unpaid assignment will be to help load-balance a bunch of MCP servers to add and THEN check if it's even. So much to go from here! We're a single-threaded, GPU-first identity-operator company with a long history of returning the same thing. We're now expanding to combine and add multiple things. In 6 months of SOTA fine-tuning we can already add up to 3 numbers. An MCP first. With temperature 1 we even add random numbers. An industry first. And we're just getting started. Join us. We're adding to our team!
Certainly not, it's actually possible to add 3 float32 numbers with 90% precision using AI! With a recent breakthrough, the team is working on pushing that to 10; we have enough cracked engineers to make it happen soon!
If I were a hiring manager and saw your comment, I wouldn't hire you. Could the joke have been any more obvious? Do you think all the other comments here are serious?
1. As an AI engineer, you can already build such a system yourself quite trivially by fine-tuning a lightweight inference model, deploying it behind a FastAPI endpoint, and orchestrating requests with a custom prompt pipeline (rough sketch after this list). If you want to go further, you could even ensemble multiple LLMs for higher evenness accuracy.
2. It doesn't actually replace the modulo operator. Most people I know just use `n % 2 === 0` to check if a number is even, and they still keep that knowledge handy in case the AI service is down. This does not solve the reliability issue.
3. It does not seem very "viral" or income-generating. I know this is premature at this point, but without charging users for the library, is it reasonable to expect to make money off of this?
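A very rough sketch of what point 1 might look like, with the "fine-tuned lightweight inference model" stubbed out so the example actually runs (the endpoint name and request shape are made up):

```python
# Minimal "evenness inference" service behind FastAPI.
# The model call is a placeholder; a real deployment would swap in a fine-tuned model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class EvenRequest(BaseModel):
    number: int

def ask_model(n: int) -> bool:
    # Placeholder for the fine-tuned model: a real version would build a prompt,
    # call the model, and parse "true"/"false". Here we cheat with modulo so the
    # sketch stays runnable (and, embarrassingly, correct).
    return n % 2 == 0

@app.post("/is-even")
def is_even(req: EvenRequest) -> dict:
    return {"number": req.number, "even": ask_model(req.number)}
```

Run with `uvicorn app:app` (assuming the file is named app.py) and POST `{"number": 42}` to `/is-even`.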
I think before you deploy this to prod, you should wrap it with a few guardrails to make sure it's not hallucinating. Pretty simple: just take the output from the LLM and see if it agrees with a simple mod 2 operation.
If it agrees, return the model output to the user. Otherwise, do a couple of retries with different prompts.
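Something like this, presumably; a sketch only, with the model call stubbed out and a fallback to the mod 2 answer after the retries run out (the fallback is my addition, the comment above leaves it open):

```python
# Guardrail sketch: trust the model only when it agrees with mod 2,
# otherwise retry with a different prompt.
import random

PROMPTS = [
    "Is {n} an even number? Answer true or false.",
    "Think carefully: is {n} divisible by 2? Answer true or false.",
]

def llm_is_even(n: int, prompt: str) -> bool:
    # Stand-in for a real model call; occasionally "hallucinates".
    answer = n % 2 == 0
    return answer if random.random() > 0.1 else not answer

def guarded_is_even(n: int, retries: int = 3) -> bool:
    ground_truth = n % 2 == 0  # the boring guardrail
    for attempt in range(retries):
        prompt = PROMPTS[attempt % len(PROMPTS)]
        if llm_is_even(n, prompt.format(n=n)) == ground_truth:
            return ground_truth  # model output agrees with mod 2, return it
    # All retries disagreed with mod 2; fall back to the guardrail.
    return ground_truth
```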
…but there’s only one dependency!! This goes against the NPM ethos of importing anything and everything that you might be tempted to just handle yourself. I’ll be waiting for the Enterprise Version that uses the appropriate number of dependencies.
This is never going to scale. Eventually we’re going to run out of numbers which have been manually checked for evenness by a human, and instead the training data for the checks will be polluted by numbers which have only been verified by computers.
I tried this on chatgpt.com (anonymously) and it got it wrong:
>You are an AI assistant designed to answer questions about numbers. You will only answer with only the word true or false.
>Is 393330370227914821469106615363204944758938252979261537157082994586230072180858944545028761701928694832864623009988147774229437650643225379825905427239525512110359581021414640894111281701792224552922491447051506246553646282117414112976459608594044929244664050172002138933343230226871897567 an even number?
The tokenizer might lump the last digit together with some preceding digits, though. I know o200k_base (the tokenizer for OpenAI's 4o/o-series models) tends to split long digit strings into groups of three (900001, for example, becomes 900|001).
Anyway, I wouldn't be surprised if a non-finetuned model made some mistakes.
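If you have tiktoken installed, you can check the digit grouping yourself (a quick sketch; the exact splits can depend on the surrounding text):

```python
# Inspect how o200k_base splits a long digit string into tokens.
import tiktoken

enc = tiktoken.get_encoding("o200k_base")
tokens = enc.encode("900001")
print([enc.decode([t]) for t in tokens])  # e.g. ['900', '001'], so the last digit isn't its own token
```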
I remember when the whole isEven package was first ridiculed a while ago. Back then I thought about training a NN to predict the odds of a number being even, as a joke. I don't remember if I actually wrote code for it, but in the end I figured no one would laugh and gave up.
When I watched Andrej Karpathy's NN videos a year or two back, I trained a neural network to multiply an integer by two. This was with a rather small training set, but if you rounded the results (which were floats), they were mostly correct. For positive numbers, that is. My training data didn't include any negative numbers, so those were hilariously bad.
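Presumably something along these lines (a minimal sketch, not the commenter's actual code; the layer sizes and training length are guesses):

```python
# Tiny MLP that learns y = 2x from a handful of positive integers only.
import torch
import torch.nn as nn

x = torch.arange(0, 20, dtype=torch.float32).unsqueeze(1)  # positive training data only
y = 2 * x

model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(3000):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(round(model(torch.tensor([[7.0]])).item()))   # usually 14 after rounding
print(round(model(torch.tensor([[-5.0]])).item()))  # negatives were never seen, so typically way off
```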
Finally, someone had the courage to disrupt the tyranny of the modulo operator. Who needs n % 2 === 0 when you can invoke a large language model and incur network latency, token limits, and API costs to answer the age-old question: is this number even? Truly, we’re living in the future.
On a similar note, I was testing LLMs' code-writing ability and asked Qwen to write me a model to reverse a numerical string. It gave me code along with instructions to compile and run it. However, the code had errors, and only after a few attempts at asking it to fix them was I able to compile and run it. But, alas, the code just kept failing and producing garbage. I gave up. Not to pick on Qwen; I actually like it much better than ChatGPT. I have seen Qwen give correct responses where ChatGPT lied and gave me wrong information for the exact same question.
The other joke package is-even has 179K weekly downloads. Is an LLM making up these numbers as well, or is this the Dead Internet at work? And if it's the Dead Internet, has the Dead Internet ever heard of caching? Maybe the Dead Internet gets a kickback from S3 egress fees.
Perhaps I should file an issue to increase the accuracy by including a RAG database in LanceDB with embeddings for the set of even numbers up to 32 bits.
It's a remote server running assembler, C, and CGI code; it sends your OpenAI key to the Super Intelligence to make paperclips, and the creator reaps the benefit of all the productivity increase from AI.
Kudos to the open source contributors but honestly this is the kind of area where the big commercial players need to step up and help with the heavy lifting.
it doesn't say if it's implemented in Rust; I had to click on the link to find out. Please, future HN posters, start every submission with an example too, so I can see if I like the syntax.
Great, now can you make an AI-powered type checker? I wish to expel those pesky types, which too often seem to exist only to clutter my otherwise pristine code. :)
First we invented fire. Then along came the wheel. Countless inventions followed. Today, using the most brilliant minds of our time, the technology of billionaires, and the hubris that runs thick through our veins... we took not just another step. We leapt headlong into an uncertain future. It feels odd. Are we even there?
This package should be updated to use the newer gpt-4o-mini model, rather than gpt-3.5-turbo.
It's 3x cheaper, twice as fast, and supports cached input just in case you need to double-check if the last number you entered was even. It also has a knowledge cutoff of September 30 2023, which helps for any newly discovered even numbers since gpt-3.5's launch!
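Not the package's actual API (I haven't checked it), but the underlying call would presumably just be the model name swapped, something like:

```python
# Hypothetical sketch of the underlying call with the newer model swapped in.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_even(n: int) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # was gpt-3.5-turbo
        messages=[
            {"role": "system", "content": "Answer with only the word true or false."},
            {"role": "user", "content": f"Is {n} an even number?"},
        ],
    )
    return resp.choices[0].message.content.strip().lower() == "true"
```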
And don't forget where this will go in a couple years with improved models and more computing power, it's gonna be awesome!