> I had a lot of interaction with a startup incubator you know well, and ended up sitting in the discussions and planning around banning and erasing a young programmer we considered a threat to our financial interests, due to his concerns about authoritarianism in technology.
And… this wasn’t immediately seen as being deeply unethical and downright evil? Most of the western world punched Nazis on purpose 80 years ago… that doesn’t get to stop, because authoritarianism never goes away.
>In retrospect, he was harmless, but an example had to be made.
Wow. Just… wow. To destroy a life simply out of greed, and because a person’s passion for fighting evil made you uncomfortable.
You need to understand you are very much the bad guy, here.
> And… this wasn’t immediately seen as being deeply unethical and downright evil?
Oh, these people know it's evil - from the second they suggest it. The personality types associated with SV leadership are simply wired for rationalising evil as a means to any business end. It's just implicitly accepted that it's an appropriate tool to be wielded.
You're the weird one who "just doesn't get it" if you push back. Push back hard enough and you'll quickly find yourself on the receiving end.
Except… science has shown this to be true. Even after a year-plus of work, less than 2% of devs work faster or more efficiently with AI than without it. And for almost 90% of the remainder, regression analysis strongly indicated that none of them would ever be better with AI than without it, regardless of how much practice they had with it.
These general results have come up in study after study over the last few years, with very consistent patterns. And with AI becoming more hallucinatory and downright wrong with every generation - about 60-80% of all responses with the latest models, depending on the model being examined - the proportion of devs able to wrestle AI into creating functionally viable work faster than they could do it themselves has also decreased slightly.
> Essentially, the amount of human labour needed to identify and correct these AI hallucinations is greater than the human labour saved by deploying the AI. As such, AI isn’t even a widely viable option for augmentation, let alone automation.
I have been saying variations of this across all social media platforms for the last six months, and every time I get savaged by tech bros. The pro-AI ideology is absolutely insane.
As nice as this app is, a subscription for something that has zero online components, and therefore no ongoing costs that need covering, goes a step too far for me.
I have zero issue with a one-time payment, and have no problem justifying paying for major version updates. But a subscription just because?? Sorry, no. IMO that’s Adobe-level scummy.
I know you need a revenue stream. But the only justification I can accept for a subscription is a compelling online component that has out-of-pocket costs for you, directly. Even if there is a profit margin involved.
It’s why I pay Bitwarden for family syncing of passwords, and Sophos for the dashboard that can let me monitor up to 10 machines across the family, and Apple for that data insurance package they call iCloud that can cover up to 6 family members. These all have compelling online components that cost the provider serious money and which need ongoing revenue to cover.
I appreciate the feedback. The subscription model covers ongoing development, bug fixes, OS compatibility updates, new features, and yes, my time maintaining and improving the app. I understand the preference for one-time purchases, and it's something I'm considering for future pricing options. That said, the subscription model allows me to provide continuous improvements rather than holding features hostage for paid upgrades. I respect that it's not for everyone, and I appreciate you giving the app a try!
I actually don’t know how many books I have. I only know that it is somewhere north of 3,000, as I had hit some sort of buffer overrun within the BookCrawler iOS app somewhere just shy of 3,000 volumes.
Back in 2015.
Now, it still works just fine as a reference app to tell me what I have obtained; it’s only search and stats that have done a thorough dirt nap. And the scanner function, which can’t seem to handle iPhones with zoom lenses, like the 5× on the iPhone 15. It seems to be stuck on the 5× zoom and misreads almost all UPC codes regardless of lighting, spawning bad entries. I’m forced to enter all ISBN codes by hand.
But the exportable database… has more than doubled since things went sideways in 2015. So yeah. If there is a linear relationship there…
This won’t happen - at least, not successfully and without the company collapsing - because AI is bullshit.
To me it is four things in particular:
1. How AI use erodes skills in the subject AI is being used to assist in. This is a 100% occurrence, and has been demonstrated across all industries from software developers to radiologists. Most experience a 10-20% erosion in their skill set within the first 12 months of AI use, but others in the study groups have seen up to a 40% erosion in their skill sets.
2. How AI use shuts down critical thinking, and makes users more stupid. This is a 100% occurrence, and has been clearly demonstrated by MRI scans of the prefrontal cortex while users are actively using AI.
3. How AI use makes the user slower. This is the only point without 100% coverage, as slightly less than 2% of the most senior and skilled users show a slight increase in work completed… after more than 12 months of using AI. Projections have been made for the other 98%, and almost all of them will likely never work faster with AI than without it, regardless of training or experience.
4. The gratuitous hallucinations, which are only increasing in scope and severity with every AI generation. They arise entirely from the constraints the AI is rewarded under - providing no answer is weighted just as negatively as a wrong answer - and depending on the model being examined, anywhere from 60-80% of all responses are hallucinatory or incorrect in some fashion.
60-80% of all responses. That’s bad.
In prior decades, any corporate solution with such performance would be laughed clear out of the boardroom. You cannot build a business where a majority of output is downright wrong or false. A political movement, fine; conservatism seems to be flourishing worldwide with this “feature” as a core advantage. But businesses?
But because capitalism is desperately seeking a solution to what they perceive as a problem - how to obtain labour without having to pay said labour - AI is being adopted hand-over-fist.
After all, the underlying purpose of AI is to allow wealth to access skill while removing from the skilled the ability to access wealth.