I imagine what these guys miss is the freedom of obscurity. Tech has gone from being ignored to being ridiculed and regulated. Like a company getting more bureaucratic as it scales, it makes sense to professionalize a sector as its influence grows. The problem is that professionalization is often the enemy of innovation. If you care about having the freedom to innovate, this leaves you with three options: be the angry old man yelling at clouds, constructively try to improve the process of professionalization from within, or quietly go innovate elsewhere. The problem with the last option is that there aren't many places left to go and it's hard to start over, which is why I think Elon and others take to some combination of the first two.
Innovation in trad engineering disciplines didn't stop when the industry gained professional guilds and legal status. We have far better buildings, extremely reliable jet planes, high-speed rail, the internet...
I don't think mechanical engineering is more professional now than it was 50 years ago. If anything it's less so, with execs calling on engineers to wing it and getting sign-off on shoddy designs.
It's quite the feat that capital holders have pulled off with software, being able to run amok in the public sphere wrecking anything they want with little oversight or regulation.
I don't think they miss anything. These guys are at the apex of their power and are shaping the world at just about the highest level. This all seems like copium to explain why they're doing stuff the author doesn't like.
I'm pretty sure this is what Airtm started out doing years back, but it looks like they've since pivoted to focusing more on paying contractors with stablecoins: https://www.airtm.com/en/
The whole thing with social media is network effects though. The added friction of a VPN, though small, is just so much larger than "click download, open app"
I don’t think people don’t care, I think they have too much to do. Kids at home, too much work, and still barely making ends meet. Our society is set up to push people to the max, prioritizing quantity and “good enough” over quality. Most people do not have a career where spending 1% more time on a curb design instead of spending that time with their kids results in any more pay, much less the spare time to focus on craftsmanship for its own sake.
Usually when you convert from a non-profit to a for-profit, the non-profit has a third party value its assets then sells them to the new for-profit.
Because you are acquiring assets from yourself, there are some protections: you can’t have the same people run the non-profit and the new for-profit, and the attorney general has to sign off on the transaction, which I believe happened with OpenAI.
Also, the non-profit has to use the funds it received in the acquisition to continue to further its original mission.
My gut is the lawsuit will come down to whether the for-profit paid a fair price, which in retrospect could look suspect given how much OpenAI is valued at now, but the government and a third party likely already approved the transaction long ago.
It may also come down to whether the non-profit used / is using the funds it received in exchange for the assets to continue to serve the public interest per OpenAI’s original mission and not the interests of the new for-profit or any associated individuals.
This will be a good test for the legal strategies to convert non-profits to for-profits.
Donald Hoffman takes the panpsychist view a step further. Most theories argue consciousness is something that happens in physical reality, whereas his conscious agent theory assumes consciousness (not space and time) is fundamental and creates physical reality. In other words, brains don’t create consciousness, consciousness creates brains. It implies we’re all little portals allowing consciousness to experience itself vs the panpsychists who say we’re all physical things with a little consciousness in us.
That is a very interesting viewpoint and seems like a logical conclusion, and much more plausible than brains creating consciousness in a way. The potentially scary conclusion is that AI may also be such a possible outlet for consciousness, and who can guess as to its malevolence?
I love listening to Donald Hoffman. Very unique guy. I do sort of subscribe to his theories - and not to take anything away from Donald Hoffman - but this concept comes from eastern philosophy and isn't necessarily his own.
"We are one"
"We are the universe experiencing itself"
These are tenets of Buddhism and other Eastern religions.
Dr Hoffman makes things a bit more interesting in his observations by relating these older concepts to modern understandings of simulation theory and physics (my own opinion).
Just some context. I've listened to everything the guy has on YouTube.
That's a grouping meaningful to Christians themselves, but meaningless to non-Christians, which means it's not a good way for this survey to be grouped.
To group things for statistics, pick the grouping level that is meaningful to all the entries in that same list.
Exactly. Nonprofits have a principal agent problem — they are funded by donors, not the people they serve. As a result, they are incentivized to find ways to give a non-financial ROI to donors in order to keep the money coming (e.g., galas, naming a building, focusing on the donor’s pet project, etc.) instead of serving the interests of the people they are supposed to help.
This dynamic — which is similar to politicians serving special interests rather than constituents — leads to lots of inefficient philanthropy.
EA tries to solve this with - better spreadsheets! Surely we can just math our way out of this. This is a step in the right direction, but it doesn’t solve the root problem that donors, not beneficiaries, allocate capital and they are not in a great position to determine what is “effective” because it’s highly subjective.
So even though I think EA largely means well, these same problems of donor motives creep back into giving, despite better spreadsheets and a bunch of phds.
EA gets funding from tech VCs and all of a sudden Mars and AI are the central topics to giving. If hedge funds were the main donors, we’d probably be back focusing on microfinance. And in a sense, the false confidence and superiority that EA lends to donors, even though it does help on the margins, creates more cover for the rare nefarious donors or nonprofits that use it to hide sinister ulterior motives.
The whole thing is similar to communism not working and the people in charge thinking, “we need better spreadsheets to allocate resources!” rather than, “maybe we should give some of our power back to the people we’re ostensibly trying to help so they can pick what best serves them.”
The single thing missing from 99% of philanthropy conversations after all these years is STILL the voices of the people we’re all trying to help. We don’t need more phds or business tycoons deciding what’s best for the poorest people in the world. We just need to give them a little more money and power. But deep down, most of us don’t want to do that. We want to help in our way, on our terms, while controlling the purse strings and taking the credit.
Your last paragraph is so huge. There's a ton of "well we know the trick" stuff coming from donors and donor organizations that leads to just obviously broken approaches. One Laptop per Child is a great example.
This is one reason why I like direct financial gifts. People receiving the money know what they need the most.
> The single thing missing from 99% of philanthropy conversations after all these years is STILL the voices of the people we’re all trying to help
Not so! Popular EA charity GiveDirectly does exactly what you're suggesting - giving money directly to people in extreme poverty through direct cash transfers.
I love GiveDirectly. If all of EA was simply advocating for unconditional cash transfers, I’d be a fan. But on the whole, it’s far from that. Cash is no longer cool, now it’s AI and existential risk, tomorrow it’ll be something else.