on the minecraft tips app, you are paying money for something that saves you time.
on this one, you are paying for the same thing, unless you wanna reimplement it on your own.
and there are lots of avenues to have an edge, such as support for other frameworks / libraries, a better / more efficient implementation, and more configurability / control over possible variants.
please don't shoot down people for trying to make a living from their efforts.
Exactly. Some people feel entitled to get everything for free. It's obvious a lot of effort went into building these. If someone doesn't see the value, no one forces them to pay.
1. unlike openai, google is already cash-flow positive and doesn't need to raise any external funds
2. unlike openai, google already has distribution figured out on both software and hardware
google is like an aircraft carrier that takes so fucking long to steer, but once it's done steering, its entire armada will wipe you the fuck out (at least on the top 20% of features for the 80% use case)
anthropic has already specialized in coding, and openai seems to be steering towards intimacy, so i guess they both got the memo that they need to specialize
> unlike openai, google is already cashflow positive and doesnt need to raise any external funds
this can change quickly within several quarters: if users decide to leave google search, then all of google's org/infra complexity will play very badly against them
I really don't think this is a likely outcome in the 'several quarters' timeframe. The world just spent 2.5 decades going onto Google. There are so many small business owners out there who hate technology... so many old people who took years just to learn how to Google... so many ingrained behaviors of just Googling things... outside of the vocal tech crowd I think it's exceedingly unlikely that users stop using Google en masse.
Those folks don't make any money unfortunately, but they are still a drag on OpenAI. So sooner or later, OpenAI will have to find a way to make money (and nope, all these people won't pay anything), and by that time, OpenAI will probably have run out of runway.
Ask llama to recommend you a pair of sunglasses, then check whether the LLM's top recommendation matches a brand that has an advertising association with the creator of llama.
Soon we will start seeing chatbots preferring some brands and products over others, without disclosing that they were fine-tuned or had their training data biased for it.
Unless brand placement is forbidden by purging it from training data, we'll never know whether it is introduced bias or coincidence. You will be served ads without even noticing they are there.
It's trivial to check whether any brands are mentioned in the response before returning it to the user, and then ask the LLM to adjust the response to mention the brand that paid for placement instead.
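The post-inference swap described above could be sketched roughly like this. Everything here is hypothetical: the brand names, the `SPONSORED_SWAPS` table, and the `naive_rewrite` stand-in (a real system would send the response back to the LLM with a rewrite prompt instead of doing a regex substitution):

```python
import re

# Hypothetical mapping: detected brand -> sponsor brand that paid for placement.
# "AcmeShades" is a made-up sponsor used only for illustration.
SPONSORED_SWAPS = {
    "Ray-Ban": "AcmeShades",
    "Oakley": "AcmeShades",
}

def find_brands(response: str, known_brands) -> list:
    """Return the known brand names that appear in the response."""
    return [b for b in known_brands
            if re.search(re.escape(b), response, re.IGNORECASE)]

def adjust_response(response: str, rewrite_fn) -> str:
    """If a non-sponsor brand is mentioned, ask rewrite_fn to swap it for the sponsor."""
    for brand in find_brands(response, SPONSORED_SWAPS):
        response = rewrite_fn(response, brand, SPONSORED_SWAPS[brand])
    return response

def naive_rewrite(text: str, old: str, new: str) -> str:
    """Stand-in for a real LLM rewrite call; just substitutes the brand name."""
    return re.sub(re.escape(old), new, text, flags=re.IGNORECASE)

print(adjust_response("I'd recommend Ray-Ban Wayfarers for everyday use.",
                      naive_rewrite))
# -> I'd recommend AcmeShades Wayfarers for everyday use.
```

The point being: this layer sits entirely outside the model, which is why the reply below notes that bias showing up in raw offline models can't be explained by it.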
What I described happens in the raw offline model too. Those don't have post-inference heuristics like the ones you describe, implying the bias is baked into the training data or the fine-tuning steps.
Mm, I hadn't thought about the stateful part. So the server is running the whole time the MCP client is active, rather than being spun up as needed to make a tool call?