
(Caveat on this comment: I'm the COO of OpenRouter. I'm not here to plug my employer; just ran across this and think this suggestion is helpful)

Feel free to give OpenRouter a try; part of the value prop is that you purchase credits and they are fungible across whatever models & providers you want. We just got Sonnet 4 live. We have a chatroom on the website that simply uses the API under the covers (and deducts credits). We don't have passkeys yet, but we do have a good handful of auth methods that hopefully work.


Just wanted to provide some (hopefully) helpful feedback from a potential customer who likely would have converted, but bounced away due to ambiguity around pricing.

It's too hard to find out what markup y'all charge on top of the APIs. I understand it varies based on the model, but this page (which is what clicking the "Pricing" link on the website takes you to) https://openrouter.ai/models is way too complicated. My immediate reaction is, "oh shit, this is made for huge enterprises, not for me," followed immediately by "this isn't going to be cheap, I'm not even going to bother." We're building out some AI features in our products, so the timing is otherwise pretty good. We're not big fish, but we do expect to spend between $3,000 and $5,000 per month once the features hit general availability, so we're not small either. If things go well, we'd love to 10x that in the next few years (but time will tell on that one, of course).


From https://openrouter.ai/docs/faq#how-do-i-get-billed-for-my-us...

> We pass through the pricing of the underlying providers; there is no markup on inference pricing (however we do charge a fee when purchasing credits).


Thanks, that is helpful, although even that only says they charge a "fee" for purchasing credits and then links to this page[1], which isn't very straightforward.

[1]: https://openrouter.ai/terms#_4_-payment


Currently, there is a terrible regression UI bug in OpenRouter (at least in Firefox on MacOS). Previously, while the LLM was generating the answer I could scroll up to the top of the answer and start reading.

For the past couple of weeks, it keeps force scrolling me down to the bottom as new words come in. I can't start reading till the whole answer is generated. Please fix.


Looks good, thanx for the suggestion!

I’m a bit late here, but I’m the COO of OpenRouter and would love to help out with some additional credits and share the project. It’s very cool and more people should be able to check it out. Send me a note. My email is cc at OpenRouter.ai


wow, that would be amazing, sending you an email.

I don't think the project would have gotten this far without openrouter (because: how else would you sanely test on 20+ models to be able to find the only one that actually worked?). Without openrouter, I think I would have given up and thought "this idea is too early for even a demo", but it was easy enough to keep trying models that I kept going until Claude 3.7 popped up.


Thank you for the kind words and email received!


COO of OpenRouter here. We are simply stating that we can’t vouch for the upstream providers’ retention and training policies. We don’t save your prompt data, regardless of the model you use, unless you explicitly opt in to logging (in exchange for a 1% inference discount).


I'm glad to hear you are not hoovering up this data for your own purposes.


That 1% discount feels a bit cheap to me - if it was a 25% or 50% discount I would be much more likely to sign up for it.


We don’t particularly want our customers’ data :)


Yeah, but OpenRouter has a 5% surcharge anyway.


A better way to state it is 20% of the surcharge, then :)


You clearly want it a little if you give a discount for it?


Is this the Will Brown talk you are referencing? https://www.youtube.com/watch?v=JIsgyk0Paic


Thanks for linking; yes, that is the one he also talks about on his blog.


Ohh, that is lovely. I had not seen it. Black does seemingly nothing wrong and is just in a world of hurt by the 12th move.


I have no insight but boy oh boy is this funny and well written. Like prime Dave Barry [0].

[0] https://www.davebarry.com/columns/how-to-make-board.php


I genuinely can’t remember having laughed at well written prose in a long time, thanks for sharing [0]. OP is gold too.

Takes me back to my high school days when I would have to choke down my laughter as I surreptitiously read Cracked.com articles in class


Reminded me of Dennis Lee too

[0] https://foodisstupid.substack.com/p/escargogurt


My God, I haven't struggled to contain laughter in a public place like this before. Thanks.


Reminded me a lot of Steve, Don't Eat It! (modulo twenty years of progress in being less deliberately over the top online):

https://web.archive.org/web/20180712104826/http://www.thesne...


Well that kicks me right in the memberberries. The Beggin, Lettuce, and Tomato sandwich was a classic. https://web.archive.org/web/20180712104821/http://www.thesne...


In the same vein, this had me rolling https://cernius.substack.com/p/finger-lickin-good


Dave Barry was my favorite columnist as far back as age 8 or 9. He's so funny.


I have not laughed this hard in ages. I am in love with this author.


A genuine lol, for both the OP and for [0]


Minor -- the date on the Riot Games video "stop point" is likely wrong. It says 2024, but based on the timeline it should be 2023.


Thanks for reporting!


Many years ago I experimented with making recipes into Gantt charts. For more complex recipes this proved incredibly useful. I spent some time trying to automate turning some of the recipe formats into Gantts, but it was pretty cumbersome. I'll bet a good LLM would make this achievable now.

For an example, here's a gantt chart for Beef Bourguignon:

https://ibb.co/c3TVTnX

Note that when I print it on a (physical) recipe card, I have the 'prose' instructions underneath.

I still think this is a pretty good idea, and I still use the cards for this recipe and for Beef Wellington.
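
To make the automation idea above a bit more concrete: here is a minimal sketch (in Python) of the kind of intermediate data I'd want an LLM, or any parser, to emit from recipe prose, plus a few lines to render it as a text Gantt. The step names, start times, and durations are invented for illustration; this is not the actual Beef Bourguignon chart linked above.

  # Sketch only: hypothetical steps and timings, laid out as a text Gantt,
  # one row per task, one column per 5-minute slot.
  COL = 5  # minutes per column

  steps = [
      # (task, start minute, duration in minutes)
      ("Chop onions & carrots",     0,  15),
      ("Brown beef in batches",     0,  25),
      ("Deglaze pan with wine",    25,   5),
      ("Simmer beef & vegetables", 30, 120),
      ("Saute mushrooms",         130,  15),
      ("Rest and serve",          150,  10),
  ]

  total = max(start + dur for _, start, dur in steps)
  width = -(-total // COL)  # ceiling division: number of columns

  for task, start, dur in steps:
      row = ["."] * width
      for col in range(start // COL, -(-(start + dur) // COL)):
          row[col] = "#"  # mark the slots this task occupies
      print(f"{task:<26}|{''.join(row)}|")

Rows that overlap (like the simmer and the mushrooms) are exactly the parallel work the chart is meant to surface.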


I use a version of this when preparing for the holidays and have multiple dishes all trying to finish at approximately the same time.

I use a vertical grid format, top to bottom, with rows bucketed into 10-minute increments. Columns are different cooking implements, so I can ensure I am not over-allocating space in the oven/microwave/whatever, or overcommitting my ability to manipulate the next dish.

Makes it trivial to assess where I am in preparing everything on game day. It also gives me a historical artifact for the meal, which is unexpectedly neat to reference.


Reminds me of the cookingforengineers.com format to some degree.

Here's an example. You'll need to scroll down to see the actual recipe format. https://www.cookingforengineers.com/recipe/194/Cream-of-Mush...


After reading another comment here about a recipe being an “upside down tree” I now understand what at least this format is trying to accomplish.

It has some really nice properties, but trades off a key feature of the gantt format: your hands can only be doing one thing at a time. With the gantt format it’s very clear what you are supposed to be doing at any time and it preserves the order of operations. It doesn’t express how things are combined, however, which the tree format accomplishes.

My motivation for the gantt format was to prevent getting “meanwhiled” by a recipe. You are chugging along, and think you are in good shape, and come across that dastardly word in a recipe: Meanwhile. Turns out you should have beaten the eggs to a stiff whip 15 minutes ago.


> but trades off a key feature of the gantt format: your hands can only be doing one thing at a time. With the gantt format it’s very clear what you are supposed to be doing at any time and it preserves the order of operations.

In the cookingforengineers.com (COE) format, the order is to do each step in the first column and then move right to the next column and do those steps, etc.


Hmm interesting. But I have to confess I have no idea, intuitively, how to read that format. I’m sure it works once you understand it, but if you need an instruction manual for the format then maybe you’ve lost the plot a bit.


It reads left to right, top to bottom. Ingredients are on the left, and then each step's box wraps around the items involved in that step.

Using the mushroom soup recipe as an example:

(1) Melt the butter.

(2) Wash and dice the onions, celery, and leeks.

(3) Sweat the melted butter from step (1) and the diced onions, celery, and leeks from step (2) together for 6 minutes.


I was having this discussion with a workmate. Where this approach really shines is when you need to arrange a number of recipes (for example, for a dinner party). Being able to put together different recipe modules into a meal, then know when to do each section, would be fun. Though I'm totally overthinking it.


Oh, I really like the idea of using Gantt charts for meals. If you're trying to serve a bunch of hot dishes at the same time and are constrained in terms of keep-warm/crispy/moist options, starting/stopping things at the right time can be critical.


I believe wage-fixing would require companies to agree to…fix wages. Having knowledge of average compensation is not inherently problematic. Firms are still able to decide to pay more or less than the prevailing wage.


RealPage also gave recommendations. Giving a salary range seems like the same thing.


By this logic, employees who share their salary publicly also contribute to wage fixing.


If it’s public there is no information asymmetry. There is no fixing.


How?

Information symmetry doesn't prevent fixing. E.g., rents are all public information.

If it is public, won't employers still have access to the salary ranges for free? The very thing Pave is giving them at a cost?


"I believe wage-fixing would require companies to agree to…fix wages"

They don't need to officially twirl their mustaches and laugh evilly while telling each other how they're definitely fixing wages. They just need to share data on wages with other companies in the same or a similar business with the intent of decreasing wages. That is already illegal, because they're colluding with competitors to keep wages low.


The whole point of getting such data is to ... fix wages.


Do you actually believe they will be doing this?


Amazing! Thanks so much!

