Hacker News | onderkalaci's comments

The PG community had a similar patch, which got reverted from PG 15 at the last minute: https://www.depesz.com/2022/04/06/waiting-for-postgresql-15-...


This would be our preference. We'll try to support this for the next commitfest, and if it gets merged, then we will deprecate our extension in favour of the native solution.


Is there a chance it gets committed during the current commitfest? See https://commitfest.postgresql.org/42/4086/

This year's feature freeze will be on April 8th (https://www.postgresql.org/message-id/flat/9fbe60ec-fd1b-6ee...), so if it does not get committed within the next two and a half weeks, it will miss this year's Postgres release in September...


I'll flag it with the team this week. I'm not sure what the blocker was previously, but it might just be a matter of submitting the patch again (with minor changes) so that it's "in" for commitfest, with someone willing to own the work over the next few months.


I think the things that needed to be fixed from last year are already committed (more general stuff not directly related to the JSON patches). Also, according to this message, at least part of the JSON patches should be committed "...in the next few days..." -- however, that was two weeks ago: https://www.postgresql.org/message-id/454db29b-7d81-c97a-bc1...

I am a bit worried that, even though the patches seem to be "stable", they will miss the deadline... (since I would need those features as well)


I don't know any of the people involved in this patch, so I've sent it to Alexander Korotkov to get his opinion. I'll let you know his response after he has a chance to look at it.


Alexander's response:

> This is a very long story, starting from 2017. This patch should finally be committed. Some preliminary infrastructure already landed in PostgreSQL 16. Regarding SQL/JSON itself, I doubt it will be committed to PostgreSQL 16, because feature freeze is coming soon. It will likely be postponed to PostgreSQL 17.

> Regarding replacement for pg_jsonschema, I don't think it will be a good replacement. Yes, one can construct a jsonpath expression which checks if particular items have particular data types. But I doubt that is nearly as convenient as jsonschema.

It looks like there would still be some benefit to pg_jsonschema, unless the community decided that they wanted to support jsonschema validation. We could propose this, but I don't think it would arrive in pg core any time soon.


That JSON_TABLE feature looks pretty useful [0], but it seems complementary to pg_jsonschema. Can it actually be used to validate a JSON document prior to inserting it into the database?

[0] For my use case, there is a problem with representing a sum type (like Rust enums or Haskell ADTs) as SQL tables. Often you will have a "tag" that specifies which variant the data encodes, and each variant has its own properties. When representing this as a table, you will usually add the fields of all variants as columns, setting non-null only the fields belonging to each row's variant. The reason I insert JSON into the database is really just to represent sum types in a better way.
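As a stdlib-only sketch of that tagged-union check (the variant names and fields here are invented for illustration, not taken from any real schema), the kind of gate pg_jsonschema would provide could look like:

```python
import json

# Hypothetical sum type: the "tag" value determines which fields must be present.
VARIANT_FIELDS = {
    "circle": {"radius"},
    "rect": {"width", "height"},
}

def valid_variant(doc: str) -> bool:
    """Return True if `doc` is a JSON object whose 'tag' selects a known
    variant and which carries exactly that variant's fields (plus the tag)."""
    obj = json.loads(doc)
    required = VARIANT_FIELDS.get(obj.get("tag"))
    if required is None:
        return False
    return set(obj) == required | {"tag"}

# valid_variant('{"tag": "circle", "radius": 2}')  -> True
# valid_variant('{"tag": "rect", "width": 1}')     -> False (missing "height")
```

In table form this corresponds to setting only the selected variant's columns non-null; here the JSON is rejected before it ever reaches the INSERT.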


Well-written article! I'm also very glad to hear about your approach of supporting the native implementation. For all of our projects, when we're integrating external services, we usually keep the relevant original JSON responses as a jsonb, as a kind of backup. Alongside that, we extract the data we'll be using into queryable columns. Being able to use those "dumps" directly would be a nice thing to have.


How can this be used to validate JSON (and prevent invalid JSON from being inserted into the database in the first place)? I don't see "jsonschema" mentioned in that post.

Perhaps Postgres could support jsonschema validation natively.


As far as I can tell, considering that you are planning to migrate a multi-tenant application, you are pretty much ready to go. One thing you should be careful about is making sure that all your queries include the tenantId filter.

There is step-by-step documentation for migrating multi-tenant apps to Citus, which might help you decide how ready you are for a migration: https://docs.citusdata.com/en/latest/develop/migration.html#...

(p.s. I'm the author of the post)


Thanks a lot for that information!


> It doesn't even really seem to me you necessarily want to partition on time since your load distribution is going to be terrible.

I think that's not accurate. The tables mentioned in the post are first sharded/distributed on `repo_id`. Later, each shard is also partitioned on the time dimension (i.e., `created_at`). Thus, the load should be distributed in proportion to the activity of each `repo_id`.
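A toy model of that two-level layout (the shard count, hash function, and monthly granularity are assumptions for illustration, not Citus internals):

```python
from datetime import date

SHARD_COUNT = 4  # assumed for the sketch; real Citus clusters use more shards

def placement(repo_id: int, created_at: date) -> tuple[int, str]:
    """First level: pick a shard from the distribution column (repo_id).
    Second level: pick a time partition within that shard (created_at)."""
    shard = hash(repo_id) % SHARD_COUNT       # stand-in for hash distribution
    partition = created_at.strftime("%Y_%m")  # e.g. monthly partitions
    return shard, partition

# All rows for one repo land on one shard, so load per shard tracks the
# activity of the repos hashed to it; within a shard, time partitions keep
# recent data in small, prunable chunks.
```

The point is that time is the second key, not the first: two events for the same repo always co-locate, regardless of when they happened.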


Thanks for the feedback. I didn't want to change the title of the text.


Is there any documentation on "How BDR works?" or similar?



The title says "How SQLite Is Tested", but I don't think the article talks about "how". It only talks about the different types of tests that they apply. I'm curious about "how" it is done. Do you run a single "test" button that executes all the tests, or are the tests run independently? On which platforms do you test?


There is a testing checklist (https://www.sqlite.org/checklists/3130000/index is an example). Each item on the checklist is usually just a single "button push" (really a shell command). But we have to push that same button on lots of different platforms.


There seemed to be no reason why everyone couldn’t just use the same prime, and, in fact, many applications tend to use standardized or hard-coded primes.

Then, if the prime number is standardized or hard-coded, why not just use it? Why do we need to break it?


It shouldn't be standardized or hard-coded, because someone with the funds (e.g. the NSA) would only need to break the encryption using this 'standardized' number once.

If everyone used a random, very large prime (the spec suggests so), then the NSA would have to break every prime in use, which is currently not possible.


I couldn't find either.


These star counts are very low compared to the ones in the link.


There are plenty of projects out there with thousands of stars - https://github.com/robbyrussell/oh-my-zsh has 25k, and the linux kernel has 25k stars.


>at least by my terms


I don't agree with that. There are many other projects that many web developers use. For instance: editors, databases, linux utility tools, and so on.


Sure, but 100% of web developers need web stuff, whereas not all of them use a database, not all of them use linux, and so on.

