Hacker News | aayjaychan's comments

There is no "one-time" over the network. Invalidating the refresh token immediately when the server receives it is asking for trouble.


it's always wild to me hearing what pressure people use.

on my road bike with 28c tyres and inner tubes, 60 psi is what i use on a _good_ road surface. maybe the roads are just shit here, but even 55 psi feels rough. i usually run around 50 psi, 40 in winter.

there was a time i lost my track pump and just pumped the tyres using a mini pump without a gauge. later i discovered i was running on something as low as 30 psi.

i have never had a pinch flat. i don't think i'm particularly light; full load when doing groceries is probably 85 kg. is it just that my pressure gauge is woefully inaccurate?


navigation properties are not loaded automatically, because they can be expensive. you need to use `.Include(foo => foo.Bars)` to tell EF to retrieve them.

EF tries to be smart and will fix up the property in memory if the referenced entities are returned by separate queries. but if those queries don't return all the records in `Foo.Bars`, `Foo.Bars` will only be partially populated.

this can be confusing and is one of the reasons i almost never use navigation properties when working with EF.
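a minimal sketch of both behaviors, with made-up entity names (`Foo` with a collection navigation property `Bars`):

```csharp
// Without Include: Bars is not loaded.
var foos = db.Foos.ToList();            // foos[0].Bars stays empty

// With Include: Bars comes back in the same query.
var eager = db.Foos.Include(f => f.Bars).ToList();

// "Fix-up": if some Bars are loaded by a separate query, EF wires them
// onto the Foos it is already tracking. Only the Bars that this query
// happened to return get attached, hence the partial population.
var someBars = db.Bars.Where(b => b.FooId == 1).Take(5).ToList();
var partial = db.Foos.First(f => f.Id == 1);  // partial.Bars has at most 5 items
```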


We have those, and when I say inconsistent I mean inconsistent on the same query / exact same line of code on the same database.

e.g. stick a breakpoint, step over, see in the debugger that it was not populating everything it should. Then run it again, do the same and see different results. Exact same code, exact same db, different results.

Of 5000 results back from the db, anywhere between a handful and all 5000 were fully and correctly populated.


If that happens with the correct `.Include()`, you really should raise an issue with EF, ideally with a reproduction. If it's not a random mistake in your code, that's a really big deal.


Like your parent said, the same line of code will or won't populate the navigation property depending on whether EF is already tracking the entity that belongs there (generally because some other earlier query loaded it). You get different behavior depending on the state of the system; you can't look at "one line of code" in isolation unless that line of code includes every necessary step to protect itself against context sensitivity.


Published history in Mercurial is sacred [1]. Modern Mercurial fully embraces history editing, and provides (IMO) better tools than git to facilitate (safe, collaborative) history editing.

[1] You can still change published history if you try hard enough with a lot of co-ordination.

If Mercurial didn't support cleaning up commits, the commit history of Mercurial itself [2] wouldn't look so clean.

[2]: https://foss.heptapod.net/mercurial/mercurial-devel/-/commit...


I drew a different conclusion from similar experience. I avoid navigation properties and other advanced mapping features, so an entity maps flatly to one table.

The LINQ queries will be more verbose as you'll need to write the join and group clauses explicitly, but I find it much easier to predict the performance of queries since the generated SQL will look almost exactly the same as the LINQ syntax. It's also less likely to accidentally pull in half the database with `.Include()` this way.
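a sketch of what that explicit style looks like (entity names made up); the point is that the SQL shape is visible in the query itself:

```csharp
// Flat entities, no navigation properties: the join is spelled out.
var totals =
    from foo in db.Foos
    join bar in db.Bars on foo.Id equals bar.FooId
    group bar by new { foo.Id, foo.Name } into g
    select new { g.Key.Id, g.Key.Name, Count = g.Count() };
// Roughly: SELECT f.Id, f.Name, COUNT(*) FROM Foos f
//          JOIN Bars b ON b.FooId = f.Id GROUP BY f.Id, f.Name
```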


You can use `hg commit --interactive` to use the commit itself as the staging area. And I'd argue that's a better model because:

- It limits the amount of time changes are stored in an intermediate state, making it much less likely to interfere with other operations, like pulling and switching branches.

- You can use the same commands (and mental model) to manage the "staging area" and other commits.

- The history of staging and unstaging becomes actual history and can be recovered and shared.
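a sketch of that model in commands (`hg uncommit` and `hg obslog` need the evolve extension; the file path is made up):

```shell
hg commit --interactive        # pick hunks into a new commit: the "staging area"
hg amend --interactive         # "stage" more hunks into the same commit
hg uncommit path/to/file.txt   # "unstage" a file back into the working copy
hg obslog .                    # the staging/unstaging steps are real history
```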


No reflog, but there is the obslog, which stores obsolescence history of individual revisions. Better yet, the obslog is distributed during pull / push. Because Mercurial knows precisely which commit is replaced by which commit, it can automatically resolve a lot of conflicts that result from history editing.

Have a branch-a that depends on branch-b that depends on branch-c that upstream just rebased and squashed some of its commits? More often than not `hg pull && hg evolve` is all you need to do to synchronise everything. This makes stacked PRs much easier to manage.
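a sketch of the stacked-branch case:

```shell
hg pull            # also fetches upstream's obsolescence markers
                   # (which old commit each new commit replaces)
hg evolve --all    # rebases branch-c, then branch-b, then branch-a
                   # onto the successors recorded in those markers
hg obslog tip      # inspect how a revision was rewritten over time
```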


If Terraform is stateless, how does it know what it needs to / can delete?

You'll either have to:

- Move the state management elsewhere, and invoke different commands depending on what resources changed and how. This will make automation difficult, and doesn't solve the problem.

- Make Terraform assume that everything it sees is under its management, deleting everything not defined in the current configuration. This will make Terraform hard to adopt in an environment with existing infrastructure.


Most (all?) cloud providers support some form of tagging. Have like a `managed-by=terraform` tag, and assume everything with that tag is Terraform managed.
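with the AWS provider, for instance, this can be done once per configuration via `default_tags` (region and tag value are just examples):

```hcl
provider "aws" {
  region = "us-east-1"

  # Applied to every taggable resource this configuration creates.
  default_tags {
    tags = {
      managed-by = "terraform"
    }
  }
}
```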


I'm going to ignore the fact that moving the state elsewhere completely misses the article's point, but:

Two users create exactly the same resource with the same tags.

Which one should be removed by Terraform?

Either way, let's ignore that.

You need to refresh the infrastructure to know what to do. Without the state you'd have to go through EVERY API CALL on every service, even ones you didn't create, to determine the whole state of the infrastructure, which would take extremely long.

Without stored dependencies you would also have to rebuild the dependency tree EVERY TIME you apply the infra.


I don't believe it misses the article's point - I think he's asking the valid question "if the target system has the ability to store all the required state in order to understand mappings between what I want and what I have, why do we need additional state files which always seem to be wrong?"

And a good answer might be "all providers don't have that capability" and/or "providers can't efficiently answer questions about that, such as 'find me all things with this configuration tag'".

In your example, those two users wouldn't have the same tags, because you'd arrange it so that they didn't - either by user or a resource grouping based on the configuration itself. This is the choice made by some other tooling, for better or worse.


There are a lot of resources in AWS that don't support tags.


Do you have a few examples?


Unfortunately, it's just enough to be a problem in many cases:

Route53 records, ECR repositories, Cloudwatch Alarms, IAM user groups, EC2 Launch configuration


Not all of them have tags.


Presumably you'd encode removed resources somehow in the DSL. Maybe a flag like `removed = true`.
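For what it's worth, Terraform later grew something close to this: `removed` blocks (1.7+), which tell it that a resource was deliberately dropped from the configuration. A sketch, with a hypothetical resource address:

```hcl
removed {
  from = aws_instance.example

  lifecycle {
    # true: destroy the real resource; false: just forget it from state
    destroy = true
  }
}
```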


I'll consider this a variation on moving state elsewhere. Now you have to keep the deleted resource forever. Or keep track of which environments the version with the removal directive is deployed to, or risk having orphaned resources in different environments.


It's not that bad as long as you have a reasonable deployment process. If you can't rely on your production state being fairly up to date relative to the Terraform definition, then you've got bigger problems than dealing with TF statefulness.

If you know that TF changes are guaranteed to be deployed within X days of writing them (e.g., with something like Atlantis, or even a weekly deployment schedule), then you can put a date in the comment of when the tombstone was added, and clean it up either automatically after X days or occasionally in a semi-automated sweep.


I agree having production updated frequently is ideal, but sometimes we don't get to choose when things are deployed when working with external clients. I'm glad Terraform doesn't dictate the workflow, so that we can fix one thing at a time.


Fair enough. It does at least let you continue to use the same commands/tooling.

But anyway, agreed, I think the stateful status quo is the way to go.


> Now you have to keep the deleted resource forever

Not really. In every other system you can remove tombstones after a while.


Not much different from how DB migration scripts are managed when using ORMs.


Congratulations; your state management is now part of your code.


And that would be a huge improvement! Code has history. Code can be managed with sed and grep. Code can be generated by tools I write myself.

Adding a tombstone for deletion, or a formerly-known-as tag for renames, is only "state" in the way that reserved tag numbers in protocol buffers are "state". It is a little annoying to have to do, and it creates clutter that you eventually have to go back and clean up, but neither of those is a dealbreaker, and in the meantime it solves the second-biggest problem with Terraform, which is the inscrutability of what it actually thinks it's doing when it comes up with a plan you don't expect. (The biggest problem is how ridiculously inexpressive HCL is as a language.)


The best practice for Terraform is to use a versioned state store, which covers history.

Under the hood, the Terraform state file is just JSON, so sed, grep, jq, etc. can be used to manipulate it (as well as any other tools you'd care to write).
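For example, here is a minimal sketch (in Python, against a made-up, simplified state fragment) of the kind of query jq or a custom tool could run over that JSON:

```python
import json

# A simplified, hypothetical terraform.tfstate fragment; real state files
# carry more fields (serial, lineage, per-instance attributes, ...).
state = json.loads("""
{
  "version": 4,
  "resources": [
    {"mode": "managed", "type": "aws_instance", "name": "web"},
    {"mode": "managed", "type": "aws_s3_bucket", "name": "logs"}
  ]
}
""")

# Equivalent of: jq -r '.resources[] | .type + "." + .name' terraform.tfstate
addresses = [f"{r['type']}.{r['name']}" for r in state["resources"]]
print(addresses)  # ['aws_instance.web', 'aws_s3_bucket.logs']
```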


csproj is fine these days. sln on the other hand...


that would be `hg prune` in modern Mercurial.

