
Sounds like serializing the entire tab state (DOM, JS, form inputs, etc) for instant resume, plus indexing text for instant search.

Imagine being able to resume a complex web-app, complete with input form text and the entire application state. A huge limitation of most browser suspend/resume implementations is that they often cause data loss.

We've all had the experience of letting a tab get too "stale" and suddenly it drops you back to the main page or the (dreaded) empty form. This mistrust becomes a constant mental burden, often forcing you to unnaturally twist your workflow due to fear of getting burned again. Yuck.


Compare that icon introduction to the insane amount of work and research that actually went into the original ribbon 20 years ago: https://web.archive.org/web/20080316101025/http://blogs.msdn...

I organized my sources into a simple RSS reader I visit daily: https://altneu.me/rtr/

Did an inventory based on my crawler data a while back.

Relatively common to find sensitive or embarrassing links singled out in robots.txt.

Especially in old large organizations, like universities.


> It's a shame that it's turned into a niche scene

Not sure about niche. Maybe more... understated, proprietary and confidential.

We've been doing totally vanilla HTML/JS/CSS web apps for our B2B customers for the last 3-4 years. In fact, we can't use your typical web frameworks because our contracts are measured in half-decades and due diligence against our vendors makes it infeasible to participate in that kind of ecosystem. Banking is a great industry to get into if you want to get frameworks out of your life. You have the perfect bat to use. "Oh.. I don't know about that... is Angular 12 still going to be around and supported halfway through this client's seven-figure contract?".

Doing pure web in 2022 is hard. It's mostly a human/courage thing. The technology is easier than it's ever been. But you have to stand your ground day after day against this onslaught of cargo-cult web dev. The outcome is worth whatever salty arguments you get into.

>the toxicity of JavaScript

I understand the sentiment. I'd probably use similar diction if I had to screw around with NPM-style projects for a living. That said, JavaScript itself can be an answer to this vendor bloat if used very carefully.


This is exactly the case. I've done conversions before where it was possible to see and extract underlying, hidden elements that were not visible or even detectable in the rendered webpage in a browser.

This is actually a somewhat common method for a bit of corporate sleuthing. Anytime you see a pretty website with vector-y graphics, maybe engineering-drawing representations, if the data hasn't been stripped completely or redrawn, you can extract information that people would otherwise assume unknowable.

In a recent example, I did this on a startup's page where they had a CAD-like side-view drawing of one of their products. The base file driving the page (in this case an SVG) actually contained multiple hidden views of the same product and other products, at the 'real' precision of what was likely a DXF export from a CAD program handed to the web team. This allowed a critical dimension of an unannounced product to be precisely determined (to three significant figures), a spec that had not been publicly released.
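A minimal sketch of that kind of sweep, using only Python's stdlib XML parser. The markup, ids, and dimensions below are invented stand-ins, not data from any actual page:

```python
import xml.etree.ElementTree as ET

# Toy SVG standing in for a downloaded page asset: one visible view and
# one group hidden via display:none that still carries full-precision data.
SVG = """<svg xmlns="http://www.w3.org/2000/svg">
  <g id="side-view"><rect x="0" y="0" width="120" height="40"/></g>
  <g id="unreleased-view" style="display:none">
    <rect x="0" y="0" width="152.407" height="48.113"/>
  </g>
</svg>"""

NS = "{http://www.w3.org/2000/svg}"

def hidden_elements(svg_text):
    """Return (group id, child tag, attributes) for every element inside
    a group hidden with display:none or visibility:hidden."""
    root = ET.fromstring(svg_text)
    found = []
    for group in root.iter(NS + "g"):
        style = group.get("style", "")
        if "display:none" in style or "visibility:hidden" in style:
            for child in group:
                found.append((group.get("id"),
                              child.tag.replace(NS, ""),
                              dict(child.attrib)))
    return found

for gid, tag, attrs in hidden_elements(SVG):
    print(gid, tag, attrs)
```

A real sweep would also check `display`/`visibility` attributes, `opacity:0`, and elements positioned outside the viewBox, but the idea is the same: the DOM you render is not all the data the file carries.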


20 years ago I was a grad assistant for a couple of professors who'd built a pretty incredible JIT supply chain platform. The US Army started using it to manage their uniform orders and it worked so well that two entire warehouses were torn down due to lack of use.

A new general came in and insisted on using SAP. Six months later, they had to rebuild both warehouses.

EDIT: For some additional context, the system managed to basically eliminate the "bull whip effect" all the way down the supply chain. It's really a fascinating system. Developed by Dr. Bill Kernodle and Dr. Steve Davis from Clemson Apparel Research.


Depends on the area and the team.

For product web-related teams, it’s often “pick something decent that will get you shipping the fastest”, with product-y infrastructure teams generalizing/cleaning up/more carefully designing things later, assuming the investment makes sense. Documentation happens minimally if at all; architecture conversations happen but are somewhat on the fly (i.e., a few folks making a drawing on a whiteboard before moving on). This is generally the pattern for anything new/hot/needs to get out of the door yesterday.

Larger-scale infra code tends to be a bit better. Newer iterations on older systems (which may have a bit less pressure) tend to have a bit more time for structure. Many teams will have architecture review structures in place, though it is by no means mandatory. There is no separate architecture role; who is doing what architecture depends on the complexity of the system/subsystem. Documentation here tends to be better, though, IMO, it’s only really the open source stuff, or whatever the 20% of engineers in the company are using (and maybe only API-level at that), that has “good” documentation.

(There is also some code that the company has which is tightly coupled with hardware. Due to the longer timeline of these projects and how hard it is to change things, these have more architecture and process.)

Overall, everyone is responsible for their own code quality/architecture. How important it is depends on “where a feature is in the product cycle” and “how critical is it to get this component correct”. Folks are free to use whatever design they want as long as it makes sense. The constraints of “what is reasonable for the product/what already exists/what will others on a given team understand”, however, will tend to strongly imply some best practices. These best practices generally end up being codified in an internal searchable wiki/message board, usually in a team’s onboarding doc or in a “how do you use this” doc for a feature.

Pros to this are relative freedom, speed, and the ability to adjust as the situation demands. Taking a theoretical approach here, you can think of it as there being a “best-appropriate-velocity” that a given team/organization can function at given their product/environment constraints. Rather than being prescriptive about what this should be by inducing artificial tangential constraints, letting folks choose the structure that makes the most sense tends to increase the likelihood these teams will approach this “best velocity”. To this end, I would say doing so (and structuring a culture around it) has been a large part of this company’s success.

Cons are that oftentimes code is not documented (which can be a struggle for folks unused to the culture); especially spaghetti-like systems may remain untouched for years. Systems that could be unified may be left un-unified for longer than they should be. It can also be hard to push for large systemic changes without a lot of effort, though whether that is a bug or a feature (and whether some of the missteps these cause are about “architecture” versus “product intuition”) is another question.

Not sure what you mean by “business architecture”. If here you mean the structure of large divisions within the company, there are plenty of directors/VPs/etc. for that.


"I want to write my orchestration in Python and I'm comfortable hosting my own compute" -> Prefect (lightweight) or Dagster (heavier but featureful)

"My team already knows Airflow and/or I want to pay Astronomer a lot of money" -> Airflow

"I love YAML and everything is on k8s anyway" -> Argo

"I just want something that works out of the box and don't want to host my own compute" -> Shipyard, maybe Orchest

"I want a more flexible, generic workflow engine and don't care about writing orchestration in Python" -> Temporal/Cadence

"I am very nostalgic" -> Azkaban, Oozie, Luigi

"I love clunky Java solutions to data problems" -> Nifi et al

"I like to pay for half-managed solutions and late upgrades to a first-generation technology" -> AWS/GCP hosted Airflow options

"I am on AWS and it doesn't need to be complicated" -> AWS Step Functions
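At their core, every tool on this list schedules a DAG of tasks in dependency order (plus retries, logging, and distribution). A toy, stdlib-only sketch of that core idea; the task names and pipeline here are made up for illustration:

```python
from graphlib import TopologicalSorter

results = {}

def extract():
    # Pretend this pulls rows from a source system.
    return [1, 2, 3]

def transform():
    return [x * 10 for x in results["extract"]]

def load():
    # Pretend this writes to a warehouse; here it just aggregates.
    return sum(results["transform"])

# Dependency graph: task -> set of upstream tasks it waits on.
dag = {"extract": set(), "transform": {"extract"}, "load": {"transform"}}
tasks = {"extract": extract, "transform": transform, "load": load}

# Run tasks in a valid topological order, capturing each result.
for name in TopologicalSorter(dag).static_order():
    results[name] = tasks[name]()

print(results["load"])  # 60
```

The orchestrators above layer scheduling, retries, observability, and remote execution on top of this shape; the DAG itself is the part they all share.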


Literally just figured out how to get my browsing history piped into Grafana+Loki, giving me a single source of truth for it: a userscript that ignores CORS and just POSTs to Loki's API.
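For reference, the shape of the body that Loki's push endpoint (`/loki/api/v1/push`) expects: streams with label sets and nanosecond-timestamp string pairs. A sketch in Python rather than a userscript; the endpoint URL, label names, and log fields below are assumptions for a local setup:

```python
import json
import time
import urllib.request

# Assumed local Loki instance; adjust to taste.
LOKI_URL = "http://localhost:3100/loki/api/v1/push"

def loki_payload(url, title=""):
    """Build a Loki push-API body for one visited URL.
    Timestamps are nanosecond strings, per Loki's push format."""
    ts_ns = str(time.time_ns())
    return {
        "streams": [{
            "stream": {"job": "browsing-history"},  # stream labels
            "values": [[ts_ns, json.dumps({"url": url, "title": title})]],
        }]
    }

def push(url, title=""):
    """POST one history entry to Loki (requires a reachable instance)."""
    body = json.dumps(loki_payload(url, title)).encode()
    req = urllib.request.Request(
        LOKI_URL, data=body, headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

payload = loki_payload("https://news.ycombinator.com/")
print(payload["streams"][0]["stream"])
```

The userscript version is the same payload sent via `fetch`/`GM_xmlhttpRequest`; only the transport differs.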

Apparently that's going away entirely now, and there is no good way to stream browsing history anywhere unless it's tied to a browser maker's services (even if self-hosted, a la Firefox Sync).

You can't even read from the browser's history database, because browsers for some reason lock the entire database even while running in WAL mode. (My original plan was to do something similar to Litestream and just attach to places.sqlite or the History file and push the URLs to Loki, but that just doesn't work.)
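One workaround, if a point-in-time snapshot is acceptable: copy the database file plus its `-wal` sidecar and query the copy, rather than attaching to the live, locked file. The sketch below builds a synthetic stand-in for Firefox's `places.sqlite` (the `moz_places`/`url` names match Firefox's schema, but the file is fabricated here so the example is self-contained):

```python
import os
import shutil
import sqlite3
import tempfile

workdir = tempfile.mkdtemp()
db = os.path.join(workdir, "places.sqlite")

# Stand-in for the browser: a WAL-mode database held open by a writer.
browser = sqlite3.connect(db)
browser.execute("PRAGMA journal_mode=WAL")
browser.execute("CREATE TABLE moz_places (id INTEGER PRIMARY KEY, url TEXT)")
browser.execute(
    "INSERT INTO moz_places (url) VALUES ('https://news.ycombinator.com/')")
browser.commit()

# Workaround: snapshot the file and its WAL/SHM sidecars, then read the
# copy. SQLite recovers the copied WAL on first open of the snapshot.
snap = os.path.join(workdir, "snapshot.sqlite")
shutil.copy(db, snap)
for suffix in ("-wal", "-shm"):
    if os.path.exists(db + suffix):
        shutil.copy(db + suffix, snap + suffix)

reader = sqlite3.connect(snap)
urls = [row[0] for row in reader.execute("SELECT url FROM moz_places")]
print(urls)
```

Whether this works against a real profile depends on how aggressively the browser locks the file at the OS level; a plain file copy is usually still permitted where an SQLite connection is not, though the snapshot can be mid-checkpoint stale.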

There are legitimate reasons to do this, but it really seems like it's mostly there to curb users' agency over their own data and devices.

