
Apart from source code versioning, what are the other most important real-world use cases of diff algorithms?


I encountered one about 17 years ago. It was for diffing IP packets, TCP segments, and network event payloads. At the time I worked at RSA Security on a network forensics and security analytics product, written in a mix of C and C++. In one of the projects I worked on, we needed to let users diff the packets, segments, and payloads. Back then we were very conservative about adding third-party libraries to the product. I have written more about that culture here: https://news.ycombinator.com/item?id=39951673

Long story short, due to that conservative culture, most data structures and algorithms were implemented in house. The diff algorithm for packets/segments/payloads was no exception, and I was the one who wrote it.

My implementation was based on a straightforward dynamic programming solution to the longest common subsequence problem. If I recall correctly, it ran in O(mn) time and O(min(m, n)) space in the worst case, where m and n are the lengths of the two sequences. I knew there were more efficient algorithms, but this code was not performance critical. I chose to keep the implementation simple so anyone could understand it, learn it quickly, and fix bugs if they arose. It served us well for the next seven years until the product was replaced with a new one.
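For illustration, here is a minimal Python sketch of that textbook approach, computing the LCS length row by row over the shorter sequence to get the O(mn) time and O(min(m, n)) space bounds mentioned above. (Recovering the actual edit script within the same space bound takes Hirschberg's divide-and-conquer refinement, which is omitted here.)

    def lcs_length(a, b):
        # Keep the DP rows sized by the shorter sequence:
        # O(len(a) * len(b)) time, O(min(len(a), len(b))) space.
        if len(a) < len(b):
            a, b = b, a
        prev = [0] * (len(b) + 1)
        for x in a:
            curr = [0]
            for j, y in enumerate(b):
                curr.append(prev[j] + 1 if x == y else max(prev[j + 1], curr[j]))
            prev = curr
        return prev[-1]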

On a related note, I sometimes miss that older style of software development where we would dive deep into a problem domain, master it, and design solutions ourselves. I am not being naively nostalgic though. I am very well aware that modern development, with its reliance on well established libraries, usually delivers much greater productivity and reliability. Still, I think the slower and more deliberate approach of building things from the ground up had a certain charm.


"Approval" / "Golden Master" / "Snapshot" / "Characterization" testing can be very helpful.

They all seem to be names for more or less the same idea.

The first time a test runs successfully, it automatically captures the output as a file. This is the "approved" output and is committed with the code or saved in whatever test system you use.

The next time the test runs, it captures the new output and automatically compares it with the approved output. If they are identical, the test passes. If they differ, the test fails and a human should investigate the diff.

The technique works with many types of data:

* Plain text.

* Images of UI components / rendered web pages. This can check that a code change or a new browser version does not unexpectedly change the appearance.

* Audio files created by audio processing code.

* Large text logs from code that has no other tests. This can help when refactoring; with luck, an accidental side effect will show up as an unexpected diff.

See:

* https://approvaltests.com/

* https://cucumber.io/blog/podcast/approval-testing/

* https://en.wikipedia.org/wiki/Characterization_test
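To make the mechanism concrete, here is a hand-rolled sketch of the idea in Python (the helper name and file layout are made up for illustration, not taken from any of those libraries):

    import pathlib

    def verify(name, actual, approved_dir=pathlib.Path("approved")):
        # First run: nothing approved yet, so capture the output for human review.
        approved = approved_dir / (name + ".approved.txt")
        if not approved.exists():
            approved_dir.mkdir(exist_ok=True)
            approved.write_text(actual)
            return
        # Later runs: compare against the approved output; a human investigates any diff.
        assert actual == approved.read_text(), "output differs from " + str(approved)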


There are a bunch of more (and less) specialised ones used for contract red-lining.

Also in the legal space, sorting through discovery can be incredibly tedious. There are lots of diff-based and diff-like solutions in this space; most are completely proprietary and undocumented.


Aside from the others already mentioned, it's very useful in infrastructure-as-code contexts like Kubernetes.

I also used diff at work today to compare two different 'docker history' outputs, to get a high-level overview of the changes made by a contractor tasked with hardening a base image.


- Backup and restore

- Integrity checks from a security perspective

- NLP: finding matching tokens in text

Etc.


We diff construction schedules! These tend to be massive Gantt charts (400-700 pages is common).


One use case where I never want to miss it is in tests: understanding the differences between the expected and the actual result is invaluable.


Minimizing DOM mutation operations, in React for instance, but not only there.


Interesting, didn’t think of it that way


There is this debate of virtual DOM vs no virtual DOM, and from time to time you see people on HN claim how great vanilla JS is. I won't get into the former debate, but as for the latter claim, people who make such comments probably aren't aware how different it is to create a UI as complex as Outlook/reddit/Spotify vs their personal website or a simple demo. For complex sites with lots of widgets and data, being able to write JSX and still update the DOM efficiently makes a huge difference. It is almost impossible to build and maintain a complex site with vanilla JS.


As someone mostly familiar with non-web UIs - isn't the real question "why aren't you using MVC instead of a big lump of spaghetti?"


This might come as a shock to you, but nobody is downloading an .exe and running it on their computer any more, and that would not work on Mac/Linux/Android/iOS anyway. In fact, if you do that, people will likely think it's a virus unless you are Microsoft or something.


Diff algorithms are astoundingly widely applicable.

curses had the task of updating your 2400 baud terminal screen to look like a TUI program's memory image of what should be on it. (The TUI might be vi, nethack, a menu system, a database entry screen, ntalk, whatever.) The simple solution is to repaint the whole screen. But 80x24 is nearly 2000 bytes, which is about 8 seconds at 2400 baud. Users won't use a program that takes 8 seconds to respond after their latest keystroke. So curses uses a diff algorithm and the terminal capabilities database to figure out a minimal set of updates to send over the serial line. (Some terminals have an escape sequence to insert or delete a line or a character, which can save a lot of data transmission, but others don't.) React's virtual DOM is the same idea.
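A toy Python version of that screen-diff idea, assuming both images are lists of equal-length strings and ignoring the insert/delete-line escapes:

    def screen_updates(old, new):
        # Compare the terminal's current contents with the desired image and
        # emit (row, column, text) runs for just the cells that changed,
        # instead of retransmitting the whole 80x24 screen.
        updates = []
        for row, (old_line, new_line) in enumerate(zip(old, new)):
            col = 0
            while col < len(new_line):
                if old_line[col] != new_line[col]:
                    start = col
                    while col < len(new_line) and old_line[col] != new_line[col]:
                        col += 1
                    updates.append((row, start, new_line[start:col]))
                else:
                    col += 1
        return updates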

If you run `du -k > tmp.du` you can later `du -k | diff -u tmp.du -` to see which directories have changed in size. This can be very useful when something is rapidly filling up your disk and you need to stop it, but you don't know what it is.

If you `sha1sum $(find -type f) > good.shas` you can later use diff in the same way to see which files, if any, have changed.

rsync uses rolling hashes to compute diffs between two computers at opposite ends of a slow or expensive connection in order to efficiently bring an out-of-date file up-to-date without transmitting too much data. It's like curses, but for files, and not limited by terminal capabilities.
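The core trick is a weak checksum that can be slid along a file one byte at a time; a simplified Python sketch of that idea (the real rsync constants and details differ):

    def weak_checksum(block):
        # rsync-style weak checksum over a block of bytes: two 16-bit sums,
        # conventionally combined into one 32-bit value as (b << 16) | a.
        a = sum(block) % 65536
        b = sum((len(block) - i) * byte for i, byte in enumerate(block)) % 65536
        return a, b

    def roll(a, b, out_byte, in_byte, block_len):
        # Slide the window right by one byte in O(1) instead of rescanning;
        # this is what makes checking every offset of a large file affordable.
        a = (a - out_byte + in_byte) % 65536
        b = (b - block_len * out_byte + a) % 65536
        return a, b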

rdiff uses the rsync algorithm to efficiently store multiple versions of a file, which does not in general have to contain source code. Deduplicating filesystems can use this kind of approach, but often they don't. But it's common in data backup systems like Restic.

It's common for unit tests to say that the result of some expression should be some large data structure, and sometimes they fail by producing some slightly different large data structure. diff algorithms are very valuable in understanding why the test failed. py.test does this by default.
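For example, with pytest (build_report here is just a stand-in for whatever code is under test):

    def test_report():
        expected = {"total": 3, "items": ["a", "b", "c"]}
        # On failure, pytest prints a structured diff of the two dicts,
        # pointing at the keys and elements that actually differ.
        assert build_report() == expected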

Genomic analysis works by finding which parts of two versions of a genome are the same and which parts are different; BLAST, co-developed by Gene Myers (also known for the Myers diff algorithm), is a common tool for this. This is useful for an enormous variety of things, such as understanding the evolutionary history of species, identifying cancer-causing mutations in DNA from tumors, or discovering who your great-grandmother cheated on her husband with. It's maybe reasonable to say that all of bioinformatics consists of real-world use cases of diff algorithms. They have to handle difficult pathological cases like transposition and long repeated sequences.
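The exact-DP core underneath this kind of alignment is a close relative of the LCS table mentioned earlier in the thread; here is a minimal Smith-Waterman local-alignment sketch in Python (BLAST itself layers seeding heuristics on top of scoring like this to cope with genome-scale inputs):

    def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
        # Local alignment score: like LCS, but with substitution and gap
        # penalties, and clamped at zero so alignments can restart anywhere.
        rows = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        best = 0
        for i, x in enumerate(a, 1):
            for j, y in enumerate(b, 1):
                score = match if x == y else mismatch
                rows[i][j] = max(0,
                                 rows[i - 1][j - 1] + score,  # align x with y
                                 rows[i - 1][j] + gap,        # gap in b
                                 rows[i][j - 1] + gap)        # gap in a
                best = max(best, rows[i][j])
        return best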

Video compression like H.264 works to a great extent by "motion estimation", where the compressor figures out which part of the previous frame is most similar to each block of the current frame and encodes only the differences. This is a more difficult generalization of the one-dimensional source-code-diff problem, because while the pixels are moving across the image they can also get brighter, dimmer, blurrier, or sharper, or rotate. Also, it's in two dimensions. This is pretty similar to the curses problem in a way: you want to minimize the bandwidth required to update a screen image by sending deltas from the image currently on the screen. Xpra actually works this way.
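A crude exhaustive block-matching sketch of that motion search in Python (real encoders use much smarter search patterns and also model the brightness and sharpness changes mentioned above):

    import numpy as np

    def best_match(prev, block, top, left, radius=4):
        # Search a small window of offsets in the previous frame for the
        # position minimizing the sum of absolute differences (SAD); the
        # encoder then codes just the motion vector plus the residual.
        h, w = block.shape
        best, best_err = (0, 0), float("inf")
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = top + dy, left + dx
                if 0 <= y <= prev.shape[0] - h and 0 <= x <= prev.shape[1] - w:
                    err = np.abs(prev[y:y + h, x:x + w].astype(int) - block.astype(int)).sum()
                    if err < best_err:
                        best_err, best = err, (dy, dx)
        return best, best_err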


PDF data sheets from asshat chip companies that don't include detailed changelogs.



