
Funny you should say that. The US has Bomb Iran as a parody of Barbara Ann, available on CD:

https://en.m.wikipedia.org/wiki/Bomb_Iran

So apparently it's humorous to kill.

It's not a like-for-like comparison, but if you have a rabid, poorly educated population being told to say stuff like this, they will, just because of social pressure and brainwashing.

Related, example of that brainwashing at scale:

- Killing people bad, but patriotic as a soldier.

- Killing people fine on TV, procreational entertainment bad.

- People told what to wear bad, but telling people they must be clothed, good.

- Religion says don't kill, and to protect even those not of your religion. People still kill, and see no conflict at all.

- Hoarding wealth is seen as success, yet people suffer, and illegal immigration is a consequence of others not having that wealth.

- People who don't work are grifters, yet most people secretly want to quit their job and not work. We're told to see non-workers as people sponging off society.

- Forced to work until your health fails, seen as acceptable.

Point being, no moral high ground because we're all brainwashed.


> They chant "Death to America" and "Death to Israel" every week (on Fridays) in Iran led by the Iranian government.

vs.

> Some band in the USA wrote a song about bombing Iran in 1980.

Yeah, those are completely equal.


Can this also do quantizations like median cut?

Does it also support colour spaces other than RGB and CIEDE (I think I saw that in the source), e.g. CMYK for paint mixing and the like?

I have a few personal projects that would benefit from a library that is wide-ranging in colour spaces, dither algorithms (I only saw Riemersma) and quantization methods.

Typically these are all implemented in different libraries :(

Thanks!


Superscalar architecture and branch prediction were the next big things in the mid-90s. If you've not heard of them, that might be because they're now considered elementary in processor design.
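If you want to see the effect yourself, here's a rough sketch (a toy demo, not a rigorous benchmark; class name made up): a data-dependent branch over sorted data typically runs much faster than over unsorted data, because the predictor can guess it correctly.

  import java.util.Arrays;
  import java.util.Random;

  public class BranchDemo {
    public static void main(String[] args) {
      int[] data = new Random(42).ints(10_000_000, 0, 256).toArray();
      // Uncomment to make the branch below predictable and watch the loop speed up:
      // Arrays.sort(data);
      long sum = 0;
      long start = System.nanoTime();
      for (int v : data) {
        if (v >= 128) {        // this is the branch the predictor has to guess
          sum += v;
        }
      }
      System.out.printf("sum=%d in %d ms%n", sum, (System.nanoTime() - start) / 1_000_000);
    }
  }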


Forgot to mention, here's a nice primer: https://danluu.com/branch-prediction/


it "should" be elementary but it isn't, I got the idea to publish something on this because I recently had a conversation with a few new hires


Your thinking is incorrect. Most mammals are nocturnal.

Source: https://pmc.ncbi.nlm.nih.gov/articles/PMC4183310/#:~:text=Gl...


Nocturnal would be “the other way”. It’s still based on where the sun is: below the horizon.


How can you be nocturnal without living by what the sun's doing?


Nocturnal animals are still living according to where the sun is, just as much as we are.


It reminds me of a story I'd read, about a certain "Comrade Ogilvy", who recently died a hero whilst serving. There's no real record of this guy, but a few lines of text and a couple of faked photographs seemed easy enough to do.


Every so often a developer challenges the status quo.

Why should we do it like this? Why is the D in SOLID so important when it causes pain?

This is lack of experience showing.

DI is absolutely not needed for small projects, but once you start building out larger projects the reason quickly becomes apparent.

Containers...

- Create proxies wrapping your objects; if you don't centralise construction management, this becomes difficult.

- Apply cross-cutting concerns that would otherwise be missed or have to be wired up manually everywhere.

- Manage objects' lifecycles, not just construction.

It also ensures you code to the interface. Concrete classes are bad; just watch what happens when a teammate decides to change your implementation to suit their own use case, rather than writing a new implementation of the interface. Multiply that by 10x when it's in a stack.

Once you realise the DI pain is for managing this (and not just for letting you swap implementations, as is often the poster child), automating areas prone to manual bugs, and enforcing good practices, the reasons for using it should hopefully be obvious. :)
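As a rough sketch of what "coding to the interface" with injected construction looks like (all class names here are made up for illustration):

  // The service depends on an abstraction, not a concrete class.
  interface PaymentGateway {
    void charge(String accountId, long amountCents);
  }

  final class StripeGateway implements PaymentGateway {
    @Override public void charge(String accountId, long amountCents) {
      // talk to the real provider here
    }
  }

  final class CheckoutService {
    private final PaymentGateway gateway;

    // The dependency is handed in; a container would do this wiring for you,
    // and could also wrap the gateway in a proxy for logging, retries,
    // transactions and other cross-cutting concerns.
    CheckoutService(PaymentGateway gateway) {
      this.gateway = gateway;
    }

    void checkout(String accountId, long totalCents) {
      gateway.charge(accountId, totalCents);
    }
  }

  // Manual wiring is fine for a small project; a container automates it
  // (plus lifecycles) once there are hundreds of these.
  final class Wiring {
    public static void main(String[] args) {
      new CheckoutService(new StripeGateway()).checkout("acct-1", 4_999);
    }
  }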


The D in SOLID is for dependency INVERSION not injection.

Most dependency injection that I see in the wild completely misses this distinction. Inversion can promote good engineering practices, injection can be used to help with the inversion, but you don’t need to use it.



Moreover, dependency inversion is explicitly not about construction, which conversely is exactly what dependency injection is about.
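A rough illustration of the difference (module and class names invented): with inversion, the high-level module owns the abstraction and the low-level detail depends on it; injection is merely one way to hand the concrete implementation over.

  // billing (high-level policy) owns the abstraction it needs:
  interface ReceiptStore {                       // defined by the billing module
    void save(String receiptJson);
  }

  class BillingService {
    private final ReceiptStore store;            // depends only on its own abstraction
    BillingService(ReceiptStore store) { this.store = store; }
    void bill(String customer) { store.save("{\"customer\":\"" + customer + "\"}"); }
  }

  // persistence (low-level detail) depends "upwards" on the billing abstraction:
  class PostgresReceiptStore implements ReceiptStore {
    @Override public void save(String receiptJson) { /* INSERT INTO receipts ... */ }
  }

  // Injection is just how the concrete class is supplied - by hand or by a container:
  // new BillingService(new PostgresReceiptStore());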


Agreed, and I conflated the two since I've been describing SOLID in ways other devs in my team would understand for years.

Liskov substitution, for example, is an overkill way of saying don't create an implementation that throws an UnsupportedOperationException; instead, break the interfaces up (Interface Segregation, the "I" in SOLID) and use the interface you need.
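A rough sketch of that framing (made-up names): instead of one fat interface where half the methods have to throw, split it so a read-only implementation only implements what it supports, and callers depend only on the interface they need.

  // Fat interface: a read-only source is forced to throw for write operations.
  interface Repository<T> {
    T get(long id);
    void put(T value);        // a read-only implementation would have to throw here
  }

  // Segregated interfaces: implement only what you actually support.
  interface Reader<T> {
    T get(long id);
  }

  interface Writer<T> {
    void put(T value);
  }

  final class CsvSnapshotReader implements Reader<String> {
    @Override public String get(long id) { return "row-" + id; }
  }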

Quoting the theory to junior devs instead just makes their eyes roll :D


LSP is about much more than not throwing UnsupportedOperationException; that's a complete mischaracterization.

ISP isn't about avoiding UnsupportedOperationException either; it's about reducing dependencies.


In Java land this is really the closest analogy I could create an example for. Do you have a better example I could use with Java, please?


Honestly, inversion kinda sucks because everybody does it wrong. Inversion only makes sense if you also create adapters, and it only makes sense to create adapters if you want to abstract away some code you don't own. If you own all the code (i.e. layered code), dependency inversion is nonsensical. Dependency injection is great in this case, but not inversion.


It's not just unneeded for small projects; it is actively harmful.

It's also actively unhelpful for large projects which have relatively simple logic but complex interfaces with other services (usually databases).

DI multiplies the amount of code you need - a high cost for which there must be a benefit. It only pays off in proportion to the ratio of complexity of domain logic to integration logic.

Once you have enough experience on a variety of different projects you should hopefully start to pick up on the trade-offs inherent in using it, and see when it is a good idea and when it has a net negative cost.


While I agree this is largely a "skill issue", I'm not so sure it's in the direction you seem to think it is.

Almost nothing written using Go uses an IoC container (which is what I assume you're meaning by DI here). It's hard to argue that "larger projects" cannot or indeed are not built using Go, so your argument is simply invalid.


Agreed. DI containers / injectors are fundamental to writing testable software, and they make it much easier to review code.


Nandor upskilled!

On a serious note, isn't all that pink stuff a fire risk, given it's insulating high-wattage components?

I doubt it's flammable, but it will still burn eventually, and it's insulating everything else.


Going by the comments on the YouTube video: it seems to be for heat dissipation, so quite the opposite of insulation.


Better caveat that with "but watch memory consumption, given the nature of the likes of CopyOnWriteArrayList". GC will be a bitch.


An ArrayList for huge numbers of add operations is not performant. LinkedList will see your list throughput performance at least double. There are other optimisations you can do but in a brief perusal this stood out like a sore thumb.


Arrays are fast, and ArrayList is like a fancy array with bounds checking that auto-grows. Only the growing can be problematic, if it has to grow very often. But that can be avoided by providing an appropriate initial size, or by reusing the ArrayList via clear() instead of creating a new one. Both are used by the OP in this project. Especially since the code copies lists quite often, I would expect LinkedList to perform far worse.
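A tiny illustration of both techniques (names and sizes invented; a sketch, not the OP's code):

  import java.util.ArrayList;
  import java.util.List;

  public class ReuseDemo {
    public static void main(String[] args) {
      // Presize once if you roughly know the element count, and reuse the list
      // with clear() so the backing array isn't reallocated on every batch.
      List<Integer> buffer = new ArrayList<>(4_096);
      long total = 0;
      for (int batch = 0; batch < 1_000; batch++) {
        buffer.clear();                  // keeps the already-grown backing array
        for (int i = 0; i < 4_000; i++) {
          buffer.add(batch * i);
        }
        total += buffer.size();
      }
      System.out.println(total);
    }
  }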


Wrong. In fact the downvoters are wrong too; I'm guessing most are junior devs who don't want to be proven wrong. LinkedList is much faster for inserts and slow for retrieval; ArrayLists are the opposite. To the downvoters: try it, this is why LinkedList is in the standard library. When you find I'm right, please consider re-upvoting for the free education.


I've literally never seen a linked list be faster than an array list in a real application, so if you're right, this is kinda huge for me.


LinkedList => use when adds outnumber reads.

ArrayList => use when reads outnumber adds.


No, that's not true at all. Adds aren't free. Adding in the middle involves following pointers all over the heap n/2 times, making adds generally as expensive as reads. The only situation I can imagine a linked list making sense is if you only add to the front and only read from/delete the front (or back, if it's doubly linked). So a stack or queue.

But even then, I'm pretty sure Go actually uses an array for its green stacks nowadays, even while paying the copy penalty for expansion.


Did you count the allocation of a LinkedList.Node<E> on every add operation? You may say it's negligible thanks to TLAB, and I will agree that fast allocation is Java's strength, but in practice I've seen that creating new objects gives an order-of-magnitude perf degradation.


I have seen it with millions of add/delete operations, in an analytics framework for a big American games company (your first guess will probably be right), which is where I originally did the analysis, about 10 years ago.

I also wrote a video processor around that time that was bottlenecked on ArrayLists - typically a decode, store and read-once op. It was at this point I looked at other collections, other list implementations and blocking deques (ArrayList was the wrong collection type to use, but I'd been in a rush for the MVP) and ultimately came across https://github.com/conversant/disruptor and used that instead.

The ArrayList vs LinkedList difference was a real eye-opener for me: in two different systems the same behaviour was replicated when ArrayLists were used like queues, or when the buffer increments were sized incorrectly as load increased.


Of course, deletion is a whole different story. I was talking about addition in isolation.

Anyway, I felt I had to run the benchmarks myself.

  @Benchmark
  @Fork(1)
  @BenchmarkMode(Mode.Throughput)
  @OutputTimeUnit(TimeUnit.SECONDS)
  public Object arrayListPreallocAddMillionNulls() {
    ArrayList<Object> arrList = new ArrayList<>(1048576);
    for (int i = 0; i <= 1_000_000; i++) {
      arrList.add(null);
    }
    return arrList;
  }

  @Benchmark
  @Fork(1)
  @BenchmarkMode(Mode.Throughput)
  @OutputTimeUnit(TimeUnit.SECONDS)
  public Object arrayListAddMillionNulls() {
    ArrayList<Object> arrList = new ArrayList<>();
    for (int i = 0; i <= 1_000_000; i++) {
      arrList.add(null);
    }
    return arrList;
  }

  @Benchmark
  @Fork(1)
  @BenchmarkMode(Mode.Throughput)
  @OutputTimeUnit(TimeUnit.SECONDS)
  public Object linkedListAddMillionNulls() {
    LinkedList<Object> linkList = new LinkedList<>();
    for (int i = 0; i <= 1_000_000; i++) {
      linkList.add(null);
    }
    return linkList;
  }

And as I expected, on JDK 8 ArrayList with an appropriate initial capacity was faster than LinkedList. Admittedly not an order of magnitude difference, only 1.7x.

  JDK8
  Benchmark                                      Mode  Cnt    Score    Error  Units
  MyBenchmark.arrayListAddMillionNulls          thrpt    5  229.950 ±  9.994  ops/s
  MyBenchmark.arrayListPreallocAddMillionNulls  thrpt    5  344.116 ±  7.070  ops/s
  MyBenchmark.linkedListAddMillionNulls         thrpt    5  199.446 ± 15.910  ops/s
But! On JDK 17 the situation is completely upside-down:

  JDK17
  Benchmark                                      Mode  Cnt    Score    Error  Units
  MyBenchmark.arrayListAddMillionNulls          thrpt    5   90.462 ± 18.576  ops/s
  MyBenchmark.arrayListPreallocAddMillionNulls  thrpt    5  214.079 ± 15.505  ops/s
  MyBenchmark.linkedListAddMillionNulls         thrpt    5  216.796 ± 19.392  ops/s
I wonder why ArrayList with default initial capacity got so much worse. Worth investigating further.


Thanks for taking the time to test.

This helps prove my point that adds (and deletes) are generally faster by default, i.e. when not pre-sizing, or when removing.

Typically (in my experience) ArrayLists are used without thought to sizing, often because the initial capacity and the resize amount cannot be determined sensibly or consistently.

If in your example you were also to resize the lists (perhaps adding then dropping elements in a Fibonacci sequence?), it would help prove my statement further.
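Something like this, as a rough, untested sketch reusing your harness (method names and sizes are arbitrary; I've kept the counts small because the ArrayList front-removal case is quadratic):

  @Benchmark
  @Fork(1)
  @BenchmarkMode(Mode.Throughput)
  @OutputTimeUnit(TimeUnit.SECONDS)
  public Object arrayListAddThenRemoveFromFront() {
    ArrayList<Object> list = new ArrayList<>();
    for (int i = 0; i < 100_000; i++) {
      list.add(null);
    }
    for (int i = 0; i < 50_000; i++) {
      list.remove(0);             // shifts the whole backing array every call
    }
    return list;
  }

  @Benchmark
  @Fork(1)
  @BenchmarkMode(Mode.Throughput)
  @OutputTimeUnit(TimeUnit.SECONDS)
  public Object linkedListAddThenRemoveFromFront() {
    LinkedList<Object> list = new LinkedList<>();
    for (int i = 0; i < 100_000; i++) {
      list.add(null);
    }
    for (int i = 0; i < 50_000; i++) {
      list.removeFirst();         // O(1) unlink at the head
    }
    return list;
  }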

Certainly not worth the -2 points I got from making the statement, but hey you can "please some people some of the time..." :D


Huh? It'll be slower and eat a massive amount of memory too.


It holds a reference for each element, but it no longer has to allocate large chunks of memory on insert when the current array size is exceeded, just single nodes. So reads are slower, and a small amount of reference memory is used per node. Writes, however, are much faster, particularly when the lists are huge (as in this case). Also, I've written video frame processors, so I am experienced in this area.


ARM used to be UK-owned until the Conservative government's lack of foresight allowed it to be sold to SoftBank and leave AIM (the UK's NASDAQ, part of the LSE), despite it being in the national interest, and national security, to keep it British. Thanks, Mrs May (ex-PM), for approving that one (it was the last regulatory hurdle - the national security question - so it had to go past her).

Of course, Boris Johnson (the next PM) *tried to woo ARM back to the LSE* because they realised they'd fucked up; and of course, what huge foreign company would refloat on the LSE when you have NASDAQ, or bother floating on both?

Can you imagine if America had decided to allow Intel or Apple to be sold to a company in another country? Same sentiment.

- Yep, I'm a pissed-off ex-ARM shareholder, forced out by the board's buyout decision and Mrs May waving it through.


They did the world a favor by indirectly helping RISC-V. So arguably it's a net positive move.


