Shirley published a paper in 1991 showing that low discrepancy samplers worked well in a ray-tracing context. So I wouldn't say that's particularly new.


I'm not sure what exactly you are responding to, but low discrepancy sampling is not at all what this page is about. There have been a lot of papers on many different techniques, with various upsides and drawbacks, when it comes to reducing noise.

Comparing this overview to one of the most basic techniques that is used everywhere and is a given is like reading an article on a modern car engine and dismissing it because you saw someone light some gas on fire 30 years ago.


I'm sorry you took such offence to my comment. I guess I should have quoted the statement I was replying to in the overview section: "Recently, the use of low discrepancy sampling [Jarosz et al. 2019] and tileable blue noise [Benyoub 2019] has been used by Unity Technologies, Marmoset Toolbag and NVIDIA in real time ray tracers."


Those papers are about specific low discrepancy sampling patterns: their ease of use, their speed, their flexibility, and how well their properties scale into higher dimensions. They were written by knowledgeable researchers in 2019 and used in all the things you listed.

I understand you know what low discrepancy samples are, but equating the very first demonstration that random sampling wasn't ideal for ray tracing to the state of the art that has evolved over three decades of research is ludicrous.

I don't know why you are desperate to be dismissive but it has no basis whatsoever in reality.


The paragraph was a chronological account of when techniques were introduced with accompanying citations. The 1998 paper in the preceding sentence would lead the reader to believe that low discrepancy sampling came after the robust sampling methods (and potentially even that low discrepancy sampling was a new technique). Both have had further work that continues to this day, and both are much more complex and well-understood today.

> desperate to be dismissive

How is clarifying what I was commenting on "desperate"? I feel like you're trying to escalate here.


Whoa, calm down. The post makes it sound like low-discrepancy sampling is a recent development, and 'rbkillea is pointing out that it is 30 years old.

That's it, no need to attribute malice anywhere. I think you are reading way too much into 'rbkillea's comments.


There is nothing about low discrepancy sampling being a recent development. This article is about recent research, and suggesting that anyone involved would imply that one of the most trivial aspects of rendering is somehow new is total nonsense.

There is a recent paper about generalizing n-rooks sampling to higher dimensions which seems to have been misunderstood by yourself and others. It was written by researchers who already have dozens of high profile papers on many different topics.
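For anyone unfamiliar with the term, classic 2-D n-rooks (Latin hypercube) sampling looks like this. A minimal sketch of the basic technique only, not the paper's higher-dimensional method:

    import random

    # n-rooks sampling: exactly one sample in each row and each column of
    # an n x n grid, jittered within its cell (like n non-attacking rooks).
    def n_rooks(n):
        cols = list(range(n))
        random.shuffle(cols)  # pair each row with a distinct random column
        return [((row + random.random()) / n,
                 (cols[row] + random.random()) / n)
                for row in range(n)]

    samples = n_rooks(16)  # 16 points, stratified along both axes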


Now subtract 3 (or 6) percent to account for what money does under inflation (or market growth). If your raises don't geometrically average out to growth of at least 3%, you are working for less, in real terms, than you were when you started. If it's between 3% and 6%, you are being outpaced by institutional investors, because your capacity for savings is extremely limited in comparison.
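A quick back-of-the-envelope version of that check (the salary history and the 3%/6% hurdles below are made-up numbers):

    # Compare the geometric-average growth of a raise history against an
    # inflation or market hurdle. All figures are illustrative.
    def geometric_average_growth(salaries):
        years = len(salaries) - 1
        return (salaries[-1] / salaries[0]) ** (1 / years) - 1

    salaries = [100_000, 102_000, 104_000, 106_000, 108_000]  # ~1.9%/yr
    growth = geometric_average_growth(salaries)

    for hurdle, label in [(0.03, "inflation"), (0.06, "market growth")]:
        verdict = "outpaces" if growth > hurdle else "loses to"
        print(f"growth of {growth:.1%} {verdict} {label} at {hurdle:.0%}")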


This is the only sane way to look at it. If you compare it with historical correlations between compensation and productivity[1], then things look pretty bleak.

Over the last decade, engineer pay certainly hasn't kept up with cost-of-living increases, especially in places like SF or NYC.

[1] https://thumbor.forbes.com/thumbor/960x0/https%3A%2F%2Fblogs...


http://causality.cs.ucla.edu/blog/index.php/2016/02/01/works...

Is this sufficiently applied for you? Pearl may not apply his work on causality often, but there are a lot of people on the applied side who do.


By Ctrl+F'ing, I find 5 mentions of the word "master", none of which refer to the master theorem. I prefer this to CLRS because, while it's a neat trick, it tends to result in a bunch of people memorising the cases (and taking a "because the book told me to" level of understanding away from that part of the course).
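(For reference, the cases in question: the master theorem gives closed forms for divide-and-conquer recurrences of the shape T(n) = a*T(n/b) + f(n), with a >= 1 and b > 1.)

    Case 1: f(n) = O(n^(log_b(a) - eps))      =>  T(n) = Theta(n^log_b(a))
    Case 2: f(n) = Theta(n^log_b(a))          =>  T(n) = Theta(n^log_b(a) * log n)
    Case 3: f(n) = Omega(n^(log_b(a) + eps))  =>  T(n) = Theta(f(n))
            (Case 3 also needs a regularity condition on f.)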


Exactly. The students should be the masters, not the theorem.


I agree as well. Maybe it serves to alert the reader to when they are missing a piece of prerequisite knowledge, but there are ways to do this that don't create as much friction.


This has always bugged me: what reasons, outside of convention, do we have for preferring O(V + E) to O(E) for algorithms that only make sense in the context of a single connected component? (There, E >= V - 1, so the V term is dominated.) I know this is slightly OT, but it's tangentially related.


I think that the distinction that dustingetz was trying to make is related to the fact that trees are graphs. Many applications naively nest data in tree structures with links from nodes to other nodes that are implied by the data. Maps are powerful enough to represent arbitrary graphs, so if you don't take advantage of this you end up with a lot of duplicated data in tree format. Then you need to do a bunch of tree traversals to link the data to itself at query time.
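A minimal sketch of the contrast, with made-up data. Nesting duplicates any node reachable by more than one path, while a map keyed by node id stores each node once and represents the links directly:

    # Tree-shaped nesting: "carol" is duplicated because two paths reach her.
    tree = {
        "name": "alice",
        "reports": [
            {"name": "bob", "mentor": {"name": "carol", "reports": []}},
            {"name": "carol", "reports": []},
        ],
    }

    # Map keyed by id, with edges stored as ids: the same relationships,
    # now an actual graph, and each node appears exactly once.
    nodes = {
        "alice": {"reports": ["bob", "carol"]},
        "bob": {"reports": [], "mentor": "carol"},
        "carol": {"reports": []},
    }

    def mentor_of(name):
        # One key lookup replaces a traversal hunting for the embedded copy.
        return nodes[name].get("mentor")

    print(mentor_of("bob"))  # -> carol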


That happened in 2012.


They are in units of CO2-equivalent "greenhouse gas emissions", which already corrects for the extra potency. The sad thing is that people often double-correct for such things.
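A toy version of the mistake, assuming ~28x as methane's CO2-equivalence factor (an illustrative figure):

    ch4_tonnes = 1.0
    gwp = 28                    # assumed potency factor for methane
    co2e = ch4_tonnes * gwp     # the reported figure already includes it
    wrong = co2e * gwp          # double-correcting overstates it 28-fold
    print(co2e, wrong)          # 28.0 vs 784.0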


Could someone explain what differentiates the Google Knowledge Graph from a Markov Logic Network implementation? Is that what it is?

For reference: https://sites.google.com/site/slgworkshop2013/accepted_paper...


MLNs are one possible way to implement the inference component that any knowledge graph needs.

Google's Knowledge Vault uses a fusion of a number of different extraction methods. Their exact methodology is laid out in their "Knowledge Vault" paper[1].

If you want to go deeper (ha!), then DeepDive[2] is open source, and pretty much state-of-the-art. It does inference using Gibbs sampling on a Factor Graph (Markov models/MLNs can be represented as Factor Graphs).

[1] http://www.cs.ubc.ca/~murphyk/Papers/kv-kdd14.pdf

[2] http://deepdive.stanford.edu/doc/general/kbc.html
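To make that last point concrete, here is a toy Gibbs sampler over a two-variable binary factor graph. The factor weights are invented, and this is a sketch of the general technique, not DeepDive's implementation:

    import random

    # Tiny factor graph over binary variables A and B: one unary factor on
    # each, plus a pairwise factor rewarding agreement. Weights are made up.
    unary = {"A": {0: 1.0, 1: 2.0}, "B": {0: 2.0, 1: 1.0}}

    def pairwise(a, b):
        return 3.0 if a == b else 1.0  # symmetric, so argument order is safe

    def p_one(var, other_val):
        # P(var = 1 | other), from the factors touching var, normalised.
        w0 = unary[var][0] * pairwise(0, other_val)
        w1 = unary[var][1] * pairwise(1, other_val)
        return w1 / (w0 + w1)

    def gibbs(sweeps=100_000, burn_in=1_000):
        a, b, hits = 0, 0, {"A": 0, "B": 0}
        for i in range(sweeps + burn_in):
            # Resample each variable from its conditional given the other.
            a = 1 if random.random() < p_one("A", b) else 0
            b = 1 if random.random() < p_one("B", a) else 0
            if i >= burn_in:
                hits["A"] += a
                hits["B"] += b
        return {k: v / sweeps for k, v in hits.items()}

    print(gibbs())  # approximate marginals P(A=1) and P(B=1)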

