et1337's comments | Hacker News

I’m no Google fan, but deprecating XSLT is a rare opportunity to shrink the surface area of the web’s “API” without upsetting too many people. It would be one less thing for independent browsers like Ladybird to worry about. Thus actually weakening Google’s chokehold on the browser market.

> but deprecating XSLT is a rare opportunity to shrink the surface area of the web’s “API” without upsetting too many people

There's a lot of back and forth on every discussion about XSLT removal. I don't know if I would categorize that as 'without upsetting too many people'


We are largely the nerds that other nerds picked on for being too nerdy. I’d bet that a hugely disproportionate share of all the people in the world who care about this subject at all are here in these conversations.

Actual normies don’t think of the Internet at all. They open Facebook The App on their iPads and smartphones and that’s the internet for them.

Passionate nerds giving a shit can build a far more rosy world than whatever that represents, so I don’t see why anyone should give a damn if this happens to be somewhat niche.


At $WORK we have taken interface segregation to the extreme. For example, say we have a data access object that gets consumed by many different packages. Rather than defining a single interface and mock on the producer side that can be reused by all these packages, each package defines its own minimal interface containing only the methods it needs, and a corresponding mock. This makes it extremely difficult to trace the execution flow, and turns a simple function signature change into an hour-long ordeal of regenerating mocks.
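
Concretely, the shape looks something like this (a sketch; names are made up, not our actual code):

    // Hypothetical consumer package. It declares only the slice of the shared
    // DAO that it actually calls, instead of importing a producer-side interface.
    package report

    import "context"

    // User mirrors whatever concrete type the DAO returns.
    type User struct {
        ID   string
        Name string
    }

    // userReader is this package's private, minimal view of the DAO. Every other
    // consumer repeats this exercise with its own method subset (plus its own
    // generated mock), which is why one DAO signature change fans out everywhere.
    type userReader interface {
        GetUser(ctx context.Context, id string) (User, error)
    }

    // Summarize depends only on userReader, so its tests can pass any small fake.
    func Summarize(ctx context.Context, r userReader, id string) (string, error) {
        u, err := r.GetUser(ctx, id)
        if err != nil {
            return "", err
        }
        return "report for " + u.Name, nil
    }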

> a single interface and mock on the producer side

I still believe that in Go it is better to _start_ with interfaces on the consumer side and focus on "what you need" instead of "what you provide", since there's no "implements" concept.

I get the mock argument all the time for having producer interfaces, and I don't deny that at a certain scale it makes sense, but I don't understand why so many people reach for it out of the gate.

I'm genuinely curious if you have felt the pain from interfaces on the producer that would go away if there were just (multiple?) concrete types in use or if you happen to have a notion of OO in Go that is hard to let go of?


> or if you happen to have a notion of OO in Go that is hard to let go of?

So much this. I think Go's interfaces are widely misunderstood. Oftentimes when they're complained about, it boils down to "<old OO language> did interfaces this way. Why won't Go abide?" There's an insistence on turning them into cherished pets, vastly more treasured than they ought to be in Go, where an interface is just a meaningless thin paper wrapper that says "I require these behaviors".


> Rather than defining a single interface and mock on the producer side that can be reused by all these packages

This is the answer. The domain that exports the API should also provide a high fidelity test double that is a fake/in memory implementation (not a mock!) that all internal downstream consumers can use.

New method on the interface (or behavioral change to existing methods)? Update the fake in the same change (you have to, otherwise the fake won't meet the interface and usages won't compile!), and your build system can run all tests that use it.
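
Sketched out (hypothetical names, not any particular codebase), the package that owns the interface also exports an in-memory fake, so adding a method to the interface breaks the fake's compilation in the same change:

    package userstore

    import (
        "errors"
        "sync"
    )

    type User struct {
        ID   string
        Name string
    }

    // Store is the interface the domain exports to its consumers.
    type Store interface {
        Get(id string) (User, error)
        Put(u User) error
    }

    // Fake is a high fidelity, in-memory implementation of Store, exported for
    // downstream tests. Add a method to Store and this stops compiling until the
    // fake is updated too.
    type Fake struct {
        mu    sync.Mutex
        users map[string]User
    }

    func NewFake() *Fake { return &Fake{users: make(map[string]User)} }

    func (f *Fake) Get(id string) (User, error) {
        f.mu.Lock()
        defer f.mu.Unlock()
        u, ok := f.users[id]
        if !ok {
            return User{}, errors.New("user not found")
        }
        return u, nil
    }

    func (f *Fake) Put(u User) error {
        f.mu.Lock()
        defer f.mu.Unlock()
        f.users[u.ID] = u
        return nil
    }

    // Compile-time check that the fake keeps up with the interface.
    var _ Store = (*Fake)(nil)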


> The domain that exports the API should also provide a high fidelity test double that is a fake/in memory implementation (not a mock!)

Not a mock? But that's exactly what a mock is: An implementation that isn't authentic, but that doesn't try to deceive. In other words, something that behaves just like the "real thing" (to the extent that matters), but is not authentically the "real thing". Hence the name.


There are different definitions of the term "mock". You described the generic usage where "mock" is a catch-all for "not the real thing", but there are several terms in this space to refer to more precise concepts.

What I've seen:

* "test double" - a catch-all term for "not the real thing". What you called a "mock". But this phrasing is more general so the term "mock" can be used elsewhere.

* "fake" - a simplified implementation, complex enough to mimic real behavior. It probably uses a lot of the real thing under the hood, but with unnecessary testing-related features removed. ie: a real database that only runs in memory.

* "stub" - a very thin shim that only provides look-up style responses. Basically a map of which inputs produce which outputs.

* "mock" - an object that has expectations about how it is to be used. It encodes some test logic itself.

The Go ecosystem seems to prefer avoiding test objects that encode expectations about how they are used and the community uses the term "mock" specifically to refer to that. This is why you hear "don't use mocks in Go". It refers to a specific type of test double.
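
Hand-rolled, that kind of expectation-carrying mock looks roughly like this (a sketch; names are made up):

    package notify

    import "testing"

    // mockNotifier is a "mock" in the strict sense: it records how it is used and
    // encodes expectations about that usage, so the test double itself carries
    // test logic.
    type mockNotifier struct {
        t        *testing.T
        wantUser string
        calls    int
    }

    func (m *mockNotifier) Notify(user string) error {
        m.calls++
        if user != m.wantUser {
            m.t.Fatalf("Notify called with %q, want %q", user, m.wantUser)
        }
        return nil
    }

    // verify asserts the interaction happened exactly once.
    func (m *mockNotifier) verify() {
        if m.calls != 1 {
            m.t.Fatalf("Notify called %d times, want exactly 1", m.calls)
        }
    }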

By these definitions, OP was referring to a "fake". And I agree with OP that there is much benefit to providing canonical test fakes, so long as you don't lock users into only using your test fake because it will fall short of someone's needs at some point.

Unfortunately there's no authoritative source for these terms (that I'm aware of), so there's always arguing about what exactly words mean.

Martin Fowler's definitions are closely aligned with the Go community I'm familiar with: https://martinfowler.com/articles/mocksArentStubs.html

Wikipedia has chosen to cite him as well: https://en.wikipedia.org/wiki/Test_double#General .

My best guess is that software development co-opted the term "mock" from the vocabulary of other fields, and the folks who were into formalities used the term for a more specific definition, but the software dev discipline doesn't follow much formal vocabulary and a healthy portion of devs intuitively use the term "mock" generically. (I myself was in the field for years before I encountered any formal vocabulary on the topic.)


> "mock" - an object that has expectations about how it is to be used. It encodes some test logic itself.*

Something doesn't add up. Your link claims that mock originated from XP/TDD, but mock as you describe here violates the core principles of TDD. It also doesn't fit the general definition of mock, whereas what you described originally does.

Beck seemed to describe a mock as something that:

1. Imitates the real object.

2. Records how it is used.

3. Allows you to assert expectations on it.

#2 and #3 sound much like what is sometimes referred to as a "spy". This does not speak to the test logic being in the object itself. But spies do not satisfy #1. So it seems clear that what Beck was thinking of is more like, say, an in-memory database implementation where it:

1. Behaves like a storage-backed database.

2. Records changes in state. (e.g. update record)

3. Allows you to make assertions on that change in state. (e.g. fetch record and assert it has changed)

I'm quite sure Fowler's got it wrong here. He admits to being wrong about it before, so the odds are that he still is. The compounding evidence is not in his favour.

Certainly if anyone used what you call a mock in their code you'd mock (as in make fun of) them for doing so. It is not a good idea. But I'm not sure that equates to the pattern itself also being called a mock.


> 3. Allows you to assert expectations on it.

I think this is the crux that separates Fowler's mock, spy, and stub: Who places what expectations.

Fowler's mock is about testing behavioral interaction with the test double. In Fowler's example, the mock is given the expectations about what APIs will be used (warehouseMock.expects()) then those expectations are later asserted (warehouseMock.Verify()).

Behavioral interaction encodes some of the implementation detail. It asserts that certain calls must be made, possibly with certain parameters, and possibly in a certain order. The danger is that it is somewhat implementation specific. A refactoring that keeps the input/output stable but achieves the goal through different means must still update the tests, which is generally a red flag.

This is what my original statement referred to, the interaction verification. Generally the expectations are encoded in the mock itself for ergonomics sake, but it's technically possible to do the interaction testing without putting it in the mock. Regardless of exactly where the assertion logic goes, if the test double is testing its interactions then it is a Fowler mock.

(As an example: An anti-pattern I've seen in Python mocks is asserting that every mocked object function call happens. The tests end up being basically a simplified version of the original code and logic flaws in the code can be copied over to the tests because they're basically written as a pseudo stack trace of the test case.)

In contrast, a stub is not asserting any interaction behavior. In fact it asserts nothing and lets the test logic itself assert expectations by calling the API. ie:

> 3. Allows you to make assertions on that change in state. (e.g. fetch record and assert it has changed)

How is that change asserted?

A Fowler stub would be:

    myService = service.New(testDB.New())
    myService.write("myKey", 42)
    assert(myService.read("myKey") == 42)

A Fowler mock would be:

    testDB = testDB.New()
    testDB.Expect(write, "myKey", 42)
    myService = service.New(testDB)
    myService.write("myKey", 42)
    testDB.Verify()

These concepts seem distinct enough to merit separate terms.

Fowler's spy seems to sit half-way between mock and stub: It doesn't assert detailed interaction expectations, but it does check some of the internals. A spy is open-ended, you can write any sort of validation logic, whereas a mock is specifically about how it is used.

I have used spies in Go basically whenever I need to verify side-effect behavior that is not attainable via the main API.
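
A sketch of that (hypothetical names, production and test code collapsed into one file for brevity):

    package signup

    import "testing"

    // Sender is the dependency whose side effects we want to observe.
    type Sender interface {
        Send(to string) error
    }

    // Register is the code under test; its return value alone doesn't reveal
    // whether a mail was actually sent.
    func Register(s Sender, email string) error {
        return s.Send(email)
    }

    // spySender records calls so the test can assert on them afterwards; unlike a
    // strict mock it carries no expectations of its own.
    type spySender struct {
        sent []string
    }

    func (s *spySender) Send(to string) error {
        s.sent = append(s.sent, to)
        return nil
    }

    func TestRegisterSendsMail(t *testing.T) {
        spy := &spySender{}
        if err := Register(spy, "a@example.com"); err != nil {
            t.Fatal(err)
        }
        if len(spy.sent) != 1 || spy.sent[0] != "a@example.com" {
            t.Fatalf("unexpected sends: %v", spy.sent)
        }
    }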

By Fowler's definition, mocks are a niche test double, and I suspect that much of what folks would call a mock is not technically a mock.


Yes, this is exactly the problem with go's recipe.

Either you copy-paste the same interface over and over and over, with the maintenance nightmare that entails, or you always have these struct-and-interface pairs, where it's unclear why there is an interface to begin with. If the answer is testing, maybe that's the wrong question to begin with.

So, I would rather have duck typing (the structural kind, not just interfaces) for easy testing. I wonder if it would technically be possible to only compile with duck typing in test, in a hypothetical language.


> I wonder if it would technically be possible to only compile with duck typing in test

Not exactly the same thing, but you can use build tags to compile with a different implementation for a concrete type while under test.
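
Roughly like this (a sketch; the tag name is invented, and you would opt in with go test -tags fakeclock):

    // clock_real.go — compiled in normal builds and normal tests.
    //go:build !fakeclock

    package clock

    import "time"

    // Now returns the real wall-clock time.
    func Now() time.Time { return time.Now() }

    // clock_fake.go — a separate file, selected only with -tags fakeclock.
    //go:build fakeclock

    package clock

    import "time"

    // Now returns a fixed instant so tests are deterministic, swapping the
    // implementation of a concrete function without introducing an interface.
    func Now() time.Time { return time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC) }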

Sounds like a serious case of overthinking it, though. The places where you will justifiably swap implementations during testing are also places where you will justifiably want to be able to swap implementations in general. That's what interfaces are there for.

If you cannot find any reason why you'd benefit from a second implementation outside of the testing scenario, you won't need it while under test either. In that case, learn how to test properly and use the single implementation you already have under all scenarios.


> The places where you will justifiably swap implementations during testing are also places where you will justifiably want to be able to swap implementations in general.

I don't get this. Just because I want to mock something doesn't mean I really need different implementations. That was my point: if I could just duck-type-swap it in a test, it would be so much easier than 1. creating an interface that just repeats all methods, and then 2. using some mock generation tool.

If I don't mock it, then my tests become integration test behemoths. Which have their use too, but it's bad if you can't write simple unit tests anymore.


> then my tests become integration test behemoths.

There are no consistent definitions found in the world of testing, but I assume integration here means entry into some kind of third-party system that you don't have immediate control over? That seems to be how it is most commonly used. And that's exactly one of the places you'd benefit from enabling multiple implementations, even if testing wasn't in the picture. There are many reasons why you don't want to couple your application to these integrations. The benefits found under test are a manifestation of the very same, not some unique situation.


Not really. Sometimes you just want to mock some bigger system that is still internal/local. And sometimes it is an external system, but it makes no sense to wrap some sdk in yet another layer, if you won't ever swap it out.

> Sometimes you just want to mock some bigger system that is still internal/local.

What for?


I 100% agree with what you've written, but if you haven't checked it out, I'll highly suggest trying mockery v3 for mocks: https://vektra.github.io/mockery

It's generally faster than a build (no linking steps), regardless of the number of things to generate, because it loads types just once and generates everything needed from that. Wildly better than the go:generate based ones.



AFAICT that uses go/types, loaded uniquely per execution via packages.Load¹, which is by far the primary reason why e.g. go.uber.org/mock (previously github.com/golang/mock) can become extremely slow.

mockery v3 does not do this. It type-checks just once for ALL mocks, regardless of the number, so it essentially does not grow slower as you create more mocks (since type checking is usually FAR slower than producing the mock).

1: https://github.com/maxbrunsfeld/counterfeiter/blob/000b82ca1...


What is the alternative though? In strongly typed languages like Go, Rust, etc.. you must define the contract. So you either focus on what you need, or you just make a kitchen-sink interface.

I don't even want to think about the global or runtime rewriting that is possible (common) in Java and JavaScript as a reasonable solution to this DI problem.


I'm still fiddling with this so I haven't seen it at scale yet, but in some code I'm writing now, I have a centralized repository for services that register themselves. There is a struct that will provide the union of all possible subservices that they may require (logging, caching, db, etc.). The service registers a function with the central repository that can take that object, but can also take an interface that it defines with just a subset of the values.

This uses reflect and is nominally checked at run time, but over time more and more I am distinguishing between a runtime check that runs arbitrarily often over the execution of a program, and one that runs in an init phase. I have a command-line option on the main executable that runs the initialization without actually starting any services up, so even though it's a run-time panic if a service misregisters itself, it's caught at commit time in my pre-commit hook. (I am also moving towards worrying less about what is necessarily caught at "compile time" and what is caught at commit time, which opens up some possibilities in any language.)

The central service module also defines some convenient one-method interfaces that the services can use, so one service may look like:

    type myDependencies interface {
        services.UsesDB
        services.UsesLogging
    }

    func init() {
        services.Register(func(in myDependencies) error {
            // init here
            return nil
        })
    }
and another may have

    type myDependencies interface {
        services.UsesLogging
        services.UsesCaching
        services.UsesWebCrawler
    }

    // func init() { etc. }
and in this way, each service declaring its own dependencies means each service's test cases only need to worry about what it actually uses, and the interfaces don't pollute anything else. This fully decouples "the set of services I'm providing from my modules" from "the services each module requires", and while I don't get compile-time checking that a module's service requirements are satisfied, I can easily get commit-time checking.

I also have some default fakes that things can use, but they're not necessary. They're just one convenient implementation for testing if you need them.


tbh this sounds pretty similar to go.uber.org/fx (or dig). or really almost any dependency injection framework, though e.g. wire is compile-time validated rather than run-time (and thus much harder for some kinds of runtime flexibility - I make no claim to one being better than the other).

DI frameworks, when they're not gigantic monstrosities like in Java, are pretty great.


Yes. The nice thing about this is that it's one function, about 20-30 lines, rather than a "framework".

I've been operating up to this point without this structure in a fairly similar manner, and it has worked fine in the tens-of-thousands-of-lines range. I can see maybe another order or two up I'd need more structure, but people really badly underestimate the costs of these massive frameworks, IMHO, and also often fail to understand that the value proposition of these frameworks often just boils down to something that could fit comfortably in the aforementioned 20-30 lines.
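
For a sense of scale, a sketch of what that one function can look like (hypothetical, not my actual code):

    package services

    import (
        "fmt"
        "reflect"
    )

    // Deps is the union of everything any service might need (logging, caching,
    // db handles, etc.). It satisfies the small UsesX interfaces.
    type Deps struct{}

    var inits []func() error

    // Register accepts any func(T) error where T is an interface satisfied by
    // *Deps. The check runs via reflect at registration time, so a dry-run init
    // (e.g. in a pre-commit hook) catches unsatisfied dependencies early.
    func Register(fn interface{}) {
        v := reflect.ValueOf(fn)
        t := v.Type()
        if t.Kind() != reflect.Func || t.NumIn() != 1 || t.NumOut() != 1 ||
            t.In(0).Kind() != reflect.Interface {
            panic(fmt.Sprintf("Register: want func(SomeInterface) error, got %v", t))
        }
        if !reflect.TypeOf(&Deps{}).Implements(t.In(0)) {
            panic(fmt.Sprintf("Register: *Deps does not satisfy %v", t.In(0)))
        }
        deps := reflect.ValueOf(&Deps{})
        inits = append(inits, func() error {
            out := v.Call([]reflect.Value{deps})
            err, _ := out[0].Interface().(error)
            return err
        })
    }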


yeah, if it's only 20-30 lines then it's likely overkill to do it any way except by hand.

most of the stuff I've done has involved at least 20-30 libraries, many of which have other dependencies and config, so it's on the order of hundreds or thousands of lines if written by hand. it's totally worth a (simple) DI tool at that point.


Maybe your actual issue is needing to mock stuff for tests to begin with. Break them down further so they can actually be tested in isolation instead.

My favorite place as a kid XD I still get excited thinking about it haha

I’ve heard this a lot, but… doesn’t Hetzner do the same?


Love this. Personally I like Excalidraw [0] for UI mock-ups but I might use a Google Sheet next time for sharing/multiplayer purposes.

[0] https://excalidraw.com/


The grid layout in spreadsheets works well for prototyping UI, but I also love that there is no grid system in Excalidraw. In other UI tools, I can't stop aligning every element, or it really bothers me. In Excalidraw I can be satisfied with a good-enough design. The same goes for other things like text size or colors.


I think the author is one of those folks who were able to fully grasp the beauty of the Git data model for the first time by switching to Jujutsu. It makes it easier to see the “DAG of commits” vision than Git with its index and stashes and confusingly named commands with fifty flags.


Yeah, exactly, and I've fruitlessly read too many guides on git's data model.

What was holding me back turned out to be the fact that git has too much magic (it updates branches automatically when you commit, rebasing "does stuff", conflict resolution was just arcane).

Jj exposes all that as simple, composable principles, making everything click.


> What was holding me back turned out to be the fact that git has too much magic

Considering jj is built on top of git, doesn't that mean jj has even more magic? That's like saying React is too magical so we should use Next.js instead (which is built on React).

Maybe you just mean that jj has a more intuitive CLI than git?


Jj uses git's data model, that doesn't mean it uses git commands for everything under the hood. Creating a commit with jj doesn't move the branch tag automatically, whereas git does. That's what confused me with git, it didn't expose the internals enough, so even though I had read about the data structures, it never clicked what was changed when, because git did it under the hood.


Which is why I always make sure to show that graph to co-workers new to git (we have a lot of code still on svn):

  git log --graph --oneline --decorate --all -100
I keep it as an alias, but it is annoying that seeing the whole structure is so hidden away.


Isn't this the same graph, that every Git GUI program shows?


New startup idea: point a C2PA camera at a screen and launder videos through it at $1 per minute.


I think this is a good kind of function coloring. It would avoid some scars I have from:

- seemingly harmless functions that unexpectedly end up writing four different files to disk.

- Packages that do I/O or start threads when you simply import them.


To make it a fair comparison, you also need to consider all the old-school Apache and PHP config files required to get that beautiful little script working. :) I still have battle scars.


Ahh, LAMP stacks… I remember there was a distro that had everything preconfigured for /var/www/ and hardened, but for the life of me I can’t remember its name.


A lot of distros did and still do that. Getting an Apache instance up and running with PHP running as a CGI process was just a matter of installing the right packages on RedHat-derived distros going back to the early 2000s, for example.


They weren’t hardened at all. Installing LAMP is one thing, ensuring it’s secure is another. Even RedHat would send an SA to your place to do that for you.


Fair enough. I wasn't getting the emphasis on hardening in your comment since the parent was just talking about the "battle scars" of configuration.

Re: hardening - I guess I deployed a lot of "insecure" LAMP-style boxes. My experience, mainly w/ Fedora Core and then CentOS, was to turn off all unnecessary services, apply security updates, limit inbound and outbound connectivity to only the bare minimum necessary w/ iptables, make sure only public key auth was configured for SSH, and make sure no default passwords or accounts were enabled. Depending on the application grubbing thru SELinux logs and adjusting labels might be necessary. I don't recall what tweaks there were on the default Apache or PHP configs, but I'm sure there were some (not allowing overrides thru .htaccess files in user-writeable directories, making sure PHP error messages weren't returned to clients, not allowing directory listings in directories without a default document, etc).

Everything else was in the application and whatever stupidity it had (world-writeable directories in shitty PHP apps, etc). That was always case-by-case.

It didn't strike me as a horribly difficult thing to be better-than-average in security posture. I'm sure I was missing a lot of obvious stuff, in retrospect, but I think I had the basics covered.


My point was there was a distro circa 1997-2003 or so that had all of that pre-baked. No having to mess with SELinux (or disabling it!), iptables, php.ini, apache's httpd.conf, or any of that other than putting your project into /var/www/ and doing a chown -R www on it.


You actually don't need to. Just upload this little php script to a shared host for $1/mo and call it a day.


You’re getting a lot of negative feedback but I think it’s mostly just people who don’t speak (or actively hate) enterprise jargon. Hacker News is not super enterprisey. Just don’t respond. I work for a company called StrongDM and we basically do exactly what Octelium does. I was able to determine that pretty quickly from your website which is not common. Enterprise security is just inherently a buzzwordy, vague cloud of companies all competing to own the magic quadrant.

That said, you are also including some buzzwords on your homepage that appeal to Hacker News folks, like “self-hosted”. That will get a blank stare from enterprise folks.

So I think you should pick one audience or the other. Tailscale took the strategy of appealing to Hacker News types and then shifting up market from there. My company appeals directly to the biggest enterprises we can find and the difference is stark.

I think you’ll get less negative feedback if you choose one of these target audiences and focus on them exclusively.

edit: by the way, Octelium looks awesome, well done!


Thank you really for your kind comment. I am not really against negative comments because they might actually lead to improvements. And btw I am personally a fan of what StrongDM has been doing lately, especially when it comes to ABAC and Cedar. This is what I've been trying to achieve in Octelium with CEL and OPA.

