This article seems super weird -- do people actually use Grunt/Gulp to run a single command at a time? Most uses that I've encountered are along the lines of 'we've got this huge build process involving compass, sass, uglify, ng-annotate, and 13 other things', and Grunt/Gulp allow you to do the whole thing, automatically re-running on file changes, with a single command.
The author's argument is actually "NPM is a better build tool than Grunt or Gulp."
He followed up with an extensive tutorial on how to do complex builds with NPM [1]. The tutorial concludes with an example where NPM is used to:
* Take my JS and lint, test & compile it into 1 versioned file (with a separate sourcemap) and upload it to S3
* Compile Stylus into CSS, down to a single, versioned file (with separate sourcemap), upload it to S3
* Add watchers for testing and compilation
* Add a static file server to see my single page app in a web browser
* Add livereload for CSS and JS
* Have a task that combines all these files so I can type one command and spin up an environment
* For bonus points, open a browser window automagically pointing to my website
He accomplishes this with ~20 lines in the "scripts" object of package.json. The whole process is initiated with a single `npm run dev` command. He asserts that "to do the equivalent in Grunt, it'd take a Gruntfile of a few hundred lines, plus (my finger in the air estimate) around 10 extra dependencies."
Agreed, the ability to watch for changes and react to them is one of the better features. Plus, it allows the standardisation of practices across a team (build process, testing process) and the codification of the application's dependencies.
You (and the rest of the comments to your post) miss the point: now you need Grunt, a Gruntfile, the tools for the build process, and Grunt-specific wrappers.
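To give a sense of the shape (this is a sketch, not the tutorial's actual file; the paths and tool choices are hypothetical), such a "scripts" object can look like:

```json
{
  "scripts": {
    "lint": "jshint src/*.js",
    "test": "mocha test/",
    "build:js": "browserify src/index.js | uglifyjs -m -c > dist/app.min.js",
    "build:css": "stylus src/style.styl -o dist/",
    "build": "npm run lint && npm run build:js && npm run build:css",
    "serve": "http-server dist/ -p 8080",
    "dev": "npm run build && npm run serve"
  }
}
```

Each entry is just a shell one-liner calling the tools' own CLIs, which is the whole point being argued.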
What you instead could have had was a simple script running the tools with the proper command line arguments.
Strawman... I'm neither against build tools nor the idea of a js build tool.
Make would basically just run CLI commands in this instance, so it's exactly what I like. I like Maven, but I hate when I need a special Maven plugin for the tool I want to use. All these tools expose a nice CLI that can be used; no need to complicate things by adding more abstractions. If it weren't for how Grunt can watch files, I would have scrapped it for our project, because the Gruntfile config for our build process is impossible to understand and maintain, even though the end results are just some simple commands to various tools.
No, you're right. The use case for Grunt/Gulp is in the name: task runners, to automate the running of tasks, to stabilise and formalise the build process to the point of repeatability across a team.
It's a bit strange to have him bash on the Grunt/Gulp ecosystem's plugin dependence and SemVer configs, and then go on to recommend using npm instead, which is also dependent on node and its ecosystem.
That said, the npm solution looks attractive for simpler build flows, but hook in a fat custom plugin to a watch task on a subset of files and I'm sure you'll get the same complexity in package.json.
Yeah last time I saw this article that's pretty much what people were saying.
It's like saying you shouldn't use a web framework, and using as an example a website where you type a word into a textbox and it alerts whatever you typed. Yeah, of course that's a simple project so there is no need for higher organization, but it's hardly an argument to stop using frameworks altogether.
I simply use Make for all my web projects, whether it's a Ruby JSON API or a React front-end. It's reliable, pre-installed everywhere, and can wrap anything in one consistent CLI. Why have to remember whether to type lein repl, pry, or rails console, when make repl will do every time? Ditto bundle install, npm install, lein deps => make deps. make server can easily wrap whatever command launches the server, and make dev can easily use a filesystem-watching tool to restart make server on any change. And beyond that, make can be used as usual to "compile" the project, i.e. run whatever static analysis and preprocessing is necessary in the web world (jshint, JSX, etc.).
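Not my actual Makefile, but a minimal sketch of the wrapper idea, with hypothetical commands behind each phony target:

```make
.PHONY: deps repl server dev

deps:    # could be bundle install, npm install, or lein deps
	npm install

repl:    # could be rails console, pry, or lein repl
	node

server:  # whatever launches the dev server
	node server.js

dev:     # restart the server on any change; any file watcher works here
	nodemon --exec "make server"
```

The point is that `make deps`, `make repl`, and friends stay the same across projects even when everything behind them changes.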
We default to Make too since we're all (all two of us) running a *nix OS. I'm currently sub-contracting for an agency with devs all running Windows so I'm hoping `npm run` works well enough. If I had it my way I'd just tell them to install MinGW but they might take that as me telling them to go fuck themselves.
Keep in mind that make isn't going to work on Windows. For a lot of people, that's okay (including myself). Even so, a ton of developers work on Windows.
It's different in the way that it's trivial to use npm to install Grunt and run builds on Windows, but not trivial to get a Makefile to run (if you don't want to resort to running Cygwin bash all the time).
You have to install make somehow. That's the problem. You're tying all these cross-platform tools together with something that isn't cross-platform. Grats.
They have a setup.exe that downloads the components you select. I gather it's not as painless as "apt-get install cygwin" would be, but it's as painless as installing Windows software is.
> and slow too
They are painting a Linux environment on top of a Windows OS. That's no small feat. In return you get a sensible file space, a decent command line, and all the development tools you'd need on Linux. It's slower than a Linux box running on the metal, but it's as fast as a VM would be and has direct access to the host operating system (and, for me, a working Python environment where I can simply say "pip install ipython" and everything just works). I would not even consider working from Windows without it.
> Why have to remember whether to type lein repl, pry, or rails console, when make repl will do every time?
This is why I don't think the CLI is a very efficient interface for operating the computer. There are more commands than you can remember, so you're forced to take the commands that you can't remember (or can't be bothered to type) and stick them into a file, thereby creating new commands that must be identified, understood, and remembered by the next developer, who should really only use them if they know all of the original commands in the file.
Regarding plugin dependencies, gulp maintains a blacklist of plugins that are not recommended for various reasons (bloat, poor design, unnecessary, etc.): https://github.com/gulpjs/plugins/blob/master/src/blackList.... They also link to an article on their wiki about why you should resist the urge to create gulp plugins: http://blog.overzealous.com/post/74121048393/why-you-shouldn...
Additionally, they maintain a collection of examples of common use cases that tend to send people searching for plugins unnecessarily: https://github.com/gulpjs/gulp/tree/master/docs/recipes Other popular projects, such as browserify and the karma test runner, also recommend against plugins for integration into gulp.
There are a number of great support packages for gulp that handle the handful of special file-stream manipulations you may need to perform (vinyl-buffer, vinyl-source-stream), and since gulp operates on node streams (object mode), many useful npm packages already apply.
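One such recipe, roughly as the docs present it (a sketch assuming gulp 3.x-era APIs and a hypothetical entry point):

```js
var gulp = require('gulp');
var browserify = require('browserify');
var source = require('vinyl-source-stream');

// No gulp-browserify plugin needed: browserify's bundle() returns a
// plain node stream, and vinyl-source-stream wraps it in a vinyl file
// that the rest of the gulp pipeline understands.
gulp.task('bundle', function () {
  return browserify('./src/app.js')   // hypothetical entry point
    .bundle()
    .pipe(source('bundle.js'))        // name for the in-memory file
    .pipe(gulp.dest('./dist'));
});
```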
I found all build systems for all platforms to be a total PITA, and eventually I just resorted to using a shell or Python script to invoke the commands I want. With this method you can very easily add user prompting (do you want to build debug or release?), database reporting of build and test results, source control integration, etc. Whatever I learn here can be applied and reused for new things in the future, instead of knowing all the 150 flags of whatever build chain of the day.
This does not mean invoking the compiler manually on every file; you invoke the smallest possible build file for your intended platform, just for building and absolutely nothing else. Compiling C projects without make would otherwise take hours instead of minutes.
Really, Gradle requires you to read the first 12 chapters of the manual before you can even begin doing something more advanced than running compile. MSBuild likewise; it's like a completely new programming language, except that it's written in XML. Bamboo is just like imperative programming, except you have to drag and drop boxes that invoke commands or plugins (yep, another plugin hell). CMake works kind of painlessly, but it also requires a lot of black magic when you want to add dependencies outside of the C build chain.
The only one that actually doesn't make you cry is make, because it's so simple: it's just doing one thing instead of trying to tie itself into the entire ecosystem badly. Make also has its problems, but when you start hitting those it's more of a sign that you are trying to add too much weird logic that should not be in the build script to begin with.
Hmm, I don't think not fully understanding existing tools and not wanting to read documentation is a good reason.
You can easily add prompting with Maven, Gradle, etc., and test result reporting and source control integration are all built in.
The 150 flags you mention provide flexibility; you don't need to know them all to get started. With your custom build process, instead of finding a flag that does what you need, you need to implement what you need from scratch and document it for future developers.
If you use a standard build tool it's also easier to hire someone who already knows it, plus understanding them will help you fit into new teams more easily.
Unless maybe this is for personal projects, in which case I think it's probably less of an issue.
> Gradle requires you to read the first 12 chapters of the manual before you can even begin doing something more advanced than running compile
The purpose of Gradle, like other products in the Groovy ecosystem, is to sell consulting fees and conference seats for their backers, in this case Gradleware. Hence providing easy-to-use doco would defeat the point of the product.
You still have to get all the tools to all build environments (developer computers, build servers). For JavaScript you can write the build script in JavaScript without Gulp, Broccoli, or Grunt; it works great. But you still want some tools to be installed, and then you probably start to use NPM anyway.
You cannot compare something like Maven/Gradle/SBT/Leiningen to Make.
Make doesn't do incremental building and testing, it doesn't do test coverage reports, it doesn't download the dependencies for you, it doesn't build the Javadoc documentation for you, it does not package and publish your signed artifacts on Maven Central, it does not wrap your stuff in an OS specific installer, etc, etc...
And you know, these things are standard stuff that one would like to do for every project. It's unfathomable why anybody would want to do this manual plumbing for each project with Make. You call Make "simple"; I call it "dumb", and it's the reason why people end up with aberrations of nature, such as Autotools.
> Gradle requires you to read the first 12 chapters of the manual before you can even begin doing something more advanced than running compile
You're exaggerating. Compiling a project is a hello world of a couple of lines. What "advanced" usage are you talking about anyway?
Say what? That's one of the main purposes of make.
> it doesn't do test coverage reports, it doesn't download the dependencies for you, it doesn't build the Javadoc documentation for you, it does not package and publish your signed artifacts on Maven Central, it does not wrap your stuff in an OS specific installer, etc, etc...
... without the appropriate rule supported by whatever tools you prefer. Which you, once you have a workflow that works for you, can put in a file and include by reference in your makefiles going forward.
It's perfectly possible that it's simpler for you to use one of the tools you list than to do so, but the nice thing about relying on basic make is that very often people will want to tweak your assumptions about what "one would like to do for every project":
> And you know, these things are standard stuff that one would like to do for every project.
Most of what you listed is stuff that I rarely want to do, and some of them are things I've never done.
> I call it "dumb" and it's the reason for why people end up with aberrations of nature, such as Autotools.
The reason people end up with evil stuff like autotools is usually to (try to) handle platform API differences, not usually to handle any of the stuff you listed. We seem to agree that Autotools is an awful mess, though.
>And you know, these things are standard stuff that one would like to do for every project
And it's totally stupid, and the stupid persists, because 'it's standard' now. This is how stupid stuff becomes standard - people say "hey, this stupid stuff - it's standard now".
You're using the argument that it's a good thing to have to do all of that stuff with one tool... really? (Besides, Make can do all of that - you just have to configure it to do so, like any other tool.)
It's interesting to watch the commenters on your post say that the way to fix the problems with tools is with a tool to fix the tool, which is at least one of the issues outlined in the article.
Looks like he just traded grunt/gulp for browserify which is also a build tool with lots and lots of configuration and plugins and I'd rather be using webpack in that category.
Just switch to webpack and you don't have to bend npm or browserify to your will either.
What NPM can't do is the async workflow of Grunt watch (or serve), dynamically triggering specific (sub)processes in real time. Turns out, this is the main way I'm using Grunt.
Also, some plugins are complicated (or dynamic) to configure. His examples were very simple; he could have just put them into a batch file as well.
The author addresses this issue directly in his follow up post [1]. He describes two ways to get watch functionality when using NPM as a build tool.
First, "most tools facilitate this option themselves - and usually are much more in tune with the intricacies of the files that should be listened for. For example Mocha has the -w option, as does Stylus, Node-Sass, Jade, Karma, and others." These options are simply invoked when the tool is called from NPM.
Second, "not all tools support this, and even when they do - you might want to compose multiple compile targets into one task which watches for changes and runs the whole set. There are tools that watch files and execute commands when files change, for example watch, onchange, dirwatch, or even nodemon."
I can fully understand not liking Grunt; its configuration is impossible to read and doesn't make a ton of sense to me. Gulp, on the other hand, is VERY easy to read, and I haven't had any issues with my colleagues adding to or editing the gulpfile.
Is running `gulp jshint` similar to running `jshint *.js`? Yes, but this is not a fair comparison. First of all, one of those can be run in the root of the project while the other needs to be run in the right location (or have the folder in the path) so that we don't jshint libs we didn't write. The author also glosses over the fact that people rarely run `gulp *` and in fact will run `gulp build` (or a similarly named task) that will call ALL of the other tasks you need to run. Nor does the author mention how wonderful `gulp watch` (or a similarly named task) is for watching files for changes and then running tasks when certain files change.
I can tell you with 100% certainty that without gulp we would not be jshint-ing our code, we would probably still be using regular CSS and not Sass (SCSS-style), would still have a large, gross master.js file instead of multiple js files with logic cleanly separated out, etc. Could we use make? Yes, but having worked at a company that used make to build various parts of their web app, I can tell you that this gets really gross really fast; feel free to disagree with me, but after seeing how bad it can get I won't touch make for web projects again.
Gulp (I won't argue for Grunt, I don't like it at all) is easy to write and easy to read, and running the programs/tasks Gulp runs for us individually would have been a non-starter. As far as npm/make goes, it would have been harder to do everything we are doing, and I'm not even sure `gulp watch`-type features would have been possible without escalated privileges to install other software.
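For comparison, a minimal sketch of that pattern (gulp 3.x-style task arrays; the task names and globs are placeholders):

```js
var gulp = require('gulp');

// Individual tasks ('jshint', 'sass', 'scripts') are defined elsewhere.
// The composite task runs them all with one command: `gulp build`.
gulp.task('build', ['jshint', 'sass', 'scripts']);

// `gulp watch` re-runs only the relevant tasks when certain files change.
gulp.task('watch', function () {
  gulp.watch('src/**/*.scss', ['sass']);
  gulp.watch('src/**/*.js', ['jshint', 'scripts']);
});
```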
I am very comfortable with make, my coworkers however are not. Every preprocessor comes with a command line interface, easily accessible and understood.
The only useful feature of these build tools is hot reloading. The `gaze` npm package even provides a somewhat working fs.watch interface which works well across platforms.
I am not sure if that justifies 70mb of additional dependencies and the "we can solve that with a grunt plugin" mentality devs get. Our Gruntfile is 440 lines long :/
I'm a novice-ish web developer with decent UNIX knowledge. For my latest Angular project, which involved some CSS preprocessing, I spent a few hours on learning and setting up one of these tools.
Then I found out what it was actually doing, and replaced it with a small shellscript that NPM calls.
I don't understand the attention this is getting - sure, you don't need grunt on a lot of projects, I never start a personal project with it and only a fraction of my personal projects ever have me add it.
However, for work it saves a stupid amount of time. grunt watch - that's enough of an argument alone for me to use it. Then add in a Less compiler and live reload, and Less is now magically being compiled to CSS and appearing in my browser a fraction of a second after I save a file.
Sure, you could write a script to do that, but by the time you've got it working, you essentially wrote a grunt plugin, albeit one that isn't extensible and doesn't have a community of thousands of developers writing plugins and documentation for it.
I think the use case of this author is very trivial. I don't see how I could use npm for a much more complicated build process with different targets (live, staging, local) or piping (coffee -> js -> min.js etc).
I'm still waiting for the day when the JS community will discover tup [1]. Tup's greatest strength is that it can discover build dependencies automatically, by monitoring file I/O in any kind of compiler or build script.
Many source formats eventually gain the capability to include other files (e.g. jade partials) or reference them in other ways (e.g. you can check the size of an image while you are generating sprite CSS fragments) as part of the build process, and this is where Grunt or Gulp or any other naive build system will fail immediately. Specialized build systems can handle the dependency checking for particular build targets (e.g. CMake for C++), but tup makes it extremely easy to have proper minimal rebuilds with any kind of compiler.
There is also an automatic re-build mode that's triggered as soon as you save the source file in the editor and it's a joy to use in web projects.
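For the curious, a Tupfile rule looks roughly like this (a sketch from memory; the stylus invocation and paths are hypothetical):

```
# Build each .styl file into build/<name>.css. Tup traces the compiler's
# file I/O, so @imported partials become dependencies automatically.
: foreach src/*.styl |> stylus %f -o build |> build/%B.css
```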
It's hell for those of us outside of the Unix ecosystem. I have MSYS on my Windows machine and make works only half of the time.
Also, the learning curve is just not as easy, nor the files as readable. With Gulp and Grunt you are exploiting the fact that almost everyone can write a couple of lines of JavaScript. The only thing you are adding is an API composed of three to four functions.
If you look at make on the other hand, you'll need to learn quite a lot.
Yes, I'm on Windows, and I already hit the case of an npm package I would have liked to contribute to, but the build process was using make and I couldn't get it to work...
It's a bit disappointing, when you have a very good multiplatform ecosystem, to see stuff like that. Sometimes it's people "forgetting" there's also Windows (and I can get that), but other times it's just a big "fuck you" to Windows users.
I'm not sure what precisely you're referring to (proprietary software/drivers?). I love Linux; I used it almost exclusively for 5 years before my current .NET job.
I'm talking here about pure JavaScript npm packages, something you can legitimately expect to be able to build on Windows, and yet some people make childish judgements about Windows users and choose tools which exclude a part of the community (not always the case or the reason, but it happens).
On a similar subject, I stopped counting the times I've seen someone get told to get a Mac when discussing a problem on cross platform tools.
One reason might be so that contributors on Windows don't have to install cygwin and nobody has to worry about the differences between various flavors of make. If you want things to work cross-platform, it's easier to start with a portable language.
Because it isn't portable, not even across UNIX/POSIX systems.
Each make has its own set of extensions outside of what POSIX requires.
Additionally, for almost every task you need to call out to external tools that also have their own set of compatibility issues across UNIX/POSIX systems.
Ideally everyone writing a Make script would need to read the POSIX standard and only use common features, but since that isn't the case, scripts break across systems.
I know people who develop (and use make) on BSD and OS X (I do), but, seriously, is any OS on our lists (OS X, (Open)Solaris-derivations and *BSD excepted) likely to be the target for the kind of software Grunt is used to develop?
But autotools/autoconf solve a whole set of different cross-platform problems to "make runs and does things". The make syntax doesn't change across platforms; nor does the sh syntax. What you execute might change, but I'm not sure that's too much of a problem for the usual web pipeline use case.
I'm not sure how serious your comment is but this was one of the reasons for the invention of Apache Ant.
For years, the front page of Ant had a rant towards Makefiles on it, including this highly unprofessional quote: '"Is my command not executing because I have a space in front of my tab!!!" said the original author of Ant way too many times.'
Make has really horrific syntax and semantics, and doesn't integrate as nicely if you're dedicated to JavaScript. I mean, the Unix philosophy is about small programs that work together, not "only use old Unix tools".
The "horrific syntax and semantics" claim seems pretty drive-by, the variables and some function names might be arcane, but newer releases have nicer aliases for those who care for them. I don't know what is so horrific about the semantics though, they are pretty straightforward. A Makefile is a set of rules, each rule being a target file, its dependencies, and the recipe to turn the dependencies into the target. Make can figure out which steps need to actually run based on the states of the targets and dependencies (prereqs in Make terms). It's a really powerful way to describe the relationships and transformation steps for files, and you get partial builds for free! Not that make is without problems (the initiative behind DJB's redo addresses them at length) but unsubstantiated FUD isn't necessary.
...but to be fair, regardless of the make syntax, you've also got to look at the history of make, and what it's designed to achieve.
Make is for executing shell commands on file patterns with dependency rules.
It's not designed to use an ecosystem of plugins or have long-running processes (e.g. to run a local server or watch for file changes). It's not designed to be a scripting language (although it can be, since it's technically Turing complete).
Now, when you look at other things which have come along to replace make as a build tool: cmake, grunt, gulp, ant, rake, maven, scons, premake, etc.
I think it's probably a little bit supercilious to suggest that all the people building these tools were just too retarded to realize how good make was.
More likely, they had specific needs that make didn't address.
Rust, for example, until recently had a reasonable make-based solution, but they decided to deprecate it in favour of Cargo, because it was simply too difficult to support in an appropriate cross-platform manner.
Make isn't a bad tool for doing some very specific things. Specifically, building C code on Unix-ish systems.
Is it the right tool for fetching the dependencies of and invoking a large set of ruby/javascript/python scripts and plugins to build the assets for a website, running a local development server and watching file changes and pushing livereload changes to the browser as the files change?
No. Make is absolutely rubbish at doing those things.
...not because make is rubbish, but because it's not for doing those sorts of things.
> Make is for executing shell commands on file patterns with dependency rules.
Yep absolutely. Make is a system for generating files from other files, it is that simple. Compiling templates and code, generating source maps, concatenating, minifying -- many of these tasks all fall under that umbrella.
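As a sketch of that umbrella (hypothetical paths; assumes the uglifyjs CLI is on PATH):

```make
JS_SRC := $(wildcard src/js/*.js)

# One file generated from other files: concatenate, then minify.
# $^ expands to all prerequisites, $@ to the target.
dist/app.min.js: $(JS_SRC)
	mkdir -p dist
	cat $^ | uglifyjs -m -c > $@
```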
> It's not designed to use an ecosystem of plugins or have long running processes (eg. to run a local server or watch for file changes).
You're right, I wouldn't advocate that either. Procfiles and tools like foreman (and its various reimplementations, like honcho) are for managing those processes: development servers, file watchers, etc. That's actually my own personal approach: I go for the Makefile/Procfile combo and run `foreman start` to boot up a server and any other processes that project might need. Works pretty well for me.
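A sketch of such a Procfile (the make targets are hypothetical):

```
web: make server
watch: nodemon --exec "make assets"
```

`foreman start` then runs and multiplexes both long-running processes, leaving the Makefile to deal only with file generation.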
> Make isn't a bad tool for doing some very specific things. Specifically, building C code on Unix-ish systems.
That's a pretty narrow view, like I said earlier, it's good for any application where you're generating files from other files. Building an ebook, managing unwieldy SSH configurations (smdh at lack of include directive in SSH config), etc.
So yeah I agree make is no good at being an all-in-one everything-and-the-kitchen-sink suite. And that's great. It has one job and does it well enough, and is even better complemented by other tools. Some people like the suite approach better, but I wouldn't disparage make for something it wasn't intended to do to begin with.
Great explanation of the limitations of make. Regarding when someone says make is available everywhere and forgets Windows: I think we can start saying JavaScript/Node is everywhere (not preinstalled, but easily available). It is probably more true.
I wouldn't use make for file watching and live reloading. I would express the Sass and CSS dependencies in a Makefile, and then describe the dev server, file watching[1], and live reloading[2] processes in a Procfile, using something like foreman[0].
> Of course, to get this Grunt configuration to work you'd still have to run grunt jshint in the terminal, which as it turns out is no shorter than jshint *.js.
That's only true if you're not using task arrays. I'm not sure what most people are doing, but that's covered all my bases, so I run Grunt once and I'm done. Is installing a plugin really bloat? It's only 5 lines in the example, but plugins provide the flexibility needed, since many projects have different directory structures and need different options.
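For reference, this is the alias-task mechanism (real Grunt API; the task names are placeholders):

```js
module.exports = function (grunt) {
  // ...individual task configs and loadNpmTasks calls go above...

  // One `grunt build` (or a bare `grunt`) runs the whole chain in order.
  grunt.registerTask('build', ['jshint', 'concat', 'uglify']);
  grunt.registerTask('default', ['build']);
};
```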
I'd say that even if the plugin you want doesn't exist (and most do for reasonably popular tools), well, make one yourself. There's plenty of documentation and countless examples.
It's strange that the author seemingly criticizes having to write a plugin yourself and then goes on to suggest writing something similar yourself (if you want dynamic options, etc., you will be writing a wrapper around your package.json).
I use Gulp to build all of our frontend dependencies and it's great.
Before that I was using Grunt, and it was alright. My only complaint was how unbearably slow it was.
It's true that when you're just doing one-off things you probably don't need something like Grunt or Gulp.
For certain stuff, it's usually a lot easier to call programs directly than to try to call them from Gulp; for example, Protractor.
What I do is I put all my tasks (some of which are Gulp tasks, others which are just calling the executable directly) inside of the npm scripts section, so you end up with a really nice consistent interface.
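A sketch of that consistent interface (hypothetical config file names):

```json
{
  "scripts": {
    "build": "gulp build",
    "watch": "gulp watch",
    "test": "karma start karma.conf.js --single-run",
    "test:e2e": "protractor protractor.conf.js"
  }
}
```

Whether a task is backed by Gulp or by a bare executable, everyone just types `npm run <task>`.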
A lot of people get too caught up in small details when really most people just wanna get their job done. Our build/dev processes are complicated enough that using Gulp alleviates a lot of pains.
If you ask me, Gulp provides a pretty elegant syntax. Before using Gulp we had a custom script to build the project. Using Gulp has been much better.
His comments about the Gulp ecosystem not fully supporting streaming are true. The onus of implementing streaming correctly in plugins is on the plugin author. Unfortunately, there are a few plugin developers who are making bad plugins. You have to understand which libraries you are installing before blindly running npm install.
Something which is very interesting about Gulp is its underlying virtual file system, Vinyl: https://github.com/wearefractal/vinyl The entire ecosystem is now improving as more Gulp plugins are being decoupled into Vinyl virtual-file-system adapters (think sftp / http / ftp / webdav).
I decided to code my own build library (https://github.com/jeswin/fora-build) instead of Grunt. Documentation needs to be better, so don't use this yet.
a) I wanted to use simple shell commands, like cp and mv. b) Instead of trying to learn how to use a plugin, it's often easier to just write a few lines of code. c) I wanted to use async goodness with generators. d) The important task the build library does is "watching".
In a project I worked on recently, we started using it because it was just "cool".
It only had a jshint task, but gradually, as the project evolved, we added lots of tasks, including minifying assets, configuring builds for different API endpoints (prod.api/dev.api), creating a zip file, and deploying the code.
No. I refuse. I use CoffeeScript and SCSS and often install components via bower. I like grunt watch+serve+livereload to drive my development server environment. I like continuous testing with karma. Grunt helps me manage all of this centrally. npm simply does not have the functionality to be a legitimate replacement.
I've found Grunt useful, as I ended up using it for a large variety of tasks mostly related to building. However, I too decided to stop using it, but for a different reason than the article's.
Grunt and Gulp kind of create a siloed set of tasks that are annoying to run inline with the rest of the node application (different contexts). I never think I'm going to run into it until I somehow run into a situation where I want to run or re-run a specific task on an application that is already deployed. To do so now, I have to install grunt on a production system and restart the application... or I simply code up my tasks as reusable bits of JavaScript that the normal application can use, and use npm to run them under normal conditions.
Honestly Grunt does help get things going really quickly but there are a lot of immature things about it and Gulp.
I have some similar feelings about buildout (http://www.buildout.org). Both Grunt and Buildout (and Maven and so many others) can do many things beyond running commands based on their dependencies and that's what makes me uncomfortable with them: they violate the do-one-thing principle. They try to do everything, be The One Tool To Do It All.
I am myself guilty of somewhat abusing make (https://github.com/rbanffy/testable_appengine), convincing it to do things it was not really designed to do, but that's only because make is so easy (and because it's already there on every machine I touch - or shortly after I touch them).
Dude's big example was running a single tool with their Grunt/Gulp setup. Who does that? I kept expecting the article to explain why using Grunt or Gulp was a bad thing when I want to do multiple things at once without running multiple scripts. What if I want my JavaScript to be hinted/linted, concatenated into one file, and minified every time I save a JavaScript file, plus tests should be run, plus I want the final JavaScript file to be saved in my build/ directory?
Just skimmed through the article and wanted to point out that it's a joy to read. Clear, readable fonts, good use of spacing, great code snippets with proper syntax highlighting, bravo to the author, I wish more technical articles were going the extra mile like that.
The author's npm scripts approach is great for a small-scale project needing to run just jshint, but if you start getting into something serious, running more than one task multiple times is a serious dent in your workflow.
This guy has obviously not built a large JS app if he thinks he can rely on NPM for building. I actually thought he would promote Make, which would have made much more sense.
tl;dr: The author meant to say "Use Browserify instead of Grunt/Gulp" but then got all confused and wrote some unrelated stuff about how npm has a case..esac statement built in.
While we're on the subject, might I suggest using webpack instead of browserify? It's roughly the same, but less buggy.
It isn't a replacement for Grunt/Gulp. It is a replacement for browserify or perhaps require.js.
It builds your javascript and css (and images!) into one or more bundles which does overlap with what people often use grunt for. In my experience it generally obviates the need for grunt because what's left to do once you've packaged and minified your js, css and images and you have your source maps, etc.?
I suppose the rest can be done with npm scripts, but simple tasks like running tests could even be done from bash or a makefile anyway.
I've used gulp for building frontend dependencies, as well as running automated tests whenever a file changes. It's been working well, with the exception that sometimes it just dies, and sometimes there's a failure and some streams don't work properly.
As long as you're not trying to write the world's most complex build file, do not feel that you need to use overly complex build tools.
A better approach is to learn the basics of writing Makefiles.
Simple tools are good.
And Make is reusable.
Make also has the benefit that you can copy-paste lines into your shell to test them. This is something that is very awkward to do with Grunt or Gulp and it makes it a lot easier to debug problems.
Generally when I say this to people they reply, "but if you have a complicated build process you need these tools!", and my response is: "what business value did your 400+ line build process create, and will it be worth the inevitable problems you will have to debug later?"
It might be unpopular advice, but if you create a project with far fewer dependencies then there is less that can break later on.