clickhouse-local is a single binary that enables you to perform fast data processing using SQL - effectively database features without a database. This tool supports the full breadth of ClickHouse functions, many popular file formats, and the recently added automatic schema inference. You can query not only local files but also remote files (from S3/HDFS/static files accessed by URL). Moreover, clickhouse-local has an interactive mode where you can create tables, play with data, and do almost everything you can do with an ordinary database. And let's not forget, this tool is written in C++, so it's incredibly fast.
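For a rough idea of the shape, here is what querying a local file and a file behind a URL looks like (the file name and URL are placeholders):

    clickhouse-local --query "SELECT count(*) FROM file('events.parquet', Parquet)"
    clickhouse-local --query "SELECT * FROM url('https://example.com/data.csv', CSVWithNames) LIMIT 10"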
I couldn't agree more. clickhouse-local is great as a CLI tool as well as a relay for web-driven functions, delivering all the ClickHouse functionality and speed for ad-hoc tasks with local or remote storage on S3, Parquet files, etc.
SPyQL is really cool and its design is very smart, with it being able to leverage normal Python functions!
As far as similar tools go, if you're interested, I recommend taking a look at DataFusion[0], dsq[1], and OctoSQL[2].
DataFusion is a very (very very) fast command-line SQL engine but with limited support for data formats.
dsq is based on SQLite, which means it has to load data into SQLite first, but then gives you the whole breadth of SQLite. It also supports many data formats, but is slower as a result.
OctoSQL is faster, extensible through plugins, and supports incremental query execution, so you can e.g. calculate and display a running group by + count while tailing a log file. It also supports normal databases, not just file formats, so you can e.g. join with a Postgres table.
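For a rough comparison of how they are invoked, a simple query looks something like this with dsq and OctoSQL (from memory of their READMEs, so treat the exact syntax as approximate; the file name is a placeholder):

    # dsq loads the file into SQLite first; {} refers to the input file
    dsq access.json "SELECT status, COUNT(1) FROM {} GROUP BY status"

    # OctoSQL uses file paths directly as table names
    octosql "SELECT status, COUNT(*) FROM access.json GROUP BY status"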
You may also want to have a look at the DuckDB command line client [1]. The shell itself is based on the SQLite client, and DuckDB can be used to natively query CSV and Parquet files. Using extensions, DuckDB can also query SQLite and Postgres databases, and query files over HTTPS and S3.
The command line client also has some nifty features like syntax highlighting, and context-aware auto-complete is coming in the next release.
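To give a concrete sketch of what that looks like in the DuckDB shell (the file names and URL are placeholders):

    -- query CSV and Parquet files directly
    SELECT * FROM 'events.parquet' LIMIT 10;
    SELECT * FROM read_csv_auto('events.csv');

    -- query files over HTTPS/S3 via the httpfs extension
    INSTALL httpfs;
    LOAD httpfs;
    SELECT count(*) FROM 'https://example.com/events.parquet';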
One thing I really miss in the DuckDB CLI is that it doesn't recall the entire query when you use C-p/Up arrow with multi-line queries (it just cycles through the lines of the query). This behaviour is inherited from SQLite, and it trips me up every time, even after years of SQLite CLI usage.
And if you're looking for a similar experience (very fast analytical SQL queries) but over HTTP, for example to power a public dashboard or a visualization, you can try ROAPI [0] or Seafowl [1], also built on top of DataFusion (disclaimer: I work on Seafowl).
It could be the NDJSON parser (DF source: [0]) or could be a variety of other factors. Looking at the ROAPI release archive [1], it doesn't ship with the definitive `columnq` binary from your comment (EDIT: it does, I was looking in the wrong place! https://github.com/roapi/roapi/releases/tag/columnq-cli-v0.3...), so it could also have something to do with compilation-time flags.
FWIW, we use the Parquet format with DataFusion and get very good speeds similar to DuckDB [2], e.g. 1.5s to run a more complex aggregation query `SELECT date_trunc('month', tpep_pickup_datetime) AS month, COUNT(*) AS total_trips, SUM(total_amount) FROM tripdata GROUP BY 1 ORDER BY 1 ASC` on a 55M row subset of NY Taxi trip data.
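For reference, one way to reproduce that kind of query locally is with datafusion-cli, roughly like this (the table/file names follow the example above; exact syntax may differ between versions):

    -- register the Parquet file as a table, then aggregate
    CREATE EXTERNAL TABLE tripdata STORED AS PARQUET LOCATION 'tripdata.parquet';
    SELECT date_trunc('month', tpep_pickup_datetime) AS month,
           COUNT(*) AS total_trips,
           SUM(total_amount)
    FROM tripdata
    GROUP BY 1
    ORDER BY 1 ASC;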
The thing that worried me when looking into SQL tools for CSV files on the command line is the plethora of tools available, and how hard it is to find one that feels solid and well-supported enough to become a "default" tool for many daily tasks.
I want to avoid investing a lot of time learning the ins and outs of a tool that might stop being developed a year from now. I wish for something that can become the "awk of tomorrow", but based on SQL or something similar.
Does anyone have any experiences related to that? Is my worry warranted? Are some projects more well supported than others?
Once your data is at a certain size, it might be worth considering tools that do the job quickly enough while still being simple to use. This comparison is very interesting:
Author of the benchmark and of SPyQL here.
ClickHouse is fantastic. Amazing performance. SPyQL is built on top of Python but can still be faster than jq and several other tools, as shown in the benchmark. SPyQL can handle large datasets, but clickhouse-local should always show better performance.
The SPyQL CLI is more oriented towards working in harmony with the shell (piping), being very simple to use, and leveraging the Python ecosystem (you can import Python libs and use them in your queries).
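A rough sketch of the piping style, with made-up file and field names (treat the exact syntax as approximate):

    # pipe newline-delimited JSON through SPyQL and use a Python module in the query
    cat events.jsonl | spyql "IMPORT math SELECT json->user, math.log2(json->bytes) AS log_bytes FROM json TO csv"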
The best part is that doing analytics via the command line often means that you're doing analytics locally, which often gets you performance superior to a small computing cluster.
Very useful, seems to be an effective bridging tool between relational and NoSQL database types, and from the command line! Nice clear documentation page as well.
Was I the only one thinking of something like Google Analytics but for the command line? A system of usability telemetry for command-line utilities might be useful.
I can't help but mention the clickhouse-local tool: https://clickhouse.com/docs/en/operations/utilities/clickhou...
Disclaimer: Work at ClickHouse