I'm really concerned about performance/latency with the JSON approach. I haven't found anyone discussing that thoroughly yet.
Even when using jq (written in C), my quick tests show that parsing JSON is really slow compared to parsing with simple Unix tools like awk. I suspect this comes from the fact that the parser has to read and validate the full JSON output before it can print a result, while awk does not care about the syntax of the output at all.
I compared two shell scripts, both printing the ifindex of some network device (that is, the integer in the first column) 10 times.
Using awk and head combined takes 0.068s in total, measured with the time command.
Using ip with the -j flag together with jq takes 0.411s.
The awk approach is therefore about 6 times faster, and that is with a binary (ip) that already supports JSON output natively and doesn't even need the aforementioned jc.
While this whole test setup is somewhat arbitrary, I have seen similar results in the past when writing shell scripts, e.g. for my panel.
Reach out to me if you are interested in my test setup.
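For reference, here is one plausible reconstruction of the two benchmarks; the interface name and exact pipelines are my assumptions, and the original setup may differ in detail:

```shell
#!/bin/sh
# Variant 1: plain text output; head takes the first line, awk prints the
# leading colon-separated field. `ip link show` prints lines like
# "2: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...", so field 1
# of the first line is the ifindex. Timed over 10 runs:
#   time for i in $(seq 10); do ip link show wlan0 | head -n 1 | awk -F: '{print $1}'; done

# Variant 2: JSON output; jq has to parse the whole document before it can
# select anything:
#   time for i in $(seq 10); do ip -j link show wlan0 | jq '.[0].ifindex'; done

# The text-parsing step itself can be demonstrated without `ip` or `jq`
# installed, using a captured sample line (hypothetical device "wlan0"):
sample='2: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500'
printf '%s\n' "$sample" | head -n 1 | awk -F: '{print $1}'
# prints: 2
```

Note that the awk variant can also exit as soon as it has seen the first line, whereas the JSON variant must always consume the complete document.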