The issue with emoji, at least in their current depictions, is that they are guaranteed to sit higher in the visual hierarchy (one of the few things of undying relevance we were taught in university) than any surrounding text. They stand out thanks to their different nature and their sheer visual complexity (intricate features).
Good visual hierarchy means you end up looking first at what is important. Good visual hierarchy sets correct context.
Bad visual hierarchy adds mental overhead. Bad visual hierarchy means that any time you look, even when you don’t consciously realize it, you end up scanning through hierarchy offenders, discarding them, then getting to the important part, and re-acknowledging offenders when it comes to them in appropriate context. This can happen multiple times: first for the screen as a whole, then when you focus on a smaller part, etc. As we encounter common visual hierarchy offenders more and more often, we train ourselves to discard them quicker, but it is never completely free of cost.
There are strategic uses for symbols in line with visual hierarchy principles. For example, using emoji as an icon in an already busy GUI is something I do as well.
However, none of those apply in the terminal's visual language of text and colours. Unlike a more or less static artifact fully under the designer's control (like a magazine or a GUI), in a fluid terminal screen, where things shift around and combine in different ways, it is almost impossible for a software author to correctly predict what importance anything has to me.
Those CLI tool authors who like to signify errors with bright emoji: have you considered that my screen can be big, and that after I run your program N times troubleshooting something there can be N bright red exclamation marks on my screen, N-1 of which are not even remotely close to the message of interest? Have you considered that your output can coexist in a multiplexer with output from another program, which I am more interested in? Should other programs compete for attention with brighter emojis? And so on.
As to joyful touches, which are of course appreciated, those can be added with the old-style text-based emoticons.
This post nicely doubles as an explanation of why current-gen LLMs' tendency to spray emoji everywhere is so annoying. Replacing bullet points with rocket ships, pointing hand symbols at things, and scattering emoji everywhere, rather than highlighting the things the emoji are nominally decorating, actually tends to bury the highlighted content entirely under the massively over-powerful colorful symbols. The initial visual impression of these texts is "a splattering of colorful symbols, with some incidental boring text around them", and it takes time and cognitive effort to filter out the blaring color, which carries no useful content, and get to the real content.
I think this is more about age cohorts than anything intrinsic to emojis.
I’m also of an age where emojis are more distracting than informative, but I notice younger colleagues use them liberally and with significant information value.
Like if I were to write three bullets about the results of an experiment, I would use three actual bullet points for maybe describing the hypothesis, the test methodology, and the result.
Plenty of people I work with would use a light bulb bullet for the hypothesis, a clipboard for the methodology, and a chart up or down for results.
It’s overly cute to me, but it works for them, and it does kind of provide a visual index.
I specifically referenced LLM use because it seems to rather pointlessly spray them everywhere. I didn't complain about them existing in general. I can old-fogey quibble with specific uses when humans add them with purpose, but I can also easily acknowledge that's just a matter of taste. LLMs, though, seem to love that rocket ship, the green checkmark, that celebration emoji with the confetti coming out of the cardboard tube, and a few other ones that are only loosely related to the content. They also love to point fingers at links, which are often already distinguished in the text. And seeing through all that is a real pain.
That your eyes are drawn more to color and shape than monochrome text is not an old person thing. That's a human thing.
In many cases this becomes an arms race too, where people start competing to make their content more colorful than the last, and that arms race has only one end, where the "engagement hooks" completely overwhelm the content. We've seen that one play out in a number of places already.
I really should have included the word "default" in there somewhere. It's effectively impossible to make any blanket statement about "what LLMs do" because it's one prompt away from doing almost literally anything else.
However, it's a style that currently has a lot of popularity.
Indeed, asking for answers in the style of Alice in Wonderland is one of my favorite things to do, e.g. for programming questions. The extra frisson from something so non-whimsical being expressed so whimsically via such a complicated technology goes all the way around the "cringe/cool" circle at least twice; you can decide for yourself where it lands in the end.
I did finally hear about students getting wise to the LLM-style issue. I just saw a YouTube video about a student saying he would 1. have the LLM write his essay, 2. rephrase the first two paragraphs in his own style, 3. tell the LLM to rewrite the essay from step 1 in the style exemplified by his rewrite. AI detection tools, which are really "default-AI detection tools", call it 0% AI. Stick a fork in them, they're done at that point. I don't think any "AI detection tool" is likely to defeat that, unless LLMs suddenly freeze in advancement for, oh, at least 3 years or so, which seems unlikely.
It's the same with colors in the terminal. Some tools produce them even when the output is being piped through another tool, and then you have a mess of ANSI codes in the output.
Emoji should always be user-configurable and off by default: add some --fancy flag or an env variable if someone really wants them (your README screenshot can let them know they exist).
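A minimal sketch of that kind of gating (the FANCY variable name and the flag are just examples, not anyone's actual convention), which also keeps codes out of pipes by checking for a tty:

    import os
    import sys

    def want_fancy() -> bool:
        # Only emit emoji/colour when attached to a terminal AND
        # the user explicitly opted in via a flag or the environment.
        opted_in = ("--fancy" in sys.argv[1:]
                    or os.environ.get("FANCY", "") not in ("", "0"))
        return sys.stdout.isatty() and opted_in

    print("✅ build passed" if want_fancy() else "build passed")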
I agree that configurability helps, and flags to make output more/less plain exist. Just wanted to present a viewpoint based on a concept I learned studying visual design. Colourful text and emoji are on totally different levels when it comes to attention grabbing!
Not feasible. What happens if the actual data you're trying to process happens to contain a sequence of bytes which could be interpreted as an escape sequence? Now you've ruined the user's data by modifying it.
I think lower-detail emoji versions (monochrome or with sparse use of colour) could definitely work better! Massive amount of work though, creating such a version for every possible emoji…
They sort of already exist: many emoji are turned "on" (colorful presentation, aka Emoji Presentation) with Variation Selector 16, and many can be forced "off" (monochrome presentation) with Variation Selector 15.
(Not all fonts handle all variations, though, in both directions.)
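For the curious, a quick sketch of what that looks like in practice (VS15 is U+FE0E, VS16 is U+FE0F; whether each rendition actually appears depends on your font and terminal):

    # WARNING SIGN (U+26A0) in both presentations
    base = "\u26a0"
    print(base + "\ufe0e")  # VS15: request monochrome/text presentation
    print(base + "\ufe0f")  # VS16: request colorful emoji presentation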
The placement in the visual hierarchy of emojis is their main feature. I think it's totally backwards to say that the visual hierarchy of terminal UIs must remain constrained to text with colors.
I'm sorry, but it's absolutely just as valid to indicate an error or other status with a bright emoji as with bright red text and exclamation points - as long as there is some support for greppability as well (when relevant).
Your point about multiplexers etc. applies to anything in the terminal, including bright red text.
> Your point about multiplexers etc. applies to anything in the terminal, including bright red text.
You did not read my comment. There is a concept of visual language. I specifically said that text colour (along with background colour, text style, etc.) constitutes the visual language of the terminal.
Bright red text follows the general complexity pattern of text, with one distinguishing quality. Let's call it standout factor x2, maybe x3 if you see in colour and red means danger. An inserted full-colour image full of tiny details falls out of that pattern completely, especially compared to Latin script. The question of distinguishing qualities doesn't even make sense there. It is text x10000.
Yes, red text in the next pane will also be slightly distracting, but it is nothing like a bunch of images sprinkled around my buffer.
emoji width bugs mostly come down to how terminals interpret Unicode's "grapheme clusters" vs "codepoints" vs "display cells". an emoji isn't one codepoint - it's often multiple, joined by zero-width joiners, variation selectors, skin tone modifiers. so the terminal asks wcwidth(), gets 1 or 2, but the actual glyph might render wider or combine into a single shape.
some emoji even change width depending on the font. the family emoji is like 7 codepoints but shows up as one glyph. most terminals don't track that. they just count codepoints and pray.
unless the terminal uses a grapheme-aware renderer and syncs with the font's shaping engine (like freetype or coretext), it'll always guess wrong. wezterm and kitty kinda get it right more often than not.
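a quick sketch of the mismatch, using the third-party python "wcwidth" package (pip install wcwidth), which mirrors the C function; exact numbers can shift with unicode versions:

    # 👨‍👩‍👧‍👦 = MAN + ZWJ + WOMAN + ZWJ + GIRL + ZWJ + BOY
    from wcwidth import wcswidth

    family = "\U0001F468\u200D\U0001F469\u200D\U0001F467\u200D\U0001F466"
    print(len(family))       # 7 codepoints
    print(wcswidth(family))  # per-codepoint sum (likely 8: 2 per person, 0 per ZWJ)
    # ...while a shaping-aware renderer draws ONE glyph, usually 2 cells wide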
TBH grapheme clusters are annoying but day-1 learning material for a text display widget that supports anything beyond ASCII. It honestly irks me how many things just fuck it up, because it's not an intractably hard problem - just annoying enough that lazy people skip it (*).
(*) the actually hard problem with grapheme clusters is that they're potentially unbounded in length and the standard is mutable, so your wcwidth() implementation needs to be updated along with the standard to stay valid, particularly for emoji. This basically creates a software maintenance burden out of the aether.
> Why do you need to sync with the shaping engine?
GP explained already. Grapheme clusters ≠ glyphs. To find the number of glyphs you need the font.
An emoji can render as one or two or three or more glyphs depending on what font the user has installed, because many emoji are formed by joining two or more emoji with a ZWJ.
(Also even in a monospace font not all glyphs are of ﷽ equal width)
> An emoji can render as one or two or three or more glyphs depending on what font the user has installed,
And how should the program that prints such emojis deal with this? Like, how should e.g. readline handle the user pressing the Backspace key after inputting such an emoji when prompted for input? It needs to know precisely how many lines and columns the user's input takes: a huge chunk of code in that library is devoted precisely to this, because simply emitting "\b \b" doesn't work.
And if the user opens the terminal emulator's settings and changes the font, should the program be sent some signal to redraw the window, as happens when the window size changes? E.g. that emoji was in a 10-column-wide edit field, and the characters after it fit when that emoji was 1 column wide, but now it's 2 columns wide, so ncurses should now trim the last character in that field.
Or try this funny little experiment, for instance: resize your terminal to something like 30 cols by 5 rows and run "script -c bash terminal_log.txt". Now hold the "a" key until you enter enough "a"s that the shell prompt is no longer visible. Now hold Backspace until you've erased all the "a"s and the cursor no longer moves. What do you see on the screen? Now press Ctrl-D to exit the "script" session, and study the transcript in terminal_log.txt in a hex editor. Ponder the mechanisms that bash (readline inside it, really) uses to implement line-editing.
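To make the point concrete, here is a minimal sketch of what erasing one character actually requires once widths enter the picture (not readline's real code, which additionally has to track line wraps):

    def erase_last(width: int) -> str:
        # Back up over the glyph, blank its cells, back up again.
        # "\b \b" is just the width == 1 case; a wide emoji needs width == 2,
        # and you have to KNOW which width the terminal used, which is the
        # whole problem.
        return "\b" * width + " " * width + "\b" * width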
It's not the font that is deciding how emoji sequences are rendered. The renderer may decide based on which characters exist in the available fonts, but it doesn't have to. Same for glyph width in terminals. It wasn’t uncommon for non-double-width-aware terminals to only draw half an emoji in a regular-width cell.
How else are you going to render a sequence such as Emoji ZWJ Emoji other than as two glyphs, if no composed glyph is defined in the user's font? That's how it's supposed to be rendered, for backwards compatibility.
Yeah, unfortunately I feel like despite all the advances in Unicode tech, my modern terminal (MacOS) still bugs out badly with emojis and certain special characters.
I'm not sure how/when codepoints matter for wcwidth: my terminal handles many characters with more than one codepoint in UTF-8, like é and even Arabic characters, just fine.
`wcwidth` works by assigning all codepoints (strictly, code units of whatever size `wchar_t` is on your system, but thankfully modern Unixen are sane) a width of -1 (error), 0 (combining), 1 (narrow), or 2 (wide).
`wcswidth` could in theory work across multiple codepoints, but its API is braindead and cannot deal with partial errors.
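To make those four classes concrete, a small sketch using the Python `wcwidth` package (which mirrors the C API; exact classifications can shift with Unicode versions):

    from wcwidth import wcwidth

    for ch in ("\x07", "\u0301", "A", "\u4e2d"):
        print(f"U+{ord(ch):04X} -> {wcwidth(ch)}")
    # Expected: -1 (BEL, control), 0 (combining acute), 1 (narrow), 2 (wide CJK)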
This is all from the perspective of what the application expects to output. What the terminal itself does might be something completely different - decomposed Hangul in particular tends to lead to rendering glitches in curses-based terminal programs.
This is also different from what the (monospace) font expects to be rendered as. At least it has the excuse of not being able to call the system's `wcwidth`.
Note that it is always a mistake to call an implementation of `wcwidth` other than the one provided by the OS, since that introduces additional mismatches, unless you are using a better API that calculates bounds rather than an exact width. I posted an oversimplified sketch (e.g. it doesn't include versioning) of that algorithm a while back ...
Doing that adds a lot of round trips, so you still really need to do the initial estimate.
(also, probing for whether the terminal actually supports various features is nontrivial. At startup you can send the basic "identify the terminal" sequences (there are 2) and check the result with a timeout; subsequently you can make a request then follow it with the basic terminal id to see if you actually get what you requested. But remember you can get arbitrary normal input interspersed.)
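A rough sketch of the first of those probes (a primary Device Attributes request, CSI c), assuming a POSIX tty; as noted above, real code must cope with ordinary user input arriving interleaved with the reply:

    import os
    import select
    import sys
    import termios
    import tty

    def probe_primary_da(timeout: float = 0.2) -> bytes:
        fd = sys.stdin.fileno()
        saved = termios.tcgetattr(fd)
        try:
            tty.setcbreak(fd)                          # byte-at-a-time reads, no echo
            os.write(sys.stdout.fileno(), b"\x1b[c")   # primary DA request
            reply = b""
            while select.select([fd], [], [], timeout)[0]:
                reply += os.read(fd, 64)
                if reply.endswith(b"c"):               # replies look like ESC [ ? ... c
                    break
            return reply                               # empty = timed out / no answer
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, saved)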
The main problem is not even whether the terminal itself can track the grapheme width "correctly". It's that a) the fonts suck, and b) the terminal's user (the application) has to track the width correctly too.
About a): some fonts have the glyphs for e.g. the playing cards block that are 1.5 columns wide even though the code points themselves are defined to be Narrow. How do you render that properly? Then there are variation selectors: despite what some may think, they don't affect the East Asian Width of the preceding code point, so whether you print "\N{ALEMBIC}\N{VARIATION SELECTOR-15}" or "\N{ALEMBIC}\N{VARIATION SELECTOR-16}", it still, according to wcwidth(), takes 1 column; but fonts have glyphs that are, again, 1.5 and 2 cells wide.
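You can check the property data yourself; a quick sketch with Python's stdlib (results depend on the Unicode version your Python ships):

    import unicodedata

    for ch in ("\u2697", "\ufe0e", "\ufe0f"):
        print(f"U+{ord(ch):04X}: {unicodedata.east_asian_width(ch)}")
    # ALEMBIC reports 'N' (neutral -> narrow); the variation selectors
    # report their own property and never alter the preceding codepoint's.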
And then there is the elephant in the room problem b) which is management of cursor position. But the terminal, and the app that uses the terminal need to have exactly the same idea of where the cursor is, or e.g. readline can't reliably function, or colorful grep output. You need to know how many lines of text you've output (to be able to erase them properly), and whether the cursor is at the leftmost column (because of \b semantics) or at the rightmost column (because xenl is a thing) or neither. And no, requesting the cursor position report from the terminal doesn't really work, it's way too slow and it's interspersed with the user input.
The TUI paradigm really breaks down completely the moment the client is unsure how its output affects the cursor movement in the terminal. And terminals don't help much either! Turning off autowrap is mostly useless (the excess output is not discarded, it overwrites the rightmost column instead), autobackwrap (to make \b go to the previous line from the leftmost column) is almost unsupported and has its own issues, and there is no simple command/escape sequence to go to the rightmost column...
Oh, and there is xenl behaviour, which has many different subtle variations, and which the original VT100 didn't even properly have, despite what the terminfo manual page may tell you. You can try it with the terminal emulator mentioned in TFA for yourself: go to setup, press 4, 5, move with the right arrow to block 3 and turn the second bit in it on by pressing 6 so it looks like "3 0100", then exit setup (what you did is put the terminal into local mode, so you can input text to it from your keyboard, and turned autowrap on). Then do ESC, print "[1;79Hab", do LINE-FEED, print "cd": you'll see that there is an empty line which shouldn't really be there, and it is not there if you do e.g. printf "\033[1;1Hxx\033[1;79Hab\ncd" on xterm (ironic, given how xterm's maintainer prides themself on being very faithful to original VT100 behaviour) or any other modern terminal.
It's more down to whatever monospace font the terminal uses not having those emojis and the (likely proportional) font they come from giving them a different width.
This is such a fun idea. I never expected the terminal to have this kind of retro way to “blow up” emojis. Seeing a whole row of giant faces honestly made it feel like the terminal had emotions.
Now I kind of want to throw a giant warning emoji into a monitoring script. No way anyone’s ignoring that.
I checked before I posted — the big font works for me, the mixed-emojis do not. Also, the big font is terribly pixellated in wezterm, both in terms of the emoji and the text. Maybe font configuration would help :shrug:
Most terminals that support double-height/double-width seem to just scale the bitmap of the glyphs instead of properly rendering the font at twice the scale, even if the font used is a vector font, for no particularly good reason.
Which however does not support DECDHL. So if you want to try what this post is about, Ghostty is not the right terminal. (It's great in general though)
Large text didn't work at all on either of the two terminal emulators I normally use (guake/gnome terminal). Normally I use toilet/figlet for larger text.
That got me wondering, are there any figlet fonts that support a lot of emoji? Certain non-ascii characters are already pretty well supported.
This character was code 0x01 in the original IBM PC code page (https://en.wikipedia.org/wiki/Code_page_437), and hence in DOS. It was displayed single-width and monochrome just like any other 8-bit character, never causing any rendering issues, unlike emojis today. It was added to Unicode for round-trip compatibility with that code page.
Emoticons are not the same as emojis. For one they allow for more expression or personal style by having different variants, e.g. :-) vs :) or for absolute maniacs: (:
They are also not limited to what some consortium and a couple of megacorporations think you should be able to express.
I think there's a difference. The code point will always mean "eggplant", it just happens that the concept can be interpreted in different ways according to context—just like the word itself. But ":-)" can only ever mean "colon minus rparens" before further interpretation.
Actually, according to Unicode, "-" doesn't mean minus - U+002D is hyphen-minus.
And as for the eggplant, your semantics-as-specified are useless when 99.9% of the usage has a different intended meaning due to the inherent lack of expressiveness in a corporate-approved emoji language.
What it's called is syntax; what it means is always context-dependent. That's why we invented formal notation: so that we can have context-free interpretation (the notation is bundled with its semantics, so you don't need to apply some context to it).
If I make a terminal emulator, I do not want to support colourful emoji at any size. (There are a few other things I also would not want to implement, but also some things to do that other terminal emulators don't do.)
Konsole handled both perfectly - emojis and the large text too. Large text is particularly mind-blowing to me, I think I have never seen it, and definitely not in scripts. But it looks like it could be mighty useful.
In my fever dreams of maintaining UTF-8-supporting text widgets that work and never need to be updated, there's a zero-width whitespace grapheme cluster that represents the number of codepoints in the next grapheme cluster, if they're different from the previous.
The situation today is basically the same as null terminated C strings. Except worse, because you can define that problem and solve it in linear time/space without needing to keep an up to date list of tables.
Combining characters and joiners should have been prefix rather than suffix/infix operators (and preferably in blocks by arity) so you'd always know without lookahead whether a grapheme cluster was complete.
(Prefix combining accents would also have made dead keys trivial rather than painful.)
Combining characters have already made Unicode text stateful.
Although I agree that encoding length hints into it seems like a bad idea - it creates an opportunity for the encoding to disagree with the reality of the text. You need _some_ way of handling it if it says that the next grapheme cluster is 4 characters long but it's actually only three.
> Alternatively, you might not want to use literal 1970s technology and be interested that Kitty recently introduced a more modern way to get different sized text in a terminal.
Kitty's “modern” way of doing it is still 1970s tech. Kitty just decided it would discard the standard escape sequences because of “reasons”.
Honestly, much as some of Kitty's custom sequences have improved things, this particular sequence doesn't.
Ignoring the escape sequences a terminal doesn't understand, or doesn't want to deal with is explicitly allowed (required, even) by the ECMA-48 standard. And DECHDL is not "standard" by any means.
> Ignoring the escape sequences a terminal doesn't understand, or doesn't want to deal with is explicitly allowed (required, even) by the ECMA-48 standard.
Indeed, but what's that got to do with my comment?
I've written a terminal emulator, so I'm very familiar with how escape sequences are parsed. However, your comment doesn't relate to my previous comment at all. At no point did I claim that unsupported codes wouldn't be ignored. What I said was that Kitty's codes here are a pointless addition, given we already have codes that are more widely supported in terminal emulators and have been for several decades now.
> And DECHDL is not "standard" by any means
I assume you meant DECDHL not DECHDL ;) There's also DECDWL for double width too.
I never actually said they standardised.
What you've done here is conflate "standard" (typical/normal/de facto) with "standard" (ISO/ANSI/et al). It's a common way people misconstrue comments when they're looking for arguments rather than reading other people's comments charitably.
There's absolutely nothing wrong with the way I used the term "standard" -- it's pretty clear from the context that I wasn't claiming it is part of ECMA-48.
The reason I did use that term is because DECDHL and DECDWL are both supported on pretty much all DEC terminals (ie vt100 onwards) and xterm. They're a standard part of any hardware terminal or terminal emulator that seeks vt100 compatibility... which is most terminals; arguably anything created in the last 30 years, in fact.
It would be more weird to advocate the use of some uncommon sequence created for one specific graphical terminal emulator that is a toddler in comparison. Yeah I get Kitty is popular, but it's hardly a de facto standard like vt100. Yet here we are -- people reinventing the wheel and thus making the already needlessly complicated problem of feature detection even harder still.
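For anyone who wants to see them, a minimal sketch of those classic sequences (they should work in xterm and anything with genuine vt100 line-attribute support; many emulators silently ignore them):

    # DEC line attributes: ESC # 3 / ESC # 4 select the top and bottom
    # halves of a double-height line (same text printed twice), and
    # ESC # 6 selects double-width single-height.
    print("\x1b#3BIG TEXT")
    print("\x1b#4BIG TEXT")
    print("\x1b#6WIDE TEXT")
    print("\x1b#5back to normal")  # ESC # 5: single-width, single-height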
Both kinds of "standard" are worth looking at; often existing codes are suitable. However, there are sometimes things for which there does not seem to be a code.
I'm an advocate for creating codes to solve problems. I've literally written a terminal emulator with that in mind: https://github.com/lmorg/ttyphoon -- there I defined codes for graphical features like interactive tables and inlining AI elements in non-intrusive ways.
My qualm isn't with defining new escape sequences. It's:
1. calling Kitty's specific double height/width escape sequences "modern" when it's using the same 1970s principles as the original double height/width codes.
2. Kitty reinventing the wheel here when it adds no benefit to the original codes.
It's the same reason I get annoyed that there are 4+ different proprietary codes for inlining images.
All of this continual reinvention of the same ideas just creates more problems than it solves.
I don't buy that argument. The old sequence is part of vt100 and supported by xterm. The new sequence is only supported by Kitty. If the Kitty dev wanted to make feature detection easier then they wouldn't have duplicated a feature that has been a staple for terminals for literally decades.
Also the tool you shared has nothing to do with this. It’s just a 3rd party utility.
I'd be happy if we could get terminals to agree on how wide the warning triangle emoji renders. The emoji are certainly useful for scripts, but often widths are such a crapshoot. I cannot add width detection to every bash script I write for every emoji I want to use.
If only there was a standards body that could perhaps spec how these work in terminals.
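In fairness, there is one portable-ish trick: print the glyph and ask the terminal where the cursor ended up (CSI 6n). A rough sketch, assuming a POSIX tty and no user input racing the reply (a real caveat raised elsewhere in this thread):

    import re
    import sys
    import termios
    import tty

    def rendered_cells(s: str) -> int:
        fd = sys.stdin.fileno()
        saved = termios.tcgetattr(fd)
        try:
            tty.setcbreak(fd)
            sys.stdout.write("\r" + s + "\x1b[6n")  # print glyph, then ask position
            sys.stdout.flush()
            buf = ""
            while not buf.endswith("R"):            # reply: ESC [ row ; col R
                buf += sys.stdin.read(1)
            col = int(re.search(r"\[(\d+);(\d+)R", buf).group(2))
            return col - 1                          # columns are 1-based
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, saved)
            sys.stdout.write("\r\x1b[K")            # erase the probe line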
And shapes, though you can get that with some ASCII art as well.
I've had a few scripts some time ago that took a long time to run, so I wanted a progress indicator I could see from across the room - that way I could play some guitar while monitoring the computer doing stuff in the evening.
Hence, the log messages got prefixed with tags like:

    >   ]
    >>  ]    # normal progress
    /!\ ]    # it had to engage in a workaround
    x_x ]    # if it had to stop.
> Rendering [failed] in red and [passed] in green would achieve the same. It's not emoji vs text. It's color vs no color.
True, but my prompt is full of coloured ASCII characters, so emoji stand out. And also, emoji fare better than escape codes when they pass through pipes and stuff.
And, frankly, why even bother with lower-case characters? Upper case is plenty good -- it was good enough for the VT05, it should be good enough for your laptop.
What a coincidence that I spent a good portion of time trying to deal with the warning triangle emoji and see your comment today. Incidentally the info and green ticks are not so bad. Wonder why that specific one has width issues.
You could ship a terminal with your script. That's how apps like Slack deal with inconsistent handling of standardized content: by shipping an embedded Chromium.
The ChromeOS terminal (hterm[1]) is actually a pretty good terminal, so even a terminal might justify a browser context. Blink[2] on iOS for example uses it.
[1]: https://hterm.org/ (although, in the way they do, Google seems to have lost interest in updating that site and the GitHub repo; there are still fixes in the upstream Chromium repo)
Can we just move past the plain text nature of terminals and move to fully graphical terminals, please? Plan9 did this in 1995, for crying out loud.
Terminals would still run command line programs, but also graphical programs, or simply display some graphical elements and you’ll still do a lot of typing, but graphics won’t be shoehorned in as they are now.
Am I the only one who actually dislikes the recent trend of putting emojis everywhere in CLI tools? I am OK with red and yellow text for errors and warnings, and I can stand green for success (though I find it useless), but emojis are just distracting.
I also find fully rendered/colored emojis distracting even in repo readmes because I feel they give off a casual chat messaging vibe, since before colored emojis became part of Unicode proper they were exclusively used for chat messengers.
There's a Unicode sequence that tries to use a monochrome glyph instead, if it's supported, which I prefer as it's more in keeping with the rest of the text (though an issue with some of those variants is legibility at small sizes/PPI).
> I feel they give off a casual chat messaging vibe, since before colored emojis became part of Unicode proper they were exclusively used for chat messengers.
This is mostly cultural, though. Some people are used to this.
I’m the same. I hate emojis anywhere that is intended to be informative reading. Whether it is terminal output, markdown documents (even titles), git commit messages, etc.
I get they bring people a little bit of joy, but as a dyslexic who likely also has ADHD, they bring me unnecessary distractions and visual clutter.
The only time I like emojis in a formal setting is when used in Slack to denote a thread (the thread/sewing emoji).
I dislike emojis in general when combined with running text. Especially in terminals or character-based interfaces with fixed-width fonts.
On top of that, there are only very few emojis that can be read properly when rendered at the same size as the current line height. It works for a few simplified faces and symbols, but that's it.
The fact that emoji fonts override the font color rendering is an aggravating factor. I don't want text to change color against my choice (it SUCKS with customized color themes).
They feel like a punch in the face to me when I'm reading documentation or even worse when reading code.
Sadly, it's really hard to avoid them nowadays. I'm using a few Lisp scripts in Emacs to translate the common ones back to ASCII for rendering.
I can point out that "Noto Emoji" is a b/w version of Noto Color Emoji, which contains a MUCH more suitable version of emojis that can be used in running text. As noted before, it's only a partial solution as I find most emojis are still not readable when scaled at the same size as the text and when simplified sometimes they also lose the original meaning (just use the damn word dammit!). But at least they don't override the color. On linux, you can force a font substitution with fontconfig to force the b/w version whenever color-emoji is used and can't be customized.
Emojis do not belong in the CLI, ever. Hell, I personally think they shouldn't be in Unicode at all (as they are not text), but that ship has long since sailed unfortunately.
I'm fine with them used sparingly in documentation, but in actual terminal output they mostly don't get rendered properly so I'd stick to nerd fonts if I want "icons" of any kind.
The argument for emojis in Unicode was that existing chat protocols had them. But I don't buy that argument since many chat protocols also supported custom smileys which Unicode doesn't. Trying to standardize creative expression is a mistake IMO.
No, the argument was that existing character sets used in Japanese feature phones had them. Because Japanese characters are wider than Western characters, those platforms could be more creative with pictographic characters as well, and could easily add color due to the proprietary phone OS. Unicode added them because Unicode's goal is to provide round-trip compatibility with existing character sets.
It's dumb because a font is allowed to reinterpret the actual image, but in doing so it frequently changes the meaning of the symbol. This is not a problem for text, but for images, just changing the color of the fill might completely change the meaning of the sentence.
See the old Apple gun vs. squirt gun episode. The same is also true when using something like WhatsApp on Android, where the OS keyboard shows you one image from the system theme, but the one you see inserted in the text is not what you selected. At least that's partially better than sending something without knowing how it will be rendered, which is what most chat apps realized after trying to simply use the system font.
So at that point, you have to switch to a different custom font just for the emoji block, and you're still limited to what unicode allows instead of just bundling whatever image you want (which is a great excuse to sell new phones with "new emojis" I guess).
> and you're still limited to what unicode allows instead of just bundling whatever image you want (which is a great excuse to sell new phones with "new emojis" I guess).
Except that every chat client now supports stickers, which are nothing but custom images that are guaranteed to render the same way for the recipient that they do for you. The recipient does not need to have them installed.
But stickers have to be their own full message in the clients I know of. Once they start to be integrated into textual messages, clients will have developed all the way to where MSN messenger was in 2003.
I like them when they're used as bullet points in lists for instance. Just like we've always used small icons of phones and envelopes in contact information boxes/business cards to identify the fields at a glance.
Without activist moderation, that would appear to be the default outcome. Most humans seem to have an urge to stamp on dissonant opinions. Unfortunately.
I think they are overused everywhere. Most annoyingly as a workaround to put pictures in what should be text - email subject lines for example.
I like coloured text, and I like TUIs. To be fair, nothing I use has noticeable emojis. I am not really bothered about enhanced terminals - I would rather keep terminals simple and use a GUI if I need more complex presentation.
Unless the emoji is serving the purpose of a button or icon, then at the CLI (and TUI) I prefer not to see them. A good example (IMO) of their proper use would be as a traffic light indicator for something.
Always consider that the output of your program may be used as the input for another program, to paraphrase klt.
Agreed. Emojis are even more prominent than colors, so they should be used very sparingly. I'm not against the use of emojis in terminals per se (regardless of my opinion on the very introduction of emojis into Unicode), but there are now too many of them to be visually ignored.