Hacker News | celrod's comments

I use kakoune, and don't understand why helix seems to be taking off while kakoune (which predated and inspired helix) remains niche.

Kakoune fully embraces the unix philosophy, even going so far as relying on the OS (or a terminal multiplexer, e.g. kitty or tmux) for window management (via client/server, so each kakoune instance can still share state like open buffers).

A comparison going into the differences (and kakoune's embrace of the unix philosophy), by someone who uses both kakoune and helix: https://phaazon.net/blog/more-hindsight-vim-helix-kakoune

Sensible defaults and easy setup are a big deal. No one wants to fiddle with setting up their lsp and tree-sitter. There's probably more to their differences in popularity than just this, though.


I think the easy setup is exactly the reason Helix has taken off compared to Kakoune. It probably has the simplest onboarding experience I've had with any text editor. Things just make sense, and tools that should be built in are.

I think the philosophy of delaying the plugin system as long as possible is one of the reasons helix has achieved that.

With Helix I just have to learn selection-first and a few different binds compared to vim. With Kakoune, I have to onboard into a more complex ecosystem in addition to that. A lot of people already have vim/neovim config fatigue, so that's not very compelling.


I genuinely don't like the concept of the keyboard interaction in helix and kakoune, selecting things to modify them. I don't know what it is, but it somehow just feels much less satisfactory to me personally compared to the vim way.


The biggest benefit is multiple cursors. The helix and kakoune multiple-cursor implementations are probably the best in any editor. It just goes hand in hand with selection first.


The problem with that editing model for me is that it makes text objects much more cumbersome.

In Vim you can for example do "dap" to delete around a paragraph, but you cannot easily invert it ("pad") because 'p' is too common and is already bound.

You can also easily do the "select first" in Vim by first pressing 'v' to start a visual selection, so I just don't see the point.


This bugged me for reasons I can’t quite explain. I think it’s that I can write and edit the command before making the modification, and that going back and reusing a historic search and replace is relatively easy.


I spent about a month trying to get used to Kakoune. It never clicked with me and I went back to vim.

My biggest beef with Kakoune’s editing philosophy is that it seems to emphasize “editing in the large” as its preferred mode of interaction. This is totally backwards to me. Editing in the large (making multiple identical edits throughout a buffer) is a rarity. Most edits in day to day use are single edits. So the fact that Kakoune likes to leave a bunch of extra cursors in your wake (like a trail of breadcrumbs) as you jump around a file to make single edits is extremely infuriating to me, like it’s trying too hard to be helpful.

The irony of Kakoune using a clippy-style contextual help window is not lost on me!


This is unfortunately exactly why I never used (neo)vim or kakoune (or tbh, sublime text whose lsp integration I have never successfully gotten working). Going from school (Java + NetBeans/C# + Visual Studio) to work (C#/JS + Visual Studio -> C#/TS Visual Studio Code) I had expectations for certain language features being available by default. Helix is the first editor of its ilk to get configuration out of my way so I can effectively write code the way I'm used to.


Aside from the other replies, marketing matters. This is the first I've heard of this thing which apparently dates to 2011.


I don't have direct experience with either Helix or Kakoune but after only a few minutes tinkering around, I can see one big reason: In Helix, most of the basic commands seem to be the same as vi. Whereas I understand Kakoune inverts the action/movement paradigm of vi. Maybe that's a more sensible design, I don't know. I didn't check to see whether or not the key bindings were similar but at that point, it's rather moot.

I've been using vim for 25 years, my muscle memory isn't going to tolerate switching to a whole new text-editing "language" at this point. But I could perhaps learn to live with a new dialect.


Helix inverts the verb-selection paradigm in the same way as Kakoune.


Appreciate the clarification, I guess I didn't get far enough into `:tutor` to see that.


Kakoune's problem is the bad UI (e.g. LSP hover), and that scripting it is simply too complicated.


I think they're arguing

Cause -> cognitive disease

Cause -> plaques

That is, that the same cause is behind both.

There may be some arrows from plaque to disease as well (i.e., that plaques also increase disease).

I don't know the truth; I'm just trying to understand/follow Alzheimer's news and reading comments.


Is including batteries the main reason helix seems to have started taking off, while kakoune hasn't?

I use kakoune, because I like the client/server architecture for managing multiple windows, which helix can't do. The less configuring I do the better, but I've hardly done any in the past year. It's nice to have the option.

I do use kakoune-lsp and kak-tree-sitter.


I agree.

The problem with eager diagnostics and templates is that the program could define a `Base<int>` specialization that has a working copy constructor later. [0]

I think if you define an explicit instantiation definition, it should type check at that point and error. [1] I find myself sometimes defining explicit instantiations to make clangd useful (can also help avoid repeated instantiations if you use explicit declarations in other TUs).

[0] https://en.cppreference.com/w/cpp/language/template_speciali...

[1] https://en.cppreference.com/w/cpp/language/class_template.ht...


I use Wshadow personally. I highly recommend it. I think code that violates it (even if correct) is harder to understand.


> I don't want my web browser or video player to be resized because I open a new program

I've been using niri (a tiling WM) recently. This is their very first design principle: https://github.com/YaLTeR/niri/wiki/Design-Principles Maybe other PaperWM-inspired WMs are similar. niri is the first I've used.

If your windows within a workspace are wider than your screen, you can scroll through them. You also have different workspaces like normal. I'll normally have 1 workspace with a bunch of terminals, and another for browsers and other apps (often another terminal I want to use at the same time as browsing, e.g. if I'm looking things up online).


Do you not often quickly look between files? If so, odds are you're using tiles within tmux, vim, emacs, vscode, or something.

I use kakoune, which has a client/server architecture. Each kak instance I open within a project connects to the same server, so it is natural for me to use my WM (niri) to tile my terminals, instead of having something like tmux or the editor do the tiling for me. I don't want to bother with more than one layer of WM, where separate layers don't mix.


How feasible would it be for something like gdb to be able to use a C++ interpreter (whether icpp, or even a souped up `constexpr` interpreter from the compiler) to help with "optimized out" functions?

gdb also doesn't handle overloaded functions well, e.g. `x[i]`.


GDB does have hooks for interpreters to be executed within it, but I haven't managed to make this work. https://sourceware.org/gdb/current/onlinedocs/gdb.html/JIT-I....


It does though? Just compiled a small program that creates a vector, and GDB is perfectly happy accessing it using this syntax. It will even print std::string’s correctly if you cast them to const char* by hand. (Linux x86-64, GDB 14.2.)


I've defined a few pretty printers, but `operator[]` doesn't work for my user-defined types. Knowing it works for vectors, I'll try and experiment to see if there's something that'll make it work.

  (gdb) p unrolls_[0]
  Could not find operator[].
  (gdb) p unrolls_[(long)0]
  Could not find operator[].
  (gdb) p unrolls_.data_.mem[0]
  $2 = {
`unrolls_[i]` works within C++. This `operator[]` method isn't even templated (although the container type is); the index is hard-coded to be of type `ptrdiff_t`, which is `long` on my platform.

I'm on Linux, gdb 15.1.


> This `operator[]` method isn't even templated (although the container type is)

That might be it. If that operator isn’t actually ever emitted out of line, then GDB will (naturally) have nothing to call. If it helps, with the following program

  template<typename T>
  struct Foo {
      int operator[](long i) { return i * 3; }
  };
  
  Foo<bool> bar;
  template int Foo<bool>::operator[](long); // [*]
  
  int main(void) {
      Foo<int> foo;
      __asm__("int3");
      return foo[19];
  }
compiled at -g -O0 I can both `p foo[19]` and `p bar[19]`, but if I comment out the explicit instantiation marked [*], the latter no longer works. At -g -O2, the former does not work because `foo` no longer actually exists, but the latter does, provided the instantiation is left in.


Can confirm, this works for me in my actual examples, thanks!


> It will even print std::string’s correctly if you cast them to const char* by hand

What does that mean? I think `print str.c_str()` has worked for me in GDB before, but sounds like you did something different.


I was observing that `p (const char *)str` also worked in my experiment, but I’m far from a C++ expert and upon double-checking this seems to have been more of an accident than intended behaviour, because there is no operator const_pointer in basic_string that I can find. Definitely use `p str.c_str()`.


If your std::string was using a short string optimization, that would explain the “accident”.

Some implementations even put the character buffer at the first byte of the object in the optimized form.


That explanation doesn't work IMO, unless `str` is a std::string pointer, which is contrary to the syntax GP suggested with `str.c_str()`.

It doesn't seem possible in actual C++ that the cast from non-pointer to pointer would work at all (even if a small string happens to be inlined at offset 0.) Like GP, I looked for a conversion operator, and I don't think it's there. Maybe it is a feature of the gdb parser.


Good point, but if it’s a long string, 2/3 of the most common implementations would make the first word the c_str()-equivalent pointer:

https://devblogs.microsoft.com/oldnewthing/20240510-00/?p=10...


So it's actually printing *(const char **)&s?


The first pointer-sized chunk of the string structure is a pointer to the C-string representation. So the cast works as written.


Well, no, because (const char *)str is nonsense, if str is an std::string.


Not to the debugger. If the first 8 bytes of the object referenced by str is a char* the debugger is perfectly capable of using it that way.


this "optimized out" thing is bullshit as hell


Skymont little cores have 4x 128-bit execution. They could quadruple-pump.

But it looks more like they're giving up on people writing code for wide vectors, instead settling for trying to make existing code faster.


Any suggestions for ECC?

Would you suggest going with an ASRock Rack motherboard, even for desktop use, like you used here? https://www.phoronix.com/review/amd-ryzen9-ddr5-ecc

I'm strongly tempted to get a Zen5 CPU, but am unsure of the motherboard.


I haven't yet tested ECC with any Zen 5 desktop CPU. But yes, in general with Zen 4, ASRock Rack and Supermicro boards have worked out well. In time I will try out ECC on the Ryzen 9000 series.


Zen5 appears to officially support up to DDR5 5600, but unfortunately all of the ASRock Rack or Supermicro boards I looked at only supported DDR5 5200.

I may wait for new Zen5 boards, or maybe take a gamble on something like the Asus ProArt, where I saw comments online indicating that ECC is (unofficially?) supported.

Looking forward to Ryzen 9000 ECC benchmarks.


Or other ASUS mainboards. For now, ASUS seems to be the only desktop mainboard manufacturer whose docs officially mention support for "ECC and Non-ECC, Un-buffered Memory".


Yes, I see now that while it's not advertised on sellers' websites, Asus's product pages do indeed say that.

