Because muscle-memory is a powerful force, and it's a convenient universal input language.
It's why after years of PowerShell I still reflexively reach for "ls", not "Get-ChildItem".
("ls" is aliased by default to Get-ChildItem, thankfully!)
Markdown is familiar, easy to use, and very human readable and editable without fear of breaking a compilation step.
Typesetting languages, however (TeX, LaTeX, PostScript, or the nightmare of the PDF spec), are not.
If I send markdown to someone who doesn't know anything about document formats, there's a good chance they'll be able to make simple edits in their favourite text editor and not screw things up.
If I send someone HTML, they'll likely be able to do simple edits, but there's a greater risk of breaking elements or styling. Not as bad as with stricter languages, but still significantly more chance than with markdown annotations.
If I send someone some TeX, they'll probably not know what to even do with it; they'll likely get confused and ask for a file format they can use.
Reducing how many different languages we have to master is important, so I think it's great if there's a way to write everything in flavours of markdown.
Write markdown everywhere, and let the backends compile to relevant outputs.
In this instance it's perhaps left a bit function-heavy, so there's still some added "risk", but that's likely something that can be solved through templating.
We have flavours of MD for GitHub issues, for typesetting, for Stack Overflow questions. Slack and Discord have (limited) MD support. We can write blogs in MD through Jekyll.
MD is everywhere; it's a convenient universal input language.
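As a minimal sketch of that "one input, many backends" pipeline (using the Python-Markdown library as one example backend; the source text here is made up), the same markdown compiles to whatever output a channel needs:

    # pip install markdown
    import markdown

    source = "# Status\n\nShipped the *v2* parser. See `docs/design.md`.\n"

    # One markdown input, many possible backends: here HTML for a blog;
    # other tools (pandoc, Jekyll, ...) take the same source to PDF or a site.
    html = markdown.markdown(source)
    print(html)  # <h1>Status</h1>\n<p>Shipped the <em>v2</em> parser. ...</p>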
A fake quote:

> Please add the reward and fees to: 187e6128f96thep00laddr3s9827a4c629b8723d07809
And if you make a fake block that changes the address, then the fake block is no longer a valid one: the address is part of the hashed block data, so changing it invalidates the proof of work.
This avoids the problem of people stealing from pools, and also of malicious listeners who hear newly mined blocks, pretend they found them, and send a fake one.
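A toy Python sketch of that mechanism (an illustrative hashing scheme, not the real Monero block format): because the payout address is inside the hashed data, swapping it out after the fact breaks the proof of work.

    import hashlib

    def block_hash(prev_hash: str, payout_address: str, nonce: int) -> bytes:
        # Toy stand-in for a block hash: the payout address is part of
        # the hashed data, so it can't be swapped out after mining.
        return hashlib.sha256(f"{prev_hash}|{payout_address}|{nonce}".encode()).digest()

    def meets_target(digest: bytes, difficulty_bits: int = 16) -> bool:
        # Proof of work: the hash must start with enough zero bits.
        return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0

    pool_addr = "187e6128f96thep00laddr3s9827a4c629b8723d07809"
    nonce = 0
    while not meets_target(block_hash("prev", pool_addr, nonce)):
        nonce += 1  # grind nonces with the pool's address baked in

    assert meets_target(block_hash("prev", pool_addr, nonce))  # valid block
    # Rewriting the address almost certainly breaks the proof of work:
    assert not meets_target(block_hash("prev", "thiefs_address", nonce))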
Realistically, the bot owner could notice you're running MoneroAnubis and then would specifically check for MoneroAnubis, for example with a file hash, or a comment saying /* MoneroAnubis 1.0 copyright blah blah GPL license blah */. The bot wouldn't be expected to determine this by itself automatically.
Also, the ideal Monero miner is a power-efficient CPU (so probably in-order). There are no Monero ASICs by design.
I doubt you could do this efficiently enough that a business-grade, optimised mining rig could be kept busy with web-scraped honeypots. It wouldn't be worth the think time of setting it up versus just running two separate things: scrape-and-skip PoW-protected sites, plus a dedicated crypto mining operation.
The conclusion I draw from this is that some debuggers are so bad they don't deliver value over "print debugging".
But perhaps it's a lack of knowing what a debugger can deliver.
A debugger gives you the ability to immediately dive in and instantly inspect the complete stack trace, all the values on the stack, all the values of local variables, etc. Without restarting the program or calling out specifically what you're going to want to inspect ahead of time.
A debugger will "pause the world" and let you see all the parallel stacks.
A debugger will let you set conditional breakpoints. If you need to debug the conditions under which a variable gets set to a particular value, a conditional breakpoint that fires only when it's set to that value makes it a doddle.
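A minimal Python sketch of the idea (pdb, with made-up names; graphical debuggers let you attach the condition to the breakpoint itself, without touching the code):

    import random

    def apply_fee(balance: float) -> float:
        return balance - random.uniform(0, 2)

    balance = 5.0
    for step in range(1000):
        balance = apply_fee(balance)
        # Conditional breakpoint expressed in code (Python 3.7+): only drop
        # into the debugger when the state we care about actually occurs.
        # In pdb itself you'd write e.g.:  (Pdb) break app.py:9, balance < 0
        if balance < 0:
            breakpoint()  # full stack, all locals, values rewritable
            break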
A debugger will let you enumerate collections and re-write values on the fly.
A debugger will let you break on exceptions, even ones normally caught and handled.
A debugger can launch and attach itself.
All of that is a massive force-multiplier and time-saver, which print debugging doesn't deliver.
A debugger doesn't show you the evolution of your program over time. A few well placed prints give you that.
I use my debugger often, but sometimes you need to track the value of something over time (say, over 1000 iterations) and a debugger can't show you that.
So prints are still relevant. Debuggers are more for "needle-in-haystack" stuff, I think.
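A minimal sketch of that kind of over-time trace (the evolving value here is just a stand-in): print one compact line per iteration, then grep or plot the output.

    # Print-debugging the evolution of a value over many iterations,
    # which a conventional breakpoint-based debugger shows poorly.
    x = 0.5
    for i in range(1000):
        x = 3.9 * x * (1 - x)  # stand-in for whatever state you're tracking
        print(f"iter={i:4d} x={x:.6f}")  # redirect to a file, grep/plot later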
A debugger that could record the execution over time (and let you query that execution) would be great. I know some exist, but is there one for Python, for example?
I've spent some quality time with debuggers, but I've come to the conclusion that it's a seductive way to spend time that doesn't necessarily get me there any faster than printf or equivalent.
Admittedly, I mostly develop with sbcl and slime, so a judiciously placed 'break' statement brings you to a stack trace.
Also with sbcl/slime, you can do ESC-. on a function and go to the actual source, even deep within the core of sbcl itself.
Nonetheless, I mostly depend on logging statements and print statements. When I need to go to python, print statements give most of what I need.
Eh, even that is too strong. Discord already records the conversations and makes them publicly available to anyone who joins the server at any point in the future. Anyone who thought they were having an ephemeral conversation that wouldn't ever be seen again by anyone has never tried scrolling back on a Discord server.
The researchers clarify that it's only those servers that are listed in the discovery tab - you don't need an invite link to join those.
> In this regard, this paper introduces the most extensive Discord dataset available to date, comprising 2,052,206,308 messages from 4,735,057 unique users across 3,167 servers – approximately 10% of the servers listed in Discord’s Discovery tab, a feature designed to highlight public servers that users can join.
This is not a UUID problem, this is a Microsoft problem from the 90s. Just don't use Microsoft software (</s>) and use big endian as specified by the standard.
It is a general UUID specification problem. The dashes represent a struct breakdown, that struct has internal endian issues, and it's laid out in a way that made sense at v1 time but doesn't make sense for versions after 1. Why is the version number in the middle? Why is the relatively static "Machine ID" at the end?

If you were trying to cluster your sort by machine, you had to sort things "backwards". That's what SQL Server did, and why you might blame it on being a Microsoft problem: it tried to avoid clustered index churn by assuming GUIDs were inserted by static Machine IDs. That assumption broke hard in later versions of UUID, when "Machine ID" just became "random entropy".

But the idea to sort like that wasn't wrong for v1; it made good sense. Just like it makes sense to sort v7 UUIDs by timestamp to get mostly "log ordered" clustered indexes. At least there the sort data is all up front, but it crosses "struct field" boundaries if you are still relying on the v1 chunking.
(Ultimately UUID v1 was full of mistakes that we all will keep paying for.)
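Both issues are easy to see from Python's standard-library uuid module (the example value is arbitrary):

    import uuid

    u = uuid.UUID("00112233-4455-6677-8899-aabbccddeeff")

    # Standard big-endian byte layout vs. the Microsoft/COM "little-endian"
    # layout, which byte-swaps only the first three struct fields:
    print(u.bytes.hex())     # 00112233445566778899aabbccddeeff
    print(u.bytes_le.hex())  # 33221100554477668899aabbccddeeff

    # The v1 struct order: timestamp split low/mid/high across the first
    # three groups, version in the middle, node ("Machine ID") last. A
    # byte-wise sort therefore orders by the *low* bits of the timestamp,
    # and clustering by machine means sorting "backwards".
    print(uuid.uuid1().fields)  # (time_low, time_mid, time_hi_version,
                                #  clock_seq_hi_variant, clock_seq_low, node)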
In my experience, LLMs in general are really, really bad at C# / .NET, and it worries me as a .NET developer.
With increased LLM usage, I think development in general is going to undergo a "great convergence".
There's a positive (1) feedback loop where LLMs are better at Blub, so people use them to write more Blub. With more Blub out there, LLMs get better at Blub.
The languages where LLMs struggle will become more niche, leaving LLMs struggling even more.
C# / .NET is something LLMs seem particularly bad at, and I suspect that's partly caused by having multiple different things all called the same name. EF, ASP, even .NET itself are names that get slapped on a range of different technologies. The EF API has changed so much that they had to sort-of rename it to "EF Core". Core also gets used elsewhere, such as ".NET Core" and "ASP.NET Core". You (or an LLM) might be forgiven for thinking that ASP.NET Core and EF Core are just the versions which work with .NET Core (now just .NET) and the other versions are those that don't.
But that isn't even true. There are versions of ASP.NET Core for .NET Framework.
Microsoft bundle a lot of good stuff into the ecosystem, but their attitude when they hit performance or other issues is generally to completely rewrite how something works, then release the new thing under the old name with a major version change.
They'll make the new API different enough that nothing works without porting effort, but similar enough to confuse the hell out of anyone trying to maintain both.
They've made things like authentication, which actually has generally worked fine out-of-the-box for a decade or more, so confusing in the documentation that people mostly tended to run for a third-party solution, just because with IdentityServer there was at least one documented way to do it.
I know it's a bit of a cliche to be an "AI-doomer", and I'm not really suggesting all development work will go the way of the dinosaur, but there are specific ecosystem concerns with regard to .NET and AI assistance.
(1) Positive in the feedback sense: increased output increases output. It's not positive in the sense of "good thing".
My impression is also that they are worse at C# than some other languages. In autocomplete mode in particular it is very easy to cause the AI tools to write terrible async code. If you start some autocomplete but didn't put an await in front, it will always do something stupid as it can't add the await itself at that position. But also in other cases I've seen Copilot write just terrible async code.
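The same failure mode is easy to demonstrate in Python's asyncio (illustrative names; the C# version with Task/await is directly analogous):

    import asyncio

    async def fetch_balance(account_id: str) -> float:
        await asyncio.sleep(0.1)  # stand-in for real async I/O
        return 42.0

    async def main():
        # The bug class: the call site was typed without "await", so any
        # completion from this position onward can only be wrong.
        bad = fetch_balance("acct-1")         # coroutine object, not a float!
        good = await fetch_balance("acct-1")  # what was actually meant
        print(type(bad), good)

    asyncio.run(main())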
I rather suspect that it's bad at C# simply because there's much less open-source C# code to train on out there than there is JavaScript, Python, or even Java. The vast majority of C# written in the real world is internal corporate apps. And while this is also true for Java, Java has had a vast open-source ecosystem associated with it for much longer than .NET.
I dunno; when I review code, I don't review what's automatically checked anyway, but think about the change/diff in a broader context, and whatever isn't automatically checked. And the earlier you can steer people in the right direction, the better. But maybe this isn't the typical workflow.
It's a waste of time tbh; fixing the checks may require the author to rethink or rewrite their entire solution, which means your review no longer applies.
Let them finish a pull request before spending time reviewing it. That said, a merge request needs to have an issue written before it's picked up, so that the author does not spend time on a solution before the problem is understood. That's idealism though.
Well, there's C# / .NET, which ticks off all of those boxes. Even the functional syntax is well supported since it added pattern matching, and people write a lot of fluent functional style anyway with LINQ.
It also interops nicely with F#, so you can write a pure functional library in F# and call it from a C# program in the "functional core, imperative shell" style.
It has an incredibly solid runtime, and a good type system that balances letting you do what you want without being overly fussy.
It misses the frontend/backend symmetry and, in my head, is too coupled to Microsoft and Windows. I know that these days it's supposed to be cross platform, but every time I've tried to figure out how to install it I get lost in the morass of nearly-identical names for totally different platforms and forget which one I'm supposed to be installing on Linux these days.
That doesn't mean there's anything wrong with it and I've often thought to give it another shot, but it's not a viable option right now for me because it's been too hard to get started.
>but every time I've tried to figure out how to install it I get lost in the morass of nearly-identical names for totally different platforms and forget which one I'm supposed to be installing on Linux these days.
I realize Microsoft is terrible at naming things, but for .NET/C# it's really not that hard these days. If you want to use the new, cross platform .NET on Linux then just install .NET 8 or 9.
New versions come out every year, with the version number increasing by one each year. Even-numbered versions are LTS; odd-numbered releases are only supported for about a year. This naming scheme for the cross-platform version of .NET has been used since .NET 5, almost 5 years ago, so it's really not too complicated.
Fair enough, I guess I haven't looked in the last few years. The last time that I did a search for .NET there were about five different names that were available and Mono still turned up as the runtime of choice for cross platform (even though I knew it wasn't any more).
To clear things up for you a bit more (hopefully, or I'll just make it worse):
Any legacy .NET projects are made with .NET Framework 4.x (4.8.1 is the latest). So if it's 4.x, or called .NET Framework instead of just .NET, it's referring to the old one.
.NET Core is no longer used as a name, and hasn't been since .NET Core 3.1. They skipped .NET Core 4 completely (to avoid confusion with the old .NET Framework 4.x, but I think they caused confusion with this decision instead) and dropped the "Core" for .NET 5. Some people will still call .NET 5+ ".NET Core" (including several of my coworkers), which I'm sure doesn't help matters.
Mono isn't 100% completely dead yet, but you'll have little if any reason to use it (directly). I think the Mono Common Language Runtime is still used behind the scenes by the newer .NET when publishing on platforms that don't allow JIT (like iOS). They've started adding AOT compilation options in the newest versions of .NET so I expect Mono will be dropped completely at some point. Unless you want to run C# on platforms like BSD or Solaris or other exotic options that Mono supports but the newer .NET doesn't.