AkshatJ27's comments | Hacker News

Bandwidth is expensive and videos take up way more bandwidth than text. I'd say it would be pretty much impossible to run something like YouTube, on that big of a scale, solely on donations.


why are you so personally offended by these jokes? You have spammed the exact same reply at least three times on this thread.


I think that is the joke: implying most of the HN content nowadays is AI generated.


> If you know RegExp's, the syntax will immediately make sense

If I know RegEx, why would I use pomsky?


Because if you know regexp, you know how terrible it is.


I've yet to meet anyone who thinks this.

The greatest problem with regexp is not its complexity nor learning curve: it's its apparent complexity & learning curve. That tends to put learners off, but anyone I know who's persisted has found the journey faster than expected. After they're over it, everyone I've talked to thinks pretty highly of the syntax: it's simple (small set of compostable components) & expressive, especially if using labelled groups.

The second greatest problem with Regexp is learning query optimisation: that's definitively complex but this new syntax doesn't even attempt solving that.


Eh, I can manage to write a regexp with a little effort, but reading them has never stopped being painful. So in my anecdotal, sample-size-one experience it leans heavily towards a write-only language.

This might do better there.


Do you use the /x modifier?


Meet me. I think it. Now you've met one.

You can of course disqualify me in saying I can't possibly have persisted at learning it, if I don't like it. But I am inevitably "the guy who knows regular expressions" in the places I've worked. I do use them, for stuff like fixing tedious one-off data munging jobs without having to write an actual program to do it. But I

* Do it step by step, to avoid writing anything resembling a complex regex, because I've yet to meet anyone who can write a complex regexp without bugs.

* Use "undo" a lot in that process.


> I've yet to meet anyone who thinks this.

Hello *waves*

The only thing regexp has on the plus side is that it's compact. I'd prefer something that's more readable without prior knowledge. I know how to use it but I always have to google it. I only use it once per year, so there's no big win in memorizing it.


Do you remember at least:

  . * | + ? ( )
This is 95% of it. After that, it's character classes and lazy matching.

Character classes can often be given as ranges:

  [a-z] [A-Z] [0-9] ^ $
You can use the pipe | to combine them.

Lazy matching finds the shortest matching string instead of the longest one. This is given by the symbols:

  *? +?
Instead of + and * respectively.
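The greedy-vs-lazy distinction above can be sketched in a few lines of Python (any engine with `*?` behaves the same way; the HTML-ish string is just a made-up example):

```python
import re

s = '<a href="x">link</a>'

# Greedy: .* grabs as much as possible, so the match runs to the LAST ">".
print(re.search(r'<.*>', s).group())   # <a href="x">link</a>

# Lazy: .*? stops at the first ">" that lets the match succeed.
print(re.search(r'<.*?>', s).group())  # <a href="x">
```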

> I only use it once per year

If you use a text editor, any kind of repetitive find+replace activity can be sped up using regexes. Seems strange you only use it once per year.

Some people tell you that you can't parse XML using regexes. THEY'RE WR- right, but you can often do decent one-off jobs on an XML document using regular expressions, as long as you're aware that it's a one-off job that won't work on any other document.
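A sketch of the kind of one-off job meant here, on a made-up document, using Python's `re` (the pattern is tailored to this exact input and would break on any other XML):

```python
import re

# One-off extraction that works on THIS document only --
# a real XML parser is the right tool for anything reusable.
doc = "<items><item id='1'>foo</item><item id='2'>bar</item></items>"
names = re.findall(r"<item id='\d+'>(.*?)</item>", doc)
print(names)  # ['foo', 'bar']
```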


It is not that it is hard to understand the syntax or even write them, it is hard to read and change them. Here is an example I just grabbed off a random webpage:

^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$^&*()_-]).{8,18}$

That is hard to quickly read and understand. Sure, a comment would help, but if someone said it isn't working, I have to work all that out and then fix it.

There is a reason Perl lost to Python. It just seems like we can do better.


And that's an expression without any escaped parens, braces, etc.


> compostable components

I've certainly seen regexes that remind me of compost heaps.

(I agree that regex syntax is fine. I just liked the typo.)


> The second greatest problem with Regexp is learning query optimisation: that's definitively complex but this new syntax doesn't even attempt solving that.

Query optimisation is totally unnecessary if you use Rust's or Go's implementation.


:grimace_face:

I'm a big believer that "premature optimisation" is a bad thing but making a blanket statement that any given implementation is 100% performant for all use-cases is going a bit far.


I doubt whatever you're doing is going to work in Rust or Go.

Say that the length of your string is n, and the length of your regular expression is m. Isn't Regular Expression matching in those languages guaranteed O(m n) time? In plain English, that's only linear time - which you practically can't improve on - even if you tried! It can only do worse if your implementation uses backtracking instead of Finite Automata - which was very common at one time, but is now considered The Wrong Way: https://swtch.com/~rsc/regexp/regexp1.html

There is actually a way to optimise your queries, which is by precompiling them to DFAs, but I don't think that's what you're doing.


I stand completely corrected. This is cool.

> There is actually a way to optimise your queries, which is by precompiling them to DFAs, but I don't think that's what you're doing.

You can do that, but the primary advantage of DFAs in practice is backtracking avoidance, so simply reducing backtracking in manually constructed queries is more what I was referring to. The missing piece of the puzzle here is that not supporting backtracking at all doesn't limit capabilities, so removal rather than reduction is the Go/Rust approach.


The /x mode (extended regexp) lets you use any amount of whitespace or commenting. I use that mode whenever a regex gets complicated enough to merit an explanation (which is quickly), and also indent groups and whatnot. Examples of where I used this in the past are here:

Complex password validation: https://gist.github.com/pmarreck/4c5f1076498da1a86062

Email header parsing: https://gist.github.com/pmarreck/8476538

An attempt at a JSON validator (yeah, I know): https://gist.github.com/pmarreck/2775709

Remember that commenting is not just for others, but also for "future you"!
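For illustration, the dense password regex quoted earlier in the thread can be written in Python's equivalent of /x mode, `re.VERBOSE` (this is a sketch of the same pattern, not the gist's exact code):

```python
import re

password_re = re.compile(r"""
    ^
    (?=.*\d)             # at least one digit
    (?=.*[a-z])          # at least one lowercase letter
    (?=.*[A-Z])          # at least one uppercase letter
    (?=.*[!@#$^&*()_-])  # at least one special character
    .{8,18}              # 8 to 18 characters total
    $
""", re.VERBOSE)

print(bool(password_re.match("Secret!23")))  # True
print(bool(password_re.match("tooshort")))   # False
```

Note that in verbose mode, whitespace and `#` comments are ignored everywhere except inside a character class.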


I feel like this opinion is from people who use regexes only rarely. I use them all the time at work, and you become pretty adept at parsing and understanding even dense ones. Of course using the /x modifier is still recommended.


Disagree, honestly. I really don't like this No True Scotsman absolutist statement either. If pomsky were able to guard against ReDoS then maybe I'd consider switching. Otherwise, I have no problems reading and understanding regular expressions.


Variables and comments seem nice.

Whitespace-insensitivity is needed to support them, and as an added bonus it makes it easier to break patterns up across multiple lines.

The syntax changes and added verbosity do not seem great, though. They'd trip me up for sure.

In general, I think I'd like to see a language more like "Regex 2.0", i.e. an extension that doesn't depart too far from what we're used to.


While a language like Raku (formerly known as Perl 6) is unlikely to catch on in the current landscape, it did bring a lot of improvements to regular expressions (link to the section that starts to use interesting examples [1]).

Way back somewhere in the 2010s when I was still keeping an eye on Perl 6, I was kind of hoping that all these improvements would make their way into some kind of PCRE v3 that all other tools that already use PCRE would switch to. Would have been nice.

[1] https://docs.raku.org/language/regexes#Alternation:_||


(?x) mode in regexes does comments / whitespace / multiple lines.

+1 to the syntax


I wrote a lot of regex but I find reading regex extremely hard. Even expressions I wrote myself become unreadable after a few months.


If you know RegEx, you have two problems.


If you solve a problem with RegEx, you now have two problems!


Code readability isn’t for you.

It is for the next person that maintains the code.


...which is often "future you" :)



It cuts off at "Here is how they did it." for me. Had to enter (made up) credit card details to access the rest of the post.

http://web.archive.org/web/20221229110741/https://thepalindr...


Your archive.org link still shows the same paywall for me.



The only thing extra in the post is the relation between this method and the Newton-Raphson method. You can find the method here: https://en.wikipedia.org/wiki/Newton%27s_method#Square_root
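The method on that Wikipedia page (Heron's, a.k.a. the Babylonian method, which turns out to be a special case of Newton-Raphson) is short enough to sketch:

```python
# Heron's method: repeatedly average the current guess with n / guess.
# Convergence is quadratic: the number of correct digits roughly
# doubles with each iteration.
def heron_sqrt(n, iterations=6):
    x = n / 2  # any positive starting guess works
    for _ in range(iterations):
        x = (x + n / x) / 2
    return x

print(heron_sqrt(2))  # ~1.41421356..., accurate to float precision
```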


There’s also a bit more historical info in:

https://en.wikipedia.org/wiki/Methods_of_computing_square_ro...

It would seem we don’t know exactly what method they used.

I would like to propose: really, really long and accurate ruler, just because the mental image is funny. The Greeks hadn’t come around yet to make everybody’s homework harder by insisting on compass and straight-edge (Note: this comment is not historically accurate).


I would agree on the long and accurate ruler. The actual problem here is understanding what the square root is (and how it can be used), not finding its value for some arbitrary number. And despite everything, even 4,000 years ago our ancestors could make very precise tools.


It’s ridiculously more likely that they found the approximation by calculating rather than via any experimental means.


Entirely agree. To be honest, I think it rather marvellous that people were able to imagine, then reason about, so much precision so long before any physical representation of it was possible. The Babylonians could calculate to a ten thousandth of an inch, but no human could measure the physical equivalent until Henry Maudslay. In a way the whole Industrial Revolution rested on people realising that the five-thousand-year-old mathematical view of abstract truth of "shape" could be rendered as physical reality with the right techniques... but coming up with those abstractions in a world where the roundest thing came off a potter's wheel and the basic unit of length was some fellow's forearm was an even more impressive achievement. We should not take it for granted just because it was so long ago.


Without multiplying non-integers, you could seek the square root of 200,000,000 using bisection. Sort of a "long ruler" story.
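A sketch of that bisection idea, using only integer arithmetic (comparing squares rather than ever taking a root):

```python
# Bisection for the integer square root of n: no non-integer
# arithmetic needed, just squaring and comparing.
def isqrt_bisect(n):
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi + 1) // 2  # upper midpoint, so lo always advances
        if mid * mid <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

print(isqrt_bisect(200_000_000))  # 14142, i.e. sqrt(2) ~ 1.4142
```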


Your first guess of that should be within .5% of the actual value.


Why? Long rope, make a right triangle. The proportion of one of the equal sides to the remaining side is square root of 2. Seems more likely to be experimentally approximated before calculated.

This would be useful for figuring out land rights. And for that, you need very long ropes.


I would be shocked if anyone could measure sqrt(2) to half a part per million using ropes.

You'd need to be accurate to a half mm per km of rope, which is well beyond what you need for practical surveying. You also need to keep the rope essentially perfectly straight, which means doing the experiment on almost perfectly flat ground. You'd need to keep the rope at a controlled tension so it doesn't stretch. You'd be hosed by temperature and humidity changes while walking those kilometers. The rope would also need to be marked or measured to < 1mm accuracy when you're done walking. You'd need to make sure the right angle is square to < 1ppm as well, though maybe there's a clever procedure that corrects this.

It's not even easy to do this with laser surveying equipment.


I'm sorry, but have you thought this through?

Real-world ropes flex and bend, which alone causes experimental error orders of magnitude larger than one in a million. There does not exist a rope material that doesn't flex at least one millimeter per a km of rope, and certainly did not exist 6000 years ago. Hell, even if we assume a perfectly flat plain, the curvature of Earth would cause an error larger than 1e-6. And obviously if you try to hang the rope from both ends and pull it taut, it will flex and it's also impossible to remove all the slack.

You need a laser, or at least something like the Michelson–Morley experiment, exploiting the wave nature of light, to measure any real-world distance to the precision required. And obviously for that to work, you also need to know the speed of light to the same precision. And given that we're not in a vacuum, even light bends as it travels through air, especially air near the ground. You know heat haze? Yeah, that's going to mess up your experiment too.

And how on Earth would they ensure the ropes form a right angle to a precision of one in a million? Given a 1km long side, the sideways margin of error is again 1mm. The angular precision needed is sub-arcsecond, a resolution that you need a large-ish optical telescope to achieve.

And obviously then there's the question of measuring the length of the hypotenuse.


Neat game! Works fine on Chrome/Firefox but is unplayable on Edge due to lag.


Firefox on Android. I couldn't find anything to interact with. Allegedly there's a button though.


It should be a 3D scene with a button and light on a table connected by what looks like a WireWorld sim. Pretty neat. Works on Mac Firefox but not Safari. I don't know enough about front end development to guess at what the console error means.


Got to a windows machine to test it again. The button is now pressable. Now I'm supposed to connect something using a wire. No wire is present. Hm.


You draw a line yourself, like in Paint.


My problem was, in order to do that, you have to click the red box that says "EDIT". To be fair, it does say you have to "use EDIT". However also, to be fair the other way, I thought I already was using it because it was red.


You’re absolutely right.

I have a newer (wip) version that allows editing while the simulation is running, removing the “play” and “edit” cycle.

The newer editor allows much larger bitmaps (about 2000 Hz for a 1024x1024 bitmap) and allows the player to zoom, pan, scroll, area-select, move, copy and paste.

The newer version is written in Unity, and I want to feature increasingly complex levels like programming a vacuum robot or robot lawn mower, traffic lights, train signals, battle tank AI (like Battle City on the NES, where the bitmap controls the player), and I've been fantasizing about attaching some kind of simulated audio chip to play chiptune (although I'm not sure how to create a level around it).

I’m targeting around 25 levels with increasingly difficult challenges or variants, maybe some kind of leaderboard or something. For some levels, I imagine I can make the bitmap AI (like the battle tank) fight other player’s submissions, and have a ranked AI bitmap leaderboard.

I intend to share individual levels once they're in a decent state, to hopefully market the game a bit, since there are already about 250 hours poured into it and I'm not sure if there's even a market for it.


Sound takes less time to travel in smaller rooms, so the delay between the original sound wave and its reflection is small, which makes the two harder for human ears to distinguish.
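To put rough numbers on that (assuming sound at ~343 m/s and a reflection straight off the far wall; the room sizes are made up):

```python
# Round-trip echo delay: the reflected wave travels to the wall and back.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def echo_delay_ms(distance_to_wall_m):
    return 2 * distance_to_wall_m / SPEED_OF_SOUND * 1000

print(f"{echo_delay_ms(3):.1f} ms")   # small room: ~17.5 ms
print(f"{echo_delay_ms(30):.1f} ms")  # large hall: ~174.9 ms
```

Delays under roughly 50 ms tend to be fused with the original sound by the ear rather than heard as a distinct echo, which matches the comment's point.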


Might be good to let people know you are the founder of the above-mentioned service.

Also, I couldn't find any information about the data it was trained on, or a comparison with OpenAI, since you are offering yourself as a replacement for their API.

edit: found the following blog post: https://text-generator.io/blog/over-10x-openai-cost-savings-...

I can't see how it's a 10x saving when your API performs worse than davinci (according to the blog post) and costs $0.01 per request. Assuming each request uses up 1000 tokens, the next OpenAI model down, Curie, is just $0.002 per request.

