Even though I prefer Python over PHP, for web projects I'm probably sticking with PHP forever, because it so nicely supports stateless request handling without long-running processes.
In PHP, you can just throw a php file on a webserver and it works. To update, you just update the file. You don't have to restart anything.
On the dev machine, you can just have Vim in one window and Firefox in the other, change code, hit F5, and you see what you did.
I don't like having to run a code watching process which then recompiles the whole codebase every time I save a file.
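To illustrate the deployment model, a minimal sketch (hello.php is just an example name):

    <?php
    // hello.php - copy this single file onto any PHP-enabled server and it is live.
    // Edit it, save, reload the page: the next request picks up the change.
    echo "Hello, it is " . date('H:i:s');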
PHP is underrated, especially as a learning resource. I'm surprised that I first built something with vanilla PHP on the job only a few months ago - and at how fast I could go from idea to prototype.
There is a very natural learning progression: first build apps that run purely locally; then build a few static websites, maybe starting with hand-crafted HTML and eventually using something like Hugo; then build a small dynamic website with vanilla PHP; then finally build something with a more complex framework, like Laravel or Django. Going upwards through these iterations would, I think, help a lot of newer devs internalize where the tradeoff of initial complexity vs. future ease of development lands for them.
The differences you speak about arise from the nature of their design. PHP was created for the web, whereas web programming support wasn't part of Python's language design. When you do web programming in Python, the seam between the language and the web is usually a WSGI/ASGI layer. This is where all the things you mentioned come into play. There's a whole lot of benefit, imo, to using Python over PHP beyond that seam.
There's a fair bit of nuance and it really depends on the setup. You can run Python with CGI and execute once per request, but it's much more common to use WSGI/ASGI. Likewise, I think PHP-FPM is still pretty common, and it runs long-running PHP worker processes.
The difference is in how you strike the balance between developer speed and performance. If we didn't want to keep everything loaded and instead ran from scratch, that would be pretty easy to do with Python too. But we "chose" to keep the process running and reload quickly, so that requests stay fast and scalable.
The rest of the things you mentioned are pretty much the same for Python as well.
You talk about performance, but I think that is another point for PHP. In my experience, PHP handles the same requests faster.
Yes, I could build everything from scratch myself in Python and have the same statelessness as in PHP. But parsing headers, creating headers etc. feels like something a framework should handle. In PHP, it is built right in.
And if I built it myself, it would talk to the webserver via CGI. But I think CGI is slow. For PHP, you have mod_php, which is super fast.
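For illustration, a sketch of what "built right in" means - request parsing and response headers come with the runtime, no framework required:

    <?php
    // PHP populates the superglobals before your code runs.
    $name  = $_GET['name'] ?? 'world';           // query string, already parsed
    $agent = $_SERVER['HTTP_USER_AGENT'] ?? '';  // request headers, already parsed

    header('Content-Type: application/json');    // set a response header
    echo json_encode(['hello' => $name, 'agent' => $agent]);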
If PHP doesn't recompile your script's imports, how does the reload tracking work with dependencies? Or does it punt somehow (e.g. only reloads your script but not its imports)?
There are various implementations of this kind of autoreload system for Python but it always seems to come with compromises on semantics (different initialization order causes behaviour differences).
Most major PHP frameworks and platforms - Laravel, Symfony, Drupal, Magento - develop all kinds of complex caching layers to work around PHP's stateless one-request/one-execution model, essentially poorly recreating the shared application state you would get for free with a long-running worker process.
Python's import model is not without its flaws either, but at least you have a working application state; there's no need to fully initialize your app for every single request.
For simple apps that are contained within a few files, PHP is hard to beat for simplicity and speed.
Imports are also updated the moment you update the imported file.
I'm not sure if PHP stores any compiled binary or byte-code at all. Maybe it compiles it all on each request. It's super fast though, even with tons of imports.
My guess would be that it keeps compiled versions of each file in memory and on each request, it walks down the whole import path. And when it encounters a changed import, it compiles only that one.
Would be cool if someone with more knowledge could shed some light on what is actually happening.
AFAIK, before PHP even serves bytecode from OPcache, it checks the file's metadata (last-modified time, most likely), so an updated file never produces a stale cache hit.
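For reference, these are the OPcache settings involved (documented defaults, php.ini):

    opcache.enable=1
    opcache.validate_timestamps=1  ; re-check file mtimes, so edits take effect
    opcache.revalidate_freq=2      ; check each file at most every 2 seconds
    ; production setups often set validate_timestamps=0 and reset the cache on deploy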
In PHP, dependencies are just .php files, like your own code; it all works the same.
PHP code is also typically far less complex than Python modules - modern PHP code consists of a single index.php entry point with procedural code, while everything else is just class/function definitions, so imports have no side effects.
When a framework "supports auto-restart", that usually means it has its own webserver for development, and the auto-restart works when you use that webserver.
I don't like having a different webserver in development and production.
For Python, gunicorn is suitable for production and development use and has a --reload option to reload on changed files. This functionality is framework-independent.
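For example, assuming a WSGI callable named app in a module myapp.py:

    gunicorn --reload myapp:app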
The way I understand the gunicorn documentation, this has the same effect as setting up a script that listens for file changes and restarts the server (or workers) every time a file changes.
That's way less efficient compared to how PHP handles it.
I don't want processes to be killed and new ones to be started every time I change a file.
PHP does it the right way: Only when a request that touches outdated code hits the server is that outdated code reparsed. As long as you just edit files, it uses up no resources at all.
I’ve never switched to the browser and reloaded the page fast enough to “beat” a gunicorn reload after editing a file. So I get not “wanting” a process restart but I don’t get why it’s such a big deal in a practical sense.
But hey, if what you use does what you want, then you do you.
It depends on your provider though. I can say from experience that with OVH and their API, it's been easy to set up automatic renewal via DNS verification. Apparently, the acme.sh client has support for the DNS APIs of 159 providers: https://github.com/acmesh-official/acme.sh/wiki/dnsapi
And when you ctrl+click links on the site, it opens a new tab and shows the auth part in the tab title while the link loads. It seems the "current URL" in Firefox's code is stored with the auth part, and that part is passed on to local links.
These issues make it insecure to use auth URLs, because as soon as someone looks over your shoulder (or there is a camera, like in many cafes), you are p0wned.
I wish we had a better way to log into a website from the command line, like SSH keys. But for now, we are stuck with what we have. And Firefox makes it insecure to use it. So for now, I continue to use Chromium.
You're literally putting the password in plain text into the (unencrypted) browser bookmarks (and also into your terminal, where it's likely logged to your ~/.bash_history).
That is the bigger security issue you have, not how Firefox is handling the display of the URL.
If anything, Firefox is highlighting your insecure security practice.
I have a hard time believing you even do what you're claiming. The number of sites that support logging in that way is basically (pun intended) 0. In fact, Firefox is the only browser that warned me that someone was probably trying to scam me with a URL like that; the other browsers just dropped the auth part and went to the site without logging in.
Yes. The auth part should not be displayed when you hover over a bookmark. Chromium does not display it.
In the end, every security mechanism is "plain text". Even SSH keys. When someone gains access to your SSH key, which is just an ASCII string, they can log in as you.
My SSH keys are protected with a password; on top of that, I have a biometric lock (MacBook fingerprint reader) on my SSH keys. So they would only grant access to someone who 1. has access to my computer, 2. knows the password (which only I know) and 3. has my finger. Definitely more than just plain text!
I strongly suggest looking into multi-factor authentication, or other modern authorization/security mechanisms if you want to see examples of security systems that are not just plain text.
> Chromium does not display it.
Security by obscurity is not ideal, although I can understand that the lack of this feature in Firefox hinders your use case.
Same here. You can't just access my auth data over the internet.
You would also have to get hold of my machine and get past its security mechanisms.
You can put as many layers on top of what you call "obscurity". But at the bottom it's still just a simple string that holds the power to authenticate you.
And "multi-factor authentication" does not help with the situation "User is allowed to use this script, so they are also allowed to use that website. Let's open it for them.".
My problem with Firefox is that a new Firefox window can only be launched from the same environment (or only from a child process?) from which the first window was started. Even the same user who started Firefox cannot launch a new window from a new shell. That constantly interferes with my workflow.
Example: say Firefox has been started from the desktop already, and now I want to start a new Firefox window from a root terminal:
su desktopuser -c firefox
It does not work. It gives me "Firefox is already running, but is not responding.".
Why are you using root terminals in the first place? This has always been considered poor security practice. Consequently, your workflow is a very peculiar one, and while you personally might feel inconvenienced, I don’t think that this frustrates many other Firefox users out there.
He's using 'su' to switch to desktopuser, that's not necessarily, and quite likely not, an account with root privileges. 'su' stands for 'switch user', it's not just to become root.
He uses the term “root terminal”, he writes “Start a root terminal (or a normal one and then ‘sudo su’)”, and he says he only has a single user. One will naturally assume the worst, and in spite of repeatedly posting here he hasn’t exactly made much sense about his workflow and needs beyond launching the browser.
Curious what use case you are looking at here? I don't want to make any assumptions. I'm wondering if what you are looking for might be covered by the functionality of tab containers.
He doesn't want a completely new browser instance, he wants Firefox to open another window but depending on its mood Firefox refuses to do this.
Chrome can do this just fine btw.
Running Firefox from the command line isn't an obscure thing either; clicking a link in other apps like Signal or Telegram also uses this method to spawn new tabs/windows. And depending on how you started the first instance, you run into problems clicking links in other apps as well.
I pretty much need to open Firefox first before clicking links on other apps, because otherwise I can't run Firefox normally later without killing the process.
No. Sorry. I have a list of complaints about Firefox, but this is clearly user error. I have done this literally every day for a decade or more. It works fine. It particularly helps not to unnecessarily use things that can affect the env, like launching a browser with su. In fact, I think I can list a number of things that could go wrong with that.
Yes, sorry. This is a Firefox bug and not user error.
On my Linux system, this works with any other browser I try. It only fails with Firefox. If this is user error, it seems every other browser handles user errors much better.
- Firefox: First window is open, running `firefox URL` from the other terminal hangs for 15-20 seconds and then shows the popup "Firefox is already running, but is not responding. To use Firefox, you must first close the existing Firefox process, restart your device, or use a different profile.". No new tabs or windows are spawned.
- Chromium: First window is open, running a `chromium URL` command from the other terminal opens a new tab on the existing window.
- Microsoft Edge: First window is open, running a `microsoft-edge-stable URL` command from the other terminal opens a new tab on the existing window.
- Ladybird: First window is open, running `ladybird` in the other terminal opens a new ladybird window.
- Emacs/eww: First window is open, running `emacs --eval '(eww "example.com")'` on another terminal opens a new browser window.
- Netsurf: First window is open, running `netsurf URL` from the other terminal opens a new netsurf window.
- Dillo: First window is open, running `dillo URL` from the other terminal opens a new dillo window.
- Links/XLinks: First window is open, running `xlinks -G URL` from the other terminal opens a new links window.
You can see a clear pattern. Got any other browsers that refuse to run with a message like that? Or is this not a fault of Firefox, and something super extra that everything from other mainstream browsers to more obscure ones somehow handle for the user?
I have never had Firefox do this. I've used all 3 major OS types in the last 5 years and Firefox on all of them. New windows open, no muss no fuss.
>I pretty much need to open Firefox first before clicking links on other apps, because otherwise I can't run Firefox normally later without killing the process.
The only way I imagine this happening is if one doesn't save the browser session between launches, maybe? I've had "cold start" link opens and it just adds another tab to my existing pinned tabs/open tabs/etc.
It seems to be some magic Firefox puts somewhere (in the environment?) that prevents it from launching a new window from a fresh terminal that wasn't inherited from the same parent that launched the first window.
To make this work you need to set the XDG_RUNTIME_DIR environment variable to the same value it has in the environment Firefox is running in. For example:
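    # assuming the desktop session belongs to uid 1000:
    XDG_RUNTIME_DIR=/run/user/1000 firefox --new-window example.com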
I believe --no-remote should get around that if using a separate profile doesn't. Of course, you should ensure the distinct processes are not using the same config/cache files.
Did you try what I said? I'm pretty sure you just need to set some env vars right. (Not exactly sure which, though. But some Googling should help, e.g. maybe how to set up dbus this way.)
I have very recently begun to see this bug. A workaround: if you start Firefox from a shell on your desktop, then running “firefox” from a shell, even a different shell window, will work fine. I haven’t been able to find out why.
I found what I think was the problem for me: From the desktop, I was starting Firefox with the “--no-remote” option. If I remove that option, then it works. I did not use that option when starting in a terminal, but if I do, then the same problem appears.
I'm probably not understanding your use case correctly, but there's a `--new-window` flag on the firefox binary. You can use that to open new windows under the same profile.
You can easily try it yourself: 1) Have Firefox open. 2) Start a root terminal (or a normal one and then "sudo su"). 3) Run: sudo -u normaluser firefox --new-window example.com
I'm not sure what you mean. What I'm trying to achieve is to start a new Firefox window (for the normal user) from a root terminal. So part of the command has to be "sudo -u normaluser" or "su -l normaluser" or something.
You have to pass those environment variables to Firefox through sudo.
sudo has a --preserve-env=list option but you must know the value for DBUS_SESSION_BUS_ADDRESS. That's usually
unix:path=/run/user/1000/bus
where 1000 is the id of the user Firefox is running for. Your root console could have no DBUS_SESSION_BUS_ADDRESS or have a different value for it.
To be 100% sure of the value, as you are root you could look into the environment of one of the running Firefox processes. Or wrap Firefox in a script that echoes that variable to a file before starting the browser.
I'd add -H to the sudo options to set the home directory to normaluser's.
I assumed that your DISPLAY is :1, which is what my Debian has set for me. That's another variable that you could read from the environment of Firefox.
You might have to pass/preserve other environment variables to make Firefox work.
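Putting those pieces together, a sketch (assuming normaluser has uid 1000 and the session is on display :1; read the real values from the environment of a running Firefox process if unsure):

    export DISPLAY=:1
    export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
    sudo -H -u normaluser --preserve-env=DISPLAY,DBUS_SESSION_BUS_ADDRESS \
        firefox --new-window example.com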
The parent gave you all the information that should help you figure it out yourself now. The message is: it should work with the right env vars, and you have several example env vars to check (DBUS-related, DISPLAY, etc.) and Google for. This is your work now.
(You could also test the extreme case: Just copy all the env vars.)
Because this is not a support forum or a Stack Exchange. I give the idea; all the testing and debugging is for whoever actually has to make the code work.
Coinbase has 100M customers and less than $1B daily trading volume. That works out to less than $10 per customer per day - an average of less than $4,000 per customer per year - which does not seem unusual.
They suspect fake trades, from one person to themselves. They can pay any price because no real money changes hands, aside from commissions. The presumed goal is to make it appear as if there is high demand.
So... Things with intrinsic value and often tangible uses. The only thing in that graph somewhat comparable to Bitcoin is money, since it's also part store of value, part currency.
It is amazing what lengths people are willing to go to in the hope of "safety".
Is there anybody out there who successfully built their own product - not via VC or other investments, but on their own time and own money - and who likes types?
To me it seems that types are something consultants and people who get paid for their time instead of their productivity like, because types make things more complex, so to achieve the same task, you can bill more time.
I have not seen anybody who successfully built something on their own time, money, and risk who likes types. If there is someone here, I would love to see it!
On the other hand, I have seen many examples of lean, type-free code turning into successful lifestyle businesses and companies.
Quite an unusual take. There are millions of developers who prefer TypeScript over JavaScript for any job, including self-funded products.
But even if they didn’t exist, it’s a non sequitur. There are very good reasons for type systems regardless of how projects are funded.
You might not realize that Go has a very limited and awkwardly designed static type system that is the sole reason people prefer it for certain tasks over Python, which has both a dynamic and, by now, an optional static type system.
Your final point, that types make things more complex and harder to develop, is also flawed. Type systems exist to decrease the mental overhead by offloading it from the programmer onto the compiler.
They might require more work upfront, but they make software more solid, easier to reason about, and easier to change. If you’re building a one-off that you will never change, then a typeless language might be of interest to you.
But it’s short-sighted to think that you will be able to write something once and keep it running forever. Only managers who don’t code believe fairy tales like that. In reality many people sit down with a codebase; they fix things and add functionality. When they do, they want to avoid bugs, they want the compiler to help them, they want LSP support, they want to avoid null pointer dereferences and double frees, they want to be safer and more productive.
They’d rather spend a week upfront instead of a day so that every one of the later changes takes a day instead of a week.
Also, types are a spectrum. C++ has types, Objective-C has types. Those alone have enough success stories to fill tomes! But of course you’re talking about strong static type systems like Haskell, OCaml, Scala or ML, so you should read up on Jane Street. Or Mercury. Or blockchains like Cardano.
Personally I think the onus is on you to prove that types are _not_ useful. Many of us enjoy types.
Pretty much all successful indie games use typed languages. Of course, there is a mix with untyped scripts in there too. But many depend only on typed languages.
You should watch this video [1] from Martin Odersky, creator of Scala.
It shows how the entire industry has moved towards stronger typing. Whether it's existing languages like JavaScript and Python or new ones like Rust and Swift. The trend is undeniable.
I use Scala and Rust every day and I personally think they are better for startups because they allow you to rely more heavily on the compiler to verify things rather than having to check at runtime with unit tests. That means more time on new features and less time fixing bugs and writing tests.
I don’t disagree with the summary in your post. Especially for refactoring, types can be great.
But it is funny you chose the creator of Scala as the example. The gold-plated banana cables of typing. The entire language and its endless lang mailing list discussions are a meme by now.
1. A whole lot of programmers didn't know how to use the language effectively in 2010. This is less of an issue now that the patterns are well established and the level of knowledge has increased industry-wide (even Java has algebraic data types now)
2. Some idiots in the community, who have now left
3. Some rough edge cases in the Scala 2 type system; fixed in Scala 3
4. Ignoring the fact that not only the language but also the tooling is a significant factor affecting developer experience: extremely long compile times, a weird build system, lack of good IDE support.
5. Breaking backwards compatibility with each release. This is something that businesses care a lot about.
> 5. Breaking backwards compatibility with each release. This is something that businesses care a lot about.
This is a myth. Scala has had one significant compatibility break in 10+ years. Not even Java beats that. Meanwhile the likes of Python really do break compatibility every release, and businesses continue to use them.
I think people forget just how bad compatibility was a decade ago. The compiler promised no ABI compatibility even between minor versions, which on its own could have been fine; Go and Rust don't promise ABI compatibility either. The problem was that people still used maven and sbt for dependency management, which used binary dependencies, not source like Go and Rust.
So to get a Scala project going with several dependencies, you needed all of them to agree on a specific Scala compiler version and publish JARs built against it. If even one dependency lagged behind (and many often did), you couldn't update the compiler version for your project, locking you out of new versions of every Scala library you use.
This led to the ironic situation that it was easier to use Java JARs from Scala than to use Scala JARs. If the ecosystem had used source dependencies instead of binary dependencies, this could have been avoided, even if compile times may have been horrendous. Rust walks this tightrope steadily enough these days, and Go even makes it look easy.
Finally, even though the language didn't meaningfully break compatibility until Scala 3, the ecosystem as a whole did not respect backwards compatibility. For an example I'm sure many people here will remember viscerally, Lift Web kept making substantial backwards-incompatible changes regularly even after they knew they had commercial users who faced real-world costs for churn. Even the most foundational things like how to bind template elements to variables changed completely. Adding further insult, the documentation was not nearly up to the standard that would make migration productive even for those who could afford the churn.
I loved Scala but these factors burned me hard. It remains to be seen how Rust libraries will compare, but Go libraries have already proven backwards compatible to a fault, and both build dependencies from source so ABI is not an issue. I do however appreciate that Scala taught me that a beautiful language is not enough, real world usage requires responsible commitments from projects to users. It sounds like they've learned this, but a decade too late.
> I think people forget just how bad compatibility was a decade ago.
I think you're forgetting how long ago the issues were. It was pretty bad in the 2.7-2.9 days, but a decade ago was already into the 2.10 era.
> This led to the ironic situation that it was easier to use Java JARs from Scala than to use Scala JARs. If the ecosystem had used source dependencies instead of binary dependencies, this could have been avoided, even if compile times may have been horrendous.
I don't think that would've been worth it. Binary dependencies work great given that libraries cross-build; the real issue was back when source compatibility was broken often enough that that wasn't practical.
> For an example I'm sure many people here will remember viscerally, Lift Web kept making substantial backwards-incompatible changes regularly even after they knew they had commercial users who faced real-world costs for churn.
I don't think there'll be many remembering that. Lift was big, what, 15 years ago? It was already fading by the time I started Scala back in 2011.
That's just the thing, Scala and its ecosystem had these problems right at the height of its popularity, meaning that the situation was the worst right when the most people were around to experience it, and understandably many never looked back. Even if many individuals can look past it, it's become a very hard sell to teams.
It's the same story with any kind of merit vs trust. When you have a clean slate, you can build trust based on your merit, but once you break trust, merit alone isn't enough to win it back. In this industry in particular, far too many other options are already holding and building trust.
I truly wish Scala had indeed become the mainstream industry language instead of say Go. I hope Rust polishes off its rough edges and succeeds where Scala failed and where Go doesn't go far enough. But I don't see any future where Scala regains industry relevance, not even if both Go and Rust self-destruct as well.
Meh. OCaml spent over a decade underappreciated, fixed a few sticking points like package management, and then underwent a resurgence. I'd like to think that merit will ultimately win through.
No, I meant binary compatibility. Upgrading to a newer version always required recompiling all dependencies. If one dependency didn't get an update, you were essentially locked on the old version. Some projects took years to upgrade from 2.11 to 2.12 and then 2.12 to 2.13, e.g. Spark. This was a real pain for us.
A source-compatibility-based model would be ok if Java/Scala build systems worked like Cargo/Go/npm and compiled dependencies from source. However, that would make compile times orders of magnitude worse, and they were already unacceptable without it. I remember a project of ours with about 30k lines of Scala and 300k lines of Java that took about 7-12 minutes to build from scratch (and the biggest contributor to compile time was Scala). And we didn't even use advanced and slow stuff like macros or type-level programming. We were seriously looking into things like keeping the compiler in memory between runs (thanks, Bloop), and that made it much better, although it came with its own set of new problems.
Comparably, on the same (now outdated) hardware, rustc is able to compile 500k lines of code including all dependencies in about 28 seconds (including linking).
> If one dependency didn't get an update, you were essentially locked on the old version. Some projects took years to upgrade from 2.11 to 2.12 and then 2.12 to 2.13, e.g. Spark. This was a real pain for us.
Spark is a special case though - it really was just Spark, and you can't really compare that to something like Rust where as far as I know there's no equivalent. Everything else that was maintained at all put out cross-builds very quickly once the changes between versions got to be small.
On the contrary, the power of Scala's types means you are less likely to end up in a case where the compiler blocks you despite your code being safe.
See the example in the article, where you don't have covariance in Go and the compiler fails. Same in Java, where you have to cast (and add unsafety) to make the compiler accept your code.
In Scala, when using immutable structures (the default), you have covariance, and the compiler will happily let you call a function that takes a List[A] with a List[B] if B extends A.
Now it's true that you can do overly complex things with Scala's types, but you don't have to.
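A minimal sketch of the covariance point (illustrative names, not from the article):

    object CovarianceDemo {
      sealed trait Animal
      final case class Dog(name: String) extends Animal

      // Takes a List[Animal]...
      def count(animals: List[Animal]): Int = animals.length

      def main(args: Array[String]): Unit = {
        // ...but a List[Dog] is accepted, because List[+A] is covariant in A.
        val dogs: List[Dog] = List(Dog("Rex"), Dog("Fido"))
        println(count(dogs)) // prints 2
      }
    }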
I love type safety. When I drop down to a dynamically typed language I'm actually slower as I have to double check my work. I don't understand how someone in 2023 can be against type safety.
Yes, as everyone knows, you are only entitled to an opinion about software if you are also a successful business entrepreneur without external funding.
Good type systems increase velocity by leaving less room for error, so that you don't have to waste half your time debugging and fixing issues that could have been easily prevented. They also provide helpful structure and an IDE experience (the right values, functions, actions, etc are actually suggested for you wherever you are) that really does speed up development a lot.
There's of course always a trade-off: bad type systems can get in your way with boilerplate that you have to wrangle. But I don't think that's the typical experience.
> Yes, as everyone knows, you are only entitled to an opinion about software if you are also a successful business entrepreneur without external funding.
That's not it. I disagree with the GP, but they do have a point: coding for pay, coding for fun, and coding for your own business are all markedly different. So the question whether something works or not in one of those contexts is a valid one.
I disagree with the conclusion, though. Successful products are written in whatever the authors know, intersected with whatever will get the job done. If you're Chuck Moore and work on embedded stuff, you'll use Forth and be hugely successful, running circles around your competitors who use C. The relative strengths of a static type system vs. a unityped language are more than offset by the familiarity and experience of the authors.
The story changes a bit when your product grows and you need to hire a larger team of people to work on it. The benefits of static analysis become more pronounced when the codebase gets bigger. However, if you somehow luck out and get a team of highly competent people, they will use whatever mechanisms are available to keep your larger codebase tractable, even if they use Lisp, PHP, or Perl.
You can bet on something that implodes (Windows Phone...) and then your product gets done in by the choice of tech. Outside of those cases, though, the tech doesn't matter that much. Whatever you do, you'll have problems. Going for a really powerful language will severely limit your choice of libraries. Going with the most common language will leave you fighting decades of accrued nonsense. Going with a statically typed language (without sane macros) will lead to lots of boilerplate code and will severely limit what you can do at runtime with the code (unless you use reflection, but then you're back in untyped land). Going with a unityped/untyped language will make you hunt the inter-dependencies between your modules, and you'll likely become conservative with removing old code (and other forms of refactoring) as a result.
Finding the right trade-off for a tech for your product is never about a single feature of that tech. It's about a complex interplay of what you have, what you think you can get, and what you want to do.
My shell (interactive command line environment + scripting language) is written in a type safe language. It’s tens of thousands of lines of code (maybe even hundreds of thousands, I’ve not counted) and has been actively developed for the best part of a decade.
I’d have completely failed at this project if it weren’t for type safety, because refactoring anything that long-lived and with that much code would have been a nightmare.
So there was no risk involved like "Will I tank my future if this fails?".
What I witnessed is that makers who are willing to bet on something with real consequences dislike types, as types slow down progress and therefore reduce the chances of success.
Why don’t _you_ come up with examples where non-VC funded but successful businesses are doing well in part because they chose to ignore types?
At this point, I’m not even sure what you are trying to prove and I’m starting to feel that whatever it is, you’re trying to prove it to yourself mostly.
I took the comma in your comment to mean “or” rather than “and”. You’re right. It’s not a business. But I am risking my own time (you could argue that time is money :P)
Your point is pretty daft though. If it were relevant to self-funded startups, then it would be relevant to VC-funded startups and hobby projects as well. But the fact that you’re having to dismiss all these counterexamples people present demonstrates that your point is wrong and you’re simply not willing to back down from your position.
Plus, there’s been a wealth of companies founded on .NET and Java technologies that are older than some people here have been alive. So it’s not like examples don’t exist. You’re just not receptive to them.
Your non-typed code still uses types, you just have to remember what they are and test functions in more ways to check that your memory and assumptions are correct.
Exactly. If not, any variable is just binary data and has no basis for interpretability. Any program with a single branch instruction at the assembly level has implicit types!
I see a TON of open source projects (frameworks, libraries etc.) on the frontend alone that are built using TypeScript. And on the backend Golang is hugely popular and Rust is gaining a lot of traction. All in open source. Not every one of those open source projects is backed by VC money or built by consultants.
> I have not seen anybody who successfully built something on their own time, money, and risk who likes types. If there is someone here, I would love to see it!
Experiment: you go and search for people who have built something on "their own time, money, and risk". Find them by what they've built. First decide whether you respect what they've made, and only then check what language they decided to use to build the thing. I predict that you'll find plenty of people who have built respectable things using typed languages.
C# is my language of choice and the strong typing allows me to refactor and change names of (“internal”) things at a whim on even extremely large solutions with no possibility of runtime failures due to the refactoring tooling doing all the work for me and the compiler complaining if something doesn’t line up.
My view is that people who don’t like strongly typed languages have either just never worked on very large codebases, or they avoid refactors due to the general impracticality of it, or they have loads of unit tests to catch runtime “type mismatches” which would simply have been caught by the compiler in a strongly typed language.
Choosing an untyped language vs. a typed one is a trade-off of less upfront work typing things vs. future analysis capabilities of your code. And for me, the upfront cost of typing things is negligible compared to the gains later on, which in my decade-plus of experience pays for itself many times over on any non-trivially-sized codebase.
Some languages are weakly typed though, which can cause confusion at the points where one type is automatically converted to another (e.g. JS, where 1 + "2" = "12"), and you need to keep type conversion and precedence in mind.
It didn't take long earlier in my career to realize that in JS, you need to be aware of the types of your variables and try to avoid any implicit type conversions.
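A few of the classic coercions, in plain JavaScript:

    console.log(1 + "2");     // "12" - the number is coerced to a string
    console.log("10" - 1);    // 9    - the string is coerced to a number
    console.log(1 + 2 + "3"); // "33" - left to right: 1 + 2 = 3, then 3 + "3" is "33"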
I worked on a large JS app some years ago, before types came to that world. It worked, but only because I had a good memory of what fields and types there were (JS has types, they're just hidden and weak).
But my memory, my individual ability to keep code in my head, to keep an API's datatypes in my head, or to keep function signatures in my head does not scale. Second, it's not reliable, and needs either tests or production errors to verify.
Untyped / weakly typed languages aren't going to go away, and if you prefer them, I'm not going to convince you otherwise. I don't mind them for smaller projects or one off scripts either, utilities. But for larger applications that multiple people are involved in (directly or indirectly, e.g. an API not maintained by myself), I prefer to have a strong type system.
That said, I don't think Go's type system is the best; it's kinda loose in some cases (e.g. no enum types). You can't use it to bolt down your application as tightly as e.g. Java or Scala or other such languages. That's probably by choice though, since you need to stay productive as well and not masturbate over your own type system.
> Is there anybody out there who successfully built their own product - not via VC or other investments, but on their own time and own money - and who likes types?
Only almost every small-app software house for Windows and macOS ever, and almost every indie game developer?
And anybody who wrote and sells, or supports and makes money from, FOSS - libraries/backend tools/etc. in C++ and co - so tens of thousands of projects?
Types are like a built-in test suite for your code. They constantly check that what you are doing is not nonsensical.
If you want a high-profile story, take Figma; it's built with C++ which is a highly type-oriented language.
Anecdotally, in the last three startups I worked at (including one really tiny one), the codebase was / is largely in TypeScript, Kotlin, Python with heavy type annotations, and small bits of Rust.
I don't agree with your comment in general, but I think I understand where you're coming from. Many formerly loosely typed languages (JS, PHP, Python etc.) have tried to introduce stronger typing - I have the most experience with PHP, so I'll talk about that: sometimes "stronger" typing can be a pain, especially if you try to introduce it to an existing weakly typed project, or if you're interfacing with some other weakly typed system (e.g. JSON files). One example: everything is working fine, then you declare a function parameter as `string`, and suddenly you get runtime errors because one of the many callers occasionally passes `null`, and you forgot the question mark before the `string`.
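The pitfall in code (a sketch; renderColor is a hypothetical helper):

    <?php
    declare(strict_types=1);

    // Declared ?string because the value may come from a nullable DB column.
    // With a plain `string` parameter, a null argument throws a TypeError at runtime.
    function renderColor(?string $color): string
    {
        return 'Color: ' . ($color ?? '');
    }

    echo renderColor('red'); // "Color: red"
    echo renderColor(null);  // "Color: " - fine, thanks to the question mark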
So what you’re saying is that by introducing type checks, you’re actually uncovering bugs. Seems to me like proof of value right there.
What does surprise me is that adding types to your PHP code adds _runtime_ errors. With Python, type annotations are predominantly used by static typecheckers and are ignored at runtime (except for some libraries like FastAPI that use reflection to inspect the type annotations). I always assumed all dynamically typed languages that introduced explicit types worked the same way.
> What does surprise me is that adding types to your PHP code adds _runtime_ errors
There are static analysers for PHP too, but yeah they’re also enforced at runtime.
I think I prefer that, if there’s going to be a non-zero risk of bad types landing in production code, I’d prefer that be an error than for it to just carry on and potentially fail later in strange ways.
That said, one of the reasons cited for PHP not having generics is the performance cost of runtime enforcement. So there are a few people out there now making the case for a generics syntax that is ignored at runtime but can be built into static tooling. Which I can get behind; PHP's type system is really not bad at this point, but generics would be really nice.
Well no, maybe there was a legitimate reason to pass null instead of a string (I mean, nullable types exist for a reason), and then you just introduced a bug because you didn't look hard enough at all the places your function is called from (which can take some time).
If you have a null-aware type system, then you just can’t call your function with a null, so no bug?
Also, if you just annotated the same implementation that already didn't handle nulls properly, how is it any different? Your code would just die at runtime a few lines later, with a less explicit error.
Yes, me. I develop an IDE for Clojure code which is a mix of Clojure and Kotlin. I like both, but what I like about Clojure is the interactive programming, not the lack of types. If there were a statically typed Clojure I’d be all over it.
There's no Typed Clojure, still? I seem to remember people porting Typed Racket to Clojure; a quick search gives me core.typed[1] and typedclojure[2]. There's an example[3] looking like this:
(t/ann hello-world-error [t/Int :-> t/Str])
(defn hello-world-error [a] (inc a)) ; simplified a bit
(both definitions are rejected at compile time; in Racket, it works in the REPL, too, not sure about Clojure)
...and Typed Racket is a really powerful type system (see refinement types[4]). So, I thought it's just a matter of time for Clojure to get to that level of power and support. It should be much easier to do this to Clojure than to Ruby, given that you have a working example of how to do it well. So I'm really surprised Clojure isn't gradually typed by now, with most of the code being annotated and type-checked at compile time.
Typed Clojure is still a thing and I think Ambrose is still working on it, but I don't think anyone actually uses it in practice. A few companies tried it and then backed it out, but that was a while ago now. It may be better recently, but I haven't made the time to really kick the tyres on it to see what's changed. Previously it was just a pain ergonomically.
Hmm, that's sad. I guess the difference with Typed Racket is that the Racket devs and researchers behind it were all completely sold on the idea from day one, which resulted in a quick progress in annotating the whole of the stdlib. Without that, you'd need to annotate everything you require (import) yourself, which would be a massive pain. Another difference might be the module system of Racket, which is pretty unique. IIRC Clojure modules are just namespaces, similar to CL, right? In Racket, the module system along with the extensive support for contracts, allowed TR to be much more strict with types, while still interfacing well with untyped code. In effect, Racket as a whole is gradually typed, but within the bounds of TR modules there's a sound, fully static type system.
It's a shame. I thought Clojure, as the only Lisp currently used in the industry, would benefit from the great work of Racket folks the most. After all, static typing in your pet project is much less important (or even desirable) than in a production-quality software made by a team of devs under time constraints. It's strange to see JavaScript and Python getting (widely used) gradual type systems before Clojure when the foundational work on gradual type systems was done in Lisps (Scheme and Racket).
Types, like many things in life, only matter once you are in a situation where you could really use them (or would sidestep a certain kind of problem). If you are in one of those situations, and more importantly don't realize it, that is when you really pay. The daily languages that we all use (99% of us) do not utilize complex type systems and thus cannot solve complex problems.
It broke calling static functions without the need to declare them as static.
It broke adding properties dynamically to objects.
It broke easy string handling for many functions where null was rendered as an empty string. This is especially annoying as you often get null values from the database. It makes sense to have a null value for "Don't know the color of the car" in the DB. And it makes sense to render it as "Color: " in the user interface. The easy string conversion always was one of the strengths of PHP.
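Concretely, the string-handling change (PHP 8.1 behaviour; a small sketch):

    <?php
    $color = null; // e.g. a nullable column fetched from the database

    echo 'Color: ' . $color;             // still fine: concatenation coerces null to ""
    echo htmlspecialchars($color);       // PHP 8.1+: "Passing null to parameter..." deprecation
    echo htmlspecialchars($color ?? ''); // the explicit fallback you now need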
BNB is a token issued by Binance to facilitate trading on their platform. If it dropped in value, it could indicate a lack of confidence in the platform itself (its price could also be manipulated, of course).
"BNB can be used to pay for fees when trading on Binance, and usually at a discounted rate. Due to the primary utility, BNB has seen massive growth in interest throughout the years . Several rounds of token burn events have appreciated BNB price and pushed it up as one of the top-10 cryptocurrencies by market capitalization."
To me, all of the code examples in this article look terrible.
Is there any public website out there (not behind a login) which actually benefits from using these types of components with "magic data binding"?
Every time I see examples, I say "Hey, I could build that in a more readable and more performant way just using plain JS or using a template library like Handlebars".
And the answer I get is "Yes, but not for big enterprise SPAs." And I ask: Ok, where are those? And the answer is "Uuh, well, somewhere behind logins there are some, trust me!".
I tend to not believe it.
When I write real-world code, I do not change some data value and expect all kinds of components in the interface to change magically. Instead, I write an event handler, which changes some data and then calls the parts of the UI that need an update. Like when a user moves a slider to change the x-axis range of a chart: I set the data, call the chart's update function, and that's it. No need for components with data-to-UI bindings.
It’s a document editing application. A document title might occur in the browser’s title bar, in the header of the main editor, in a “mention” (a link to the document) in another document’s title or text, and in multiple places in the user’s sidebar - like in both their “Favorites” section and in the contents of their team.
When the user edits the document title, we need to update all those UI bits to render the new title. I have a hard time imagining some imperative code that iterates over all the possible views that may render a title to mutate them. Without binding/data subscription I can’t imagine Notion working at all.
Can you show the code of the "Favorites section"? Because to me it does not sound like it is as simple to update as "just put a reference in the template" - what is in the user's favorites is probably defined in some other data structure.
Which is what I witness in real life: in theory it sounds cool to just use data in the template and have everything get updated automatically. But in reality it is a mess with a million edge cases. Worse than just calling a list of update functions when the title changes.
Here's how the favorites section works. The "SpaceView" is a database record that represents a specific user's "view" of a shared workspace. It has a few UUID[] list columns, one of which is the "bookmarked pages" list of Block UUIDs. Methods like `.getBookmarkedPagesStore()` get a reference to a "RecordStore" object for a specific column inside the record. Reading the data from a RecordStore in a component implicitly subscribes the component to changes to that column. To render lists in general, we use a <List> component. It manages drag-and-drop behavior, efficient updates, React keys, etc.
As for the main editor view: the document itself is a "Page" block database record. We render editable text with the <Text> component. We listen to contentEditable document changes and reconcile the mutations with the rendered <Text> components to understand where the user typed and how to update our database records in response to that activity.
All updates to database records happen inside a call to `transactionActions.createAndCommit`. Under the hood, this:
1. Builds a list of commands to send to our server, to persist the user's edit to the database. Once the edits are applied on the server, we notify collaborators that those records have changed, so they can pull new data and re-render their own views.
2. Optimistically updates the local, in-memory version of the database records to reflect the change commands.
3. Schedules re-rendering of the views, and recomputation of the derivations, that read from the changed records.
Our React components "just" render the state of our database rows, but because we use a generic auto-tracking state system and publish change events, our views respond in near-real-time to both local and remote data changes. A new hire engineer at Notion can create a new collaborative feature on their first day with end-to-end reactivity "just" by a) rendering part of the record in a React component, and b) updating that data in the record with transactionActions.createAndCommit.
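A self-contained sketch of that "read = subscribe" idea (simplified stand-ins, not Notion's actual code):

    type Observer = () => void;

    class RecordStore<T> {
      private observers = new Set<Observer>();
      constructor(private value: T) {}

      // Reading with an observer implicitly subscribes it to future writes.
      read(observer?: Observer): T {
        if (observer) this.observers.add(observer);
        return this.value;
      }

      write(next: T): void {
        this.value = next;
        for (const o of this.observers) o(); // notify everything that read us
      }
    }

    // Two "views" of the same title stay in sync automatically.
    const title = new RecordStore("Untitled");
    const sidebar = () => console.log("sidebar:", title.read(sidebar));
    const header = () => console.log("header:", title.read(header));
    sidebar();
    header();
    title.write("Q3 Roadmap"); // both views re-render with the new title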
Well, this isn't really going to help the case, but yes, behind logins there really are some, trust me. Tongue in cheek aside, where I used to see this a lot as a contractor was on projects where multiple companies were hired and each was responsible for some small part of the system that had to interface with the rest of it. The core problem is that you might not even know which parts of the UI need to update when your data changes, so what you advise is not always possible.
> When I write real-world code, ..., I write an event handler, which changes some data and then calls the parts of the UI that need an update.
That's your choice, and I'd argue you're creating messier code slower than someone who uses a UI framework that deals with reactivity in a principled manner. Having to synchronize the UI and data manually creates room for subtle desync errors which just can't happen in, say, Vue.
Someone already linked Notion.so and you didn't seem to find it complex enough?
The reality is there's a continuum between what is fluff and what is providing value you don't see.
I agree a lot of web development ceremony is misguided and of dubious benefit, but you're seemingly against even basic state propagation.
But you also seem biased in thinking of systems where you can hold the whole thing in your head.
When a site like Facebook shows you a chat message, there is an iceberg of functionality that you don't see and can't imagine with that bias.
You think "I'd just use JS and add a div styled like X", but Facebook probably has more JS for defining what X looks like - because that div can contain anything from a text message, to a video, to a chess move from an FB game - than you can picture for an entire site.
Then there are the analytics and tapbacks and a million little features that would make for hundreds of listeners on some seemingly simple component.
—
At the end of the day not everyone is making Facebook, so I agree people are complicating things for dubious benefit, but not so much that basic React is problematic: even for small applications it's a cheap way to support the "combinatorial explosion of edge cases" problem of product development.
You add one edge case to a state change you wired with plain JS and things are fine. But then you add an edge case to that edge case, and so on, and eventually you'll either just have a worse version of what React offers, or an unapproachable mess of code.
So it's not easy for everybody here to look at and discuss.
If there is not a single public website out there which benefits from frontend frameworks with data-binding components, that probably tells us something.
I mean, it does tell us something that they're behind logins: it's expensive to build complex single-page applications.
There's no incentive to do that for free and in the open.
I started my comment by explaining why you're not going to find the golden case you're digging for: even when it's provided, you'll haughtily share how jQuery could do everything shown, and, as I explained, you don't see most of what is being done.
Everything is easy until it's not. Maybe you should share examples of things you worked on with a larger team and show us that we're overestimating the limitations of manual state updates.
If in this gigantic sample of websites not a single one benefits from the techniques which people here describe as essential for building complex UIs, that would tell us that this is either false or that there are no complex UIs on the open web. The latter is unlikely, given the size and variety of sites out there.
If a V12 is so powerful, show me an economy car with a V12.
If in all of the economy cars in existence, none has a V12, then either it's a false statement, or there are no V12s in economy cars. The latter being unlikely, given the size of the economy car market and the variety of models out there.