1. I do believe that Rust's type system needs to be reworked, so all the drama around it is needlessly bloated. We have RustBelt, RustHornBelt, and myriad ways of converting HIR/THIR/MIR to Coq, for better or worse, but no MLIR support for Rust codegen whatsoever. Rust is temporary; we'll get a simpler and better language further down the road.
2. The upstream fight has always followed the Cash Flow. If you have so-called "problems", you either act yourself, or let other people do something about it, in a way that won't result in more long-term issues. Deliberate Detraction IS a Sign of Corruption and Abuse of Power.
3. An ambiguous Point of Conflict may not be a Conflict at all, but just an outcome of the Lack of Proper Communication and Transparency. Detraction is Accountable; social dynamics and social incline with explicit or implicit motives are Accountable as well. The lack of an Effective Code of Conduct, and the absence of Proper Punitive Action for everyone, will cause Authoritarianism (or just a Genocide of Engineering Thought).
I do feel bad about the state of Linux Kernel development, but we'll either have to move on and learn from This Mistake, or do something about it.
2. I don't believe that scala-native was a good idea in the first place, because of all the java-interop boilerplate already present. C/JVM interop design conflicts are very hard to abstract properly. It would've been nice to adopt some WASM-compatible IR instead of MLIR/LLVM lock-in.
3. WASM-first is a very viable AOT+PGO option; it also brings new opportunities for remote exec and some interesting Architectural Approaches, like automagically splitting a monolith into microservices on the fly by potentially calculating the communication overhead and performing basic discrete optimizations.
So, Scala is still fun, just missing a lot of business opportunities; it's just that Odersky decided to take another spin of EU Grant acquisition by developing a new language.
I, personally, don't think that dotty was "good enough" to roll out last year, and overall project traction and cash-flow directions don't look that promising. And I personally choose to call it EU Budget Laundering, because it's really puzzling to me how exactly €60M in grants is not enough to make Dotty stable. If no one audits it, then no one cares about it - or how does laundering and embezzlement work nowadays?...
From a language design standpoint, there are three things that make Scala obsolete:
1. No proper formal verification - although there are things like stainless (stainless.epfl.ch), and it would've been possible to adopt zero-GC allocation during codegen by adopting the Calculus of Constructions (CoC), similarly to neut (github.com/vekatze/neut)
2. No proper support for protodefs and IDLs - nowadays, efficient serialization (marshaling) defines software reliability; all these JSON-y/Protobuf-y/Flatbuffer-y thingies are really gaining some traction, although not a single one of them is something I'd call scalable.
3. Adopting the Calculus of Constructions (CoC) and formal verification alongside bunched/separation logic (like in the F* language and Rust) should be enough to formally prove memory consumption, the amount of IO, and the respective computational overheads. Basically, it would've worked similarly to CAP: pick a small memory footprint and the needed bandwidth - the amount of compute power and the respective latency will be calculated and formally proven.
The exact design of separation logic for multi-threaded apps is a complex subject, but I like what Azalea Raad has done with Concurrent Incorrectness Separation Logic (CISL) - it's something that would've allowed Rust, for instance, to drop its boxed types for RAII and a lot of the existing sync primitives (Arc, Barrier, Condvar, PoisonError, etc.).
But who am I to talk about that... I never boiled in the Sciency Kettle and Played by the Academic Tribe Rules, spending half of my life just copy-pasting generic paperwork from here and there, filling up the gaps by enslaving Kenyan/Nigerian students on forced contract terms, with the usual threats, IP Extortion, and common worker-contractor misclassification.
Everything professorial and Academia-related looks so corrupt to me nowadays.
1. If there are no good arguments in the collective, there's no retrospective, and it's primarily a management and psychological issue. No one is able to fully self-reflect, and it breaks the existing delegation/escalation chains.
2. If there are no viable data sources from which a correlation with actual business processes can be proven, it's a management problem. People can't establish viable metrics, once again, mostly due to 1.
This is something any company of any size and any budget can struggle with, due to lack of XP and the usual deficiency in collective XP accumulation / knowledge sharing. You can't self-reflect onto something you haven't learned about yet. And due to 1, this is a closed loop, because the lack of XP can't be escalated accordingly; most of the time it's also a Workplace Deviance factor.
3. Practically, it ends up in a bouquet of Workplace Deviance, because in the end no one will be willing to take the blame and actual responsibility to fix anything.
Any Problem-vs-Solution type of culture will worsen things a lot, i.e. "All the blame and no Compassion". Companies are usually forced to adopt some Teal stuff in the end, maybe for really no other good reason but just to keep on growing.
The idea of hiring HR that can "work by the book" and actually build up a personal profile of how anyone could fit into all this mess is impossible by definition - due to Employee Silence and broken retro, no one will be willing to expose all the shit that is happening in the first place... So, most of the time I see Kitchen Sink companies with volatile outcomes, where there's really no one who could even be able to listen to any arguments in the first place.
Google's internal ML-driven productivity metrics became a meme already for all the reasons described above. You can't reason with Toxic and Inadequate people.
Also, Asana's claim that Social Loafing is a myth and everything else is a retro deficiency is really wrong - retro can prevent and display certain glorious occasions, but it's not a root cause of any psychological effect by definition.
> "i want to write infrastructure as code from day 1" is not only stupid , its a waste of resources
I tend to disagree.
Depends on the scale... and after you've scaled and grown, the absence of DevSecOps becomes a source of detraction, affecting your delivery cycle and, indirectly, your Sales. Proper DevOps defines some of the business lifecycle operations as well, like BI and A/B testing, which essentially helps in validating pending Business Assumptions. It's something that can help differentiate the market and Validate the actual Product Viability - prove that your MVP actually has any V in it.
Operations-wise, first and foremost, you have to keep track of the issues that are currently present in AWS solutions and automate workarounds, and there are a lot of security automation and organizational means which can't really be handled efficiently with a "Click in Web Console".
For instance, setting up a proper EKS cluster by hand, without any hardening, would require at least three hours of clicking through, with all the IRSA roles and EKS-specific IAM permissions. Terraform automation, on the other hand, has ready-to-use Open Source modules shipped by both the community and AWS itself (terraform-aws-modules, aws-ia), which introduce some advanced EKS management practices without any added effort. 10 lines of IaC can easily replace half an hour of click-through.
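A minimal sketch of the kind of module call meant here - the `terraform-aws-modules/eks/aws` source is the real community module, but the version pin, variable names, and node-group sizing are illustrative and should be checked against the module docs:

```hcl
# Sketch: an EKS cluster via the community module instead of console clicking.
# vpc_id / private_subnet_ids are assumed to come from elsewhere in your config.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0" # illustrative pin

  cluster_name    = "demo"
  cluster_version = "1.27"
  vpc_id          = var.vpc_id
  subnet_ids      = var.private_subnet_ids

  # IRSA (IAM Roles for Service Accounts) wired up by the module,
  # replacing hours of manual OIDC/IAM setup.
  enable_irsa = true

  eks_managed_node_groups = {
    default = {
      instance_types = ["t3.medium"]
      min_size       = 1
      max_size       = 3
      desired_size   = 2
    }
  }
}
```

A couple dozen lines like these stand in for the multi-hour click-through, and the hardening defaults ship with the module.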
The cost of Integration is nearly Zero during the product bootstrap phase, but once you're growing, integrating proper Organizational Management with AWS Organizations and Control Tower, reordering your AWS Accounts, transferring resources, and hardening security boundaries tends to rise in complexity and cost a lot. Especially if you'll ever want to perform proper security Audits or need HIPAA/GDPR compliance.
For some Disney companies, for instance, who chose to perform org management by developing custom tools after 5 years of operation, proper integration with AWS Organizations remained a dream, and their unreasonably tight Operational Schedule and On-call deficiency became a source of detraction. The integration cost rose to eight figures.
The cost of DevSecOps hardening basically doubles every quarter, if you're growing fast enough and lack automation.
As for myself, automating everything allowed me to manage Kubernetes complexity and develop a fine-tuned vertically scalable solution (VPA+HPA on KEDA with cluster autoscalers) - about 30 different k8s services deployed on a mix of x86 and Arm instances, with continuous placement and resource limits/requests optimization, completely downscalable. My AWS bill is only 7% of my raw income.
So, if you can hire a DevOps consultancy, can Actually Measure how much time is wasted during manual operation compared to the automated one, and are able to self-reflect without a confirmation bias - do that ASAP.
Crossplane, on the other hand, does better with the Terrajet codegen, and all the infra drifts are part of the reconciliation cycle, which is very handy in simpler deployments but doesn't work for more complex ones due to the excessive drift-polling model.
1. Both pulumi and crossplane just wrap the Terraform providers as-is on many occasions, and quite poorly. There are a lot of pending issues with the dependency graphs, state refresh, and proper state diffs. Although a lot of the most troubling issues have been resolved, it's still a minefield run.
2. Both TFCDK and dagger.io can be used for multistage TF deployments, although I prefer dagger myself...
Terraform has a major state management design flaw that has been ignored by HashiCorp to force TFE upsales. It's impossible to perform multi-stage deployments with a single `terraform apply`. You have to manually identify the deployment targets for every stage; terraform providers do not support the `depends_on` block, and they are not part of the resource dependency resolution graph. I.e., you can't deploy Vault and then configure it with the respective provider - terraform will try to perform both deployment and configuration simultaneously and will fail.
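A minimal sketch of that failure mode, assuming Vault is deployed via a `helm_release` and then configured with the `vault` provider (the address and chart values are illustrative):

```hcl
# Stage 1: deploy Vault itself.
resource "helm_release" "vault" {
  name       = "vault"
  repository = "https://helm.releases.hashicorp.com"
  chart      = "vault"
}

# Stage 2: configure the freshly deployed Vault.
# Provider blocks accept no depends_on, so Terraform plans the
# resources below against a Vault that does not exist yet -
# a single `terraform apply` fails to reach it.
provider "vault" {
  address = "http://vault.default.svc:8200" # illustrative address
}

resource "vault_mount" "kv" {
  path = "secret"
  type = "kv-v2"
}

# The usual manual workaround, outside of HCL:
#   terraform apply -target=helm_release.vault
#   terraform apply
```

The `-target` dance is exactly the "manually identify the deployment targets for every stage" part; splitting the config into separate states is the other common escape hatch.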
3. This is due to a strong Sales Opinion that a Single Plan is of Positive Product Value for Terraform. In practice it turned out to be False: the actual Product Value of Terraform is in the Single Consolidated Infrastructure State, which can be analyzed by the respective static analyzers (infracost, tfsec, checkov, inframap, driftctl, etc.). And it's a strong pro compared to both Pulumi and Crossplane...
Having a single state is a blessing for large companies with a tight operational schedule - having multiple states under a single lock can cause conflicts quite often, with volatile outcomes. Yet again, an upsale point for TFE.
Even though Terraform "has more providers", you have to be able to support 'em all by yourself; HashiCorp does not provide a Viable Support Plan for the existing Official Terraform providers (in my experience - maybe someone was luckier).
That's why I'm often saying that DevOps is not a title, it's a methodology... and every DevSecOps guy should be well versed in golang to be able to support, test and extend the respective tools and operators (k8s automation).
As an Ops who literally replaced Patroni with a Terraform CTS module in about a week... I can say that it would be nearly impossible to do in a non-pizza-size team, due to communication and confirmation biases, alongside the respective anti-patterns.
1. HashiCorp is forcing enterprise upsales whenever possible, even if it hurts Adoption Rates and the overall Development Experience.
2. Existing TF design issues are ignored, which is causing people state management trouble that is irrelevant for TFE. So, yet again, why fix something that will end up in upsales?
3. The MPL requires the PRs to be available in case someone really fixes something, but it's near impossible to contribute any major design improvements into Terraform.
4. Existing Provider issues are neglected, and accepting working PRs takes around 3-4 weeks...
5. Some Providers (helm) are neglected in favour of the New Product Release (the Waypoint provider), and there's a Forced Obsolescence Factor alongside Forced Adoption.
Deficient Relationship Marketing is the Key Factor in deciding who Will actually write Terraform (maybe not even HashiCorp), Who will Wrap Terraform and Into What (terragrunt, terraspace, pulumi, crossplane, etc., or some custom gitops SaaS), and Who will Support the target providers when HashiCorp solutions magically turn into abandonware due to upsales.
Really tired of overcoming the existing design limitations of terraform.
I think that this is a global issue that has been neglected for far too long, and too many people have been struggling with it due to pointless HashiCorp excuses of "being understaffed" or "this goes against the design of a single consolidated plan".
I think that Consolidated Infrastructure State and Consolidated State Locks are far more important than the Consolidated Plan itself.
A consolidated plan is simply impossible for multi-stage deployments and can be omitted during the First Infrastructure Deployment.
Having multiple deferred plan refresh phases alongside the dependency calculation is something that should be tolerable. We're tolerating the "known after apply" type of situation for everything else anyway.
Yes, a lot can be automated with CUE, and it would work similarly to tfcdk, which could potentially solve a lot of multistage deployment issues by manually managing multiple dependent terraform states.
Having a unified terraform state, on the other hand, would help with static analysis and dependency tracking.
There are also fairly interesting Operator Designs, where KUDO-like operators could be replaced with the respective terraform modules via consul-terraform-sync.
My struggle for a unified terraform state is mostly based on this operator-replacement design possibility. It would be more flexible than KUDO and could help resolve postmortem support cases.
For instance, AWS Support tends to refuse EKS postmortem cases when any operators have been installed on that EKS cluster... So, having no operators whatsoever has its own "political benefit".
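The operator-replacement idea can be sketched as a consul-terraform-sync task: instead of an in-cluster controller reconciling CRDs, CTS watches a Consul service and re-applies a plain Terraform module whenever its instances change. The module path below is hypothetical, and the field names should be checked against the CTS docs for your version (older releases used `source`/`services` instead of `module`/`condition`):

```hcl
# Sketch of a CTS task standing in for a k8s operator:
# when the watched "postgres" service changes in Consul,
# CTS plans and applies the failover module automatically.
task {
  name        = "pg-failover"
  description = "re-run the failover module when postgres instances change"
  module      = "./modules/pg-failover" # hypothetical module path

  condition "services" {
    names = ["postgres"]
  }
}
```

The reconciliation then lives in CTS and plain Terraform state, outside the cluster - which is what keeps the EKS cluster operator-free for support purposes.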
I don't really like either crossplane or pulumi, due to the lack of providers.
And in my personal opinion, they would've been better off spending some resources on contributing to the target providers, instead of embedding or directly code-generating from Terraform (https://github.com/crossplane/terrajet). Spreading the efforts and resources doesn't really help that much with the existing Terraform issues.
So, I find both pulumi and crossplane rather parasitic for the existing Terraform Provider Infrastructure. Actually fixing the bicycle would've been better than strapping a jet engine on top.
For me, this situation is very similar to the long-resolved issue with docker multistage... with myriads of CLI tools on top becoming obsolete as a result.